clonebox 0.1.12__tar.gz → 0.1.13__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {clonebox-0.1.12/src/clonebox.egg-info → clonebox-0.1.13}/PKG-INFO +211 -18
- {clonebox-0.1.12 → clonebox-0.1.13}/README.md +210 -17
- {clonebox-0.1.12 → clonebox-0.1.13}/pyproject.toml +1 -1
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox/__init__.py +1 -1
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox/cli.py +73 -24
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox/cloner.py +6 -2
- {clonebox-0.1.12 → clonebox-0.1.13/src/clonebox.egg-info}/PKG-INFO +211 -18
- {clonebox-0.1.12 → clonebox-0.1.13}/LICENSE +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/setup.cfg +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox/__main__.py +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox/detector.py +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox.egg-info/SOURCES.txt +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox.egg-info/dependency_links.txt +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox.egg-info/entry_points.txt +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox.egg-info/requires.txt +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/src/clonebox.egg-info/top_level.txt +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/tests/test_cli.py +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/tests/test_cloner.py +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/tests/test_detector.py +0 -0
- {clonebox-0.1.12 → clonebox-0.1.13}/tests/test_network.py +0 -0
--- clonebox-0.1.12/src/clonebox.egg-info/PKG-INFO
+++ clonebox-0.1.13/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: clonebox
-Version: 0.1.12
+Version: 0.1.13
 Summary: Clone your workstation environment to an isolated VM with selective apps, paths and services
 Author: CloneBox Team
 License: Apache-2.0
@@ -235,6 +235,9 @@ clonebox open . --user
 
 # 6. Stop VM when done
 clonebox stop . --user
+
+# 7. Delete VM if needed
+clonebox delete . --user --yes
 ```
 
 ### Development Environment with Browser Profiles
@@ -578,9 +581,10 @@ clonebox clone . --network auto
 | `clonebox start .` | Start VM from `.clonebox.yaml` in current dir |
 | `clonebox start . --viewer` | Start VM and open GUI window |
 | `clonebox start <name>` | Start existing VM by name |
-| `clonebox stop
-| `clonebox stop -f
-| `clonebox delete
+| `clonebox stop .` | Stop VM from `.clonebox.yaml` in current dir |
+| `clonebox stop . -f` | Force stop VM |
+| `clonebox delete .` | Delete VM from `.clonebox.yaml` in current dir |
+| `clonebox delete . --yes` | Delete VM without confirmation |
 | `clonebox list` | List all VMs |
 | `clonebox detect` | Show detected services/apps/paths |
 | `clonebox detect --yaml` | Output as YAML config |
@@ -665,22 +669,57 @@ sudo apt install virt-viewer
 virt-viewer --connect qemu:///session <vm-name>
 ```
 
-### Browser Profiles Not
+### Browser Profiles and PyCharm Not Working
 
-If browser profiles or
+If browser profiles or PyCharm configs aren't available, or you get permission errors:
 
-
-```bash
-rm .clonebox.yaml
-clonebox clone . --user --run --replace
-```
+**Root cause:** VM was created with old version without proper mount permissions.
 
-
-
-
-
-
-
+**Solution - Rebuild VM with latest fixes:**
+
+```bash
+# Stop and delete old VM
+clonebox stop . --user
+clonebox delete . --user --yes
+
+# Recreate VM with fixed permissions and app data mounts
+clonebox clone . --user --run --replace
+```
+
+**After rebuild, verify mounts in VM:**
+```bash
+# Check all mounts are accessible
+ls ~/.config/google-chrome   # Chrome profile
+ls ~/.mozilla/firefox        # Firefox profile
+ls ~/.config/JetBrains       # PyCharm settings
+ls ~/Downloads               # Downloads folder
+ls ~/Documents               # Documents folder
+```
+
+**What changed in v0.1.12:**
+- All mounts use `uid=1000,gid=1000` for ubuntu user access
+- Both `paths` and `app_data_paths` are properly mounted
+- No sudo needed to access any shared directories
+
+### Mount Points Empty or Permission Denied
+
+If you get "must be superuser to use mount" error when accessing Downloads/Documents:
+
+**Solution:** VM was created with old mount configuration. Recreate VM:
+
+```bash
+# Stop and delete old VM
+clonebox stop . --user
+clonebox delete . --user --yes
+
+# Recreate with fixed permissions
+clonebox clone . --user --run --replace
+```
+
+**What was fixed:**
+- Mounts now use `uid=1000,gid=1000` so ubuntu user has access
+- No need for sudo to access shared directories
+- Applies to new VMs created after v0.1.12
 
 ### Mount Points Empty After Reboot
 
@@ -698,7 +737,7 @@ If shared directories appear empty after VM restart:
 
 3. **Verify access mode:**
    - VMs created with `accessmode="mapped"` allow any user to access mounts
-   -
+   - Mount options include `uid=1000,gid=1000` for user access
 
 ## Advanced Usage
 
@@ -761,6 +800,160 @@ virsh --connect qemu:///session console clone-clonebox
 # Press Ctrl + ] to exit console
 ```
 
+## Exporting to Proxmox
+
+To use CloneBox VMs in Proxmox, you need to convert the qcow2 disk image to Proxmox format.
+
+### Step 1: Locate VM Disk Image
+
+```bash
+# Find VM disk location
+clonebox list
+
+# Check VM details for disk path
+virsh --connect qemu:///session dominfo clone-clonebox
+
+# Typical locations:
+# User session: ~/.local/share/libvirt/images/<vm-name>/<vm-name>.qcow2
+# System session: /var/lib/libvirt/images/<vm-name>/<vm-name>.qcow2
+```
+
+### Step 2: Export VM with CloneBox
+
+```bash
+# Export VM with all data (from current directory with .clonebox.yaml)
+clonebox export . --user --include-data -o clonebox-vm.tar.gz
+
+# Or export specific VM by name
+clonebox export safetytwin-vm --include-data -o safetytwin.tar.gz
+
+# Extract to get the disk image
+tar -xzf clonebox-vm.tar.gz
+cd clonebox-clonebox
+ls -la  # Should show disk.qcow2, vm.xml, etc.
+```
+
+### Step 3: Convert to Proxmox Format
+
+```bash
+# Install qemu-utils if not installed
+sudo apt install qemu-utils
+
+# Convert qcow2 to raw format (Proxmox preferred)
+qemu-img convert -f qcow2 -O raw disk.qcow2 vm-disk.raw
+
+# Or convert to qcow2 with compression for smaller size
+qemu-img convert -f qcow2 -O qcow2 -c disk.qcow2 vm-disk-compressed.qcow2
+```
+
+### Step 4: Transfer to Proxmox Host
+
+```bash
+# Using scp (replace with your Proxmox host IP)
+scp vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
+
+# Or using rsync for large files
+rsync -avh --progress vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
+```
+
+### Step 5: Create VM in Proxmox
+
+1. **Log into Proxmox Web UI**
+
+2. **Create new VM:**
+   - Click "Create VM"
+   - Enter VM ID and Name
+   - Set OS: "Do not use any media"
+
+3. **Configure Hardware:**
+   - **Hard Disk:**
+     - Delete default disk
+     - Click "Add" → "Hard Disk"
+     - Select your uploaded image file
+     - Set Disk size (can be larger than image)
+     - Set Bus: "VirtIO SCSI"
+     - Set Cache: "Write back" for better performance
+
+4. **CPU & Memory:**
+   - Set CPU cores (match original VM config)
+   - Set Memory (match original VM config)
+
+5. **Network:**
+   - Set Model: "VirtIO (paravirtualized)"
+
+6. **Confirm:** Click "Finish" to create VM
+
+### Step 6: Post-Import Configuration
+
+1. **Start the VM in Proxmox**
+
+2. **Update network configuration:**
+   ```bash
+   # In VM console, update network interfaces
+   sudo nano /etc/netplan/01-netcfg.yaml
+
+   # Example for Proxmox bridge:
+   network:
+     version: 2
+     renderer: networkd
+     ethernets:
+       ens18:  # Proxmox typically uses ens18
+         dhcp4: true
+   ```
+
+3. **Apply network changes:**
+   ```bash
+   sudo netplan apply
+   ```
+
+4. **Update mount points (if needed):**
+   ```bash
+   # Mount points will fail in Proxmox, remove them
+   sudo nano /etc/fstab
+   # Comment out or remove 9p mount entries
+
+   # Reboot to apply changes
+   sudo reboot
+   ```
+
+### Alternative: Direct Import to Proxmox Storage
+
+If you have Proxmox with shared storage:
+
+```bash
+# On Proxmox host
+# Create a temporary directory
+mkdir /tmp/import
+
+# Copy disk directly to Proxmox storage (example for local-lvm)
+scp vm-disk.raw root@proxmox:/tmp/import/
+
+# On Proxmox host, create VM using CLI
+qm create 9000 --name clonebox-vm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0
+
+# Import disk to VM
+qm importdisk 9000 /tmp/import/vm-disk.raw local-lvm
+
+# Attach disk to VM
+qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
+
+# Set boot disk
+qm set 9000 --boot c --bootdisk scsi0
+```
+
+### Troubleshooting
+
+- **VM won't boot:** Check if disk format is compatible (raw is safest)
+- **Network not working:** Update network configuration for Proxmox's NIC naming
+- **Performance issues:** Use VirtIO drivers and set cache to "Write back"
+- **Mount errors:** Remove 9p mount entries from /etc/fstab as they won't work in Proxmox
+
+### Notes
+
+- CloneBox's bind mounts (9p filesystem) are specific to libvirt/QEMU and won't work in Proxmox
+- Browser profiles and app data exported with `--include-data` will be available in the VM disk
+- For shared folders in Proxmox, use Proxmox's shared folders or network shares instead
+
 ## License
 
 MIT License - see [LICENSE](LICENSE) file.
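The convert/create/import/attach sequence above is easy to script on the Proxmox side. A minimal sketch that only assembles the command lines (the VM ID, storage name, and bridge are the illustrative defaults from the examples, not required values):

```python
def proxmox_import_commands(disk="disk.qcow2", vmid=9000, storage="local-lvm",
                            bridge="vmbr0", memory=4096, cores=4):
    # Mirrors Step 3 and the "Direct Import" alternative: convert the
    # qcow2 to raw, create the VM, import and attach the disk, set boot.
    raw = disk.rsplit(".", 1)[0] + ".raw"
    return [
        f"qemu-img convert -f qcow2 -O raw {disk} {raw}",
        f"qm create {vmid} --name clonebox-vm --memory {memory} --cores {cores} "
        f"--net0 virtio,bridge={bridge}",
        f"qm importdisk {vmid} {raw} {storage}",
        f"qm set {vmid} --scsihw virtio-scsi-pci --scsi0 {storage}:vm-{vmid}-disk-0",
        f"qm set {vmid} --boot c --bootdisk scsi0",
    ]
```

Feed the returned list to a shell (or `subprocess.run` per command) on the Proxmox host; nothing here executes by itself.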
--- clonebox-0.1.12/pyproject.toml
+++ clonebox-0.1.13/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "clonebox"
-version = "0.1.12"
+version = "0.1.13"
 description = "Clone your workstation environment to an isolated VM with selective apps, paths and services"
 readme = "README.md"
 license = {text = "Apache-2.0"}
--- clonebox-0.1.12/src/clonebox/cli.py
+++ clonebox-0.1.13/src/clonebox/cli.py
@@ -515,21 +515,47 @@ def cmd_open(args):
 
 def cmd_stop(args):
     """Stop a VM."""
+    name = args.name
+
+    # If name is a path, load config
+    if name and (name.startswith(".") or name.startswith("/") or name.startswith("~")):
+        target_path = Path(name).expanduser().resolve()
+        config_file = target_path / ".clonebox.yaml" if target_path.is_dir() else target_path
+        if config_file.exists():
+            config = load_clonebox_config(config_file)
+            name = config["vm"]["name"]
+        else:
+            console.print(f"[red]❌ Config not found: {config_file}[/]")
+            return
+
     cloner = SelectiveVMCloner(user_session=getattr(args, "user", False))
-    cloner.stop_vm(
+    cloner.stop_vm(name, force=args.force, console=console)
 
 
 def cmd_delete(args):
     """Delete a VM."""
+    name = args.name
+
+    # If name is a path, load config
+    if name and (name.startswith(".") or name.startswith("/") or name.startswith("~")):
+        target_path = Path(name).expanduser().resolve()
+        config_file = target_path / ".clonebox.yaml" if target_path.is_dir() else target_path
+        if config_file.exists():
+            config = load_clonebox_config(config_file)
+            name = config["vm"]["name"]
+        else:
+            console.print(f"[red]❌ Config not found: {config_file}[/]")
+            return
+
     if not args.yes:
         if not questionary.confirm(
-            f"Delete VM '{
+            f"Delete VM '{name}' and its storage?", default=False, style=custom_style
         ).ask():
             console.print("[yellow]Cancelled.[/]")
             return
 
     cloner = SelectiveVMCloner(user_session=getattr(args, "user", False))
-    cloner.delete_vm(
+    cloner.delete_vm(name, delete_storage=not args.keep_storage, console=console)
 
 
 def cmd_list(args):
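The path-vs-name rule this hunk adds to `cmd_stop` and `cmd_delete` can be exercised in isolation. A standalone sketch of the same check (helper names are illustrative, not part of the package API):

```python
from pathlib import Path

def looks_like_path(name: str) -> bool:
    # Same test as in cmd_stop/cmd_delete: ".", "./x", absolute, and
    # home-relative arguments are treated as paths, anything else as a VM name.
    return bool(name) and name.startswith((".", "/", "~"))

def config_candidate(name: str) -> Path:
    # A directory argument points at its .clonebox.yaml; a file argument
    # is used as the config file itself.
    target = Path(name).expanduser().resolve()
    return target / ".clonebox.yaml" if target.is_dir() else target
```

So `clonebox stop .` resolves the VM name from `./.clonebox.yaml`, while `clonebox stop my-vm` still addresses the VM directly.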
@@ -710,28 +736,49 @@ def cmd_export(args):
         else:
             console.print(f"[red]❌ Config not found: {config_file}[/]")
             return
-
-    if not name:
+    elif not name or name == ".":
         config_file = Path.cwd() / ".clonebox.yaml"
         if config_file.exists():
            config = load_clonebox_config(config_file)
            name = config["vm"]["name"]
        else:
-            console.print("[red]❌ No
+            console.print("[red]❌ No .clonebox.yaml found in current directory[/]")
+            console.print("[dim]Usage: clonebox export . or clonebox export <vm-name>[/]")
             return
 
     console.print(f"[bold cyan]📦 Exporting VM: {name}[/]\n")
 
-    #
-
-
-
-
-
-
-
-
-
+    # Get actual disk location from virsh
+    try:
+        result = subprocess.run(
+            ["virsh", "--connect", conn_uri, "domblklist", name, "--details"],
+            capture_output=True, text=True, timeout=10
+        )
+        if result.returncode != 0:
+            console.print(f"[red]❌ VM '{name}' not found[/]")
+            return
+
+        # Parse disk paths from output
+        disk_path = None
+        cloudinit_path = None
+        for line in result.stdout.split('\n'):
+            if 'disk' in line and '.qcow2' in line:
+                parts = line.split()
+                if len(parts) >= 4:
+                    disk_path = Path(parts[3])
+            elif 'cdrom' in line or '.iso' in line:
+                parts = line.split()
+                if len(parts) >= 4:
+                    cloudinit_path = Path(parts[3])
+
+        if not disk_path or not disk_path.exists():
+            console.print(f"[red]❌ VM disk not found[/]")
+            return
+
+        console.print(f"[dim]Disk location: {disk_path}[/]")
+
+    except Exception as e:
+        console.print(f"[red]❌ Error getting VM disk: {e}[/]")
         return
 
     # Create export directory
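The new `domblklist --details` parsing keys on whitespace-separated columns, with the source path in the fourth field. A self-contained sketch of the same parsing against sample output (the sample is illustrative, not captured from a real `virsh` run):

```python
from pathlib import Path

SAMPLE = """\
 Type   Device   Target   Source
--------------------------------------------------------------------
 file   disk     vda      /home/user/.local/share/libvirt/images/vm/vm.qcow2
 file   cdrom    sda      /home/user/.local/share/libvirt/images/vm/cloud-init.iso
"""

def parse_domblklist(output: str):
    # Same logic as the cmd_export change: .qcow2 "disk" rows give the VM
    # disk, cdrom/.iso rows give the cloud-init ISO; header and separator
    # lines match neither condition.
    disk_path = cloudinit_path = None
    for line in output.split("\n"):
        parts = line.split()
        if "disk" in line and ".qcow2" in line and len(parts) >= 4:
            disk_path = Path(parts[3])
        elif ("cdrom" in line or ".iso" in line) and len(parts) >= 4:
            cloudinit_path = Path(parts[3])
    return disk_path, cloudinit_path
```

Column-position parsing like this assumes `virsh`'s table layout stays stable; paths containing spaces would break the fourth-field assumption.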
@@ -771,14 +818,16 @@ def cmd_export(args):
 
     # Copy disk image
     console.print("[cyan]Copying disk image (this may take a while)...[/]")
-
-
-
+    if disk_path and disk_path.exists():
+        shutil.copy2(disk_path, temp_dir / "disk.qcow2")
+        console.print(f"[green]✅ Disk copied: {disk_path.stat().st_size / (1024**3):.2f} GB[/]")
+    else:
+        console.print("[yellow]⚠️ Disk image not found[/]")
 
     # Copy cloud-init ISO
-
-
-
+    if cloudinit_path and cloudinit_path.exists():
+        shutil.copy2(cloudinit_path, temp_dir / "cloud-init.iso")
+        console.print("[green]✅ Cloud-init ISO copied[/]")
 
     # Copy config file
     config_file = Path.cwd() / ".clonebox.yaml"
@@ -1827,7 +1876,7 @@ def main():
 
     # Stop command
     stop_parser = subparsers.add_parser("stop", help="Stop a VM")
-    stop_parser.add_argument("name", help="VM name")
+    stop_parser.add_argument("name", nargs="?", default=None, help="VM name or '.' to use .clonebox.yaml")
     stop_parser.add_argument("--force", "-f", action="store_true", help="Force stop")
     stop_parser.add_argument(
         "-u",
@@ -1839,7 +1888,7 @@ def main():
 
     # Delete command
     delete_parser = subparsers.add_parser("delete", help="Delete a VM")
-    delete_parser.add_argument("name", help="VM name")
+    delete_parser.add_argument("name", nargs="?", default=None, help="VM name or '.' to use .clonebox.yaml")
     delete_parser.add_argument("--yes", "-y", action="store_true", help="Skip confirmation")
     delete_parser.add_argument("--keep-storage", action="store_true", help="Keep disk images")
     delete_parser.add_argument(
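Switching the positional to `nargs="?"` with `default=None` is what lets `clonebox stop` and `clonebox delete` run without an explicit VM name. A minimal reproduction of the changed parser:

```python
import argparse

# Minimal reproduction of the changed "stop" positional: the name is now
# optional and may be "." to mean "use .clonebox.yaml in this directory".
parser = argparse.ArgumentParser(prog="clonebox-stop-demo")
parser.add_argument("name", nargs="?", default=None,
                    help="VM name or '.' to use .clonebox.yaml")
parser.add_argument("--force", "-f", action="store_true", help="Force stop")
```

With `nargs="?"`, zero arguments parse to `name=None` (handled downstream by the config lookup) instead of argparse erroring out.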
--- clonebox-0.1.12/src/clonebox/cloner.py
+++ clonebox-0.1.13/src/clonebox/cloner.py
@@ -666,12 +666,16 @@ fi
         for idx, (host_path, guest_path) in enumerate(config.paths.items()):
             if Path(host_path).exists():
                 tag = f"mount{idx}"
+                # Use uid=1000,gid=1000 to give ubuntu user access to mounts
+                # mmap allows proper file mapping
+                mount_opts = "trans=virtio,version=9p2000.L,mmap,uid=1000,gid=1000"
                 mount_commands.append(f"  - mkdir -p {guest_path}")
+                mount_commands.append(f"  - chown 1000:1000 {guest_path}")
                 mount_commands.append(
-                    f"  - mount -t 9p -o
+                    f"  - mount -t 9p -o {mount_opts} {tag} {guest_path} || true"
                 )
                 # Add fstab entry for persistence after reboot
-                fstab_entries.append(f"{tag} {guest_path} 9p
+                fstab_entries.append(f"{tag} {guest_path} 9p {mount_opts},nofail 0 0")
 
         # User-data
         # Add desktop environment if GUI is enabled
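The fstab line this change writes into cloud-init can be checked in isolation. A sketch of the same string construction (standalone function, mirroring the diff, not the package's actual helper):

```python
def fstab_entry(tag: str, guest_path: str) -> str:
    # Same 9p options the cloud-init change uses: virtio transport,
    # 9p2000.L protocol, mmap, and uid/gid 1000 so the default ubuntu
    # user can access the share; nofail keeps boot going if the host
    # share is absent.
    mount_opts = "trans=virtio,version=9p2000.L,mmap,uid=1000,gid=1000"
    return f"{tag} {guest_path} 9p {mount_opts},nofail 0 0"
```

This is why rebuilt VMs no longer need sudo for `~/Downloads` and friends: the uid/gid options map the 9p share to the guest's uid 1000 at mount time.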
@@ -1,6 +1,6 @@
|
|
|
1
1
|
Metadata-Version: 2.4
|
|
2
2
|
Name: clonebox
|
|
3
|
-
Version: 0.1.
|
|
3
|
+
Version: 0.1.13
|
|
4
4
|
Summary: Clone your workstation environment to an isolated VM with selective apps, paths and services
|
|
5
5
|
Author: CloneBox Team
|
|
6
6
|
License: Apache-2.0
|
|
@@ -235,6 +235,9 @@ clonebox open . --user
|
|
|
235
235
|
|
|
236
236
|
# 6. Stop VM when done
|
|
237
237
|
clonebox stop . --user
|
|
238
|
+
|
|
239
|
+
# 7. Delete VM if needed
|
|
240
|
+
clonebox delete . --user --yes
|
|
238
241
|
```
|
|
239
242
|
|
|
240
243
|
### Development Environment with Browser Profiles
|
|
@@ -578,9 +581,10 @@ clonebox clone . --network auto
|
|
|
578
581
|
| `clonebox start .` | Start VM from `.clonebox.yaml` in current dir |
|
|
579
582
|
| `clonebox start . --viewer` | Start VM and open GUI window |
|
|
580
583
|
| `clonebox start <name>` | Start existing VM by name |
|
|
581
|
-
| `clonebox stop
|
|
582
|
-
| `clonebox stop -f
|
|
583
|
-
| `clonebox delete
|
|
584
|
+
| `clonebox stop .` | Stop VM from `.clonebox.yaml` in current dir |
|
|
585
|
+
| `clonebox stop . -f` | Force stop VM |
|
|
586
|
+
| `clonebox delete .` | Delete VM from `.clonebox.yaml` in current dir |
|
|
587
|
+
| `clonebox delete . --yes` | Delete VM without confirmation |
|
|
584
588
|
| `clonebox list` | List all VMs |
|
|
585
589
|
| `clonebox detect` | Show detected services/apps/paths |
|
|
586
590
|
| `clonebox detect --yaml` | Output as YAML config |
|
|
@@ -665,22 +669,57 @@ sudo apt install virt-viewer
 virt-viewer --connect qemu:///session <vm-name>
 ```
 
-### Browser Profiles Not
+### Browser Profiles and PyCharm Not Working
 
-If browser profiles or
+If browser profiles or PyCharm configs aren't available, or you get permission errors:
 
-
-```bash
-rm .clonebox.yaml
-clonebox clone . --user --run --replace
-```
+**Root cause:** The VM was created with an old version without proper mount permissions.
 
-
-
-
-
-
-
+**Solution - Rebuild VM with latest fixes:**
+
+```bash
+# Stop and delete old VM
+clonebox stop . --user
+clonebox delete . --user --yes
+
+# Recreate VM with fixed permissions and app data mounts
+clonebox clone . --user --run --replace
+```
+
+**After rebuild, verify mounts in VM:**
+```bash
+# Check all mounts are accessible
+ls ~/.config/google-chrome   # Chrome profile
+ls ~/.mozilla/firefox        # Firefox profile
+ls ~/.config/JetBrains       # PyCharm settings
+ls ~/Downloads               # Downloads folder
+ls ~/Documents               # Documents folder
+```
+
+**What changed in v0.1.12:**
+- All mounts use `uid=1000,gid=1000` for ubuntu user access
+- Both `paths` and `app_data_paths` are properly mounted
+- No sudo needed to access any shared directories
+
+### Mount Points Empty or Permission Denied
+
+If you get a "must be superuser to use mount" error when accessing Downloads/Documents:
+
+**Solution:** The VM was created with an old mount configuration. Recreate the VM:
+
+```bash
+# Stop and delete old VM
+clonebox stop . --user
+clonebox delete . --user --yes
+
+# Recreate with fixed permissions
+clonebox clone . --user --run --replace
+```
+
+**What was fixed:**
+- Mounts now use `uid=1000,gid=1000` so the ubuntu user has access
+- No need for sudo to access shared directories
+- Applies to new VMs created after v0.1.12
 
 ### Mount Points Empty After Reboot
 
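The `uid=1000,gid=1000` mount options added above can be sanity-checked from inside the VM by scanning `/proc/mounts`. A minimal sketch (a hypothetical helper, not part of CloneBox; it only follows the standard whitespace-separated `/proc/mounts` field layout, where field 3 is the filesystem type and field 4 the option list):

```python
def has_user_access(mount_line: str, uid: int = 1000, gid: int = 1000) -> bool:
    """Check whether a /proc/mounts line is a 9p mount carrying uid=/gid= options."""
    fields = mount_line.split()
    if len(fields) < 4 or fields[2] != "9p":
        return False
    opts = fields[3].split(",")
    return f"uid={uid}" in opts and f"gid={gid}" in opts

# Illustrative /proc/mounts entries (share tags and paths are examples)
fixed = "share0 /home/ubuntu/Downloads 9p rw,trans=virtio,uid=1000,gid=1000 0 0"
stale = "share1 /home/ubuntu/Documents 9p rw,trans=virtio 0 0"
print(has_user_access(fixed))  # True
print(has_user_access(stale))  # False
```

Running it over `open("/proc/mounts")` inside the VM flags any 9p share that still lacks the user-access options and therefore needs the rebuild described above.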
@@ -698,7 +737,7 @@ If shared directories appear empty after VM restart:
 
 3. **Verify access mode:**
    - VMs created with `accessmode="mapped"` allow any user to access mounts
-   -
+   - Mount options include `uid=1000,gid=1000` for user access
 
 ## Advanced Usage
 
@@ -761,6 +800,160 @@ virsh --connect qemu:///session console clone-clonebox
 # Press Ctrl + ] to exit console
 ```
 
+## Exporting to Proxmox
+
+To use CloneBox VMs in Proxmox, you need to convert the qcow2 disk image to Proxmox format.
+
+### Step 1: Locate VM Disk Image
+
+```bash
+# Find VM disk location
+clonebox list
+
+# Check VM details for disk path
+virsh --connect qemu:///session dominfo clone-clonebox
+
+# Typical locations:
+# User session: ~/.local/share/libvirt/images/<vm-name>/<vm-name>.qcow2
+# System session: /var/lib/libvirt/images/<vm-name>/<vm-name>.qcow2
+```
+
+### Step 2: Export VM with CloneBox
+
+```bash
+# Export VM with all data (from current directory with .clonebox.yaml)
+clonebox export . --user --include-data -o clonebox-vm.tar.gz
+
+# Or export specific VM by name
+clonebox export safetytwin-vm --include-data -o safetytwin.tar.gz
+
+# Extract to get the disk image
+tar -xzf clonebox-vm.tar.gz
+cd clonebox-clonebox
+ls -la  # Should show disk.qcow2, vm.xml, etc.
+```
+
+### Step 3: Convert to Proxmox Format
+
+```bash
+# Install qemu-utils if not installed
+sudo apt install qemu-utils
+
+# Convert qcow2 to raw format (Proxmox preferred)
+qemu-img convert -f qcow2 -O raw disk.qcow2 vm-disk.raw
+
+# Or convert to qcow2 with compression for smaller size
+qemu-img convert -f qcow2 -O qcow2 -c disk.qcow2 vm-disk-compressed.qcow2
+```
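If you script the conversion step, the two `qemu-img` invocations shown above can be assembled programmatically. A sketch (a hypothetical helper, not part of CloneBox; it uses only the flags from the manual commands):

```python
def qemu_img_convert_cmd(src: str, dst: str, out_fmt: str = "raw",
                         compress: bool = False) -> list:
    """Build a qemu-img convert argv mirroring the manual commands."""
    cmd = ["qemu-img", "convert", "-f", "qcow2", "-O", out_fmt]
    if compress and out_fmt == "qcow2":
        cmd.append("-c")  # -c (compression) only applies to qcow2 output
    cmd += [src, dst]
    return cmd

print(" ".join(qemu_img_convert_cmd("disk.qcow2", "vm-disk.raw")))
# qemu-img convert -f qcow2 -O raw disk.qcow2 vm-disk.raw
print(" ".join(qemu_img_convert_cmd("disk.qcow2", "vm-disk-compressed.qcow2",
                                    out_fmt="qcow2", compress=True)))
# qemu-img convert -f qcow2 -O qcow2 -c disk.qcow2 vm-disk-compressed.qcow2
```

With `qemu-utils` installed, execute the resulting list via `subprocess.run(cmd, check=True)`.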
+
+### Step 4: Transfer to Proxmox Host
+
+```bash
+# Using scp (replace with your Proxmox host IP)
+scp vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
+
+# Or using rsync for large files
+rsync -avh --progress vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
+```
+
+### Step 5: Create VM in Proxmox
+
+1. **Log into Proxmox Web UI**
+
+2. **Create new VM:**
+   - Click "Create VM"
+   - Enter VM ID and Name
+   - Set OS: "Do not use any media"
+
+3. **Configure Hardware:**
+   - **Hard Disk:**
+     - Delete default disk
+     - Click "Add" → "Hard Disk"
+     - Select your uploaded image file
+     - Set Disk size (can be larger than image)
+     - Set Bus: "VirtIO SCSI"
+     - Set Cache: "Write back" for better performance
+
+4. **CPU & Memory:**
+   - Set CPU cores (match original VM config)
+   - Set Memory (match original VM config)
+
+5. **Network:**
+   - Set Model: "VirtIO (paravirtualized)"
+
+6. **Confirm:** Click "Finish" to create VM
+
+### Step 6: Post-Import Configuration
+
+1. **Start the VM in Proxmox**
+
+2. **Update network configuration:**
+   ```bash
+   # In VM console, update network interfaces
+   sudo nano /etc/netplan/01-netcfg.yaml
+
+   # Example for Proxmox bridge:
+   network:
+     version: 2
+     renderer: networkd
+     ethernets:
+       ens18:  # Proxmox typically uses ens18
+         dhcp4: true
+   ```
+
+3. **Apply network changes:**
+   ```bash
+   sudo netplan apply
+   ```
+
+4. **Update mount points (if needed):**
+   ```bash
+   # Mount points will fail in Proxmox, remove them
+   sudo nano /etc/fstab
+   # Comment out or remove 9p mount entries
+
+   # Reboot to apply changes
+   sudo reboot
+   ```
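Commenting out the 9p entries in `/etc/fstab` can be scripted instead of edited by hand. A sketch (a hypothetical helper, not part of CloneBox; it assumes the usual whitespace-separated fstab layout where the third field is the filesystem type):

```python
def comment_out_9p(fstab_text: str) -> str:
    """Prefix '#' to fstab lines whose filesystem type (3rd field) is 9p."""
    out = []
    for line in fstab_text.splitlines():
        fields = line.split()
        is_9p = (len(fields) >= 3 and fields[2] == "9p"
                 and not line.lstrip().startswith("#"))
        out.append("# " + line if is_9p else line)
    return "\n".join(out)

# Illustrative fstab content (share tag and paths are examples)
sample = ("UUID=abcd-1234 / ext4 defaults 0 1\n"
          "share0 /home/ubuntu/Downloads 9p trans=virtio,version=9p2000.L 0 0")
print(comment_out_9p(sample))
```

Write the result back to `/etc/fstab` carefully (keep a backup first), then reboot as in the manual steps.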
+
+### Alternative: Direct Import to Proxmox Storage
+
+If you have Proxmox with shared storage:
+
+```bash
+# On Proxmox host
+# Create a temporary directory
+mkdir /tmp/import
+
+# Copy disk directly to Proxmox storage (example for local-lvm)
+scp vm-disk.raw root@proxmox:/tmp/import/
+
+# On Proxmox host, create VM using CLI
+qm create 9000 --name clonebox-vm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0
+
+# Import disk to VM
+qm importdisk 9000 /tmp/import/vm-disk.raw local-lvm
+
+# Attach disk to VM
+qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
+
+# Set boot disk
+qm set 9000 --boot c --bootdisk scsi0
+```
+
+### Troubleshooting
+
+- **VM won't boot:** Check if disk format is compatible (raw is safest)
+- **Network not working:** Update network configuration for Proxmox's NIC naming
+- **Performance issues:** Use VirtIO drivers and set cache to "Write back"
+- **Mount errors:** Remove 9p mount entries from /etc/fstab as they won't work in Proxmox
+
+### Notes
+
+- CloneBox's bind mounts (9p filesystem) are specific to libvirt/QEMU and won't work in Proxmox
+- Browser profiles and app data exported with `--include-data` will be available in the VM disk
+- For shared folders in Proxmox, use Proxmox's shared folders or network shares instead
+
 ## License
 
 MIT License - see [LICENSE](LICENSE) file.