vagrant-libvirt 0.0.42 → 0.2.1

Files changed (60)
  1. checksums.yaml +5 -5
  2. data/README.md +393 -147
  3. data/lib/vagrant-libvirt/action.rb +3 -2
  4. data/lib/vagrant-libvirt/action/create_domain.rb +87 -37
  5. data/lib/vagrant-libvirt/action/create_domain_volume.rb +19 -14
  6. data/lib/vagrant-libvirt/action/create_network_interfaces.rb +9 -5
  7. data/lib/vagrant-libvirt/action/create_networks.rb +7 -2
  8. data/lib/vagrant-libvirt/action/destroy_domain.rb +1 -1
  9. data/lib/vagrant-libvirt/action/destroy_networks.rb +5 -0
  10. data/lib/vagrant-libvirt/action/forward_ports.rb +10 -8
  11. data/lib/vagrant-libvirt/action/halt_domain.rb +1 -1
  12. data/lib/vagrant-libvirt/action/handle_box_image.rb +26 -15
  13. data/lib/vagrant-libvirt/action/handle_storage_pool.rb +9 -4
  14. data/lib/vagrant-libvirt/action/package_domain.rb +58 -12
  15. data/lib/vagrant-libvirt/action/prepare_nfs_settings.rb +3 -9
  16. data/lib/vagrant-libvirt/action/prune_nfs_exports.rb +19 -9
  17. data/lib/vagrant-libvirt/action/remove_libvirt_image.rb +2 -2
  18. data/lib/vagrant-libvirt/action/remove_stale_volume.rb +17 -11
  19. data/lib/vagrant-libvirt/action/set_boot_order.rb +2 -2
  20. data/lib/vagrant-libvirt/action/set_name_of_domain.rb +6 -9
  21. data/lib/vagrant-libvirt/action/start_domain.rb +2 -2
  22. data/lib/vagrant-libvirt/action/wait_till_up.rb +31 -16
  23. data/lib/vagrant-libvirt/cap/public_address.rb +16 -0
  24. data/lib/vagrant-libvirt/cap/synced_folder.rb +3 -3
  25. data/lib/vagrant-libvirt/config.rb +177 -29
  26. data/lib/vagrant-libvirt/driver.rb +31 -2
  27. data/lib/vagrant-libvirt/errors.rb +5 -1
  28. data/lib/vagrant-libvirt/plugin.rb +7 -2
  29. data/lib/vagrant-libvirt/templates/default_storage_pool.xml.erb +3 -3
  30. data/lib/vagrant-libvirt/templates/domain.xml.erb +48 -8
  31. data/lib/vagrant-libvirt/util.rb +1 -0
  32. data/lib/vagrant-libvirt/util/erb_template.rb +6 -7
  33. data/lib/vagrant-libvirt/util/network_util.rb +33 -13
  34. data/lib/vagrant-libvirt/util/nfs.rb +17 -0
  35. data/lib/vagrant-libvirt/util/storage_util.rb +27 -0
  36. data/lib/vagrant-libvirt/version.rb +1 -1
  37. data/locales/en.yml +8 -4
  38. data/spec/support/environment_helper.rb +1 -1
  39. data/spec/support/libvirt_context.rb +1 -1
  40. data/spec/support/sharedcontext.rb +2 -2
  41. data/spec/unit/action/destroy_domain_spec.rb +2 -2
  42. data/spec/unit/action/set_name_of_domain_spec.rb +3 -3
  43. data/spec/unit/config_spec.rb +173 -0
  44. data/spec/unit/templates/domain_all_settings.xml +20 -4
  45. data/spec/unit/templates/domain_custom_cpu_model.xml +48 -0
  46. data/spec/unit/templates/domain_defaults.xml +2 -0
  47. data/spec/unit/templates/domain_spec.rb +26 -2
  48. metadata +24 -32
  49. data/.coveralls.yml +0 -1
  50. data/.github/issue_template.md +0 -37
  51. data/.gitignore +0 -21
  52. data/.travis.yml +0 -24
  53. data/Gemfile +0 -26
  54. data/Rakefile +0 -8
  55. data/example_box/README.md +0 -29
  56. data/example_box/Vagrantfile +0 -60
  57. data/example_box/metadata.json +0 -5
  58. data/tools/create_box.sh +0 -130
  59. data/tools/prepare_redhat_for_box.sh +0 -119
  60. data/vagrant-libvirt.gemspec +0 -54
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
- metadata.gz: d032026c991c626b3b48677af7e977cb9ed3092c
- data.tar.gz: 36104f3a0e77326d19665b97ec4854443959914d
+ SHA256:
+ metadata.gz: accf8b1ee506fd707b7e81a973e150e7663a7c2e4e40f21f63444faf3b0f88fc
+ data.tar.gz: 73b24cb64924caabce7813433f066cd8f74f39eaf28291a17e27c5ee112b0d80
  SHA512:
- metadata.gz: b7fdf0aa740282235fe782d72974149b6ebede496e6e8e48554c1625a47f1767e3befd674d3a115b9de2c6a9d0c1bb045e922b767ba7d9fa24176cc96a2fc63f
- data.tar.gz: e0477096ac9324861676ea69cc83ed4ce17146a9f1644fbfeb11facd69a1af83a3a09454334922a58b8d8bb8daa465eb743060655f84a34655f69bdd21873b68
+ metadata.gz: 4e9d4668fbc56a6836d88a86ee069c808ec9d3cd1b2d1afda127ffd21787a2b83cce29227d7a42f79e64a8201a3ed11dec13a01fa16d42223c2dae20981107ac
+ data.tar.gz: e3c2f7a56a0c909d851e989554092630715e9b35095131d3945ea0406f1b56cade9b5645405babd43d834a5a12cacfb853b06a7631cb8b1e68f8220cf3826a9f
data/README.md CHANGED
@@ -4,61 +4,69 @@
4
4
  [![Build Status](https://travis-ci.org/vagrant-libvirt/vagrant-libvirt.svg)](https://travis-ci.org/vagrant-libvirt/vagrant-libvirt)
5
5
  [![Coverage Status](https://coveralls.io/repos/github/vagrant-libvirt/vagrant-libvirt/badge.svg?branch=master)](https://coveralls.io/github/vagrant-libvirt/vagrant-libvirt?branch=master)
6
6
 
7
- This is a [Vagrant](http://www.vagrantup.com) plugin that adds an
7
+ This is a [Vagrant](http://www.vagrantup.com) plugin that adds a
8
8
  [Libvirt](http://libvirt.org) provider to Vagrant, allowing Vagrant to
9
9
  control and provision machines via Libvirt toolkit.
10
10
 
11
11
  **Note:** Actual version is still a development one. Feedback is welcome and
12
12
  can help a lot :-)
13
13
 
14
- ## QA status
15
-
16
- We periodically test basic functionality for vagrant-libvirt on various distributions.
17
- In the table below, build passing means that specific version combination of Vagrant + Vagrant-libvirt was installed correctly and `vagrant up` is working. Click the badge to see the log.
18
-
19
- |Vagrant|Vagrant-libvirt|ubuntu-12.04|ubuntu-14.04|ubuntu-16.04|debian-8|debian-9|centos-6|centos-7|fedora-21|fedora-22|fedora-23|fedora-24|arch|
20
- |---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21
- |2.0.1|master|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=ubuntu-12.04/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=ubuntu-12.04/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=ubuntu-14.04/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=ubuntu-14.04/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=ubuntu-16.04/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=ubuntu-16.04/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=debian-8/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=debian-8/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=debian-9/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=debian-9/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=centos-6/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=centos-6/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=centos-7/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=centos-7/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-21/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-21/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-22/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-22/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-23/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-23/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-24/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=fedora-24/)|[![Build Status](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=arch/badge/icon)](https://jenkins.infernix.net/job/vagrant-libvirt-qa/qa_vagrant_libvirt_version=master,qa_vagrant_version=2.0.1,distro=arch/)|
22
-
23
14
  ## Index
24
15
 
25
-
26
- - [Features](#features)
27
- - [Future work](#future-work)
28
- - [Installation](#installation)
29
- - [Possible problems with plugin installation on Linux](#possible-problems-with-plugin-installation-on-linux)
30
- - [Vagrant Project Preparation](#vagrant-project-preparation)
31
- - [Add Box](#add-box)
32
- - [Create Vagrantfile](#create-vagrantfile)
33
- - [Start VM](#start-vm)
34
- - [How Project Is Created](#how-project-is-created)
35
- - [Libvirt Configuration](#libvirt-configuration)
36
- - [Provider Options](#provider-options)
37
- - [Domain Specific Options](#domain-specific-options)
38
- - [Reload behavior](#reload-behavior)
39
- - [Networks](#networks)
40
- - [Private Network Options](#private-network-options)
41
- - [Public Network Options](#public-network-options)
42
- - [Management Network](#management-network)
43
- - [Additional Disks](#additional-disks)
44
- - [Reload behavior](#reload-behavior-1)
45
- - [CDROMs](#cdroms)
46
- - [Input](#input)
47
- - [PCI device passthrough](#pci-device-passthrough)
48
- - [USB Redirector Devices](#usb-redirector-devices)
49
- - [Random number generator passthrough](#random-number-generator-passthrough)
50
- - [Watchdog·Device](#watchdog-device)
51
- - [Smartcard device](#smartcard-device)
52
- - [CPU Features](#cpu-features)
53
- - [No box and PXE boot](#no-box-and-pxe-boot)
54
- - [SSH Access To VM](#ssh-access-to-vm)
55
- - [Forwarded Ports](#forwarded-ports)
56
- - [Synced Folders](#synced-folders)
57
- - [Customized Graphics](#customized-graphics)
58
- - [Box Format](#box-format)
59
- - [Create Box](#create-box)
60
- - [Development](#development)
61
- - [Contributing](#contributing)
16
+ <!-- note in vim set "let g:vmt_list_item_char='-'" to generate the correct output -->
17
+ <!-- vim-markdown-toc GFM -->
18
+
19
+ * [Features](#features)
20
+ * [Future work](#future-work)
21
+ * [Using Docker based Installation](#using-docker-based-installation)
22
+ * [Installation](#installation)
23
+ * [Possible problems with plugin installation on Linux](#possible-problems-with-plugin-installation-on-linux)
24
+ * [Vagrant Project Preparation](#vagrant-project-preparation)
25
+ * [Add Box](#add-box)
26
+ * [Create Vagrantfile](#create-vagrantfile)
27
+ * [Start VM](#start-vm)
28
+ * [How Project Is Created](#how-project-is-created)
29
+ * [Libvirt Configuration](#libvirt-configuration)
30
+ * [Provider Options](#provider-options)
31
+ * [Domain Specific Options](#domain-specific-options)
32
+ * [Reload behavior](#reload-behavior)
33
+ * [Networks](#networks)
34
+ * [Private Network Options](#private-network-options)
35
+ * [Public Network Options](#public-network-options)
36
+ * [Management Network](#management-network)
37
+ * [Additional Disks](#additional-disks)
38
+ * [Reload behavior](#reload-behavior-1)
39
+ * [CDROMs](#cdroms)
40
+ * [Input](#input)
41
+ * [PCI device passthrough](#pci-device-passthrough)
42
+ * [Using USB Devices](#using-usb-devices)
43
+ * [USB Controller Configuration](#usb-controller-configuration)
44
+ * [USB Device Passthrough](#usb-device-passthrough)
45
+ * [USB Redirector Devices](#usb-redirector-devices)
46
+ * [Filter for USB Redirector Devices](#filter-for-usb-redirector-devices)
47
+ * [Random number generator passthrough](#random-number-generator-passthrough)
48
+ * [Watchdog device](#watchdog-device)
49
+ * [Smartcard device](#smartcard-device)
50
+ * [Hypervisor Features](#hypervisor-features)
51
+ * [CPU features](#cpu-features)
52
+ * [Memory Backing](#memory-backing)
53
+ * [No box and PXE boot](#no-box-and-pxe-boot)
54
+ * [SSH Access To VM](#ssh-access-to-vm)
55
+ * [Forwarded Ports](#forwarded-ports)
56
+ * [Synced Folders](#synced-folders)
57
+ * [QEMU Session Support](#qemu-session-support)
58
+ * [Customized Graphics](#customized-graphics)
59
+ * [TPM Devices](#tpm-devices)
60
+ * [Libvirt communication channels](#libvirt-communication-channels)
61
+ * [Custom command line arguments and environment variables](#custom-command-line-arguments-and-environment-variables)
62
+ * [Box Format](#box-format)
63
+ * [Create Box](#create-box)
64
+ * [Package Box from VM](#package-box-from-vm)
65
+ * [Troubleshooting VMs](#troubleshooting-vms)
66
+ * [Development](#development)
67
+ * [Contributing](#contributing)
68
+
69
+ <!-- vim-markdown-toc -->
62
70
 
63
71
  ## Features
64
72
 
@@ -85,29 +93,74 @@ In the table below, build passing means that specific version combination of Vag
85
93
  * Take a look at [open
86
94
  issues](https://github.com/vagrant-libvirt/vagrant-libvirt/issues?state=open).
87
95
 
96
+ ## Using Docker based Installation
97
+
98
+ Due to the number of issues encountered around compatibility between the ruby runtime environment
99
+ that is part of the upstream vagrant installation and the library dependencies of libvirt that
100
+ this project requires to communicate with libvirt, there is a docker image build and published.
101
+
102
+ This should allow users to execute vagrant with vagrant-libvirt without needing to deal with
103
+ the compatibility issues, though you may need to extend the image for your own needs should
104
+ you make use of additional plugins.
105
+
106
+ To get the image:
107
+ ```bash
108
+ docker pull vagrantlibvirt/vagrant-libvirt:latest
109
+ ```
110
+
111
+ Running the image:
112
+ ```bash
113
+ docker run -it -rm \
114
+ -e LIBVIRT_DEFAULT_URI \
115
+ -v /var/run/libvirt/:/var/run/libvirt/ \
116
+ -v ~/.vagrant.d:/.vagrant.d \
117
+ -v $(pwd):$(pwd) \
118
+ -w $(pwd) \
119
+ vagrantlibvirt/vagrant-libvirt:latest \
120
+ vagrant status
121
+ ```
122
+
123
+ Note that if you are connecting to a remote system libvirt, you may omit the
124
+ `-v /var/run/libvirt/:/var/run/libvirt/` mount bind. Some distributions patch the local
125
+ vagrant environment to ensure vagrant-libvirt uses `qemu:///session`, which means you
126
+ may need to set the environment variable `LIBVIRT_DEFAULT_URI` to the same value if
127
+ looking to use this in place of your distribution provided installation.
128
+
88
129
  ## Installation
89
130
 
90
- First, you should have both qemu and libvirt installed if you plan to run VMs
91
- on your local system. For instructions, refer to your linux distribution's
131
+ First, you should have both QEMU and Libvirt installed if you plan to run VMs
132
+ on your local system. For instructions, refer to your Linux distribution's
92
133
  documentation.
93
134
 
94
- **NOTE:** Before you start using Vagrant-libvirt, please make sure your libvirt
95
- and qemu installation is working correctly and you are able to create qemu or
96
- kvm type virtual machines with `virsh` or `virt-manager`.
135
+ **NOTE:** Before you start using vagrant-libvirt, please make sure your Libvirt
136
+ and QEMU installation is working correctly and you are able to create QEMU or
137
+ KVM type virtual machines with `virsh` or `virt-manager`.
97
138
 
98
139
  Next, you must have [Vagrant
99
140
  installed](http://docs.vagrantup.com/v2/installation/index.html).
100
- Vagrant-libvirt supports Vagrant 1.5, 1.6, 1.7 and 1.8.
101
- *We only test with the upstream version!* If you decide to install your distros
141
+ Vagrant-libvirt supports Vagrant 2.0, 2.1 & 2.2. It should also work with earlier
142
+ releases from 1.5 onwards but they are not actively tested.
143
+
144
+ Check the [.travis.yml](https://github.com/vagrant-libvirt/vagrant-libvirt/blob/master/.travis.yml)
145
+ for the current list of tested versions.
146
+
147
+ *We only test with the upstream version!* If you decide to install your distro's
102
148
  version and you run into problems, as a first step you should switch to upstream.
103
149
 
104
150
  Now you need to make sure your have all the build dependencies installed for
105
151
  vagrant-libvirt. This depends on your distro. An overview:
106
152
 
107
- * Ubuntu 12.04/14.04/16.04, Debian:
153
+ * Ubuntu 18.10, Debian 9 and up:
154
+ ```shell
155
+ apt-get build-dep vagrant ruby-libvirt
156
+ apt-get install qemu libvirt-daemon-system libvirt-clients ebtables dnsmasq-base
157
+ apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
158
+ ```
159
+
160
+ * Ubuntu 18.04, Debian 8 and older:
108
161
  ```shell
109
162
  apt-get build-dep vagrant ruby-libvirt
110
- apt-get install qemu libvirt-bin ebtables dnsmasq
163
+ apt-get install qemu libvirt-bin ebtables dnsmasq-base
111
164
  apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
112
165
  ```
113
166
 
@@ -120,10 +173,15 @@ yum install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm
120
173
 
121
174
  * Fedora 22 and up:
122
175
  ```shell
123
- dnf -y install qemu libvirt libvirt-devel ruby-devel gcc
176
+ dnf install -y gcc libvirt libvirt-devel libxml2-devel make ruby-devel
177
+ ```
178
+
179
+ * OpenSUSE leap 15.1:
180
+ ```shell
181
+ zypper install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm
124
182
  ```
125
183
 
126
- * Arch linux: please read the related [ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt) page.
184
+ * Arch Linux: please read the related [ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt) page.
127
185
  ```shell
128
186
  pacman -S vagrant
129
187
  ```
@@ -131,8 +189,16 @@ pacman -S vagrant
131
189
  Now you're ready to install vagrant-libvirt using standard [Vagrant
132
190
  plugin](http://docs.vagrantup.com/v2/plugins/usage.html) installation methods.
133
191
 
192
+ For some distributions you will need to specify `CONFIGURE_ARGS` variable before
193
+ running `vagrant plugin install`:
194
+
195
+ * Fedora 32 + upstream Vagrant:
196
+ ```shell
197
+ export CONFIGURE_ARGS="with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib64"
198
+ ```
199
+
134
200
  ```shell
135
- $ vagrant plugin install vagrant-libvirt
201
+ vagrant plugin install vagrant-libvirt
136
202
  ```
137
203
 
138
204
  ### Possible problems with plugin installation on Linux
@@ -151,7 +217,7 @@ $ sudo dnf install libxslt-devel libxml2-devel libvirt-devel \
151
217
  libguestfs-tools-c ruby-devel gcc
152
218
  ```
153
219
 
154
- On Arch linux it is recommended to follow [steps from ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt).
220
+ On Arch Linux it is recommended to follow [steps from ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt).
155
221
 
156
222
  If have problem with installation - check your linker. It should be `ld.gold`:
157
223
 
@@ -173,12 +239,12 @@ CONFIGURE_ARGS='with-ldflags=-L/opt/vagrant/embedded/lib with-libvirt-include=/u
173
239
  After installing the plugin (instructions above), the quickest way to get
174
240
  started is to add Libvirt box and specify all the details manually within a
175
241
  `config.vm.provider` block. So first, add Libvirt box using any name you want.
176
- You can find more libvirt ready boxes at
177
- [Atlas](https://atlas.hashicorp.com/boxes/search?provider=libvirt). For
242
+ You can find more Libvirt-ready boxes at
243
+ [Vagrant Cloud](https://app.vagrantup.com/boxes/search?provider=libvirt). For
178
244
  example:
179
245
 
180
246
  ```shell
181
- vagrant init fedora/24-cloud-base
247
+ vagrant init fedora/32-cloud-base
182
248
  ```
183
249
 
184
250
  ### Create Vagrantfile
@@ -189,7 +255,7 @@ information where necessary. For example:
189
255
  ```ruby
190
256
  Vagrant.configure("2") do |config|
191
257
  config.vm.define :test_vm do |test_vm|
192
- test_vm.vm.box = "fedora/24-cloud-base"
258
+ test_vm.vm.box = "fedora/32-cloud-base"
193
259
  end
194
260
  end
195
261
  ```
@@ -214,7 +280,7 @@ export VAGRANT_DEFAULT_PROVIDER=libvirt
214
280
 
215
281
  Vagrant goes through steps below when creating new project:
216
282
 
217
- 1. Connect to Libvirt localy or remotely via SSH.
283
+ 1. Connect to Libvirt locally or remotely via SSH.
218
284
  2. Check if box image is available in Libvirt storage pool. If not, upload it
219
285
  to remote Libvirt storage pool as new volume.
220
286
  3. Create COW diff image of base box image for new Libvirt domain.
@@ -231,29 +297,32 @@ Vagrant goes through steps below when creating new project:
231
297
  Although it should work without any configuration for most people, this
232
298
  provider exposes quite a few provider-specific configuration options. The
233
299
  following options allow you to configure how vagrant-libvirt connects to
234
- libvirt, and are used to generate the [libvirt connection
300
+ Libvirt, and are used to generate the [Libvirt connection
235
301
  URI](http://libvirt.org/uri.html):
236
302
 
237
- * `driver` - A hypervisor name to access. For now only kvm and qemu are
303
+ * `driver` - A hypervisor name to access. For now only KVM and QEMU are
238
304
  supported
239
- * `host` - The name of the server, where libvirtd is running
305
+ * `host` - The name of the server, where Libvirtd is running
240
306
  * `connect_via_ssh` - If use ssh tunnel to connect to Libvirt. Absolutely
241
- needed to access libvirt on remote host. It will not be able to get the IP
307
+ needed to access Libvirt on remote host. It will not be able to get the IP
242
308
  address of a started VM otherwise.
243
309
  * `username` - Username and password to access Libvirt
244
310
  * `password` - Password to access Libvirt
245
311
  * `id_ssh_key_file` - If not nil, uses this ssh private key to access Libvirt.
246
312
  Default is `$HOME/.ssh/id_rsa`. Prepends `$HOME/.ssh/` if no directory
247
- * `socket` - Path to the libvirt unix socket (e.g.
313
+ * `socket` - Path to the Libvirt unix socket (e.g.
248
314
  `/var/run/libvirt/libvirt-sock`)
249
- * `uri` - For advanced usage. Directly specifies what libvirt connection URI
315
+ * `uri` - For advanced usage. Directly specifies what Libvirt connection URI
250
316
  vagrant-libvirt should use. Overrides all other connection configuration
251
317
  options
252
318
 
253
319
  Connection-independent options:
254
320
 
255
321
  * `storage_pool_name` - Libvirt storage pool name, where box image and instance
256
- snapshots will be stored.
322
+ snapshots (if `snapshot_pool_name` is not set) will be stored.
323
+ * `snapshot_pool_name` - Libvirt storage pool name. If set, the created
324
+ snapshot of the instance will be stored at this location instead of
325
+ `storage_pool_name`.
257
326
 
258
327
  For example:
259
328
 
@@ -267,8 +336,10 @@ end
267
336
 
268
337
  ### Domain Specific Options
269
338
 
339
+ * `title` - A short description of the domain.
340
+ * `description` - A human readable description of the virtual machine.
270
341
  * `disk_bus` - The type of disk device to emulate. Defaults to virtio if not
271
- set. Possible values are documented in libvirt's [description for
342
+ set. Possible values are documented in Libvirt's [description for
272
343
  _target_](http://libvirt.org/formatdomain.html#elementsDisks). NOTE: this
273
344
  option applies only to disks associated with a box image. To set the bus type
274
345
  on additional disks, see the [Additional Disks](#additional-disks) section.
@@ -279,22 +350,26 @@ end
279
350
  * `nic_model_type` - parameter specifies the model of the network adapter when
280
351
  you create a domain value by default virtio KVM believe possible values, see
281
352
  the [documentation for
282
- libvirt](https://libvirt.org/formatdomain.html#elementsNICSModel).
353
+ Libvirt](https://libvirt.org/formatdomain.html#elementsNICSModel).
354
+ * `shares` - Proportional weighted share for the domain relative to others. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsCPUTuning).
283
355
  * `memory` - Amount of memory in MBytes. Defaults to 512 if not set.
284
356
  * `cpus` - Number of virtual cpus. Defaults to 1 if not set.
357
+ * `cpuset` - Physical cpus to which the vcpus can be pinned. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsCPUAllocation).
285
358
  * `cputopology` - Number of CPU sockets, cores and threads running per core. All fields of `:sockets`, `:cores` and `:threads` are mandatory, `cpus` domain option must be present and must be equal to total count of **sockets * cores * threads**. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsCPU).
359
+ * `nodeset` - Physical NUMA nodes where virtual memory can be pinned. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsNUMATuning).
286
360
 
287
361
  ```ruby
288
362
  Vagrant.configure("2") do |config|
289
363
  config.vm.provider :libvirt do |libvirt|
290
364
  libvirt.cpus = 4
365
+ libvirt.cpuset = '1-4,^3,6'
291
366
  libvirt.cputopology :sockets => '2', :cores => '2', :threads => '1'
292
367
  end
293
368
  end
294
369
  ```
295
370
 
296
371
  * `nested` - [Enable nested
297
- virtualization](https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt).
372
+ virtualization](https://docs.fedoraproject.org/en-US/quick-docs/using-nested-virtualization-in-kvm/).
298
373
  Default is false.
299
374
  * `cpu_mode` - [CPU emulation
300
375
  mode](https://libvirt.org/formatdomain.html#elementsCPU). Defaults to
@@ -303,7 +378,7 @@ end
303
378
  * `cpu_model` - CPU Model. Defaults to 'qemu64' if not set and `cpu_mode` is
304
379
  `custom` and to '' otherwise. This can really only be used when setting
305
380
  `cpu_mode` to `custom`.
306
- * `cpu_fallback` - Whether to allow libvirt to fall back to a CPU model close
381
+ * `cpu_fallback` - Whether to allow Libvirt to fall back to a CPU model close
307
382
  to the specified model if features in the guest CPU are not supported on the
308
383
  host. Defaults to 'allow' if not set. Allowed values: `allow`, `forbid`.
309
384
  * `numa_nodes` - Specify an array of NUMA nodes for the guest. The syntax is similar to what would be set in the domain XML. `memory` must be in MB. Symmetrical and asymmetrical topologies are supported but make sure your total count of defined CPUs adds up to `v.cpus`.
@@ -319,7 +394,7 @@ end
319
394
  * `loader` - Sets path to custom UEFI loader.
320
395
  * `volume_cache` - Controls the cache mechanism. Possible values are "default",
321
396
  "none", "writethrough", "writeback", "directsync" and "unsafe". [See
322
- driver->cache in libvirt
397
+ driver->cache in Libvirt
323
398
  documentation](http://libvirt.org/formatdomain.html#elementsDisks).
324
399
  * `kernel` - To launch the guest with a kernel residing on host filesystems.
325
400
  Equivalent to qemu `-kernel`.
@@ -327,6 +402,9 @@ end
327
402
  to qemu `-initrd`.
328
403
  * `random_hostname` - To create a domain name with extra information on the end
329
404
  to prevent hostname conflicts.
405
+ * `default_prefix` - The default Libvirt guest name becomes a concatenation of the
406
+ `<current_directory>_<guest_name>`. The current working directory is the default prefix
407
+ to the guest name. The `default_prefix` options allow you to set the guest name prefix.
330
408
  * `cmd_line` - Arguments passed on to the guest kernel initramfs or initrd to
331
409
  use. Equivalent to qemu `-append`, only possible to use in combination with `initrd` and `kernel`.
332
410
  * `graphics_type` - Sets the protocol used to expose the guest display.
@@ -337,8 +415,8 @@ end
337
415
  * `graphics_ip` - Sets the IP for the display protocol to bind to. Defaults to
338
416
  "127.0.0.1".
339
417
  * `graphics_passwd` - Sets the password for the display protocol. Working for
340
- vnc and spice. by default working without passsword.
341
- * `graphics_autoport` - Sets autoport for graphics, libvirt in this case
418
+ vnc and Spice. by default working without passsword.
419
+ * `graphics_autoport` - Sets autoport for graphics, Libvirt in this case
342
420
  ignores graphics_port value, Defaults to 'yes'. Possible value are "yes" and
343
421
  "no"
344
422
  * `keymap` - Set keymap for vm. default: en-us
@@ -355,8 +433,8 @@ end
355
433
  Defaults to "ich6".
356
434
  * `machine_type` - Sets machine type. Equivalent to qemu `-machine`. Use
357
435
  `qemu-system-x86_64 -machine help` to get a list of supported machines.
358
- * `machine_arch` - Sets machine architecture. This helps libvirt to determine
359
- the correct emulator type. Possible values depend on your version of qemu.
436
+ * `machine_arch` - Sets machine architecture. This helps Libvirt to determine
437
+ the correct emulator type. Possible values depend on your version of QEMU.
360
438
  For possible values, see which emulator executable `qemu-system-*` your
361
439
  system provides. Common examples are `aarch64`, `alpha`, `arm`, `cris`,
362
440
  `i386`, `lm32`, `m68k`, `microblaze`, `microblazeel`, `mips`, `mips64`,
@@ -380,7 +458,7 @@ end
380
458
  * `nic_adapter_count` - Defaults to '8'. Only use case for increasing this
381
459
  count is for VMs that virtualize switches such as Cumulus Linux. Max value
382
460
  for Cumulus Linux VMs is 33.
383
- * `uuid` - Force a domain UUID. Defaults to autogenerated value by libvirt if
461
+ * `uuid` - Force a domain UUID. Defaults to autogenerated value by Libvirt if
384
462
  not set.
385
463
  * `suspend_mode` - What is done on vagrant suspend. Possible values: 'pause',
386
464
  'managedsave'. Pause mode executes a la `virsh suspend`, which just pauses
@@ -394,10 +472,10 @@ end
394
472
  specified here.
395
473
  * `autostart` - Automatically start the domain when the host boots. Defaults to
396
474
  'false'.
397
- * `channel` - [libvirt
475
+ * `channel` - [Libvirt
398
476
  channels](https://libvirt.org/formatdomain.html#elementCharChannel).
399
477
  Configure a private communication channel between the host and guest, e.g.
400
- for use by the [qemu guest
478
+ for use by the [QEMU guest
401
479
  agent](http://wiki.libvirt.org/page/Qemu_guest_agent) and the Spice/QXL
402
480
  graphics type.
403
481
  * `mgmt_attach` - Decide if VM has interface in mgmt network. If set to 'false'
@@ -478,11 +556,11 @@ https://libvirt.org/formatdomain.html#elementsNICSTCP
478
556
 
479
557
  http://libvirt.org/formatdomain.html#elementsNICSMulticast
480
558
 
481
- http://libvirt.org/formatdomain.html#elementsNICSUDP _(in libvirt v1.2.20 and higher)_
559
+ http://libvirt.org/formatdomain.html#elementsNICSUDP _(in Libvirt v1.2.20 and higher)_
482
560
 
483
561
  Public Network interfaces are currently implemented using the macvtap driver.
484
562
  The macvtap driver is only available with the Linux Kernel version >= 2.6.24.
485
- See the following libvirt documentation for the details of the macvtap usage.
563
+ See the following Libvirt documentation for the details of the macvtap usage.
486
564
 
487
565
  http://www.libvirt.org/formatdomain.html#elementsNICSDirect
488
566
 
@@ -551,7 +629,7 @@ In example below, one network interface is configured for VM `test_vm1`. After
551
629
  you run `vagrant up`, VM will be accessible on IP address `10.20.30.40`. So if
552
630
  you install a web server via provisioner, you will be able to access your
553
631
  testing server on `http://10.20.30.40` URL. But beware that this address is
554
- private to libvirt host only. It's not visible outside of the hypervisor box.
632
+ private to Libvirt host only. It's not visible outside of the hypervisor box.
555
633
 
556
634
  If network `10.20.30.0/24` doesn't exist, provider will create it. By default
557
635
  created networks are NATed to outside world, so your VM will be able to connect
@@ -568,11 +646,11 @@ reachable by anyone with access to the public network.
568
646
 
569
647
  *Note: These options are not applicable to public network interfaces.*
570
648
 
571
- There is a way to pass specific options for libvirt provider when using
649
+ There is a way to pass specific options for Libvirt provider when using
572
650
  `config.vm.network` to configure new network interface. Each parameter name
573
651
  starts with `libvirt__` string. Here is a list of those options:
574
652
 
575
- * `:libvirt__network_name` - Name of libvirt network to connect to. By default,
653
+ * `:libvirt__network_name` - Name of Libvirt network to connect to. By default,
576
654
  network 'default' is used.
577
655
  * `:libvirt__netmask` - Used only together with `:ip` option. Default is
578
656
  '255.255.255.0'.
@@ -611,7 +689,7 @@ starts with `libvirt__` string. Here is a list of those options:
611
689
  between Guests. Useful for Switch VMs like Cumulus Linux. No virtual switch
612
690
  setting like `libvirt__network_name` applies with tunnel interfaces and will
613
691
  be ignored if configured.
614
- * `:libvirt__tunnel_ip` - Sets the source IP of the libvirt tunnel interface.
692
+ * `:libvirt__tunnel_ip` - Sets the source IP of the Libvirt tunnel interface.
615
693
  By default this is `127.0.0.1` for TCP and UDP tunnels and `239.255.1.1` for
616
694
  Multicast tunnels. It populates the address field in the `<source
617
695
  address="XXX">` of the interface xml configuration.
@@ -621,11 +699,11 @@ starts with `libvirt__` string. Here is a list of those options:
621
699
  * `:libvirt__tunnel_local_port` - Sets the local port used by the udp tunnel
622
700
  interface type. It populates the port field in the `<local port=XXX">`
623
701
  section of the interface xml configuration. _(This feature only works in
624
- libvirt 1.2.20 and higher)_
702
+ Libvirt 1.2.20 and higher)_
625
703
  * `:libvirt__tunnel_local_ip` - Sets the local IP used by the udp tunnel
626
704
  interface type. It populates the ip entry of the `<local address=XXX">`
627
705
  section of the interface xml configuration. _(This feature only works in
628
- libvirt 1.2.20 and higher)_
706
+ Libvirt 1.2.20 and higher)_
629
707
  * `:libvirt__guest_ipv6` - Enable or disable guest-to-guest IPv6 communication.
630
708
  See [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and
631
709
  [here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8)
@@ -637,18 +715,18 @@ starts with `libvirt__` string. Here is a list of those options:
637
715
  failures](https://github.com/vagrant-libvirt/vagrant-libvirt/pull/498)
638
716
  * `:mac` - MAC address for the interface. *Note: specify this in lowercase
639
717
  since Vagrant network scripts assume it will be!*
640
- * `:libvirt__mtu` - MTU size for the libvirt network, if not defined, the
641
- created network will use the libvirt default (1500). VMs still need to set the
718
+ * `:libvirt__mtu` - MTU size for the Libvirt network, if not defined, the
719
+ created network will use the Libvirt default (1500). VMs still need to set the
642
720
  MTU accordingly.
643
721
  * `:model_type` - parameter specifies the model of the network adapter when you
644
722
  create a domain value by default virtio KVM believe possible values, see the
645
- documentation for libvirt
723
+ documentation for Libvirt
646
724
  * `:libvirt__driver_name` - Define which network driver to use. [More
647
725
  info](https://libvirt.org/formatdomain.html#elementsDriverBackendOptions)
648
726
  * `:libvirt__driver_queues` - Define a number of queues to be used for network
649
727
  interface. Set equal to numer of vCPUs for best performance. [More
650
728
  info](http://www.linux-kvm.org/page/Multiqueue)
651
- * `:autostart` - Automatic startup of network by the libvirt daemon.
729
+ * `:autostart` - Automatic startup of network by the Libvirt daemon.
652
730
  If not specified the default is 'false'.
653
731
  * `:bus` - The bus of the PCI device. Both :bus and :slot have to be defined.
654
732
  * `:slot` - The slot of the PCI device. Both :bus and :slot have to be defined.
@@ -669,8 +747,8 @@ virtual network.
669
747
  Default mode is 'bridge'.
670
748
  * `:type` - is type of interface.(`<interface type="#{@type}">`)
671
749
  * `:mac` - MAC address for the interface.
672
- * `:network_name` - Name of libvirt network to connect to.
673
- * `:portgroup` - Name of libvirt portgroup to connect to.
750
+ * `:network_name` - Name of Libvirt network to connect to.
751
+ * `:portgroup` - Name of Libvirt portgroup to connect to.
674
752
  * `:ovs` - Support to connect to an Open vSwitch bridge device. Default is
675
753
  'false'.
676
754
  * `:trust_guest_rx_filters` - Support trustGuestRxFilters attribute. Details
@@ -681,17 +759,17 @@ virtual network.
681
759
 
682
760
  vagrant-libvirt uses a private network to perform some management operations on
683
761
  VMs. All VMs will have an interface connected to this network and an IP address
684
- dynamically assigned by libvirt unless you set `:mgmt_attach` to 'false'.
762
+ dynamically assigned by Libvirt unless you set `:mgmt_attach` to 'false'.
685
763
  This is in addition to any networks you configure. The name and address
686
764
  used by this network are configurable at the provider level.
687
765
 
688
- * `management_network_name` - Name of libvirt network to which all VMs will be
766
+ * `management_network_name` - Name of Libvirt network to which all VMs will be
689
767
  connected. If not specified the default is 'vagrant-libvirt'.
690
768
  * `management_network_address` - Address of network to which all VMs will be
691
769
  connected. Must include the address and subnet mask. If not specified the
692
770
  default is '192.168.121.0/24'.
693
- * `management_network_mode` - Network mode for the libvirt management network.
694
- Specify one of veryisolated, none, nat or route options. Further documentated
771
+ * `management_network_mode` - Network mode for the Libvirt management network.
772
+ Specify one of veryisolated, none, nat or route options. Further documented
695
773
  under [Private Networks](#private-network-options)
696
774
  * `management_network_guest_ipv6` - Enable or disable guest-to-guest IPv6
697
775
  communication. See
@@ -700,9 +778,10 @@ used by this network are configurable at the provider level.
700
778
  for for more information.
701
779
  * `management_network_autostart` - Automatic startup of mgmt network, if not
702
780
  specified the default is 'false'.
703
- * `:management_network_pci_bus` - The bus of the PCI device.
704
- * `:management_network_pci_slot` - The slot of the PCI device.
781
+ * `management_network_pci_bus` - The bus of the PCI device.
782
+ * `management_network_pci_slot` - The slot of the PCI device.
705
783
  * `management_network_mac` - MAC address of management network interface.
784
+ * `management_network_domain` - Domain name assigned to the management network.
706
785
 
707
786
  You may wonder how vagrant-libvirt knows the IP address a VM received. Libvirt
708
787
  doesn't provide a standard way to find out the IP address of a running domain.
@@ -734,6 +813,7 @@ It has a number of options:
734
813
  Disks with this option set to true need to be removed manually.
735
814
  * `shareable` - Set to true if you want to simulate shared SAN storage.
736
815
  * `serial` - Serial number of the disk device.
816
+ * `wwn` - WWN number of the disk device.
737
817
 
738
818
  The following example creates two additional disks.
739
819
 
@@ -810,29 +890,30 @@ end
810
890
 
811
891
  You can specify multiple PCI devices to passthrough to the VM via
812
892
  `libvirt.pci`. Available options are listed below. Note that all options are
813
- required:
893
+ required, except domain, which defaults to `0x0000`:
814
894
 
895
+ * `domain` - The domain of the PCI device
815
896
  * `bus` - The bus of the PCI device
816
897
  * `slot` - The slot of the PCI device
817
898
  * `function` - The function of the PCI device
818
899
 
819
900
  You can extract that information from output of `lspci` command. First
820
- characters of each line are in format `[<bus>]:[<slot>].[<func>]`. For example:
901
+ characters of each line are in format `[<domain>]:[<bus>]:[<slot>].[<func>]`. For example:
821
902
 
822
903
  ```shell
823
904
  $ lspci| grep NVIDIA
824
- 03:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX TITAN Black] (rev a1)
905
+ 0000:03:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX TITAN Black] (rev a1)
825
906
  ```
826
907
 
827
- In that case `bus` is `0x03`, `slot` is `0x00` and `function` is `0x0`.
908
+ In that case `domain` is `0x0000`, `bus` is `0x03`, `slot` is `0x00` and `function` is `0x0`.
828
909
 
829
910
  ```ruby
830
911
  Vagrant.configure("2") do |config|
831
912
  config.vm.provider :libvirt do |libvirt|
832
- libvirt.pci :bus => '0x06', :slot => '0x12', :function => '0x5'
913
+ libvirt.pci :domain => '0x0000', :bus => '0x06', :slot => '0x12', :function => '0x5'
833
914
 
834
915
  # Add another one if it is neccessary
835
- libvirt.pci :bus => '0x03', :slot => '0x00', :function => '0x0'
916
+ libvirt.pci :domain => '0x0000', :bus => '0x03', :slot => '0x00', :function => '0x0'
836
917
  end
837
918
  end
838
919
  ```
@@ -841,7 +922,64 @@ Note! Above options affect configuration only at domain creation. It won't chang
841
922
 
842
923
  Don't forget to [set](#domain-specific-options) `kvm_hidden` option to `true` especially if you are passthroughing NVIDIA GPUs. Otherwise GPU is visible from VM but cannot be operated.
843
924
 
844
- ## USB Redirector Devices
925
+
926
+ ## Using USB Devices
927
+
928
+ There are several ways to pass a USB device through to a running instance:
929
+ * Use `libvirt.usb` to [attach a USB device at boot](#usb-device-passthrough), with the device ID specified in the Vagrantfile
930
+ * Use a client (such as `virt-viewer` or `virt-manager`) to attach the device at runtime [via USB redirectors](#usb-redirector-devices)
931
+ * Use `virsh attach-device` once the VM is running (however, this is outside the scope of this readme)
932
+
933
+ In all cases, if you wish to use a high-speed USB device,
934
+ you will need to use `libvirt.usb_controller` to specify a USB2 or USB3 controller,
935
+ as the default configuration only exposes a USB1.1 controller.
936
+
937
+ ### USB Controller Configuration
938
+
939
+ The USB controller can be configured using `libvirt.usb_controller`, with the following options:
940
+
941
+ * `model` - The USB controller device model to emulate. (mandatory)
942
+ * `ports` - The number of devices that can be connected to the controller.
943
+
944
+ ```ruby
945
+ Vagrant.configure("2") do |config|
946
+ config.vm.provider :libvirt do |libvirt|
947
+ # Set up a USB3 controller
948
+ libvirt.usb_controller :model => "nec-xhci"
949
+ end
950
+ end
951
+ ```
952
+
953
+ See the [libvirt documentation](https://libvirt.org/formatdomain.html#elementsControllers) for a list of valid models.
954
+
955
+
956
+ ### USB Device Passthrough
957
+
958
+ You can specify multiple USB devices to passthrough to the VM via
959
+ `libvirt.usb`. The device can be specified by the following options:
960
+
961
+ * `bus` - The USB bus ID, e.g. "1"
962
+ * `device` - The USB device ID, e.g. "2"
963
+ * `vendor` - The USB devices vendor ID (VID), e.g. "0x1234"
964
+ * `product` - The USB devices product ID (PID), e.g. "0xabcd"
965
+
966
+ At least one of these has to be specified, and `bus` and `device` may only be
967
+ used together.
968
+
969
+ The example values above match the device from the following output of `lsusb`:
970
+
971
+ ```
972
+ Bus 001 Device 002: ID 1234:abcd Example device
973
+ ```
974
+
975
+ Additionally, the following options can be used:
976
+
977
+ * `startupPolicy` - Is passed through to Libvirt and controls if the device has
978
+ to exist. Libvirt currently allows the following values: "mandatory",
979
+ "requisite", "optional".
980
+
981
+
982
+ ### USB Redirector Devices
845
983
  You can specify multiple redirect devices via `libvirt.redirdev`. There are two types, `tcp` and `spicevmc` supported, for forwarding USB-devices to the guest. Available options are listed below.
846
984
 
847
985
  * `type` - The type of the USB redirector device. (`tcp` or `spicevmc`)
@@ -861,7 +999,10 @@ Vagrant.configure("2") do |config|
861
999
  end
862
1000
  ```
863
1001
 
864
- ### Filter for USB Redirector Devices
1002
+ Note that in order to enable USB redirection with Spice clients,
1003
+ you may need to also set `libvirt.graphics_type = "spice"`
1004
+
1005
+ #### Filter for USB Redirector Devices
865
1006
  You can define filter for redirected devices. These filters can be positiv or negative, by setting the mandatory option `allow=yes` or `allow=no`. All available options are listed below. Note the option `allow` is mandatory.
866
1007
 
867
1008
  * `class` - The device class of the USB device. A list of device classes is available on [Wikipedia](https://en.wikipedia.org/wiki/USB#Device_classes).
@@ -928,7 +1069,7 @@ The optional action attribute describes what `action` to take when the watchdog
928
1069
  ```ruby
929
1070
  Vagrant.configure("2") do |config|
930
1071
  config.vm.provider :libvirt do |libvirt|
931
- # Add libvirt watchdog device model i6300esb
1072
+ # Add Libvirt watchdog device model i6300esb
932
1073
  libvirt.watchdog :model => 'i6300esb', :action => 'reset'
933
1074
  end
934
1075
  end
@@ -954,7 +1095,7 @@ Vagrant.configure("2") do |config|
954
1095
  end
955
1096
  end
956
1097
  ```
957
- ## Features
1098
+ ## Hypervisor Features
958
1099
 
959
1100
  Hypervisor features can be specified via `libvirt.features` as a list. The default
960
1101
  options that are enabled are `acpi`, `apic` and `pae`. If you define `libvirt.features`
@@ -982,15 +1123,35 @@ Vagrant.configure("2") do |config|
982
1123
  end
983
1124
  ```
984
1125
 
1126
+ You can also specify a special set of features that help improve the behavior of guests
1127
+ running Microsoft Windows.
1128
+
1129
+ You can specify HyperV features via `libvirt.hyperv_feature`. Available
1130
+ options are listed below. Note that both options are required:
1131
+
1132
+ * `name` - The name of the feature Hypervisor feature (see Libvirt doc)
1133
+ * `state` - The state for this feature which can be either `on` or `off`.
1134
+
1135
+ ```ruby
1136
+ Vagrant.configure("2") do |config|
1137
+ config.vm.provider :libvirt do |libvirt|
1138
+ # Relax constraints on timers
1139
+ libvirt.hyperv_feature :name => 'relaxed', :state => 'on'
1140
+ # Enable virtual APIC
1141
+ libvirt.hyperv_feature :name => 'vapic', :state => 'on'
1142
+ end
1143
+ end
1144
+ ```
1145
+
985
1146
  ## CPU features
986
1147
 
987
1148
  You can specify CPU feature policies via `libvirt.cpu_feature`. Available
988
1149
  options are listed below. Note that both options are required:
989
1150
 
990
- * `name` - The name of the feature for the chosen CPU (see libvirts
1151
+ * `name` - The name of the feature for the chosen CPU (see Libvirt's
991
1152
  `cpu_map.xml`)
992
1153
  * `policy` - The policy for this feature (one of `force`, `require`,
993
- `optional`, `disable` and `forbid` - see libvirt documentation)
1154
+ `optional`, `disable` and `forbid` - see Libvirt documentation)
994
1155
 
995
1156
  ```ruby
996
1157
  Vagrant.configure("2") do |config|
@@ -1023,30 +1184,6 @@ Vagrant.configure("2") do |config|
1023
1184
  end
1024
1185
  end
1025
1186
  ```
1026
- ## USB device passthrough
1027
-
1028
- You can specify multiple USB devices to passthrough to the VM via
1029
- `libvirt.usb`. The device can be specified by the following options:
1030
-
1031
- * `bus` - The USB bus ID, e.g. "1"
1032
- * `device` - The USB device ID, e.g. "2"
1033
- * `vendor` - The USB devices vendor ID (VID), e.g. "0x1234"
1034
- * `product` - The USB devices product ID (PID), e.g. "0xabcd"
1035
-
1036
- At least one of these has to be specified, and `bus` and `device` may only be
1037
- used together.
1038
-
1039
- The example values above match the device from the following output of `lsusb`:
1040
-
1041
- ```
1042
- Bus 001 Device 002: ID 1234:abcd Example device
1043
- ```
1044
-
1045
- Additionally, the following options can be used:
1046
-
1047
- * `startupPolicy` - Is passed through to libvirt and controls if the device has
1048
- to exist. libvirt currently allows the following values: "mandatory",
1049
- "requisite", "optional".
1050
1187
 
1051
1188
  ## No box and PXE boot
1052
1189
 
@@ -1169,9 +1306,44 @@ mounting them at boot.
1169
1306
 
1170
1307
  Further documentation on using 9p can be found in [kernel docs](https://www.kernel.org/doc/Documentation/filesystems/9p.txt) and in [QEMU wiki](https://wiki.qemu.org/Documentation/9psetup#Starting_the_Guest_directly). Please do note that 9p depends on support in the guest and not all distros come with the 9p module by default.
1171
1308
 
1172
- **SECURITY NOTE:** for remote libvirt, nfs synced folders requires a bridged
1173
- public network interface and you must connect to libvirt via ssh.
1309
+ **SECURITY NOTE:** for remote Libvirt, nfs synced folders requires a bridged
1310
+ public network interface and you must connect to Libvirt via ssh.
1311
+
1312
+ ## QEMU Session Support
1174
1313
 
1314
+ vagrant-libvirt supports using QEMU user sessions to maintain Vagrant VMs. As the session connection does not have root access to the system features which require root will not work. Access to networks created by the system QEMU connection can be granted by using the [QEMU bridge helper](https://wiki.qemu.org/Features/HelperNetworking). The bridge helper is enabled by default on some distros but may need to be enabled/installed on others.
1315
+
1316
+ There must be a virbr network defined in the QEMU system session. The libvirt `default` network which comes by default, the vagrant `vagrant-libvirt` network which is generated if you run a Vagrantfile using the System session, or a manually defined network can be used. These networks can be set to autostart with `sudo virsh net-autostart <net-name>`, which'll mean no further root access is required even after reboots.
1317
+
1318
+ The QEMU bridge helper is configured via `/etc/qemu/bridge.conf`. This file must include the virbr you wish to use (e.g. virbr0, virbr1, etc). You can find this out via `sudo virsh net-dumpxml <net-name>`.
1319
+ ```
1320
+ allow virbr0
1321
+ ```
1322
+
1323
+ An example configuration of a machine using the QEMU session connection:
1324
+
1325
+ ```ruby
1326
+ Vagrant.configure("2") do |config|
1327
+ config.vm.provider :libvirt do |libvirt|
1328
+ # Use QEMU session instead of system connection
1329
+ libvirt.qemu_use_session = true
1330
+ # URI of QEMU session connection, default is as below
1331
+ libvirt.uri = 'qemu:///session'
1332
+ # URI of QEMU system connection, use to obtain IP address for management, default is below
1333
+ libvirt.system_uri = 'qemu:///system'
1334
+ # Path to store Libvirt images for the virtual machine, default is as ~/.local/share/libvirt/images
1335
+ libvirt.storage_pool_path = '/home/user/.local/share/libvirt/images'
1336
+ # Management network device, default is below
1337
+ libvirt.management_network_device = 'virbr0'
1338
+ end
1339
+
1340
+ # Public network configuration using existing network device
1341
+ # Note: Private networks do not work with QEMU session enabled as root access is required to create new network devices
1342
+ config.vm.network :public_network, :dev => "virbr1",
1343
+ :mode => "bridge",
1344
+ :type => "bridge"
1345
+ end
1346
+ ```
1175
1347
 
1176
1348
  ## Customized Graphics
1177
1349
 
@@ -1222,7 +1394,7 @@ end
1222
1394
 
1223
1395
  For certain functionality to be available within a guest, a private
1224
1396
  communication channel must be established with the host. Two notable examples
1225
- of this are the qemu guest agent, and the Spice/QXL graphics type.
1397
+ of this are the QEMU guest agent, and the Spice/QXL graphics type.
1226
1398
 
1227
1399
  Below is a simple example which exposes a virtio serial channel to the guest.
1228
1400
  Note: in a multi-VM environment, the channel would be created for all VMs.
@@ -1248,7 +1420,7 @@ end
1248
1420
 
1249
1421
  These settings can be specified on a per-VM basis, however the per-guest
1250
1422
  settings will OVERRIDE any global 'config' setting. In the following example,
1251
- we create 3 VM with the following configuration:
1423
+ we create 3 VMs with the following configuration:
1252
1424
 
1253
1425
  * **master**: No channel settings specified, so we default to the provider
1254
1426
  setting of a single virtio guest agent channel.
@@ -1263,7 +1435,7 @@ For example:
1263
1435
 
1264
1436
  ```ruby
1265
1437
  Vagrant.configure(2) do |config|
1266
- config.vm.box = "fedora/24-cloud-base"
1438
+ config.vm.box = "fedora/32-cloud-base"
1267
1439
  config.vm.provider :libvirt do |libvirt|
1268
1440
  libvirt.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
1269
1441
  end
@@ -1289,8 +1461,8 @@ Vagrant.configure(2) do |config|
1289
1461
  end
1290
1462
  ```
1291
1463
 
1292
- ## Custom command line arguments
1293
- You can also specify multiple qemuargs arguments for qemu-system
1464
+ ## Custom command line arguments and environment variables
1465
+ You can also specify multiple qemuargs arguments or qemuenv environment variables for qemu-system
1294
1466
 
1295
1467
  * `value` - Value
1296
1468
 
@@ -1299,6 +1471,9 @@ Vagrant.configure("2") do |config|
1299
1471
  config.vm.provider :libvirt do |libvirt|
1300
1472
  libvirt.qemuargs :value => "-device"
1301
1473
  libvirt.qemuargs :value => "intel-iommu"
1474
+ libvirt.qemuenv QEMU_AUDIO_DRV: 'pa'
1475
+ libvirt.qemuenv QEMU_AUDIO_TIMER_PERIOD: '150'
1476
+ libvirt.qemuenv QEMU_PA_SAMPLES: '1024', QEMU_PA_SERVER: '/run/user/1000/pulse/native'
1302
1477
  end
1303
1478
  end
1304
1479
  ```
@@ -1336,6 +1511,72 @@ $ cd packer-qemu-templates
1336
1511
  $ packer build ubuntu-14.04-server-amd64-vagrant.json
1337
1512
  ```
1338
1513
 
1514
+ ## Package Box from VM
1515
+
1516
+ vagrant-libvirt has native support for [`vagrant
1517
+ package`](https://www.vagrantup.com/docs/cli/package.html) via
1518
+ libguestfs [virt-sysprep](http://libguestfs.org/virt-sysprep.1.html).
1519
+ virt-sysprep operations can be customized via the
1520
+ `VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS` environment variable; see the
1521
+ [upstream
1522
+ documentation](http://libguestfs.org/virt-sysprep.1.html#operations) for
1523
+ further details especially on default sysprep operations enabled for
1524
+ your system.
1525
+
1526
+ Options to the virt-sysprep command call can be passed via
1527
+ `VAGRANT_LIBVIRT_VIRT_SYSPREP_OPTIONS` environment variable.
1528
+
1529
+ ```shell
1530
+ $ export VAGRANT_LIBVIRT_VIRT_SYSPREP_OPTIONS="--delete /etc/hostname"
1531
+ $ vagrant package
1532
+ ```
1533
+
1534
+ For example, on Chef [bento](https://github.com/chef/bento) VMs that
1535
+ require SSH hostkeys already set (e.g. bento/debian-7) as well as leave
1536
+ existing LVM UUIDs untouched (e.g. bento/ubuntu-18.04), these can be
1537
+ packaged into vagrant-libvirt boxes like so:
1538
+
1539
+ ```shell
1540
+ $ export VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="defaults,-ssh-userdir,-ssh-hostkeys,-lvm-uuids"
1541
+ $ vagrant package
1542
+ ```
1543
+
1544
+ ## Troubleshooting VMs
1545
+
1546
+ The first step for troubleshooting a VM image that appears to not boot correctly,
1547
+ or hangs waiting to get an IP, is to check it with a VNC viewer. A key thing
1548
+ to remember is that if the VM doesn't get an IP, then vagrant can't communicate
1549
+ with it to configure anything, so a problem at this stage is likely to come from
1550
+ the VM, but we'll outline the tools and common problems to help you troubleshoot
1551
+ that.
1552
+
1553
+ By default, when you create a new VM, a vnc server will listen on `127.0.0.1` on
1554
+ port `TCP5900`. If you connect with a vnc viewer you can see the boot process. If
1555
+ your VM isn't listening on `5900` by default, you can use `virsh dumpxml` to find
1556
+ out which port it's listening on, or can configure it with `graphics_port` and
1557
+ `graphics_ip` (see 'Domain Specific Options' above).
1558
+
1559
+ Note: Connecting with the console (`virsh console`) requires additional config,
1560
+ so some VMs may not show anything on the console at all, instead displaying it in
1561
+ the VNC console. The issue with the text console is that you also need to build the
1562
+ image used to tell the kernel to output to the console during boot, and typically
1563
+ most do not have this built in.
1564
+
1565
+ Problems we've seen in the past include:
1566
+ - Forgetting to remove `/etc/udev/rules.d/70-persistent-net.rules` before packaging
1567
+ the VM
1568
+ - VMs expecting a specific disk device to be connected
1569
+
1570
+ If you're still confused, check the Github Issues for this repo for anything that
1571
+ looks similar to your problem.
1572
+
1573
+ [Github Issue #1032](https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1032)
1574
+ contains some historical troubleshooting for VMs that appeared to hang.
1575
+
1576
+ Did you hit a problem that you'd like to note here to save time in the future?
1577
+ Please do!
1578
+
1579
+
1339
1580
  ## Development
1340
1581
 
1341
1582
  To work on the `vagrant-libvirt` plugin, clone this repository out, and use
@@ -1378,3 +1619,8 @@ $ bundle exec vagrant up --provider=libvirt
1378
1619
  3. Commit your changes (`git commit -am 'Add some feature'`)
1379
1620
  4. Push to the branch (`git push origin my-new-feature`)
1380
1621
  5. Create new Pull Request
1622
+
1623
+ <!--
1624
+ # styling for TOC
1625
+ vim: expandtab shiftwidth=2
1626
+ -->