vagrant-libvirt 0.0.35 → 0.0.36

data/README.md CHANGED
@@ -5,8 +5,8 @@ This is a [Vagrant](http://www.vagrantup.com) plugin that adds an
  [Libvirt](http://libvirt.org) provider to Vagrant, allowing Vagrant to
  control and provision machines via the Libvirt toolkit.
 
- **Note:** Actual version is still a development one. Feedback is
- welcome and can help a lot :-)
+ **Note:** The current version is still under development. Feedback is welcome
+ and can help a lot :-)
 
  - [Features](#features)
  - [Future work](#future-work)
@@ -20,16 +20,17 @@ welcome and can help a lot :-)
  - [Libvirt Configuration](#libvirt-configuration)
  - [Provider Options](#provider-options)
  - [Domain Specific Options](#domain-specific-options)
-  - [Reload behavior](#reload-behavior)
+    - [Reload behavior](#reload-behavior)
  - [Networks](#networks)
  - [Private Network Options](#private-network-options)
  - [Public Network Options](#public-network-options)
  - [Management Network](#management-network)
  - [Additional Disks](#additional-disks)
-  - [Reload behavior](#reload-behavior-1)
+    - [Reload behavior](#reload-behavior-1)
  - [CDROMs](#cdroms)
  - [Input](#input)
  - [PCI device passthrough](#pci-device-passthrough)
+ - [Random number generator passthrough](#random-number-generator-passthrough)
  - [CPU Features](#cpu-features)
  - [No box and PXE boot](#no-box-and-pxe-boot)
  - [SSH Access To VM](#ssh-access-to-vm)
@@ -44,7 +45,8 @@ welcome and can help a lot :-)
  ## Features
 
  * Control local Libvirt hypervisors.
- * Vagrant `up`, `destroy`, `suspend`, `resume`, `halt`, `ssh`, `reload`, `package` and `provision` commands.
+ * Vagrant `up`, `destroy`, `suspend`, `resume`, `halt`, `ssh`, `reload`,
+   `package` and `provision` commands.
  * Upload box image (qcow2 format) to Libvirt storage pool.
  * Create volume as COW diff image for domains.
  * Create private networks.
@@ -54,23 +56,35 @@ welcome and can help a lot :-)
  * Provision domains with any built-in Vagrant provisioner.
  * Synced folder support via `rsync`, `nfs` or `9p`.
  * Snapshots via [sahara](https://github.com/jedi4ever/sahara).
- * Package caching via [vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier/).
- * Use boxes from other Vagrant providers via [vagrant-mutate](https://github.com/sciurus/vagrant-mutate).
+ * Package caching via
+   [vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier/).
+ * Use boxes from other Vagrant providers via
+   [vagrant-mutate](https://github.com/sciurus/vagrant-mutate).
  * Support VMs with no box for PXE boot purposes (Vagrant 1.6 and up)
 
  ## Future work
 
- * Take a look at [open issues](https://github.com/vagrant-libvirt/vagrant-libvirt/issues?state=open).
+ * Take a look at [open
+   issues](https://github.com/vagrant-libvirt/vagrant-libvirt/issues?state=open).
 
  ## Installation
 
- First, you should have both qemu and libvirt installed if you plan to run VMs on your local system. For instructions, refer to your linux distribution's documentation. *Before you start using Vagrant-libvirt, please make sure your libvirt and qemu installation is working correctly and you are able to create qemu or kvm type virtual machines with `virsh` or `virt-manager`.*
+ First, you should have both qemu and libvirt installed if you plan to run VMs
+ on your local system. For instructions, refer to your Linux distribution's
+ documentation.
 
- Next, you must have [Vagrant installed](http://docs.vagrantup.com/v2/installation/index.html). Vagrant-libvirt supports Vagrant 1.5, 1.6, 1.7 and 1.8.
+ **NOTE:** Before you start using Vagrant-libvirt, please make sure your libvirt
+ and qemu installation is working correctly and you are able to create qemu or
+ kvm type virtual machines with `virsh` or `virt-manager`.
 
- Now you're ready to install vagrant-libvirt using standard [Vagrant plugin](http://docs.vagrantup.com/v2/plugins/usage.html) installation methods.
+ Next, you must have [Vagrant
+ installed](http://docs.vagrantup.com/v2/installation/index.html).
+ Vagrant-libvirt supports Vagrant 1.5, 1.6, 1.7 and 1.8.
 
- ```
+ Now you're ready to install vagrant-libvirt using standard [Vagrant
+ plugin](http://docs.vagrantup.com/v2/plugins/usage.html) installation methods.
+
+ ```shell
  $ vagrant plugin install vagrant-libvirt
  ```
 
@@ -79,18 +93,22 @@ $ vagrant plugin install vagrant-libvirt
  In case of problems with building the nokogiri and ruby-libvirt gems, install
  the missing development libraries for libxslt, libxml2 and libvirt.
 
- In Ubuntu, Debian, ...
- ```
+ On Ubuntu, Debian, ...
+
+ ```shell
  $ sudo apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
  ```
 
- In RedHat, Centos, Fedora, ...
- ```
- # yum install libxslt-devel libxml2-devel libvirt-devel libguestfs-tools-c ruby-devel
- ```
+ On RedHat, CentOS, Fedora, ...
 
- If have problem with installation - check your linker. It should be ld.gold:
+ ```shell
+ $ sudo dnf install libxslt-devel libxml2-devel libvirt-devel \
+     libguestfs-tools-c ruby-devel gcc
  ```
+
+ If you have problems with installation, check your linker. It should be
+ `ld.gold`:
+
+ ```shell
  sudo alternatives --set ld /usr/bin/ld.gold
  # OR
  sudo ln -fs /usr/bin/ld.gold /usr/bin/ld
@@ -101,28 +119,25 @@ sudo ln -fs /usr/bin/ld.gold /usr/bin/ld
  ### Add Box
 
  After installing the plugin (instructions above), the quickest way to get
- started is to add Libvirt box and specify all the details manually within
- a `config.vm.provider` block. So first, add Libvirt box using any name you
- want. You can find more libvirt ready boxes at https://atlas.hashicorp.com/boxes/search?provider=libvirt
-
- Some examples:
-
- ```
- vagrant init fedora/23-cloud-base
- # or
- vagrant init centos/7
+ started is to add a Libvirt box and specify all the details manually within a
+ `config.vm.provider` block. So first, add a Libvirt box using any name you
+ want. You can find more libvirt-ready boxes at
+ [Atlas](https://atlas.hashicorp.com/boxes/search?provider=libvirt). For
+ example:
+
+ ```shell
+ vagrant init fedora/24-cloud-base
  ```
 
  ### Create Vagrantfile
 
  And then make a Vagrantfile that looks like the following, filling in your
- information where necessary. In example below, VM named test_vm is created from
- centos64 box.
+ information where necessary. For example:
 
  ```ruby
  Vagrant.configure("2") do |config|
    config.vm.define :test_vm do |test_vm|
-     test_vm.vm.box = "centos64"
+     test_vm.vm.box = "fedora/24-cloud-base"
    end
  end
  ```
@@ -131,7 +146,7 @@ end
 
  In the prepared project directory, run the following command:
 
- ```
+ ```shell
  $ vagrant up --provider=libvirt
  ```
 
@@ -139,43 +154,56 @@ Vagrant needs to know that we want to use Libvirt and not default VirtualBox.
  That's why the `--provider=libvirt` option is specified. Another way to tell
  Vagrant to use the Libvirt provider is to set the environment variable
 
- `export VAGRANT_DEFAULT_PROVIDER=libvirt`.
+ ```shell
+ export VAGRANT_DEFAULT_PROVIDER=libvirt
+ ```
 
  ### How Project Is Created
 
  Vagrant goes through the steps below when creating a new project:
 
- 1. Connect to Libvirt localy or remotely via SSH.
- 2. Check if box image is available in Libvirt storage pool. If not, upload it to
-    remote Libvirt storage pool as new volume.
- 3. Create COW diff image of base box image for new Libvirt domain.
- 4. Create and start new domain on Libvirt host.
- 5. Check for DHCP lease from dnsmasq server.
- 6. Wait till SSH is available.
- 7. Sync folders and run Vagrant provisioner on new domain if
-    setup in Vagrantfile.
-
+ 1. Connect to Libvirt locally or remotely via SSH.
+ 2. Check if the box image is available in the Libvirt storage pool. If not,
+    upload it to the remote Libvirt storage pool as a new volume.
+ 3. Create a COW diff image of the base box image for the new Libvirt domain.
+ 4. Create and start the new domain on the Libvirt host.
+ 5. Check for a DHCP lease from the dnsmasq server.
+ 6. Wait till SSH is available.
+ 7. Sync folders and run the Vagrant provisioner on the new domain if set up in
+    the Vagrantfile.
 
  ### Libvirt Configuration
 
  ### Provider Options
 
- Although it should work without any configuration for most people, this provider exposes quite a few provider-specific configuration options. The following options allow you to configure how vagrant-libvirt connects to libvirt, and are used to generate the [libvirt connection URI](http://libvirt.org/uri.html):
-
- * `driver` - A hypervisor name to access. For now only kvm and qemu are supported.
- * `host` - The name of the server, where libvirtd is running.
- * `connect_via_ssh` - If use ssh tunnel to connect to Libvirt. Absolutely needed to access libvirt on remote host. It will not be able to get the IP address of a started VM otherwise.
- * `username` - Username and password to access Libvirt.
- * `password` - Password to access Libvirt.
- * `id_ssh_key_file` - If not nil, uses this ssh private key to access Libvirt. Default is $HOME/.ssh/id_rsa. Prepends $HOME/.ssh/ if no directory.
- * `socket` - Path to the libvirt unix socket (eg: /var/run/libvirt/libvirt-sock)
- * `uri` - For advanced usage. Directly specifies what libvirt connection URI vagrant-libvirt should use. Overrides all other connection configuration options.
+ Although it should work without any configuration for most people, this
+ provider exposes quite a few provider-specific configuration options. The
+ following options allow you to configure how vagrant-libvirt connects to
+ libvirt, and are used to generate the [libvirt connection
+ URI](http://libvirt.org/uri.html):
+
+ * `driver` - A hypervisor name to access. For now only kvm and qemu are
+   supported
+ * `host` - The name of the server where libvirtd is running
+ * `connect_via_ssh` - Whether to use an ssh tunnel to connect to Libvirt.
+   Absolutely needed to access libvirt on a remote host. It will not be able to
+   get the IP address of a started VM otherwise.
+ * `username` - Username to access Libvirt
+ * `password` - Password to access Libvirt
+ * `id_ssh_key_file` - If not nil, uses this ssh private key to access Libvirt.
+   Default is `$HOME/.ssh/id_rsa`. Prepends `$HOME/.ssh/` if no directory
+ * `socket` - Path to the libvirt unix socket (e.g.
+   `/var/run/libvirt/libvirt-sock`)
+ * `uri` - For advanced usage. Directly specifies what libvirt connection URI
+   vagrant-libvirt should use. Overrides all other connection configuration
+   options
 
  Connection-independent options:
 
- * `storage_pool_name` - Libvirt storage pool name, where box image and instance snapshots will be stored.
+ * `storage_pool_name` - Libvirt storage pool name where box images and instance
+   snapshots will be stored.
 
- Here is an example of how to set these options.
+ For example:
 
  ```ruby
  Vagrant.configure("2") do |config|
@@ -187,43 +215,112 @@ end
 
  ### Domain Specific Options
 
- * `disk_bus` - The type of disk device to emulate. Defaults to virtio if not set. Possible values are documented in libvirt's [description for _target_](http://libvirt.org/formatdomain.html#elementsDisks). NOTE: this option applies only to disks associated with a box image. To set the bus type on additional disks, see the [Additional Disks](#additional-disks) section.
- * `nic_model_type` - parameter specifies the model of the network adapter when you create a domain value by default virtio KVM believe possible values, see the [documentation for libvirt](https://libvirt.org/formatdomain.html#elementsNICSModel).
+ * `disk_bus` - The type of disk device to emulate. Defaults to virtio if not
+   set. Possible values are documented in libvirt's [description for
+   _target_](http://libvirt.org/formatdomain.html#elementsDisks). NOTE: this
+   option applies only to disks associated with a box image. To set the bus type
+   on additional disks, see the [Additional Disks](#additional-disks) section.
+ * `nic_model_type` - Specifies the model of the network adapter used when
+   creating a domain. Defaults to virtio. For possible values, see the
+   [documentation for
+   libvirt](https://libvirt.org/formatdomain.html#elementsNICSModel).
  * `memory` - Amount of memory in MBytes. Defaults to 512 if not set.
  * `cpus` - Number of virtual cpus. Defaults to 1 if not set.
- * `nested` - [Enable nested virtualization](https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt). Default is false.
- * `cpu_mode` - [CPU emulation mode](https://libvirt.org/formatdomain.html#elementsCPU). Defaults to 'host-model' if not set. Allowed values: host-model, host-passthrough, custom.
- * `cpu_model` - CPU Model. Defaults to 'qemu64' if not set. This can really only be used when setting `cpu_mode` to `custom`.
- * `cpu_fallback` - Whether to allow libvirt to fall back to a CPU model close to the specified model if features in the guest CPU are not supported on the host. Defaults to 'allow' if not set. Allowed values: `allow`, `forbid`.
+ * `nested` - [Enable nested
+   virtualization](https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt).
+   Default is false.
+ * `cpu_mode` - [CPU emulation
+   mode](https://libvirt.org/formatdomain.html#elementsCPU). Defaults to
+   'host-model' if not set. Allowed values: host-model, host-passthrough,
+   custom.
+ * `cpu_model` - CPU Model. Defaults to 'qemu64' if not set. This can really
+   only be used when setting `cpu_mode` to `custom`.
+ * `cpu_fallback` - Whether to allow libvirt to fall back to a CPU model close
+   to the specified model if features in the guest CPU are not supported on the
+   host. Defaults to 'allow' if not set. Allowed values: `allow`, `forbid`.
+ * `numa_nodes` - Number of NUMA nodes on guest. Must be a factor of `cpu`.
  * `loader` - Sets path to custom UEFI loader.
- * `volume_cache` - Controls the cache mechanism. Possible values are "default", "none", "writethrough", "writeback", "directsync" and "unsafe". [See driver->cache in libvirt documentation](http://libvirt.org/formatdomain.html#elementsDisks).
- * `kernel` - To launch the guest with a kernel residing on host filesystems. Equivalent to qemu `-kernel`.
- * `initrd` - To specify the initramfs/initrd to use for the guest. Equivalent to qemu `-initrd`.
- * `random_hostname` - To create a domain name with extra information on the end to prevent hostname conflicts.
- * `cmd_line` - Arguments passed on to the guest kernel initramfs or initrd to use. Equivalent to qemu `-append`.
- * `graphics_type` - Sets the protocol used to expose the guest display. Defaults to `vnc`. Possible values are "sdl", "curses", "none", "gtk", "vnc" or "spice".
- * `graphics_port` - Sets the port for the display protocol to bind to. Defaults to 5900.
- * `graphics_ip` - Sets the IP for the display protocol to bind to. Defaults to "127.0.0.1".
- * `graphics_passwd` - Sets the password for the display protocol. Working for vnc and spice. by default working without passsword.
- * `graphics_autoport` - Sets autoport for graphics, libvirt in this case ignores graphics_port value, Defaults to 'yes'. Possible value are "yes" and "no"
+ * `volume_cache` - Controls the cache mechanism. Possible values are "default",
+   "none", "writethrough", "writeback", "directsync" and "unsafe". [See
+   driver->cache in the libvirt
+   documentation](http://libvirt.org/formatdomain.html#elementsDisks).
+ * `kernel` - To launch the guest with a kernel residing on host filesystems.
+   Equivalent to qemu `-kernel`.
+ * `initrd` - To specify the initramfs/initrd to use for the guest. Equivalent
+   to qemu `-initrd`.
+ * `random_hostname` - To create a domain name with extra information on the end
+   to prevent hostname conflicts.
+ * `cmd_line` - Arguments passed on to the guest kernel initramfs or initrd to
+   use. Equivalent to qemu `-append`.
+ * `graphics_type` - Sets the protocol used to expose the guest display.
+   Defaults to `vnc`. Possible values are "sdl", "curses", "none", "gtk", "vnc"
+   or "spice".
+ * `graphics_port` - Sets the port for the display protocol to bind to.
+   Defaults to 5900.
+ * `graphics_ip` - Sets the IP for the display protocol to bind to. Defaults to
+   "127.0.0.1".
+ * `graphics_passwd` - Sets the password for the display protocol. Works for
+   vnc and spice. By default, no password is used.
+ * `graphics_autoport` - Sets autoport for graphics; in this case libvirt
+   ignores the graphics_port value. Defaults to 'yes'. Possible values are
+   "yes" and "no".
  * `keymap` - Set the keymap for the VM. Default: en-us
- * `kvm_hidden` - [Hide the hypervisor from the guest](https://libvirt.org/formatdomain.html#elementsFeatures). Useful for GPU passthrough on stubborn drivers. Default is false.
- * `video_type` - Sets the graphics card type exposed to the guest. Defaults to "cirrus". [Possible values](http://libvirt.org/formatdomain.html#elementsVideo) are "vga", "cirrus", "vmvga", "xen", "vbox", or "qxl".
- * `video_vram` - Used by some graphics card types to vary the amount of RAM dedicated to video. Defaults to 9216.
- * `machine_type` - Sets machine type. Equivalent to qemu `-machine`. Use `qemu-system-x86_64 -machine help` to get a list of supported machines.
- * `machine_arch` - Sets machine architecture. This helps libvirt to determine the correct emulator type. Possible values depend on your version of qemu. For possible values, see which emulator executable `qemu-system-*` your system provides. Common examples are `aarch64`, `alpha`, `arm`, `cris`, `i386`, `lm32`, `m68k`, `microblaze`, `microblazeel`, `mips`, `mips64`, `mips64el`, `mipsel`, `moxie`, `or32`, `ppc`, `ppc64`, `ppcemb`, `s390x`, `sh4`, `sh4eb`, `sparc`, `sparc64`, `tricore`, `unicore32`, `x86_64`, `xtensa`, `xtensaeb`.
- * `machine_virtual_size` - Sets the disk size in GB for the machine overriding the default specified in the box. Allows boxes to defined with a minimal size disk by default and to be grown to a larger size at creation time. Will ignore sizes smaller than the size specified by the box metadata. Note that currently there is no support for automatically resizing the filesystem to take advantage of the larger disk.
- * `emulator_path` - Explicitly select which device model emulator to use by providing the path, e.g. `/usr/bin/qemu-system-x86_64`. This is especially useful on systems that fail to select it automatically based on `machine_arch` which then results in a capability error.
- * `boot` - Change the boot order and enables the boot menu. Possible options are "hd", "network", "cdrom". Defaults to "hd" with boot menu disabled. When "network" is set without "hd", only all NICs will be tried; see below for more detail.
- * `nic_adapter_count` - Defaults to '8'. Only use case for increasing this count is for VMs that virtualize switches such as Cumulus Linux. Max value for Cumulus Linux VMs is 33.
- * `uuid` - Force a domain UUID. Defaults to autogenerated value by libvirt if not set.
- * `suspend_mode` - What is done on vagrant suspend. Possible values: 'pause', 'managedsave'. Pause mode executes a la `virsh suspend`, which just pauses execution of a VM, not freeing resources. Managed save mode does a la `virsh managedsave` which frees resources suspending a domain.
+ * `kvm_hidden` - [Hide the hypervisor from the
+   guest](https://libvirt.org/formatdomain.html#elementsFeatures). Useful for
+   GPU passthrough on stubborn drivers. Default is false.
+ * `video_type` - Sets the graphics card type exposed to the guest. Defaults to
+   "cirrus". [Possible
+   values](http://libvirt.org/formatdomain.html#elementsVideo) are "vga",
+   "cirrus", "vmvga", "xen", "vbox", or "qxl".
+ * `video_vram` - Used by some graphics card types to vary the amount of RAM
+   dedicated to video. Defaults to 9216.
+ * `machine_type` - Sets machine type. Equivalent to qemu `-machine`. Use
+   `qemu-system-x86_64 -machine help` to get a list of supported machines.
+ * `machine_arch` - Sets machine architecture. This helps libvirt to determine
+   the correct emulator type. Possible values depend on your version of qemu.
+   For possible values, see which emulator executable `qemu-system-*` your
+   system provides. Common examples are `aarch64`, `alpha`, `arm`, `cris`,
+   `i386`, `lm32`, `m68k`, `microblaze`, `microblazeel`, `mips`, `mips64`,
+   `mips64el`, `mipsel`, `moxie`, `or32`, `ppc`, `ppc64`, `ppcemb`, `s390x`,
+   `sh4`, `sh4eb`, `sparc`, `sparc64`, `tricore`, `unicore32`, `x86_64`,
+   `xtensa`, `xtensaeb`.
+ * `machine_virtual_size` - Sets the disk size in GB for the machine, overriding
+   the default specified in the box. Allows boxes to be defined with a minimal
+   size disk by default and to be grown to a larger size at creation time. Will
+   ignore sizes smaller than the size specified by the box metadata. Note that
+   currently there is no support for automatically resizing the filesystem to
+   take advantage of the larger disk.
+ * `emulator_path` - Explicitly select which device model emulator to use by
+   providing the path, e.g. `/usr/bin/qemu-system-x86_64`. This is especially
+   useful on systems that fail to select it automatically based on
+   `machine_arch`, which then results in a capability error.
+ * `boot` - Changes the boot order and enables the boot menu. Possible options
+   are "hd", "network", "cdrom". Defaults to "hd" with the boot menu disabled.
+   When "network" is set without "hd", only all NICs will be tried; see below
+   for more detail.
+ * `nic_adapter_count` - Defaults to '8'. The only use case for increasing this
+   count is for VMs that virtualize switches such as Cumulus Linux. Max value
+   for Cumulus Linux VMs is 33.
+ * `uuid` - Force a domain UUID. Defaults to a value autogenerated by libvirt if
+   not set.
+ * `suspend_mode` - What is done on vagrant suspend. Possible values: 'pause',
+   'managedsave'. Pause mode executes the equivalent of `virsh suspend`, which
+   just pauses execution of a VM without freeing resources. Managed save mode
+   does the equivalent of `virsh managedsave`, which frees resources while
+   suspending a domain.
  * `tpm_model` - The model of the TPM to which you wish to connect.
  * `tpm_type` - The type of TPM device to which you are connecting.
  * `tpm_path` - The path to the TPM device on the host system.
- * `dtb` - The device tree blob file, mostly used for non-x86 platforms. In case the device tree isn't added in-line to the kernel, it can be manually specified here.
- * `autostart` - Automatically start the domain when the host boots. Defaults to 'false'.
- * `channel` - [libvirt channels](https://libvirt.org/formatdomain.html#elementCharChannel). Configure a private communication channel between the host and guest, e.g. for use by the [qemu guest agent](http://wiki.libvirt.org/page/Qemu_guest_agent) and the Spice/QXL graphics type.
+ * `dtb` - The device tree blob file, mostly used for non-x86 platforms. In case
+   the device tree isn't added in-line to the kernel, it can be manually
+   specified here.
+ * `autostart` - Automatically start the domain when the host boots. Defaults to
+   'false'.
+ * `channel` - [libvirt
+   channels](https://libvirt.org/formatdomain.html#elementCharChannel).
+   Configure a private communication channel between the host and guest, e.g.
+   for use by the [qemu guest
+   agent](http://wiki.libvirt.org/page/Qemu_guest_agent) and the Spice/QXL
+   graphics type.
 
  Specific domain settings can be set for each domain separately in multi-VM
  environment. Example below shows a part of Vagrantfile, where specific options
@@ -244,10 +341,10 @@ Vagrant.configure("2") do |config|
    # ...
  ```
 
- The following example shows part of a Vagrantfile that enables the VM to
- boot from a network interface first and a hard disk second. This could be
- used to run VMs that are meant to be a PXE booted machines. Be aware that
- if `hd` is not specified as a boot option, it will never be tried.
+ The following example shows part of a Vagrantfile that enables the VM to boot
+ from a network interface first and a hard disk second. This could be used to
+ run VMs that are meant to be PXE-booted machines. Be aware that if `hd` is not
+ specified as a boot option, it will never be tried.
 
  ```ruby
  Vagrant.configure("2") do |config|
@@ -263,32 +360,34 @@ Vagrant.configure("2") do |config|
  ```
 
  #### Reload behavior
- On vagrant reload the following domain specific attributes are updated in defined domain:
-
- * `disk_bus` - Is updated only on disks. It skips cdroms.
- * `nic_model_type` - Updated.
- * `memory` - Updated.
- * `cpus` - Updated.
- * `nested` - Updated.
- * `cpu_mode` - Updated. Pay attention that custom mode is not supported.
- * `graphics_type` - Updated.
- * `graphics_port` - Updated.
- * `graphics_ip` - Updated.
- * `graphics_passwd` - Updated.
- * `graphics_autoport` - Updated.
- * `keymap` - Updated.
- * `video_type` - Updated.
- * `video_vram` - Updated.
- * `tpm_model` - Updated.
- * `tpm_type` - Updated.
- * `tpm_path` - Updated.
 
+ On `vagrant reload` the following domain-specific attributes are updated in
+ the defined domain:
+
+ * `disk_bus` - Is updated only on disks. It skips CDROMs
+ * `nic_model_type` - Updated
+ * `memory` - Updated
+ * `cpus` - Updated
+ * `nested` - Updated
+ * `cpu_mode` - Updated. Note that custom mode is not supported
+ * `graphics_type` - Updated
+ * `graphics_port` - Updated
+ * `graphics_ip` - Updated
+ * `graphics_passwd` - Updated
+ * `graphics_autoport` - Updated
+ * `keymap` - Updated
+ * `video_type` - Updated
+ * `video_vram` - Updated
+ * `tpm_model` - Updated
+ * `tpm_type` - Updated
+ * `tpm_path` - Updated
 
  ## Networks
 
  Networking features in the form of `config.vm.network` support private networks
- concept. It supports both the virtual network switch routing types and the point to
- point Guest OS to Guest OS setting using UDP/Mcast/TCP tunnel interfaces.
+ concept. It supports both the virtual network switch routing types and the
+ point-to-point Guest OS to Guest OS setting using UDP/Mcast/TCP tunnel
+ interfaces.
 
  http://wiki.libvirt.org/page/VirtualNetworking
 
@@ -298,13 +397,12 @@ http://libvirt.org/formatdomain.html#elementsNICSMulticast
 
  http://libvirt.org/formatdomain.html#elementsNICSUDP _(in libvirt v1.2.20 and higher)_
 
- Public Network interfaces are currently implemented using the macvtap driver. The macvtap
- driver is only available with the Linux Kernel version >= 2.6.24. See the following libvirt
- documentation for the details of the macvtap usage.
+ Public Network interfaces are currently implemented using the macvtap driver.
+ The macvtap driver is only available with Linux kernel version >= 2.6.24. See
+ the following libvirt documentation for the details of macvtap usage.
 
  http://www.libvirt.org/formatdomain.html#elementsNICSDirect
 
-
  An example of network interface definitions:
 
  ```ruby
@@ -340,22 +438,22 @@ An examples of network interface definitions:
  end
  ```
 
- In example below, one network interface is configured for VM test_vm1. After
- you run `vagrant up`, VM will be accessible on IP address 10.20.30.40. So if
+ In the example below, one network interface is configured for VM `test_vm1`.
+ After you run `vagrant up`, the VM will be accessible on IP address
+ `10.20.30.40`. So if
  you install a web server via provisioner, you will be able to access your
- testing server on http://10.20.30.40 URL. But beware that this address is
+ testing server at `http://10.20.30.40`. But beware that this address is
  private to libvirt host only. It's not visible outside of the hypervisor box.
 
- If network 10.20.30.0/24 doesn't exist, provider will create it. By default
+ If network `10.20.30.0/24` doesn't exist, the provider will create it. By default,
  created networks are NATed to outside world, so your VM will be able to connect
  to the internet (if hypervisor can). And by default, DHCP is offering addresses
  on newly created networks.
 
- The second interface is created and bridged into the physical device 'eth0'.
- This mechanism uses the macvtap Kernel driver and therefore does not require
- an existing bridge device. This configuration assumes that DHCP and DNS services
- are being provided by the public network. This public interface should be reachable
- by anyone with access to the public network.
+ The second interface is created and bridged into the physical device `eth0`.
+ This mechanism uses the macvtap kernel driver and therefore does not require an
+ existing bridge device. This configuration assumes that DHCP and DNS services
+ are being provided by the public network. This public interface should be
+ reachable by anyone with access to the public network.
 
  ### Private Network Options
 
@@ -363,109 +461,144 @@ by anyone with access to the public network.
 
  There is a way to pass specific options for the libvirt provider when using
  `config.vm.network` to configure a new network interface. Each parameter name
- starts with 'libvirt__' string. Here is a list of those options:
+ starts with the `libvirt__` string. Here is a list of those options:
 
  * `:libvirt__network_name` - Name of libvirt network to connect to. By default,
    network 'default' is used.
  * `:libvirt__netmask` - Used only together with `:ip` option. Default is
    '255.255.255.0'.
- * `:libvirt__host_ip` - Adress to use for the host (not guest).
-   Default is first possible address (after network address).
+ * `:libvirt__host_ip` - Address to use for the host (not guest). Default is
+   the first possible address (after the network address).
  * `:libvirt__dhcp_enabled` - If DHCP will offer addresses, or not. Used only
    when creating new network. Default is true.
- * `:libvirt__dhcp_start` - First address given out via DHCP.
-   Default is third address in range (after network name and gateway).
- * `:libvirt__dhcp_stop` - Last address given out via DHCP.
-   Default is last possible address in range (before broadcast address).
- * `:libvirt__dhcp_bootp_file` - The file to be used for the boot image.
-   Used only when dhcp is enabled.
- * `:libvirt__dhcp_bootp_server` - The server that runs the DHCP server.
-   Used only when dhcp is enabled.By default is the same host that runs the DHCP server.
+ * `:libvirt__dhcp_start` - First address given out via DHCP. Default is the
+   third address in range (after the network name and gateway).
+ * `:libvirt__dhcp_stop` - Last address given out via DHCP. Default is the last
+   possible address in range (before the broadcast address).
+ * `:libvirt__dhcp_bootp_file` - The file to be used for the boot image. Used
+   only when dhcp is enabled.
+ * `:libvirt__dhcp_bootp_server` - The server that runs the DHCP server. Used
+   only when dhcp is enabled. By default it is the same host that runs the DHCP
+   server.
  * `:libvirt__adapter` - Number specifying the sequence number of the interface.
- * `:libvirt__forward_mode` - Specify one of `veryisolated`, `none`, `nat` or `route` options.
-   This option is used only when creating new network. Mode `none` will create
-   isolated network without NATing or routing outside. You will want to use
-   NATed forwarding typically to reach networks outside of hypervisor. Routed
-   forwarding is typically useful to reach other networks within hypervisor.
-   `veryisolated` described [here](https://libvirt.org/formatnetwork.html#examplesNoGateway).
-   By default, option `nat` is used.
+ * `:libvirt__forward_mode` - Specify one of the `veryisolated`, `none`, `nat`
+   or `route` options. This option is used only when creating a new network.
+   Mode `none` will create an isolated network without NATing or routing
+   outside. You will typically want NATed forwarding to reach networks outside
+   of the hypervisor. Routed forwarding is typically useful to reach other
+   networks within the hypervisor. `veryisolated` is described
+   [here](https://libvirt.org/formatnetwork.html#examplesNoGateway). By
+   default, option `nat` is used.
  * `:libvirt__forward_device` - Name of the interface/device where the network
    should be forwarded (NATed or routed). Used only when creating a new network.
    By default, all physical interfaces are used.
- * `:libvirt__tunnel_type` - Set to 'udp' if using UDP unicast tunnel mode (libvirt v1.2.20 or higher).
-   Set this to either "server" or "client" for tcp tunneling. Set this to 'mcast' if using multicast
-   tunneling. This configuration type uses tunnels to
-   generate point to point connections between Guests. Useful for Switch VMs like
-   Cumulus Linux. No virtual switch setting like "libvirt__network_name" applies with
-   tunnel interfaces and will be ignored if configured.
- * `:libvirt__tunnel_ip` - Sets the source IP of the libvirt tunnel interface. By
-   default this is `127.0.0.1` for TCP and UDP tunnels and `239.255.1.1` for Multicast
-   tunnels. It populates the address field in the `<source address="XXX">` of the
+ * `:libvirt__tunnel_type` - Set to 'udp' if using UDP unicast tunnel mode
+   (libvirt v1.2.20 or higher). Set this to either "server" or "client" for tcp
+   tunneling. Set this to 'mcast' if using multicast tunneling. This
+   configuration type uses tunnels to generate point-to-point connections
+   between Guests. Useful for Switch VMs like Cumulus Linux. No virtual switch
+   setting like `libvirt__network_name` applies with tunnel interfaces and will
+   be ignored if configured.
+ * `:libvirt__tunnel_ip` - Sets the source IP of the libvirt tunnel interface.
+   By default this is `127.0.0.1` for TCP and UDP tunnels and `239.255.1.1` for
+   Multicast tunnels. It populates the address field in the `<source
+   address="XXX">` of the interface xml configuration.
+ * `:libvirt__tunnel_port` - Sets the source port the tcp/udp/mcast tunnel will
+   use. This port information is placed in the `<source port=XXX/>` section of the
  interface xml configuration.
- * `:libvirt__tunnel_port` - Sets the source port the tcp/udp/mcast tunnel
-   with use. This port information is placed in the `<source port=XXX/>` section of
-   interface xml configuration.
  * `:libvirt__tunnel_local_port` - Sets the local port used by the udp tunnel
- interface type. It populates the port field in the `<local port=XXX">` section of the
- interface xml configuration. _(This feature only works in libvirt 1.2.20 and higher)_
+   interface type. It populates the port field in the `<local port=XXX">`
+   section of the interface xml configuration. _(This feature only works in
+   libvirt 1.2.20 and higher)_
  * `:libvirt__tunnel_local_ip` - Sets the local IP used by the udp tunnel
- interface type. It populates the ip entry of the `<local address=XXX">` section of
- the interface xml configuration. _(This feature only works in libvirt 1.2.20 and higher)_
+   interface type. It populates the ip entry of the `<local address=XXX">`
+   section of the interface xml configuration. _(This feature only works in
+   libvirt 1.2.20 and higher)_
  * `:libvirt__guest_ipv6` - Enable or disable guest-to-guest IPv6 communication.
- See [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and [here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8) for for more information.
- * `:libvirt__iface_name` - Define a name for the private network interface. With this feature one can [simulate physical link failures](https://github.com/vagrant-libvirt/vagrant-libvirt/pull/498)
- * `:mac` - MAC address for the interface. *Note: specify this in lowercase since Vagrant network scripts assume it will be!*
- * `:model_type` - parameter specifies the model of the network adapter when you create a domain value by default virtio KVM believe possible values, see the documentation for libvirt
-
+   See [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and
+   [here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8)
+   for more information.
+ * `:libvirt__iface_name` - Define a name for the private network interface.
+   With this feature one can [simulate physical link
+   failures](https://github.com/vagrant-libvirt/vagrant-libvirt/pull/498).
+ * `:mac` - MAC address for the interface. *Note: specify this in lowercase
+   since Vagrant network scripts assume it will be!*
+ * `:model_type` - Specifies the model of the network adapter used when
+   creating a domain. Defaults to virtio. For possible values, see the
+   documentation for libvirt.
 
  When the option `:libvirt__dhcp_enabled` is set to 'false' it shouldn't matter
  whether the virtual network contains a DHCP server or not and vagrant-libvirt
- should not fail on it. The only situation where vagrant-libvirt should fail
- is when DHCP is requested but isn't configured on a matching already existing
+ should not fail on it. The only situation where vagrant-libvirt should fail is
+ when DHCP is requested but isn't configured on a matching already existing
  virtual network.
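 
  To make these options concrete, here is a minimal sketch (the network name and
  addresses are illustrative, not defaults) that attaches an extra interface to
  a routed libvirt network with DHCP disabled:
 
  ```ruby
  Vagrant.configure("2") do |config|
    config.vm.define :test_vm1 do |test_vm1|
      test_vm1.vm.network :private_network,
        :ip => "10.20.30.40",                    # static guest address
        :libvirt__network_name => "my_network",  # created if it does not exist
        :libvirt__forward_mode => "route",       # route instead of the default nat
        :libvirt__dhcp_enabled => false          # the address above is static
    end
  end
  ```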
 
  ### Public Network Options
- * `:dev` - Physical device that the public interface should use. Default is 'eth0'.
+
+ * `:dev` - Physical device that the public interface should use. Default is
+   'eth0'.
  * `:mode` - The mode in which the public interface should operate. Supported
- modes are available from the [libvirt documentation](http://www.libvirt.org/formatdomain.html#elementsNICSDirect).
+   modes are available from the [libvirt
+   documentation](http://www.libvirt.org/formatdomain.html#elementsNICSDirect).
    Default mode is 'bridge'.
  * `:type` - The type of the interface (`<interface type="#{@type}">`).
  * `:mac` - MAC address for the interface.
  * `:network_name` - Name of libvirt network to connect to.
  * `:portgroup` - Name of libvirt portgroup to connect to.
- * `:ovs` - Support to connect to an open vSwitch bridge device. Default is 'false'.
+ * `:ovs` - Support to connect to an Open vSwitch bridge device. Default is
+   'false'.
+ * `:trust_guest_rx_filters` - Support the trustGuestRxFilters attribute.
+   Details are listed [here](http://www.libvirt.org/formatdomain.html#elementsNICSDirect).
+   Default is 'false'.
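 
  For instance, a sketch of a public network bridged through the host's `eth0`
  (device name assumed; adjust to your host):
 
  ```ruby
  Vagrant.configure("2") do |config|
    config.vm.define :test_vm1 do |test_vm1|
      test_vm1.vm.network :public_network,
        :dev => "eth0",     # host device backing the macvtap interface
        :mode => "bridge",  # the default mode, shown for clarity
        :type => "direct"
    end
  end
  ```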
 
  ### Management Network
 
- Vagrant-libvirt uses a private network to perform some management operations
- on VMs. All VMs will have an interface connected to this network and
- an IP address dynamically assigned by libvirt. This is in addition to any
- networks you configure. The name and address used by this network are
- configurable at the provider level.
-
- * `management_network_name` - Name of libvirt network to which all VMs will be connected. If not specified the default is 'vagrant-libvirt'.
- * `management_network_address` - Address of network to which all VMs will be connected. Must include the address and subnet mask. If not specified the default is '192.168.121.0/24'.
- * `management_network_guest_ipv6` - Enable or disable guest-to-guest IPv6 communication. See [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and [here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8) for for more information.
-
- You may wonder how vagrant-libvirt knows the IP address a VM received.
- Libvirt doesn't provide a standard way to find out the IP address of a running
- domain. But we do know the MAC address of the virtual machine's interface on
- the management network. Libvirt is closely connected with dnsmasq, which acts as
- a DHCP server. dnsmasq writes lease information in the `/var/lib/libvirt/dnsmasq`
+ vagrant-libvirt uses a private network to perform some management operations on
+ VMs. All VMs will have an interface connected to this network and an IP address
+ dynamically assigned by libvirt. This is in addition to any networks you
+ configure. The name and address used by this network are configurable at the
+ provider level.
+
+ * `management_network_name` - Name of the libvirt network to which all VMs will
+   be connected. If not specified, the default is 'vagrant-libvirt'.
+ * `management_network_address` - Address of the network to which all VMs will
+   be connected. Must include the address and subnet mask. If not specified, the
+   default is '192.168.121.0/24'.
+ * `management_network_guest_ipv6` - Enable or disable guest-to-guest IPv6
+   communication. See
+   [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and
+   [here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8)
+   for more information.
+
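  As an illustrative sketch, both settings can be overridden in a provider block
  (the name and address below are arbitrary examples, not the defaults):
 
  ```ruby
  Vagrant.configure("2") do |config|
    config.vm.provider :libvirt do |libvirt|
      libvirt.management_network_name = "vagrant-libvirt-mgmt"  # example name
      libvirt.management_network_address = "192.168.124.0/24"   # example range
    end
  end
  ```
 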
+ You may wonder how vagrant-libvirt knows the IP address a VM received. Libvirt
+ doesn't provide a standard way to find out the IP address of a running domain.
+ But we do know the MAC address of the virtual machine's interface on the
+ management network. Libvirt is closely connected with dnsmasq, which acts as a
+ DHCP server. dnsmasq writes lease information in the `/var/lib/libvirt/dnsmasq`
  directory. Vagrant-libvirt looks for the MAC address in this file and extracts
  the corresponding IP address.
 
  ## Additional Disks
 
- You can create and attach additional disks to a VM via `libvirt.storage :file`. It has a number of options:
+ You can create and attach additional disks to a VM via `libvirt.storage :file`.
+ It has a number of options:
 
- * `path` - Location of the disk image. If unspecified, a path is automtically chosen in the same storage pool as the VMs primary disk.
- * `device` - Name of the device node the disk image will have in the VM, e.g. *vdb*. If unspecified, the next available device is chosen.
+ * `path` - Location of the disk image. If unspecified, a path is automatically
+   chosen in the same storage pool as the VM's primary disk.
+ * `device` - Name of the device node the disk image will have in the VM, e.g.
+   *vdb*. If unspecified, the next available device is chosen.
  * `size` - Size of the disk image. If unspecified, defaults to 10G.
  * `type` - Type of disk image to create. Defaults to *qcow2*.
  * `bus` - Type of bus to connect device to. Defaults to *virtio*.
- * `cache` - Cache mode to use, e.g. `none`, `writeback`, `writethrough` (see the [libvirt documentation for possible values](http://libvirt.org/formatdomain.html#elementsDisks) or [here](https://www.suse.com/documentation/sles11/book_kvm/data/sect1_chapter_book_kvm.html) for a fuller explanation). Defaults to *default*.
- * `allow_existing` - Set to true if you want to allow the VM to use a pre-existing disk. This is useful for sharing disks between VMs, e.g. in order to simulate shared SAN storage. Shared disks removed only manually. If not exists - will created. If exists - using existed.
+ * `cache` - Cache mode to use, e.g. `none`, `writeback`, `writethrough` (see
+   the [libvirt documentation for possible
+   values](http://libvirt.org/formatdomain.html#elementsDisks) or
+   [here](https://www.suse.com/documentation/sles11/book_kvm/data/sect1_chapter_book_kvm.html)
+   for a fuller explanation). Defaults to *default*.
+ * `allow_existing` - Set to true if you want to allow the VM to use a
+   pre-existing disk. If the disk doesn't exist it will be created.
+   Disks with this option set to true need to be removed manually.
+ * `shareable` - Set to true if you want to simulate shared SAN storage.
 
  The following example creates two additional disks.
 
@@ -478,18 +611,32 @@ Vagrant.configure("2") do |config|
  end
  ```
 
+ For shared SAN storage to work, the following example can be used:
+
+ ```ruby
+ Vagrant.configure("2") do |config|
+   config.vm.provider :libvirt do |libvirt|
+     libvirt.storage :file, :size => '20G', :path => 'my_shared_disk.img', :allow_existing => true, :shareable => true, :type => 'raw'
+   end
+ end
+ ```
+
  ### Reload behavior
 
- On vagrant reload the following additional disk attributes are updated in defined domain:
+ On `vagrant reload` the following additional disk attributes are updated in
+ the defined domain:
 
- * `bus` - Updated. Uses `device` as a search marker. It is not required to define `device`, but it's recommended. If `device` is defined then the order of addtitional disk definition becomes irrelevant.
+ * `bus` - Updated. Uses `device` as a search marker. It is not required to
+   define `device`, but it's recommended. If `device` is defined then the order
+   of additional disk definitions becomes irrelevant.
 
  ## CDROMs
 
- You can attach up to four (4) CDROMs to a VM via `libvirt.storage :file, :device => :cdrom`. Available options are:
+ You can attach up to four CDROMs to a VM via `libvirt.storage :file,
+ :device => :cdrom`. Available options are:
 
  * `path` - The path to the iso to be used for the CDROM drive.
- * `dev` - The device to use (`hda`, `hdb`, `hdc`, or `hdd`). This will be automatically determined if unspecified.
+ * `dev` - The device to use (`hda`, `hdb`, `hdc`, or `hdd`). This will be
+   automatically determined if unspecified.
  * `bus` - The bus to use for the CDROM drive. Defaults to `ide`.
 
  The following example creates three CDROM drives in the VM:
@@ -506,8 +653,8 @@ end
 
  ## Input
 
- You can specify multiple inputs to the VM via `libvirt.input`. Available options are
- listed below. Note that both options are required:
+ You can specify multiple inputs to the VM via `libvirt.input`. Available
+ options are listed below. Note that both options are required (see the sketch
+ after the list):
 
  * `type` - The type of the input
  * `bus` - The bus of the input
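 
  For example, a minimal sketch that adds a virtio mouse:
 
  ```ruby
  Vagrant.configure("2") do |config|
    config.vm.provider :libvirt do |libvirt|
      # Both options are required for each input device
      libvirt.input :type => "mouse", :bus => "virtio"
    end
  end
  ```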
@@ -526,15 +673,18 @@ end
 
  ## PCI device passthrough
 
- You can specify multiple PCI devices to passthrough to the VM via `libvirt.pci`. Available options are listed below. Note that all options are required:
+ You can specify multiple PCI devices to pass through to the VM via
+ `libvirt.pci`. Available options are listed below. Note that all options are
+ required:
 
  * `bus` - The bus of the PCI device
  * `slot` - The slot of the PCI device
  * `function` - The function of the PCI device
 
- You can extract that information from output of `lspci` command. First characters of each line are in format `[<bus>]:[<slot>].[<func>]`. Example
+ You can extract that information from the output of the `lspci` command. The
+ first characters of each line are in the format `[<bus>]:[<slot>].[<func>]`.
+ For example:
 
- ```
+ ```shell
  $ lspci | grep NVIDIA
  03:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX TITAN Black] (rev a1)
  ```
@@ -552,13 +702,30 @@ Vagrant.configure("2") do |config|
    end
  end
  ```
 
+ ## Random number generator passthrough
+
+ You can pass through `/dev/random` to your VM by configuring the domain like
+ this:
+
+ ```ruby
+ Vagrant.configure("2") do |config|
+   config.vm.provider :libvirt do |libvirt|
+     # Pass through /dev/random from the host to the VM
+     libvirt.random :model => 'random'
+   end
+ end
+ ```
+
+ At the moment only the `random` backend is supported.
+
  ## CPU features
 
- You can specify CPU feature policies via `libvirt.cpu_feature`. Available options are
- listed below. Note that both options are required:
+ You can specify CPU feature policies via `libvirt.cpu_feature`. Available
+ options are listed below. Note that both options are required:
 
- * `name` - The name of the feature for the chosen CPU (see libvirts `cpu_map.xml`)
- * `policy` - The policy for this feature (one of `force`, `require`, `optional`, `disable` and `forbid` - see libvirt documentation)
+ * `name` - The name of the feature for the chosen CPU (see libvirt's
+   `cpu_map.xml`)
+ * `policy` - The policy for this feature (one of `force`, `require`,
+   `optional`, `disable` and `forbid`; see the libvirt documentation)
 
  ```ruby
  Vagrant.configure("2") do |config|
@@ -575,14 +742,16 @@ end
 
  ## USB device passthrough
 
- You can specify multiple USB devices to passthrough to the VM via `libvirt.usb`. The device can be specified by the following options:
+ You can specify multiple USB devices to pass through to the VM via
+ `libvirt.usb`. The device can be specified by the following options:
 
  * `bus` - The USB bus ID, e.g. "1"
  * `device` - The USB device ID, e.g. "2"
  * `vendor` - The USB device's vendor ID (VID), e.g. "0x1234"
  * `product` - The USB device's product ID (PID), e.g. "0xabcd"
 
- At least one of these has to be specified, and `bus` and `device` may only be used together.
+ At least one of these has to be specified, and `bus` and `device` may only be
+ used together.
 
  The example values above match the device from the following output of `lsusb`:
 
@@ -592,20 +761,23 @@ Bus 001 Device 002: ID 1234:abcd Example device
 
  Additionally, the following options can be used:
 
- * `startupPolicy` - Is passed through to libvirt and controls if the device has to exist.
- libvirt currently allows the following values: "mandatory", "requisite", "optional".
+ * `startupPolicy` - Is passed through to libvirt and controls whether the
+   device has to exist. libvirt currently allows the following values:
+   "mandatory", "requisite", "optional".
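 
  Putting it together, a sketch that passes through the example device above by
  vendor and product ID (the IDs are the illustrative values from the `lsusb`
  output, not a real device):
 
  ```ruby
  Vagrant.configure("2") do |config|
    config.vm.provider :libvirt do |libvirt|
      # Match the device by VID/PID; skip it if absent rather than failing
      libvirt.usb :vendor => "0x1234", :product => "0xabcd", :startupPolicy => "optional"
    end
  end
  ```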
 
  ## No box and PXE boot
 
- There is support for PXE booting VMs with no disks as well as PXE booting VMs with blank disks. There are some limitations:
+ There is support for PXE booting VMs with no disks as well as PXE booting VMs
+ with blank disks. There are some limitations:
 
  * Requires Vagrant 1.6.0 or newer
  * No provisioning scripts are run
  * No network configuration is being applied to the VM
  * No SSH connection can be made
- * ```vagrant halt``` will only work cleanly if the VM handles ACPI shutdown signals
+ * `vagrant halt` will only work cleanly if the VM handles ACPI shutdown signals
 
- In short, VMs without a box can be created, halted and destroyed but all other functionality cannot be used.
+ In short, VMs without a box can be created, halted and destroyed, but all other
+ functionality cannot be used.
 
  An example for a PXE booted VM with no disks whatsoever:
 
@@ -635,50 +807,63 @@ end
 
  ## SSH Access To VM
 
- vagrant-libvirt supports vagrant's [standard ssh settings](https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html).
+ vagrant-libvirt supports vagrant's [standard ssh
+ settings](https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html).
 
  ## Forwarded Ports
 
- vagrant-libvirt supports Forwarded Ports via ssh port forwarding. Please note that due to a well known limitation only the TCP protocol is supported. For each `forwarded_port` directive you specify in your Vagrantfile, vagrant-libvirt will maintain an active ssh process for the lifetime of the VM.
+ vagrant-libvirt supports forwarded ports via ssh port forwarding. Please note
+ that due to a well-known limitation only the TCP protocol is supported. For
+ each `forwarded_port` directive you specify in your Vagrantfile,
+ vagrant-libvirt will maintain an active ssh process for the lifetime of the VM.
 
- vagrant-libvirt supports an additional `forwarded_port` option
- `gateway_ports` which defaults to `false`, but can be set to `true` if
- you want the forwarded port to be accessible from outside the Vagrant
- host. In this case you should also set the `host_ip` option to `'*'`
- since it defaults to `'localhost'`.
+ vagrant-libvirt supports an additional `forwarded_port` option `gateway_ports`,
+ which defaults to `false` but can be set to `true` if you want the forwarded
+ port to be accessible from outside the Vagrant host. In this case you should
+ also set the `host_ip` option to `'*'`, since it defaults to `'localhost'`.
 
- You can also provide a custom adapter to forward from by 'adapter' option. Default is 'eth0'.
+ You can also provide a custom adapter to forward from via the 'adapter' option.
+ The default is `eth0`.
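 
  For instance, a sketch that forwards guest port 80 to host port 8080 and makes
  it reachable from outside the Vagrant host (the port numbers are arbitrary
  examples):
 
  ```ruby
  Vagrant.configure("2") do |config|
    config.vm.network :forwarded_port, :guest => 80, :host => 8080,
      :host_ip => "*",         # listen on all host addresses
      :gateway_ports => true   # allow access from outside the Vagrant host
  end
  ```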
 
652
828
  ## Synced Folders
653
829
 
654
830
  vagrant-libvirt supports bidirectional synced folders via nfs or 9p and
655
- unidirectional via rsync. The default is nfs. Vagrant automatically syncs
656
- the project folder on the host to */vagrant* in the guest. You can also
657
- configure additional synced folders.
831
+ unidirectional via rsync. The default is nfs. Vagrant automatically syncs the
832
+ project folder on the host to `/vagrant` in the guest. You can also configure
833
+ additional synced folders.
658
834
 
659
- You can change the synced folder type for */vagrant* by explicity configuring
835
+ You can change the synced folder type for `/vagrant` by explicity configuring
660
836
  it an setting the type, e.g.
661
837
 
662
- config.vm.synced_folder './', '/vagrant', type: 'rsync'
838
+ ```shell
839
+ config.vm.synced_folder './', '/vagrant', type: 'rsync'
840
+ ```
 
- or
+ or
 
- config.vm.synced_folder './', '/vagrant', type: '9p', disabled: false, accessmode: "squash", owner: "vagrant"
+ ```ruby
+ config.vm.synced_folder './', '/vagrant', type: '9p', disabled: false, accessmode: "squash", owner: "vagrant"
+ ```
 
- or
+ or
 
- config.vm.synced_folder './', '/vagrant', type: '9p', disabled: false, accessmode: "mapped", mount: false
+ ```ruby
+ config.vm.synced_folder './', '/vagrant', type: '9p', disabled: false, accessmode: "mapped", mount: false
+ ```
 
- For 9p shares, a `mount: false` option allows to define synced folders without mounting them at boot.
+ For 9p shares, the `mount: false` option lets you define synced folders
+ without mounting them at boot.
 
- **SECURITY NOTE:** for remote libvirt, nfs synced folders requires a bridged public network interface and you must connect to libvirt via ssh.
+ **SECURITY NOTE:** for remote libvirt, nfs synced folders require a bridged
+ public network interface and you must connect to libvirt via ssh.
 
  ## Customized Graphics
 
  vagrant-libvirt supports customizing the display and video settings of the
- managed guest. This is probably most useful for VNC-type displays with multiple
- guests. It lets you specify the exact port for each guest to use deterministically.
+ managed guest. This is probably most useful for VNC-type displays with
+ multiple guests. It lets you specify the exact port for each guest to use
+ deterministically.
 
  Here is an example of using custom display options:
 
@@ -721,10 +906,11 @@ end
  ## Libvirt communication channels
 
  For certain functionality to be available within a guest, a private
- communication channel must be established with the host. Two notable examples of
- this are the qemu guest agent, and the Spice/QXL graphics type.
+ communication channel must be established with the host. Two notable examples
+ of this are the qemu guest agent, and the Spice/QXL graphics type.
 
- Below is a simple example which exposes a virtio serial channel to the guest. Note: in a multi-VM environment, the channel would be created for all VMs.
+ Below is a simple example which exposes a virtio serial channel to the guest.
+ Note: in a multi-VM environment, the channel would be created for all VMs.
 
 ```ruby
 Vagrant.configure(2) do |config|
@@ -734,77 +920,99 @@ vagrant.configure(2) do |config|
 end
 ```
 
- Below is the syntax for creating a spicevmc channel for use by a qxl graphics card.
+ Below is the syntax for creating a spicevmc channel for use by a qxl graphics
+ card.
 
 ```ruby
 Vagrant.configure(2) do |config|
   config.vm.provider :libvirt do |libvirt|
     libvirt.channel :type => 'spicevmc', :target_name => 'com.redhat.spice.0', :target_type => 'virtio'
   end
 end
 ```
 
- These settings can be specified on a per-VM basis, however the per-guest settings will OVERRIDE any global 'config' setting. In the following example, we create 3 VM with the following configuration:
+ These settings can be specified on a per-VM basis; however, the per-guest
+ settings will OVERRIDE any global `config` setting. In the following example,
+ we create three VMs with the following configuration:
 
- master: No channel settings specified, so we default to the provider setting of a single virtio guest agent channel.
- node1: Override the channel setting, setting both the guest agent channel, and a spicevmc channel
- node2: Override the channel setting, setting both the guest agent channel, and a 'guestfwd' channel. TCP traffic sent by the guest to the given IP address and port is forwarded to the host socket /tmp/foo. Note: this device must be unique for each VM.
+ * **master**: No channel settings specified, so we default to the provider
+ setting of a single virtio guest agent channel.
+ * **node1**: Override the channel setting, setting both the guest agent
+ channel and a spicevmc channel.
+ * **node2**: Override the channel setting, setting both the guest agent
+ channel and a `guestfwd` channel. TCP traffic sent by the guest to the given
+ IP address and port is forwarded to the host socket `/tmp/foo`. Note: this
+ device must be unique for each VM.
 
- Example
+ For example:
 
 ```ruby
 Vagrant.configure(2) do |config|
-  config.vm.box = "fedora/23-cloud-base"
+  config.vm.box = "fedora/24-cloud-base"
   config.vm.provider :libvirt do |libvirt|
     libvirt.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
   end
 
   config.vm.define "master" do |master|
     master.vm.provider :libvirt do |domain|
       domain.memory = 1024
     end
   end
   config.vm.define "node1" do |node1|
     node1.vm.provider :libvirt do |domain|
       domain.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
       domain.channel :type => 'spicevmc', :target_name => 'com.redhat.spice.0', :target_type => 'virtio'
     end
   end
   config.vm.define "node2" do |node2|
     node2.vm.provider :libvirt do |domain|
       domain.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
       domain.channel :type => 'unix', :target_type => 'guestfwd', :target_address => '192.0.2.42', :target_port => '4242',
         :source_path => '/tmp/foo'
     end
   end
 end
 ```
 
  ## Box Format
 
- You can view an example box in the [example_box/directory](https://github.com/vagrant-libvirt/vagrant-libvirt/tree/master/example_box). That directory also contains instructions on how to build a box.
+ You can view an example box in the
+ [`example_box` directory](https://github.com/vagrant-libvirt/vagrant-libvirt/tree/master/example_box).
+ That directory also contains instructions on how to build a box.
 
  The box is a tarball containing:
 
- * qcow2 image file named `box.img`.
- * `metadata.json` file describing box image (provider, virtual_size, format).
- * `Vagrantfile` that does default settings for the provider-specific configuration for this provider.
+ * a qcow2 image file named `box.img`
+ * a `metadata.json` file describing the box image (`provider`,
+ `virtual_size`, `format`); see the sketch below
+ * a `Vagrantfile` that sets the provider-specific configuration defaults for
+ this provider
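
As an illustration, a minimal `metadata.json` for such a box might look like
this; the values are an assumption for a 40 GB qcow2 image, not something the
plugin mandates:

```json
{
  "provider": "libvirt",
  "format": "qcow2",
  "virtual_size": 40
}
```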
 
  ## Create Box
- To create a vagrant-libvirt box from a qcow2 image, run `create_box.sh` (located in the tools directory):
 
- ```$ create_box.sh ubuntu14.qcow2```
+ To create a vagrant-libvirt box from a qcow2 image, run `create_box.sh`
+ (located in the tools directory):
 
- You can also create a box by using [Packer](https://packer.io). Packer templates for use with vagrant-libvirt are available at https://github.com/jakobadam/packer-qemu-templates. After cloning that project you can build a vagrant-libvirt box by running:
+ ```shell
+ $ create_box.sh ubuntu14.qcow2
+ ```
+
+ You can also create a box by using [Packer](https://packer.io). Packer
+ templates for use with vagrant-libvirt are available at
+ https://github.com/jakobadam/packer-qemu-templates. After cloning that project
+ you can build a vagrant-libvirt box by running:
 
- ``` ~/packer-qemu-templates/ubuntu$ packer build ubuntu-14.04-server-amd64-vagrant.json```
+ ```shell
+ $ cd packer-qemu-templates/ubuntu
+ $ packer build ubuntu-14.04-server-amd64-vagrant.json
+ ```
 
  ## Development
 
  To work on the `vagrant-libvirt` plugin, clone this repository and use
  [Bundler](http://gembundler.com) to get the dependencies:
 
- ```
+ ```shell
 $ git clone https://github.com/vagrant-libvirt/vagrant-libvirt.git
 $ cd vagrant-libvirt
 $ bundle install
@@ -812,15 +1020,15 @@ $ bundle install
 
  Once you have the dependencies, verify the unit tests pass with `rspec`:
 
- ```
+ ```shell
 $ bundle exec rspec spec/
 ```
 
- If those pass, you're ready to start developing the plugin. You can test
- the plugin without installing it into your Vagrant environment by just
- creating a `Vagrantfile` in the top level of this directory (it is gitignored)
- that uses it. Don't forget to add following line at the beginning of your
- `Vagrantfile` while in development mode:
+ If those pass, you're ready to start developing the plugin. You can test the
+ plugin without installing it into your Vagrant environment by just creating a
+ `Vagrantfile` in the top level of this directory (it is gitignored) that uses
+ it. Don't forget to add the following line at the beginning of your
+ `Vagrantfile` while in development mode:
 
 ```ruby
 Vagrant.require_plugin "vagrant-libvirt"
@@ -828,16 +1036,16 @@ Vagrant.require_plugin "vagrant-libvirt"
 
  Now you can use Bundler to execute Vagrant:
 
- ```
+ ```shell
 $ bundle exec vagrant up --provider=libvirt
 ```
 
- IMPORTANT NOTE: bundle is crucial. You need to use bundled vagrant.
+ **IMPORTANT NOTE:** running through `bundle exec` is crucial; you need to use
+ the bundled Vagrant, not a system-wide installation.
 
  ## Contributing
 
- 1. Fork it.
- 2. Create your feature branch (`git checkout -b my-new-feature`).
- 3. Commit your changes (`git commit -am 'Add some feature'`).
- 4. Push to the branch (`git push origin my-new-feature`).
- 5. Create new Pull Request.
+ 1. Fork it
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
+ 4. Push to the branch (`git push origin my-new-feature`)
+ 5. Create a new Pull Request