vagrant-libvirt 0.8.2 → 0.10.1

Sign up to get free protection for your applications and to get access to all the features.
Files changed (88) hide show
  1. checksums.yaml +4 -4
  2. data/README.md +51 -2079
  3. data/lib/vagrant-libvirt/action/create_domain.rb +57 -94
  4. data/lib/vagrant-libvirt/action/create_network_interfaces.rb +1 -1
  5. data/lib/vagrant-libvirt/action/create_networks.rb +3 -3
  6. data/lib/vagrant-libvirt/action/destroy_domain.rb +21 -5
  7. data/lib/vagrant-libvirt/action/destroy_networks.rb +2 -2
  8. data/lib/vagrant-libvirt/action/handle_box_image.rb +3 -1
  9. data/lib/vagrant-libvirt/action/package_domain.rb +1 -5
  10. data/lib/vagrant-libvirt/action/remove_libvirt_image.rb +3 -1
  11. data/lib/vagrant-libvirt/action/resolve_disk_settings.rb +181 -0
  12. data/lib/vagrant-libvirt/action/snapshot_delete.rb +26 -0
  13. data/lib/vagrant-libvirt/action/snapshot_restore.rb +22 -0
  14. data/lib/vagrant-libvirt/action/snapshot_save.rb +27 -0
  15. data/lib/vagrant-libvirt/action/start_domain.rb +80 -11
  16. data/lib/vagrant-libvirt/action.rb +53 -1
  17. data/lib/vagrant-libvirt/cap/snapshots.rb +12 -0
  18. data/lib/vagrant-libvirt/cap/synced_folder_9p.rb +4 -4
  19. data/lib/vagrant-libvirt/cap/synced_folder_virtiofs.rb +4 -4
  20. data/lib/vagrant-libvirt/config.rb +104 -6
  21. data/lib/vagrant-libvirt/driver.rb +108 -46
  22. data/lib/vagrant-libvirt/errors.rb +23 -3
  23. data/lib/vagrant-libvirt/plugin.rb +7 -3
  24. data/lib/vagrant-libvirt/provider.rb +1 -1
  25. data/lib/vagrant-libvirt/templates/domain.xml.erb +32 -6
  26. data/lib/vagrant-libvirt/util/byte_number.rb +0 -1
  27. data/lib/vagrant-libvirt/util/compat.rb +23 -0
  28. data/lib/vagrant-libvirt/util/domain_flags.rb +15 -0
  29. data/lib/vagrant-libvirt/util/unindent.rb +7 -0
  30. data/lib/vagrant-libvirt/util.rb +1 -0
  31. data/lib/vagrant-libvirt/version +1 -1
  32. data/locales/en.yml +28 -4
  33. data/spec/acceptance/additional_storage_spec.rb +32 -0
  34. data/spec/acceptance/package_domain_spec.rb +90 -0
  35. data/spec/acceptance/provider_settings_spec.rb +54 -0
  36. data/spec/acceptance/simple_vm_provision_via_shell_spec.rb +31 -0
  37. data/spec/acceptance/snapshots_spec.rb +41 -0
  38. data/spec/acceptance/support-skeletons/package_complex/Vagrantfile.testbox +14 -0
  39. data/spec/acceptance/support-skeletons/package_complex/scripts/sysprep.sh +32 -0
  40. data/spec/acceptance/support-skeletons/package_simple/Vagrantfile.testbox +10 -0
  41. data/spec/acceptance/two_disks_spec.rb +29 -0
  42. data/spec/acceptance/use_qemu_agent_for_connectivity_spec.rb +35 -0
  43. data/spec/spec_helper.rb +3 -0
  44. data/spec/support/acceptance/configuration.rb +21 -0
  45. data/spec/support/acceptance/context.rb +70 -0
  46. data/spec/support/acceptance/isolated_environment.rb +41 -0
  47. data/spec/support/libvirt_acceptance_context.rb +64 -0
  48. data/spec/support/libvirt_context.rb +4 -0
  49. data/spec/support/sharedcontext.rb +1 -0
  50. data/spec/unit/action/cleanup_on_failure_spec.rb +0 -2
  51. data/spec/unit/action/create_domain_spec/sysinfo.xml +66 -0
  52. data/spec/unit/action/create_domain_spec/sysinfo_only_required.xml +49 -0
  53. data/spec/unit/action/create_domain_spec.rb +188 -140
  54. data/spec/unit/action/create_domain_volume_spec.rb +0 -3
  55. data/spec/unit/action/destroy_domain_spec.rb +43 -10
  56. data/spec/unit/action/forward_ports_spec.rb +0 -1
  57. data/spec/unit/action/handle_box_image_spec.rb +31 -14
  58. data/spec/unit/action/package_domain_spec.rb +0 -5
  59. data/spec/unit/action/remove_libvirt_image_spec.rb +43 -0
  60. data/spec/unit/action/resolve_disk_settings_spec/default_domain.xml +43 -0
  61. data/spec/unit/action/resolve_disk_settings_spec/default_no_aliases.xml +42 -0
  62. data/spec/unit/action/{create_domain_spec → resolve_disk_settings_spec}/default_system_storage_pool.xml +0 -0
  63. data/spec/unit/action/resolve_disk_settings_spec/multi_volume_box.xml +55 -0
  64. data/spec/unit/action/resolve_disk_settings_spec/multi_volume_box_additional_and_custom_no_aliases.xml +67 -0
  65. data/spec/unit/action/resolve_disk_settings_spec/multi_volume_box_additional_storage.xml +67 -0
  66. data/spec/unit/action/resolve_disk_settings_spec.rb +385 -0
  67. data/spec/unit/action/start_domain_spec/clock_timer_removed.xml +38 -0
  68. data/spec/unit/action/start_domain_spec/clock_timer_rtc_tsc.xml +39 -0
  69. data/spec/unit/action/start_domain_spec/existing_added_nvram.xml +62 -0
  70. data/spec/unit/action/start_domain_spec/nvram_domain.xml +64 -0
  71. data/spec/unit/action/start_domain_spec/nvram_domain_other_setting.xml +64 -0
  72. data/spec/unit/action/start_domain_spec/nvram_domain_removed.xml +64 -0
  73. data/spec/unit/action/start_domain_spec.rb +122 -22
  74. data/spec/unit/action/wait_till_up_spec.rb +0 -2
  75. data/spec/unit/action_spec.rb +88 -3
  76. data/spec/unit/cap/synced_folder_9p_spec.rb +120 -0
  77. data/spec/unit/cap/synced_folder_virtiofs_spec.rb +120 -0
  78. data/spec/unit/config_spec.rb +153 -6
  79. data/spec/unit/driver_spec.rb +3 -1
  80. data/spec/unit/plugin_spec.rb +42 -0
  81. data/spec/unit/templates/domain_all_settings.xml +15 -6
  82. data/spec/unit/templates/domain_scsi_bus_storage.xml +44 -0
  83. data/spec/unit/templates/domain_scsi_device_storage.xml +44 -0
  84. data/spec/unit/templates/domain_scsi_multiple_controllers_storage.xml +130 -0
  85. data/spec/unit/templates/domain_spec.rb +105 -21
  86. data/spec/unit/util/byte_number_spec.rb +1 -1
  87. metadata +169 -79
  88. data/spec/unit/provider_spec.rb +0 -11
data/README.md CHANGED
@@ -1,2105 +1,71 @@
1
1
  # Vagrant Libvirt Provider
2
2
 
3
- [![Join the chat at https://gitter.im/vagrant-libvirt/vagrant-libvirt](https://badges.gitter.im/vagrant-libvirt/vagrant-libvirt.svg)](https://gitter.im/vagrant-libvirt/vagrant-libvirt?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
4
- [![Build Status](https://github.com/vagrant-libvirt/vagrant-libvirt/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/vagrant-libvirt/vagrant-libvirt/actions/workflows/unit-tests.yml)
5
- [![Coverage Status](https://coveralls.io/repos/github/vagrant-libvirt/vagrant-libvirt/badge.svg?branch=master)](https://coveralls.io/github/vagrant-libvirt/vagrant-libvirt?branch=master)
6
- [![Gem Version](https://badge.fury.io/rb/vagrant-libvirt.svg)](https://badge.fury.io/rb/vagrant-libvirt)
7
-
8
- This is a [Vagrant](http://www.vagrantup.com) plugin that adds a
9
- [Libvirt](http://libvirt.org) provider to Vagrant, allowing Vagrant to
10
- control and provision machines via Libvirt toolkit.
11
-
12
- **Note:** Actual version is still a development one. Feedback is welcome and
13
- can help a lot :-)
14
-
15
- ## Index
16
-
17
- <!-- note in vim set "let g:vmt_list_item_char='-'" to generate the correct output -->
18
- <!-- vim-markdown-toc GFM -->
19
-
20
- * [Features](#features)
21
- * [Future work](#future-work)
22
- * [Using the container image](#using-the-container-image)
23
- * [Using Docker](#using-docker)
24
- * [Using Podman](#using-podman)
25
- * [Extending the Docker image with additional vagrant plugins](#extending-the-docker-image-with-additional-vagrant-plugins)
26
- * [Installation](#installation)
27
- * [Possible problems with plugin installation on Linux](#possible-problems-with-plugin-installation-on-linux)
28
- * [Additional Notes for Fedora and Similar Linux Distributions](#additional-notes-for-fedora-and-similar-linux-distributions)
29
- * [Vagrant Project Preparation](#vagrant-project-preparation)
30
- * [Add Box](#add-box)
31
- * [Create Vagrantfile](#create-vagrantfile)
32
- * [Start VM](#start-vm)
33
- * [How Project Is Created](#how-project-is-created)
34
- * [Libvirt Configuration](#libvirt-configuration)
35
- * [Provider Options](#provider-options)
36
- * [Domain Specific Options](#domain-specific-options)
37
- * [Reload behavior](#reload-behavior)
38
- * [Networks](#networks)
39
- * [Private Network Options](#private-network-options)
40
- * [Public Network Options](#public-network-options)
41
- * [Management Network](#management-network)
42
- * [Additional Disks](#additional-disks)
43
- * [Reload behavior](#reload-behavior-1)
44
- * [CDROMs](#cdroms)
45
- * [Input](#input)
46
- * [PCI device passthrough](#pci-device-passthrough)
47
- * [Using USB Devices](#using-usb-devices)
48
- * [USB Controller Configuration](#usb-controller-configuration)
49
- * [USB Device Passthrough](#usb-device-passthrough)
50
- * [USB Redirector Devices](#usb-redirector-devices)
51
- * [Filter for USB Redirector Devices](#filter-for-usb-redirector-devices)
52
- * [Serial Console Devices](#serial-console-devices)
53
- * [Random number generator passthrough](#random-number-generator-passthrough)
54
- * [Watchdog device](#watchdog-device)
55
- * [Smartcard device](#smartcard-device)
56
- * [Hypervisor Features](#hypervisor-features)
57
- * [Clock](#clock)
58
- * [CPU features](#cpu-features)
59
- * [Memory Backing](#memory-backing)
60
- * [No box and PXE boot](#no-box-and-pxe-boot)
61
- * [SSH Access To VM](#ssh-access-to-vm)
62
- * [Forwarded Ports](#forwarded-ports)
63
- * [Forwarding the ssh-port](#forwarding-the-ssh-port)
64
- * [Synced Folders](#synced-folders)
65
- * [QEMU Session Support](#qemu-session-support)
66
- * [Customized Graphics](#customized-graphics)
67
- * [TPM Devices](#tpm-devices)
68
- * [Memory balloon](#memory-balloon)
69
- * [Libvirt communication channels](#libvirt-communication-channels)
70
- * [Custom command line arguments and environment variables](#custom-command-line-arguments-and-environment-variables)
71
- * [Box Formats](#box-formats)
72
- * [Version 1](#version-1)
73
- * [Version 2 (Experimental)](#version-2-experimental)
74
- * [Create Box](#create-box)
75
- * [Package Box from VM](#package-box-from-vm)
76
- * [Troubleshooting VMs](#troubleshooting-vms)
77
- * [Development](#development)
78
- * [Contributing](#contributing)
79
-
80
- <!-- vim-markdown-toc -->
81
-
82
- ## Features
83
-
84
- * Control local Libvirt hypervisors.
85
- * Vagrant `up`, `destroy`, `suspend`, `resume`, `halt`, `ssh`, `reload`,
86
- `package` and `provision` commands.
87
- * Upload box image (qcow2 format) to Libvirt storage pool.
88
- * Create volume as COW diff image for domains.
89
- * Create private networks.
90
- * Create and boot Libvirt domains.
91
- * SSH into domains.
92
- * Setup hostname and network interfaces.
93
- * Provision domains with any built-in Vagrant provisioner.
94
- * Synced folder support via `rsync`, `nfs`, `9p` or `virtiofs`.
95
- * Snapshots via [sahara](https://github.com/jedi4ever/sahara).
96
- * Package caching via
97
- [vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier/).
98
- * Use boxes from other Vagrant providers via
99
- [vagrant-mutate](https://github.com/sciurus/vagrant-mutate).
100
- * Support VMs with no box for PXE boot purposes (Vagrant 1.6 and up)
101
-
102
- ## Future work
103
-
104
- * Take a look at [open
105
- issues](https://github.com/vagrant-libvirt/vagrant-libvirt/issues?state=open).
106
-
107
- ## Using the container image
108
-
109
- Due to the number of issues encountered around compatibility between the ruby runtime environment
110
- that is part of the upstream vagrant installation and the library dependencies of libvirt that
111
- this project requires to communicate with libvirt, there is a docker image build and published.
112
-
113
- This should allow users to execute vagrant with vagrant-libvirt without needing to deal with
114
- the compatibility issues, though you may need to extend the image for your own needs should
115
- you make use of additional plugins.
116
-
117
- Note the default image contains the full toolchain required to build and install vagrant-libvirt
118
- and it's dependencies. There is also a smaller image published with the `-slim` suffix if you
119
- just need vagrant-libvirt and don't need to install any additional plugins for your environment.
120
-
121
- If you are connecting to a remote system libvirt, you may omit the
122
- `-v /var/run/libvirt/:/var/run/libvirt/` mount bind. Some distributions patch the local
123
- vagrant environment to ensure vagrant-libvirt uses `qemu:///session`, which means you
124
- may need to set the environment variable `LIBVIRT_DEFAULT_URI` to the same value if
125
- looking to use this in place of your distribution provided installation.
126
-
127
- ### Using Docker
128
-
129
- To get the image with the most recent release:
130
- ```bash
131
- docker pull vagrantlibvirt/vagrant-libvirt:latest
132
- ```
133
-
134
- ---
135
- **Note** If you want the very latest code you can use the `edge` tag instead.
136
-
137
- ```bash
138
- docker pull vagrantlibvirt/vagrant-libvirt:edge
139
- ```
140
- ---
141
-
142
- Running the image:
143
- ```bash
144
- docker run -i --rm \
145
- -e LIBVIRT_DEFAULT_URI \
146
- -v /var/run/libvirt/:/var/run/libvirt/ \
147
- -v ~/.vagrant.d:/.vagrant.d \
148
- -v $(realpath "${PWD}"):${PWD} \
149
- -w $(realpath "${PWD}") \
150
- --network host \
151
- vagrantlibvirt/vagrant-libvirt:latest \
152
- vagrant status
153
- ```
154
-
155
- It's possible to define a function in `~/.bashrc`, for example:
156
- ```bash
157
- vagrant(){
158
- docker run -i --rm \
159
- -e LIBVIRT_DEFAULT_URI \
160
- -v /var/run/libvirt/:/var/run/libvirt/ \
161
- -v ~/.vagrant.d:/.vagrant.d \
162
- -v $(realpath "${PWD}"):${PWD} \
163
- -w $(realpath "${PWD}") \
164
- --network host \
165
- vagrantlibvirt/vagrant-libvirt:latest \
166
- vagrant $@
167
- }
168
-
169
- ```
170
-
171
- ### Using Podman
172
-
173
- Preparing the podman run, only once:
174
-
175
- ```bash
176
- mkdir -p ~/.vagrant.d/{boxes,data,tmp}
177
- ```
178
- _N.B. This is needed until the entrypoint works for podman to only mount the `~/.vagrant.d` directory_
179
-
180
- To run with Podman you need to include
181
-
182
- ```bash
183
- --entrypoint /bin/bash \
184
- --security-opt label=disable \
185
- -v ~/.vagrant.d/boxes:/vagrant/boxes \
186
- -v ~/.vagrant.d/data:/vagrant/data \
187
- -v ~/.vagrant.d/tmp:/vagrant/tmp \
188
- ```
189
-
190
- for example:
191
-
192
- ```bash
193
- vagrant(){
194
- podman run -it --rm \
195
- -e LIBVIRT_DEFAULT_URI \
196
- -v /var/run/libvirt/:/var/run/libvirt/ \
197
- -v ~/.vagrant.d/boxes:/vagrant/boxes \
198
- -v ~/.vagrant.d/data:/vagrant/data \
199
- -v ~/.vagrant.d/tmp:/vagrant/tmp \
200
- -v $(realpath "${PWD}"):${PWD} \
201
- -w $(realpath "${PWD}") \
202
- --network host \
203
- --entrypoint /bin/bash \
204
- --security-opt label=disable \
205
- docker.io/vagrantlibvirt/vagrant-libvirt:latest \
206
- vagrant $@
207
- }
208
- ```
209
-
210
- Running Podman in rootless mode maps the root user inside the container to your host user so we need to bypass [entrypoint.sh](https://github.com/vagrant-libvirt/vagrant-libvirt/blob/master/entrypoint.sh) and mount persistent storage directly to `/vagrant`.
211
-
212
- ### Extending the Docker image with additional vagrant plugins
213
-
214
- By default the image published and used contains the entire tool chain required
215
- to reinstall the vagrant-libvirt plugin and it's dependencies, as this is the
216
- default behaviour of vagrant anytime a new plugin is installed. This means it
217
- should be possible to use a simple `FROM` statement and ask vagrant to install
218
- additional plugins.
219
-
220
- ```
221
- FROM vagrantlibvirt/vagrant-libvirt:latest
222
-
223
- RUN vagrant plugin install <plugin>
224
- ```
225
-
226
- ## Installation
227
-
228
- First, you should have both QEMU and Libvirt installed if you plan to run VMs
229
- on your local system. For instructions, refer to your Linux distribution's
230
- documentation.
231
-
232
- **NOTE:** Before you start using vagrant-libvirt, please make sure your Libvirt
233
- and QEMU installation is working correctly and you are able to create QEMU or
234
- KVM type virtual machines with `virsh` or `virt-manager`.
235
-
236
- Next, you must have [Vagrant
237
- installed](http://docs.vagrantup.com/v2/installation/index.html).
238
- Vagrant-libvirt supports Vagrant 2.0, 2.1 & 2.2. It should also work with earlier
239
- releases from 1.5 onwards but they are not actively tested.
240
-
241
- Check the [unit tests](https://github.com/vagrant-libvirt/vagrant-libvirt/blob/master/.github/workflows/unit-tests.yml)
242
- for the current list of tested versions.
243
-
244
- *We only test with the upstream version!* If you decide to install your distro's
245
- version and you run into problems, as a first step you should switch to upstream.
246
-
247
- Now you need to make sure your have all the build dependencies installed for
248
- vagrant-libvirt. This depends on your distro. An overview:
249
-
250
- * Ubuntu 18.10, Debian 9 and up:
251
- ```shell
252
- apt-get build-dep vagrant ruby-libvirt
253
- apt-get install qemu libvirt-daemon-system libvirt-clients ebtables dnsmasq-base
254
- apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
255
- apt-get install libguestfs-tools
256
- ```
257
-
258
- * Ubuntu 18.04, Debian 8 and older:
259
- ```shell
260
- apt-get build-dep vagrant ruby-libvirt
261
- apt-get install qemu libvirt-bin ebtables dnsmasq-base
262
- apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
263
- apt-get install libguestfs-tools
264
- ```
265
-
266
- (It is possible some users will already have libraries from the third line installed, but this is the way to make it work OOTB.)
267
-
268
- * CentOS 6, 7, Fedora 21:
269
- ```shell
270
- yum install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm libguestfs-tools
271
- ```
272
-
273
- * Fedora 22 and up:
274
- ```shell
275
- dnf install -y gcc libvirt libvirt-devel libxml2-devel make ruby-devel libguestfs-tools
276
- ```
277
-
278
- * OpenSUSE leap 15.1:
279
- ```shell
280
- zypper install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm libguestfs
281
- ```
282
-
283
- * Arch Linux: please read the related [ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt) page.
284
- ```shell
285
- pacman -S vagrant
286
- ```
287
-
288
- Now you're ready to install vagrant-libvirt using standard [Vagrant
289
- plugin](http://docs.vagrantup.com/v2/plugins/usage.html) installation methods.
290
-
291
- For some distributions you will need to specify `CONFIGURE_ARGS` variable before
292
- running `vagrant plugin install`:
293
-
294
- * Fedora 32 + upstream Vagrant:
295
- ```shell
296
- export CONFIGURE_ARGS="with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib64"
297
- ```
298
-
299
- ```shell
300
- vagrant plugin install vagrant-libvirt
301
- ```
302
-
303
- ### Possible problems with plugin installation on Linux
304
-
305
- In case of problems with building nokogiri and ruby-libvirt gem, install
306
- missing development libraries for libxslt, libxml2 and libvirt.
307
-
308
-
309
- On Ubuntu, Debian, make sure you are running all three of the `apt` commands above with `sudo`.
310
-
311
-
312
- On RedHat, Centos, Fedora, ...
313
-
314
- ```shell
315
- $ sudo dnf install libxslt-devel libxml2-devel libvirt-devel ruby-devel gcc
316
- ```
317
-
318
- On Arch Linux it is recommended to follow [steps from ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt).
319
-
320
- If have problem with installation - check your linker. It should be `ld.gold`:
321
-
322
- ```shell
323
- sudo alternatives --set ld /usr/bin/ld.gold
324
- # OR
325
- sudo ln -fs /usr/bin/ld.gold /usr/bin/ld
326
- ```
327
-
328
- If you have issues building ruby-libvirt, try the following:
329
- ```shell
330
- CONFIGURE_ARGS='with-ldflags=-L/opt/vagrant/embedded/lib with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib' GEM_HOME=~/.vagrant.d/gems GEM_PATH=$GEM_HOME:/opt/vagrant/embedded/gems PATH=/opt/vagrant/embedded/bin:$PATH vagrant plugin install vagrant-libvirt
331
- ```
332
- ### Additional Notes for Fedora and Similar Linux Distributions
333
-
334
- If you encounter the following load error when using the vagrant-libvirt plugin (note the required by libssh):
335
-
336
- ```shell
337
- /opt/vagrant/embedded/lib/ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require': /opt/vagrant/embedded/lib64/libcrypto.so.1.1: version `OPENSSL_1_1_1b' not found (required by /lib64/libssh.so.4) - /home/xxx/.vagrant.d/gems/2.4.6/gems/ruby-libvirt-0.7.1/lib/_libvirt.so (LoadError)
338
- ```
339
- then the following steps have been found to resolve the problem. Thanks to James Reynolds (see https://github.com/hashicorp/vagrant/issues/11020#issuecomment-540043472). The specific version of libssh will change over time so references to the rpm in the commands below will need to be adjusted accordingly.
340
-
341
- ```shell
342
- # Fedora
343
- dnf download --source libssh
344
-
345
- # centos 8 stream, doesn't provide source RPMs, so you need to download like so
346
- git clone https://git.centos.org/centos-git-common
347
- # centos-git-common needs its tools in PATH
348
- export PATH=$(readlink -f ./centos-git-common):$PATH
349
- git clone https://git.centos.org/rpms/libssh
350
- cd libssh
351
- git checkout imports/c8s/libssh-0.9.4-1.el8
352
- into_srpm.sh -d c8s
353
- cd SRPMS
354
-
355
- # common commands (make sure to adjust verison accordingly)
356
- rpm2cpio libssh-0.9.4-1c8s.src.rpm | cpio -imdV
357
- tar xf libssh-0.9.4.tar.xz
358
- mkdir build
359
- cd build
360
- cmake ../libssh-0.9.4 -DOPENSSL_ROOT_DIR=/opt/vagrant/embedded/
361
- make
362
- sudo cp lib/libssh* /opt/vagrant/embedded/lib64
363
- ```
364
-
365
- If you encounter the following load error when using the vagrant-libvirt plugin (note the required by libk5crypto):
366
-
367
- ```shell
368
- /opt/vagrant/embedded/lib/ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require': /usr/lib64/libk5crypto.so.3: undefined symbol: EVP_KDF_ctrl, version OPENSSL_1_1_1b - /home/rbelgrave/.vagrant.d/gems/2.4.9/gems/ruby-libvirt-0.7.1/lib/_libvirt.so (LoadError)
369
- ```
370
-
371
- then the following steps have been found to resolve the problem. After the steps below are complete, then reinstall the vagrant-libvirt plugin without setting the `CONFIGURE_ARGS`. Thanks to Marco Bevc (see https://github.com/hashicorp/vagrant/issues/11020#issuecomment-625801983):
372
-
373
- ```shell
374
- # Fedora
375
- dnf download --source krb5-libs
376
-
377
- # centos 8 stream, doesn't provide source RPMs, so you need to download like so
378
- git clone https://git.centos.org/centos-git-common
379
- # make get_sources.sh executable as it is needed in krb5
380
- chmod +x centos-git-common/get_sources.sh
381
- # centos-git-common needs its tools in PATH
382
- export PATH=$(readlink -f ./centos-git-common):$PATH
383
- git clone https://git.centos.org/rpms/krb5
384
- cd krb5
385
- git checkout imports/c8s/krb5-1.18.2-8.el8
386
- get_sources.sh
387
- into_srpm.sh -d c8s
388
- cd SRPMS
389
-
390
- # common commands (make sure to adjust verison accordingly)
391
- rpm2cpio krb5-1.18.2-8c8s.src.rpm | cpio -imdV
392
- tar xf krb5-1.18.2.tar.gz
393
- cd krb5-1.18.2/src
394
- ./configure
395
- make
396
- sudo cp -P lib/crypto/libk5crypto.* /opt/vagrant/embedded/lib64/
397
- ```
398
-
399
- ## Vagrant Project Preparation
400
-
401
- ### Add Box
402
-
403
- After installing the plugin (instructions above), the quickest way to get
404
- started is to add Libvirt box and specify all the details manually within a
405
- `config.vm.provider` block. So first, add Libvirt box using any name you want.
406
- You can find more Libvirt-ready boxes at
407
- [Vagrant Cloud](https://app.vagrantup.com/boxes/search?provider=libvirt). For
408
- example:
409
-
410
- ```shell
411
- vagrant init fedora/32-cloud-base
412
- ```
413
-
414
- ### Create Vagrantfile
415
-
416
- And then make a Vagrantfile that looks like the following, filling in your
417
- information where necessary. For example:
418
-
419
- ```ruby
420
- Vagrant.configure("2") do |config|
421
- config.vm.define :test_vm do |test_vm|
422
- test_vm.vm.box = "fedora/32-cloud-base"
423
- end
424
- end
425
- ```
426
-
427
- ### Start VM
428
-
429
- In prepared project directory, run following command:
430
-
431
- ```shell
432
- $ vagrant up --provider=libvirt
433
- ```
434
-
435
- Vagrant needs to know that we want to use Libvirt and not default VirtualBox.
436
- That's why there is `--provider=libvirt` option specified. Other way to tell
437
- Vagrant to use Libvirt provider is to setup environment variable
438
-
439
- ```shell
440
- export VAGRANT_DEFAULT_PROVIDER=libvirt
441
- ```
442
-
443
- ### How Project Is Created
444
-
445
- Vagrant goes through steps below when creating new project:
446
-
447
- 1. Connect to Libvirt locally or remotely via SSH.
448
- 2. Check if box image is available in Libvirt storage pool. If not, upload it
449
- to remote Libvirt storage pool as new volume.
450
- 3. Create COW diff image of base box image for new Libvirt domain.
451
- 4. Create and start new domain on Libvirt host.
452
- 5. Check for DHCP lease from dnsmasq server.
453
- 6. Wait till SSH is available.
454
- 7. Sync folders and run Vagrant provisioner on new domain if setup in
455
- Vagrantfile.
456
-
457
- ### Libvirt Configuration
458
-
459
- ### Provider Options
460
-
461
- Although it should work without any configuration for most people, this
462
- provider exposes quite a few provider-specific configuration options. The
463
- following options allow you to configure how vagrant-libvirt connects to
464
- Libvirt, and are used to generate the [Libvirt connection
465
- URI](http://libvirt.org/uri.html):
466
-
467
- * `driver` - A hypervisor name to access. For now only KVM and QEMU are
468
- supported
469
- * `host` - The name of the server, where Libvirtd is running
470
- * `connect_via_ssh` - If use ssh tunnel to connect to Libvirt. Absolutely
471
- needed to access Libvirt on remote host. It will not be able to get the IP
472
- address of a started VM otherwise.
473
- * `username` - Username and password to access Libvirt
474
- * `password` - Password to access Libvirt
475
- * `id_ssh_key_file` - If not nil, uses this ssh private key to access Libvirt.
476
- Default is `$HOME/.ssh/id_rsa`. Prepends `$HOME/.ssh/` if no directory
477
- * `socket` - Path to the Libvirt unix socket (e.g.
478
- `/var/run/libvirt/libvirt-sock`)
479
- * `proxy_command` - For advanced usage. When connecting to remote libvirt
480
- instances, if the default constructed proxy\_command which uses `-W %h:%p`
481
- does not work, set this as needed. It performs interpolation using `{key}`
482
- and supports only `{host}`, `{username}`, and `{id_ssh_key_file}`. This is
483
- to try and avoid issues with escaping `%` and `$` which might be necessary
484
- to the ssh command itself. e.g.:
485
- `libvirt.proxy_command = "ssh {host} -l {username} -i {id_ssh_key_file} nc %h %p"`
486
- * `uri` - For advanced usage. Directly specifies what Libvirt connection URI
487
- vagrant-libvirt should use. Overrides all other connection configuration
488
- options
489
-
490
- In the event that none of these are set (excluding the `driver` option) the
491
- provider will attempt to retrieve the uri from the environment variable
492
- `LIBVIRT_DEFAULT_URI` similar to how virsh works. If any of them are set, it
493
- will ignore the environment variable. The reason the driver option is ignored
494
- is that it is not uncommon for this to be explicitly set on the box itself
495
- and there is no easily to determine whether it is being set by the user or
496
- the box packager.
497
-
498
- Connection-independent options:
499
-
500
- * `storage_pool_name` - Libvirt storage pool name, where box image and instance
501
- snapshots (if `snapshot_pool_name` is not set) will be stored.
502
- * `snapshot_pool_name` - Libvirt storage pool name. If set, the created
503
- snapshot of the instance will be stored at this location instead of
504
- `storage_pool_name`.
505
-
506
- For example:
507
-
508
- ```ruby
509
- Vagrant.configure("2") do |config|
510
- config.vm.provider :libvirt do |libvirt|
511
- libvirt.host = "example.com"
512
- end
513
- end
514
- ```
515
-
516
- ### Domain Specific Options
517
-
518
- * `title` - A short description of the domain.
519
- * `description` - A human readable description of the virtual machine.
520
- * `disk_bus` - The type of disk device to emulate. Defaults to virtio if not
521
- set. Possible values are documented in Libvirt's [description for
522
- _target_](http://libvirt.org/formatdomain.html#elementsDisks). NOTE: this
523
- option applies only to disks associated with a box image. To set the bus type
524
- on additional disks, see the [Additional Disks](#additional-disks) section.
525
- * `disk_device` - The disk device to emulate. Defaults to vda if not
526
- set, which should be fine for paravirtualized guests, but some fully
527
- virtualized guests may require hda. NOTE: this option also applies only to
528
- disks associated with a box image.
529
- * `disk_driver` - Extra options for the main disk driver ([see Libvirt documentation](http://libvirt.org/formatdomain.html#elementsDisks)).
530
- NOTE: this option also applies only to disks associated with a box image. In all cases, the value `nil` can be used to force the hypervisor default behaviour (e.g. to override settings defined in top-level Vagrantfiles). Supported options include:
531
- * `:cache` - Controls the cache mechanism. Possible values are "default", "none", "writethrough", "writeback", "directsync" and "unsafe".
532
- * `:io` - Controls specific policies on I/O. Possible values are "threads" and "native".
533
- * `:copy_on_read` - Controls whether to copy read backing file into the image file. The value can be either "on" or "off".
534
- * `:discard` - Controls whether discard requests (also known as "trim" or "unmap") are ignored or passed to the filesystem. Possible values are "unmap" or "ignore".
535
- Note: for discard to work, you will likely also need to set `disk_bus = 'scsi'`
536
- * `:detect_zeroes` - Controls whether to detect zero write requests. The value can be "off", "on" or "unmap".
537
- * `nic_model_type` - parameter specifies the model of the network adapter when
538
- you create a domain value by default virtio KVM believe possible values, see
539
- the [documentation for
540
- Libvirt](https://libvirt.org/formatdomain.html#elementsNICSModel).
541
- * `shares` - Proportional weighted share for the domain relative to others. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsCPUTuning).
542
- * `memory` - Amount of memory in MBytes. Defaults to 512 if not set.
543
- * `cpus` - Number of virtual cpus. Defaults to 1 if not set.
544
- * `cpuset` - Physical cpus to which the vcpus can be pinned. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsCPUAllocation).
545
- * `cputopology` - Number of CPU sockets, cores and threads running per core. All fields of `:sockets`, `:cores` and `:threads` are mandatory, `cpus` domain option must be present and must be equal to total count of **sockets * cores * threads**. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsCPU).
546
- * `nodeset` - Physical NUMA nodes where virtual memory can be pinned. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsNUMATuning).
547
-
548
- ```ruby
549
- Vagrant.configure("2") do |config|
550
- config.vm.provider :libvirt do |libvirt|
551
- libvirt.cpus = 4
552
- libvirt.cpuset = '1-4,^3,6'
553
- libvirt.cputopology :sockets => '2', :cores => '2', :threads => '1'
554
- end
555
- end
556
- ```
557
-
558
- * `nested` - [Enable nested
559
- virtualization](https://docs.fedoraproject.org/en-US/quick-docs/using-nested-virtualization-in-kvm/).
560
- Default is false.
561
- * `cpu_mode` - [CPU emulation
562
- mode](https://libvirt.org/formatdomain.html#elementsCPU). Defaults to
563
- 'host-model' if not set. Allowed values: host-model, host-passthrough,
564
- custom.
565
- * `cpu_model` - CPU Model. Defaults to 'qemu64' if not set and `cpu_mode` is
566
- `custom` and to '' otherwise. This can really only be used when setting
567
- `cpu_mode` to `custom`.
568
- * `cpu_fallback` - Whether to allow Libvirt to fall back to a CPU model close
569
- to the specified model if features in the guest CPU are not supported on the
570
- host. Defaults to 'allow' if not set. Allowed values: `allow`, `forbid`.
571
- * `numa_nodes` - Specify an array of NUMA nodes for the guest. The syntax is similar to what would be set in the domain XML. `memory` must be in MB. Symmetrical and asymmetrical topologies are supported but make sure your total count of defined CPUs adds up to `v.cpus`.
572
-
573
- The sum of all the memory defined here will act as your total memory for your guest VM. **This sum will override what is set in `v.memory`**
574
- ```
575
- v.cpus = 4
576
- v.numa_nodes = [
577
- {:cpus => "0-1", :memory => "1024"},
578
- {:cpus => "2-3", :memory => "4096"}
579
- ]
580
- ```
581
- * `loader` - Sets path to custom UEFI loader.
582
- * `kernel` - To launch the guest with a kernel residing on host filesystems.
583
- Equivalent to qemu `-kernel`.
584
- * `initrd` - To specify the initramfs/initrd to use for the guest. Equivalent
585
- to qemu `-initrd`.
586
- * `random_hostname` - To create a domain name with extra information on the end
587
- to prevent hostname conflicts.
588
- * `default_prefix` - The default Libvirt guest name becomes a concatenation of the
589
- `<current_directory>_<guest_name>`. The current working directory is the default prefix
590
- to the guest name. The `default_prefix` options allow you to set the guest name prefix.
591
- * `cmd_line` - Arguments passed on to the guest kernel initramfs or initrd to
592
- use. Equivalent to qemu `-append`, only possible to use in combination with `initrd` and `kernel`.
593
- * `graphics_type` - Sets the protocol used to expose the guest display.
594
- Defaults to `vnc`. Possible values are "sdl", "curses", "none", "gtk", "vnc"
595
- or "spice".
596
- * `graphics_port` - Sets the port for the display protocol to bind to.
597
- Defaults to 5900.
598
- * `graphics_ip` - Sets the IP for the display protocol to bind to. Defaults to
599
- "127.0.0.1".
600
- * `graphics_passwd` - Sets the password for the display protocol. Working for
601
- vnc and Spice. by default working without passsword.
602
- * `graphics_autoport` - Sets autoport for graphics, Libvirt in this case
603
- ignores graphics_port value, Defaults to 'yes'. Possible value are "yes" and
604
- "no"
605
- * `graphics_gl` - Set to `true` to enable OpenGL. Defaults to `true` if
606
- `video_accel3d` is `true`.
607
- * `keymap` - Set keymap for vm. default: en-us
608
- * `kvm_hidden` - [Hide the hypervisor from the
609
- guest](https://libvirt.org/formatdomain.html#elementsFeatures). Useful for
610
- [GPU passthrough](#pci-device-passthrough) on stubborn drivers. Default is false.
611
- * `video_type` - Sets the graphics card type exposed to the guest. Defaults to
612
- "cirrus". [Possible
613
- values](http://libvirt.org/formatdomain.html#elementsVideo) are "vga",
614
- "cirrus", "vmvga", "xen", "vbox", or "qxl".
615
- * `video_vram` - Used by some graphics card types to vary the amount of RAM
616
- dedicated to video. Defaults to 16384.
617
- * `video_accel3d` - Set to `true` to enable 3D acceleration. Defaults to
618
- `false`.
619
- * `sound_type` - [Set the virtual sound card](https://libvirt.org/formatdomain.html#elementsSound)
620
- Defaults to "ich6".
621
- * `machine_type` - Sets machine type. Equivalent to qemu `-machine`. Use
622
- `qemu-system-x86_64 -machine help` to get a list of supported machines.
623
- * `machine_arch` - Sets machine architecture. This helps Libvirt to determine
624
- the correct emulator type. Possible values depend on your version of QEMU.
625
- For possible values, see which emulator executable `qemu-system-*` your
626
- system provides. Common examples are `aarch64`, `alpha`, `arm`, `cris`,
627
- `i386`, `lm32`, `m68k`, `microblaze`, `microblazeel`, `mips`, `mips64`,
628
- `mips64el`, `mipsel`, `moxie`, `or32`, `ppc`, `ppc64`, `ppcemb`, `s390x`,
629
- `sh4`, `sh4eb`, `sparc`, `sparc64`, `tricore`, `unicore32`, `x86_64`,
630
- `xtensa`, `xtensaeb`.
631
- * `machine_virtual_size` - Sets the disk size in GB for the machine overriding
632
- the default specified in the box. Allows boxes to defined with a minimal size
633
- disk by default and to be grown to a larger size at creation time. Will
634
- ignore sizes smaller than the size specified by the box metadata. Note that
635
- currently there is no support for automatically resizing the filesystem to
636
- take advantage of the larger disk.
637
- * `emulator_path` - Explicitly select which device model emulator to use by
638
- providing the path, e.g. `/usr/bin/qemu-system-x86_64`. This is especially
639
- useful on systems that fail to select it automatically based on
640
- `machine_arch` which then results in a capability error.
641
- * `boot` - Change the boot order and enables the boot menu. Possible options
642
- are "hd", "network", "cdrom". Defaults to "hd" with boot menu disabled. When
643
- "network" is set without "hd", only all NICs will be tried; see below for
644
- more detail.
645
- * `nic_adapter_count` - Defaults to '8'. Only use case for increasing this
646
- count is for VMs that virtualize switches such as Cumulus Linux. Max value
647
- for Cumulus Linux VMs is 33.
648
- * `uuid` - Force a domain UUID. Defaults to autogenerated value by Libvirt if
649
- not set.
650
- * `suspend_mode` - What is done on vagrant suspend. Possible values: 'pause',
651
- 'managedsave'. Pause mode executes a la `virsh suspend`, which just pauses
652
- execution of a VM, not freeing resources. Managed save mode does a la `virsh
653
- managedsave` which frees resources suspending a domain.
654
- * `tpm_model` - The model of the TPM to which you wish to connect.
655
- * `tpm_type` - The type of TPM device to which you are connecting.
656
- * `tpm_path` - The path to the TPM device on the host system.
657
- * `tpm_version` - The TPM version to use.
658
- * `dtb` - The device tree blob file, mostly used for non-x86 platforms. In case
659
- the device tree isn't added in-line to the kernel, it can be manually
660
- specified here.
661
- * `autostart` - Automatically start the domain when the host boots. Defaults to
662
- 'false'.
663
- * `channel` - [Libvirt
664
- channels](https://libvirt.org/formatdomain.html#elementCharChannel).
665
- Configure a private communication channel between the host and guest, e.g.
666
- for use by the [QEMU guest
667
- agent](http://wiki.libvirt.org/page/Qemu_guest_agent) and the Spice/QXL
668
- graphics type.
669
- * `mgmt_attach` - Decide if VM has interface in mgmt network. If set to 'false'
670
- it is not possible to communicate with VM through `vagrant ssh` or run
671
- provisioning. Setting to 'false' is only possible when VM doesn't use box.
672
- Defaults set to 'true'.
673
- * `serial` - [libvirt serial devices](https://libvirt.org/formatdomain.html#elementsConsole).
674
- Configure a serial/console port to communicate with the guest. Can be used
675
- to log to file boot time messages sent to ttyS0 console by the guest.
676
-
677
- Specific domain settings can be set for each domain separately in multi-VM
678
- environment. Example below shows a part of Vagrantfile, where specific options
679
- are set for dbserver domain.
680
-
681
- ```ruby
682
- Vagrant.configure("2") do |config|
683
- config.vm.define :dbserver do |dbserver|
684
- dbserver.vm.box = "centos64"
685
- dbserver.vm.provider :libvirt do |domain|
686
- domain.memory = 2048
687
- domain.cpus = 2
688
- domain.nested = true
689
- domain.disk_driver :cache => 'none'
690
- end
691
- end
692
-
693
- # ...
694
- ```
695
-
696
- The following example shows part of a Vagrantfile that enables the VM to boot
697
- from a network interface first and a hard disk second. This could be used to
698
- run VMs that are meant to be a PXE booted machines. Be aware that if `hd` is
699
- not specified as a boot option, it will never be tried.
700
-
701
- ```ruby
702
- Vagrant.configure("2") do |config|
703
- config.vm.define :pxeclient do |pxeclient|
704
- pxeclient.vm.box = "centos64"
705
- pxeclient.vm.provider :libvirt do |domain|
706
- domain.boot 'network'
707
- domain.boot 'hd'
708
- end
709
- end
710
-
711
- # ...
712
- ```
713
-
714
- #### Reload behavior
715
-
716
- On `vagrant reload` the following domain specific attributes are updated in
717
- defined domain:
718
-
719
- * `disk_bus` - Is updated only on disks. It skips CDROMs
720
- * `nic_model_type` - Updated
721
- * `memory` - Updated
722
- * `cpus` - Updated
723
- * `nested` - Updated
724
- * `cpu_mode` - Updated. Pay attention that custom mode is not supported
725
- * `graphics_type` - Updated
726
- * `graphics_port` - Updated
727
- * `graphics_ip` - Updated
728
- * `graphics_passwd` - Updated
729
- * `graphics_autoport` - Updated
730
- * `keymap` - Updated
731
- * `video_type` - Updated
732
- * `video_vram` - Updated
733
- * `tpm_model` - Updated
734
- * `tpm_type` - Updated
735
- * `tpm_path` - Updated
736
- * `tpm_version` - Updated
737
-
738
- ## Networks
739
-
740
- Networking features in the form of `config.vm.network` support private networks
741
- concept. It supports both the virtual network switch routing types and the
742
- point to point Guest OS to Guest OS setting using UDP/Mcast/TCP tunnel
743
- interfaces.
744
-
745
- http://wiki.libvirt.org/page/VirtualNetworking
746
-
747
- https://libvirt.org/formatdomain.html#elementsNICSTCP
748
-
749
- http://libvirt.org/formatdomain.html#elementsNICSMulticast
750
-
751
- http://libvirt.org/formatdomain.html#elementsNICSUDP _(in Libvirt v1.2.20 and higher)_
752
-
753
- Public Network interfaces are currently implemented using the macvtap driver.
754
- The macvtap driver is only available with the Linux Kernel version >= 2.6.24.
755
- See the following Libvirt documentation for the details of the macvtap usage.
756
-
757
- http://www.libvirt.org/formatdomain.html#elementsNICSDirect
758
-
759
- An examples of network interface definitions:
760
-
761
- ```ruby
762
- # Private network using virtual network switching
763
- config.vm.define :test_vm1 do |test_vm1|
764
- test_vm1.vm.network :private_network, :ip => "10.20.30.40"
765
- end
766
-
767
- # Private network using DHCP and a custom network
768
- config.vm.define :test_vm1 do |test_vm1|
769
- test_vm1.vm.network :private_network,
770
- :type => "dhcp",
771
- :libvirt__network_address => '10.20.30.0'
772
- end
773
-
774
- # Private network (as above) using a domain name
775
- config.vm.define :test_vm1 do |test_vm1|
776
- test_vm1.vm.network :private_network,
777
- :ip => "10.20.30.40",
778
- :libvirt__domain_name => "test.local"
779
- end
780
-
781
- # Private network. Point to Point between 2 Guest OS using a TCP tunnel
782
- # Guest 1
783
- config.vm.define :test_vm1 do |test_vm1|
784
- test_vm1.vm.network :private_network,
785
- :libvirt__tunnel_type => 'server',
786
- # default is 127.0.0.1 if omitted
787
- # :libvirt__tunnel_ip => '127.0.0.1',
788
- :libvirt__tunnel_port => '11111'
789
- # network with ipv6 support
790
- test_vm1.vm.network :private_network,
791
- :ip => "10.20.5.42",
792
- :libvirt__guest_ipv6 => "yes",
793
- :libvirt__ipv6_address => "2001:db8:ca2:6::1",
794
- :libvirt__ipv6_prefix => "64"
795
-
796
- # Guest 2
797
- config.vm.define :test_vm2 do |test_vm2|
798
- test_vm2.vm.network :private_network,
799
- :libvirt__tunnel_type => 'client',
800
- # default is 127.0.0.1 if omitted
801
- # :libvirt__tunnel_ip => '127.0.0.1',
802
- :libvirt__tunnel_port => '11111'
803
- # network with ipv6 support
804
- test_vm2.vm.network :private_network,
805
- :ip => "10.20.5.45",
806
- :libvirt__guest_ipv6 => "yes",
807
- :libvirt__ipv6_address => "2001:db8:ca2:6::1",
808
- :libvirt__ipv6_prefix => "64"
809
-
810
-
811
- # Public Network
812
- config.vm.define :test_vm1 do |test_vm1|
813
- test_vm1.vm.network :public_network,
814
- :dev => "virbr0",
815
- :mode => "bridge",
816
- :type => "bridge"
817
- end
818
- ```
819
-
820
- In example below, one network interface is configured for VM `test_vm1`. After
821
- you run `vagrant up`, VM will be accessible on IP address `10.20.30.40`. So if
822
- you install a web server via provisioner, you will be able to access your
823
- testing server on `http://10.20.30.40` URL. But beware that this address is
824
- private to Libvirt host only. It's not visible outside of the hypervisor box.
825
-
826
- If network `10.20.30.0/24` doesn't exist, provider will create it. By default
827
- created networks are NATed to outside world, so your VM will be able to connect
828
- to the internet (if hypervisor can). And by default, DHCP is offering addresses
829
- on newly created networks.
830
-
831
- The second interface is created and bridged into the physical device `eth0`.
832
- This mechanism uses the macvtap Kernel driver and therefore does not require an
833
- existing bridge device. This configuration assumes that DHCP and DNS services
834
- are being provided by the public network. This public interface should be
835
- reachable by anyone with access to the public network.
836
-
837
- ### Private Network Options
838
-
839
- *Note: These options are not applicable to public network interfaces.*
840
-
841
- There is a way to pass specific options for Libvirt provider when using
842
- `config.vm.network` to configure new network interface. Each parameter name
843
- starts with `libvirt__` string. Here is a list of those options:
844
-
845
- * `:libvirt__network_name` - Name of Libvirt network to connect to. By default,
846
- network 'default' is used.
847
- * `:libvirt__netmask` - Used only together with `:ip` option. Default is
848
- '255.255.255.0'.
849
- * `:libvirt__network_address` - Used only when `:type` is set to `dhcp`. Only `/24` subnet is supported. Default is `172.28.128.0`.
850
- * `:libvirt__host_ip` - Address to use for the host (not guest). Default is
851
- first possible address (after network address).
852
- * `:libvirt__domain_name` - DNS domain of the DHCP server. Used only
853
- when creating new network.
854
- * `:libvirt__dhcp_enabled` - If DHCP will offer addresses, or not. Used only
855
- when creating new network. Default is true.
856
- * `:libvirt__dhcp_start` - First address given out via DHCP. Default is third
857
- address in range (after network name and gateway).
858
- * `:libvirt__dhcp_stop` - Last address given out via DHCP. Default is last
859
- possible address in range (before broadcast address).
860
- * `:libvirt__dhcp_bootp_file` - The file to be used for the boot image. Used
861
- only when dhcp is enabled.
862
- * `:libvirt__dhcp_bootp_server` - The server that runs the DHCP server. Used
863
- only when dhcp is enabled.By default is the same host that runs the DHCP
864
- server.
865
- * `:libvirt__tftp_root` - Path to the root directory served via TFTP.
866
- * `:libvirt__adapter` - Number specifiyng sequence number of interface.
867
- * `:libvirt__forward_mode` - Specify one of `veryisolated`, `none`, `open`, `nat`
868
- or `route` options. This option is used only when creating new network. Mode
869
- `none` will create isolated network without NATing or routing outside. You
870
- will want to use NATed forwarding typically to reach networks outside of
871
- hypervisor. Routed forwarding is typically useful to reach other networks
872
- within hypervisor. `veryisolated` described
873
- [here](https://libvirt.org/formatnetwork.html#examplesNoGateway). By
874
- default, option `nat` is used.
875
- * `:libvirt__forward_device` - Name of interface/device, where network should
876
- be forwarded (NATed or routed). Used only when creating new network. By
877
- default, all physical interfaces are used.
878
- * `:libvirt__tunnel_type` - Set to 'udp' if using UDP unicast tunnel mode
879
- (libvirt v1.2.20 or higher). Set this to either "server" or "client" for tcp
880
- tunneling. Set this to 'mcast' if using multicast tunneling. This
881
- configuration type uses tunnels to generate point to point connections
882
- between Guests. Useful for Switch VMs like Cumulus Linux. No virtual switch
883
- setting like `libvirt__network_name` applies with tunnel interfaces and will
884
- be ignored if configured.
885
- * `:libvirt__tunnel_ip` - Sets the source IP of the Libvirt tunnel interface.
886
- By default this is `127.0.0.1` for TCP and UDP tunnels and `239.255.1.1` for
887
- Multicast tunnels. It populates the address field in the `<source
888
- address="XXX">` of the interface xml configuration.
889
- * `:libvirt__tunnel_port` - Sets the source port the tcp/udp/mcast tunnel with
890
- use. This port information is placed in the `<source port=XXX/>` section of
891
- interface xml configuration.
892
- * `:libvirt__tunnel_local_port` - Sets the local port used by the udp tunnel
893
- interface type. It populates the port field in the `<local port=XXX">`
894
- section of the interface xml configuration. _(This feature only works in
895
- Libvirt 1.2.20 and higher)_
896
- * `:libvirt__tunnel_local_ip` - Sets the local IP used by the udp tunnel
897
- interface type. It populates the ip entry of the `<local address=XXX">`
898
- section of the interface xml configuration. _(This feature only works in
899
- Libvirt 1.2.20 and higher)_
900
- * `:libvirt__guest_ipv6` - Enable or disable guest-to-guest IPv6 communication.
901
- See [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and
902
- [here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8)
903
- for for more information. *Note: takes either 'yes' or 'no' for value*
904
- * `:libvirt__ipv6_address` - Define ipv6 address, require also prefix.
905
- * `:libvirt__ipv6_prefix` - Define ipv6 prefix. generate string `<ip family="ipv6" address="address" prefix="prefix" >`
906
- * `:libvirt__iface_name` - Define a name for the private network interface.
907
- With this feature one can [simulate physical link
908
- failures](https://github.com/vagrant-libvirt/vagrant-libvirt/pull/498)
909
- * `:mac` - MAC address for the interface. *Note: specify this in lowercase
910
- since Vagrant network scripts assume it will be!*
911
- * `:libvirt__mtu` - MTU size for the Libvirt network, if not defined, the
912
- created network will use the Libvirt default (1500). VMs still need to set the
913
- MTU accordingly.
914
- * `:model_type` - parameter specifies the model of the network adapter when you
915
- create a domain value by default virtio KVM believe possible values, see the
916
- documentation for Libvirt
917
- * `:libvirt__driver_name` - Define which network driver to use. [More
918
- info](https://libvirt.org/formatdomain.html#elementsDriverBackendOptions)
919
- * `:libvirt__driver_queues` - Define a number of queues to be used for network
920
- interface. Set equal to numer of vCPUs for best performance. [More
921
- info](http://www.linux-kvm.org/page/Multiqueue)
922
- * `:autostart` - Automatic startup of network by the Libvirt daemon.
923
- If not specified the default is 'false'.
924
- * `:bus` - The bus of the PCI device. Both :bus and :slot have to be defined.
925
- * `:slot` - The slot of the PCI device. Both :bus and :slot have to be defined.
926
- * `:libvirt__always_destroy` - Allow domains that use but did not create a
927
- network to destroy it when the domain is destroyed (default: `true`). Set to
928
- `false` to only allow the domain that created the network to destroy it.
929
-
930
- When the option `:libvirt__dhcp_enabled` is to to 'false' it shouldn't matter
931
- whether the virtual network contains a DHCP server or not and vagrant-libvirt
932
- should not fail on it. The only situation where vagrant-libvirt should fail is
933
- when DHCP is requested but isn't configured on a matching already existing
934
- virtual network.
935
-
936
- ### Public Network Options
937
-
938
- * `:dev` - Physical device that the public interface should use. Default is
939
- 'eth0'.
940
- * `:mode` - The mode in which the public interface should operate in. Supported
941
- modes are available from the [libvirt
942
- documentation](http://www.libvirt.org/formatdomain.html#elementsNICSDirect).
943
- Default mode is 'bridge'.
944
- * `:type` - is type of interface.(`<interface type="#{@type}">`)
945
- * `:mac` - MAC address for the interface.
946
- * `:network_name` - Name of Libvirt network to connect to.
947
- * `:portgroup` - Name of Libvirt portgroup to connect to.
948
- * `:ovs` - Support to connect to an Open vSwitch bridge device. Default is
949
- 'false'.
950
- * :ovs_interfaceid - Add Open vSwitch 'interfaceid' parameter.
951
- * `:trust_guest_rx_filters` - Support trustGuestRxFilters attribute. Details
952
- are listed [here](http://www.libvirt.org/formatdomain.html#elementsNICSDirect).
953
- Default is 'false'.
954
-
955
- ### Management Network
956
-
957
- vagrant-libvirt uses a private network to perform some management operations on
958
- VMs. All VMs will have an interface connected to this network and an IP address
959
- dynamically assigned by Libvirt unless you set `:mgmt_attach` to 'false'.
960
- This is in addition to any networks you configure. The name and address
961
- used by this network are configurable at the provider level.
962
-
963
- * `management_network_name` - Name of Libvirt network to which all VMs will be
964
- connected. If not specified the default is 'vagrant-libvirt'.
965
- * `management_network_address` - Address of network to which all VMs will be
966
- connected. Must include the address and subnet mask. If not specified the
967
- default is '192.168.121.0/24'.
968
- * `management_network_mode` - Network mode for the Libvirt management network.
969
- Specify one of veryisolated, none, open, nat or route options. Further
970
- documented under [Private Networks](#private-network-options)
971
- * `management_network_guest_ipv6` - Enable or disable guest-to-guest IPv6
972
- communication. See
973
- [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and
974
- [here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8)
975
- for for more information.
976
- * `management_network_autostart` - Automatic startup of mgmt network, if not
977
- specified the default is 'false'.
978
- * `management_network_pci_bus` - The bus of the PCI device.
979
- * `management_network_pci_slot` - The slot of the PCI device.
980
- * `management_network_mac` - MAC address of management network interface.
981
- * `management_network_domain` - Domain name assigned to the management network.
982
- * `management_network_mtu` - MTU size of management network. If not specified,
983
- the Libvirt default (1500) will be used.
984
- * `management_network_keep` - Starting from version *0.7.0*, *always_destroy* is set to *true* by default for any network.
985
- This option allows to change this behaviour for the management network.
986
-
987
- You may wonder how vagrant-libvirt knows the IP address a VM received. Libvirt
988
- doesn't provide a standard way to find out the IP address of a running domain.
989
- But we do know the MAC address of the virtual machine's interface on the
990
- management network. Libvirt is closely connected with dnsmasq, which acts as a
991
- DHCP server. dnsmasq writes lease information in the `/var/lib/libvirt/dnsmasq`
992
- directory. Vagrant-libvirt looks for the MAC address in this file and extracts
993
- the corresponding IP address.
994
-
995
- It is also possible to use the Qemu Agent to extract the management interface
996
- configuration from the booted virtual machine. This is helpful in libvirt
997
- environments where no local dnsmasq is used for automatic address assigment,
998
- but external dhcp services via bridged libvirt networks.
999
-
1000
- Prerequisite is to enable the qemu agent channel via ([Libvirt communication
1001
- channels](#libvirt-communication-channels)) and the virtual machine image must
1002
- have the agent pre-installed before deploy. The agent will start automatically
1003
- if it detects an attached channel during boot.
1004
-
1005
- * `qemu_use_agent` - false by default, if set to true, attempt to extract configured
1006
- ip address via qemu agent.
1007
-
1008
- By default if `qemu_use_agent` is set to `true` the code will automatically
1009
- inject a suitable channel unless there already exists an entry with a
1010
- `:target_name` matching `'org.qemu.guest_agent.'`.
1011
- Alternatively if setting `qemu_use_agent` but, needing to disable the addition
1012
- of the channel, simply use a disabled flag as follows:
1013
- ```ruby
1014
- Vagrant.configure(2) do |config|
1015
- config.vm.provider :libvirt do |libvirt|
1016
- libvirt.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :disabled => true
1017
- end
1018
- end
1019
- ```
1020
-
1021
- To use the management network interface with an external dhcp service you need
1022
- to set up a bridged host network manually and define it via
1023
- `management_network_name` in your Vagrantfile.
1024
-
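- For example (a sketch; the network name is illustrative and must already exist on the host):
-
- ```ruby
- Vagrant.configure("2") do |config|
-   config.vm.provider :libvirt do |libvirt|
-     # pre-existing bridged libvirt network that provides external DHCP
-     libvirt.management_network_name = 'host-bridge'
-     # typically combined with the qemu agent to discover the assigned address
-     libvirt.qemu_use_agent = true
-   end
- end
- ```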
1025
- ## Additional Disks
1026
-
1027
- You can create and attach additional disks to a VM via `libvirt.storage :file`.
1028
- It has a number of options:
1029
-
1030
- * `path` - Location of the disk image. If unspecified, a path is automatically
1031
- chosen in the same storage pool as the VM's primary disk.
1032
- * `device` - Name of the device node the disk image will have in the VM, e.g.
1033
- *vdb*. If unspecified, the next available device is chosen.
1034
- * `size` - Size of the disk image. If unspecified, defaults to 10G.
1035
- * `type` - Type of disk image to create. Defaults to *qcow2*.
1036
- * `bus` - Type of bus to connect device to. Defaults to *virtio*.
1037
- * `allow_existing` - Set to true if you want to allow the VM to use a
1038
- pre-existing disk. If the disk doesn't exist it will be created.
1039
- Disks with this option set to true need to be removed manually.
1040
- * `shareable` - Set to true if you want to simulate shared SAN storage.
1041
- * `serial` - Serial number of the disk device.
1042
- * `wwn` - WWN number of the disk device.
1043
-
1044
- The following disk performance options can also be configured
1045
- (see the [libvirt documentation for possible values](http://libvirt.org/formatdomain.html#elementsDisks)
1046
- or [here](https://www.suse.com/documentation/sles11/book_kvm/data/sect1_chapter_book_kvm.html) for a fuller explanation).
1047
- In all cases, the options use the hypervisor default if not specified, or if set to `nil`.
1048
-
1049
- * `cache` - Cache mode to use. Value may be `default`, `none`, `writeback`, `writethrough`, `directsync` or `unsafe`.
1050
- * `io` - Controls specific policies on I/O. Value may be `threads` or `native`.
1051
- * `copy_on_read` - Controls whether to copy read backing file into the image file. Value may be `on` or `off`.
1052
- * `discard` - Controls whether discard requests (also known as "trim" or "unmap") are ignored or passed to the filesystem. Value may be `unmap` or `ignore`.
1053
- Note: for discard to work, you will likely also need to set `:bus => 'scsi'`
1054
- * `detect_zeroes` - Controls whether to detect zero write requests. Value may be `off`, `on` or `unmap`.
1055
-
1056
- The following example creates two additional disks.
1057
-
1058
- ```ruby
1059
- Vagrant.configure("2") do |config|
1060
- config.vm.provider :libvirt do |libvirt|
1061
- libvirt.storage :file, :size => '20G'
1062
- libvirt.storage :file, :size => '40G', :bus => 'scsi', :type => 'raw', :discard => 'unmap', :detect_zeroes => 'on'
1063
- end
1064
- end
1065
- ```
1066
-
1067
- For shared SAN storage to work the following example can be used:
1068
- ```ruby
1069
- Vagrant.configure("2") do |config|
1070
- config.vm.provider :libvirt do |libvirt|
1071
- libvirt.storage :file, :size => '20G', :path => 'my_shared_disk.img', :allow_existing => true, :shareable => true, :type => 'raw'
1072
- end
1073
- end
1074
- ```
1075
-
1076
- ### Reload behavior
1077
-
1078
- On `vagrant reload` the following additional disk attributes are updated in
1079
- defined domain:
1080
-
1081
- * `bus` - Updated. Uses `device` as a search marker. It is not required to
1082
- define `device`, but it's recommended. If `device` is defined then the order
1083
- of additional disk definitions becomes irrelevant (see the sketch below).
1084
-
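- For example, a sketch where `device` is set explicitly so that changing `bus` and
- running `vagrant reload` updates the intended disk:
-
- ```ruby
- Vagrant.configure("2") do |config|
-   config.vm.provider :libvirt do |libvirt|
-     # the explicit :device name is used as the search marker on reload
-     libvirt.storage :file, :size => '20G', :device => 'vdb', :bus => 'virtio'
-   end
- end
- ```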
1085
- ## CDROMs
1086
-
1087
- You can attach up to four CDROMs to a VM via `libvirt.storage :file,
1088
- :device => :cdrom`. Available options are:
1089
-
1090
- * `path` - The path to the iso to be used for the CDROM drive.
1091
- * `dev` - The device to use (`hda`, `hdb`, `hdc`, or `hdd`). This will be
1092
- automatically determined if unspecified.
1093
- * `bus` - The bus to use for the CDROM drive. Defaults to `ide`.
1094
-
1095
- The following example creates three CDROM drives in the VM:
1096
-
1097
- ```ruby
1098
- Vagrant.configure("2") do |config|
1099
- config.vm.provider :libvirt do |libvirt|
1100
- libvirt.storage :file, :device => :cdrom, :path => '/path/to/iso1.iso'
1101
- libvirt.storage :file, :device => :cdrom, :path => '/path/to/iso2.iso'
1102
- libvirt.storage :file, :device => :cdrom, :path => '/path/to/iso3.iso'
1103
- end
1104
- end
1105
- ```
1106
-
1107
- ## Input
1108
-
1109
- You can specify multiple inputs to the VM via `libvirt.input`. Available
1110
- options are listed below. Note that both options are required:
1111
-
1112
- * `type` - The type of the input
1113
- * `bus` - The bus of the input
1114
-
1115
- ```ruby
1116
- Vagrant.configure("2") do |config|
1117
- config.vm.provider :libvirt do |libvirt|
1118
- # this is the default
1119
- # libvirt.input :type => "mouse", :bus => "ps2"
1120
-
1121
- # very useful when having mouse issues when viewing VM via VNC
1122
- libvirt.input :type => "tablet", :bus => "usb"
1123
- end
1124
- end
1125
- ```
1126
-
1127
- ## PCI device passthrough
1128
-
1129
- You can specify multiple PCI devices to passthrough to the VM via
1130
- `libvirt.pci`. Available options are listed below. Note that all options are
1131
- required, except domain, which defaults to `0x0000`:
1132
-
1133
- * `domain` - The domain of the PCI device
1134
- * `bus` - The bus of the PCI device
1135
- * `slot` - The slot of the PCI device
1136
- * `function` - The function of the PCI device
1137
-
1138
- You can extract that information from the output of the `lspci` command. The first
1139
- characters of each line are in the format `[<domain>]:[<bus>]:[<slot>].[<func>]`. For example:
1140
-
1141
- ```shell
1142
- $ lspci| grep NVIDIA
1143
- 0000:03:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX TITAN Black] (rev a1)
1144
- ```
1145
-
1146
- In that case `domain` is `0x0000`, `bus` is `0x03`, `slot` is `0x00` and `function` is `0x0`.
1147
-
1148
- ```ruby
1149
- Vagrant.configure("2") do |config|
1150
- config.vm.provider :libvirt do |libvirt|
1151
- libvirt.pci :domain => '0x0000', :bus => '0x06', :slot => '0x12', :function => '0x5'
1152
-
1153
- # Add another one if it is necessary
1154
- libvirt.pci :domain => '0x0000', :bus => '0x03', :slot => '0x00', :function => '0x0'
1155
- end
1156
- end
1157
- ```
1158
-
1159
- Note: the above options affect the configuration only at domain creation. They won't change the VM's behaviour on `vagrant reload` after the domain has been created.
1160
-
1161
- Don't forget to [set](#domain-specific-options) the `kvm_hidden` option to `true`, especially if you are passing through NVIDIA GPUs. Otherwise the GPU is visible from the VM but cannot be operated.
1162
-
1163
-
1164
- ## Using USB Devices
1165
-
1166
- There are several ways to pass a USB device through to a running instance:
1167
- * Use `libvirt.usb` to [attach a USB device at boot](#usb-device-passthrough), with the device ID specified in the Vagrantfile
1168
- * Use a client (such as `virt-viewer` or `virt-manager`) to attach the device at runtime [via USB redirectors](#usb-redirector-devices)
1169
- * Use `virsh attach-device` once the VM is running (however, this is outside the scope of this readme)
1170
-
1171
- In all cases, if you wish to use a high-speed USB device,
1172
- you will need to use `libvirt.usb_controller` to specify a USB2 or USB3 controller,
1173
- as the default configuration only exposes a USB1.1 controller.
1174
-
1175
- ### USB Controller Configuration
1176
-
1177
- The USB controller can be configured using `libvirt.usb_controller`, with the following options:
1178
-
1179
- * `model` - The USB controller device model to emulate. (mandatory)
1180
- * `ports` - The number of devices that can be connected to the controller.
1181
-
1182
- ```ruby
1183
- Vagrant.configure("2") do |config|
1184
- config.vm.provider :libvirt do |libvirt|
1185
- # Set up a USB3 controller
1186
- libvirt.usb_controller :model => "qemu-xhci"
1187
- end
1188
- end
1189
- ```
1190
-
1191
- See the [libvirt documentation](https://libvirt.org/formatdomain.html#elementsControllers) for a list of valid models.
1192
-
1193
- If any USB devices are passed through by setting `libvirt.usb` or `libvirt.redirdev`, a default controller will be added using the model `qemu-xhci` in the absence of a user-specified one. This should help ensure more devices work out of the box, as the default configured by libvirt is piix3-uhci, which appears to only work for USB 1 devices and does not work as expected when connected via a USB 2 controller, while the xhci stack should work for all versions of USB.
1194
-
1195
- ### USB Device Passthrough
1196
-
1197
- You can specify multiple USB devices to passthrough to the VM via
1198
- `libvirt.usb`. The device can be specified by the following options:
1199
-
1200
- * `bus` - The USB bus ID, e.g. "1"
1201
- * `device` - The USB device ID, e.g. "2"
1202
- * `vendor` - The USB device's vendor ID (VID), e.g. "0x1234"
1203
- * `product` - The USB device's product ID (PID), e.g. "0xabcd"
1204
-
1205
- At least one of these has to be specified, and `bus` and `device` may only be
1206
- used together.
1207
-
1208
- The example values above match the device from the following output of `lsusb`:
1209
-
1210
- ```
1211
- Bus 001 Device 002: ID 1234:abcd Example device
1212
- ```
1213
-
1214
- ```ruby
1215
- Vagrant.configure("2") do |config|
1216
- config.vm.provider :libvirt do |libvirt|
1217
- # pass through specific device based on identifying it
1218
- libvirt.usb :vendor => '0x1234', :product => '0xabcd'
1219
- # pass through a host device where multiple of the same vendor/product exist
1220
- libvirt.usb :bus => '1', :device => '2'
1221
- end
1222
- end
1223
- ```
1224
-
1225
- Additionally, the following options can be used:
1226
-
1227
- * `startupPolicy` - Is passed through to Libvirt and controls if the device has
1228
- to exist. Libvirt currently allows the following values: "mandatory",
1229
- "requisite", "optional".
1230
-
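- For example, reusing the illustrative IDs above, a sketch of a device that is allowed to be absent at boot:
-
- ```ruby
- Vagrant.configure("2") do |config|
-   config.vm.provider :libvirt do |libvirt|
-     # do not fail to start the domain if the device is not currently plugged in
-     libvirt.usb :vendor => '0x1234', :product => '0xabcd', :startupPolicy => 'optional'
-   end
- end
- ```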
1231
-
1232
- ### USB Redirector Devices
1233
- You can specify multiple redirect devices via `libvirt.redirdev`. Two types, `tcp` and `spicevmc`, are supported for forwarding USB devices to the guest. Available options are listed below.
1234
-
1235
- * `type` - The type of the USB redirector device. (`tcp` or `spicevmc`)
1236
- * `host` - The host where the device is attached to. (mandatory for type `tcp`)
1237
- * `port` - The port where the device is listening. (mandatory for type `tcp`)
1238
-
1239
- ```ruby
1240
- Vagrant.configure("2") do |config|
1241
- config.vm.provider :libvirt do |libvirt|
1242
- # add two devices using spicevmc channel
1243
- (1..2).each do
1244
- libvirt.redirdev :type => "spicevmc"
1245
- end
1246
- # add device, provided by localhost:4000
1247
- libvirt.redirdev :type => "tcp", :host => "localhost", :port => "4000"
1248
- end
1249
- end
1250
- ```
1251
-
1252
- Note that in order to enable USB redirection with Spice clients,
1253
- you may need to also set `libvirt.graphics_type = "spice"`.
1254
-
1255
- #### Filter for USB Redirector Devices
1256
- You can define filters for redirected devices. These filters can be positive or negative, by setting the mandatory option `allow=yes` or `allow=no`. All available options are listed below; note that the option `allow` is mandatory.
1257
-
1258
- * `class` - The device class of the USB device. A list of device classes is available on [Wikipedia](https://en.wikipedia.org/wiki/USB#Device_classes).
1259
- * `vendor` - The vendor of the USB device.
1260
- * `product` - The product id of the USB device.
1261
- * `version` - The version of the USB device. Note that this is the version of `bcdDevice`
1262
- * `allow` - allow or disallow redirecting this device. (mandatory)
1263
-
1264
- You can extract that information from the output of the `lsusb` command. Every line contains the information in the format `Bus [<bus>] Device [<device>]: ID [<vendor>]:[<product>]`. The `version` can be extracted from the detailed output of the device using `lsusb -D /dev/bus/usb/[<bus>]/[<device>]`. For example:
1265
-
1266
- ```shell
1267
- # get bcdDevice from
1268
- $: lsusb
1269
- Bus 001 Device 009: ID 08e6:3437 Gemalto (was Gemplus) GemPC Twin SmartCard Reader
1270
-
1271
- $: lsusb -D /dev/bus/usb/001/009 | grep bcdDevice
1272
- bcdDevice 2.00
1273
- ```
1274
-
1275
- In this case, the USB device with `class 0x0b`, `vendor 0x08e6`, `product 0x3437` and `bcdDevice version 2.00` is allowed to be redirected to the guest. All other devices will be refused.
1276
-
1277
- ```ruby
1278
- Vagrant.configure("2") do |config|
1279
- config.vm.provider :libvirt do |libvirt|
1280
- libvirt.redirdev :type => "spicevmc"
1281
- libvirt.redirfilter :class => "0x0b", :vendor => "0x08e6", :product => "0x3437", :version => "2.00", :allow => "yes"
1282
- libvirt.redirfilter :allow => "no"
1283
- end
1284
- end
1285
- ```
1286
-
1287
- ## Serial Console Devices
1288
- You can define settings to redirect output from the serial console of any VM brought up with libvirt to a file or other devices that are listening. [See libvirt documentation](https://libvirt.org/formatdomain.html#elementCharSerial).
1289
-
1290
- Currently only redirecting to a file is supported.
1291
-
1292
- * `type` - the only value that currently has an effect is `file`; in the future support may be added for virtual console, pty, dev, pipe, tcp, udp, unix socket, spiceport & nmdm.
1293
- * `source` - options pertaining to how the connection attaches to the host, contains sub-settings dependent on `type`.
1294
- `source` options for type `file`
1295
- * `path` - file on the host connected to the serial port, recording all output. It may be created by the qemu system user, which can cause permissions issues.
1296
-
1297
- ```ruby
1298
- Vagrant.configure("2") do |config|
1299
- config.vm.define :test do |test|
1300
- test.vm.provider :libvirt do |domain|
1301
- domain.serial :type => "file", :source => {:path => "/var/log/vm_consoles/test.log"}
1302
- end
1303
- end
1304
- end
1305
- ```
1306
-
1307
- ## Random number generator passthrough
1308
-
1309
- You can pass through `/dev/random` to your VM by configuring the domain like this:
1310
-
1311
- ```ruby
1312
- Vagrant.configure("2") do |config|
1313
- config.vm.provider :libvirt do |libvirt|
1314
- # Pass through /dev/random from the host to the VM
1315
- libvirt.random :model => 'random'
1316
- end
1317
- end
1318
- ```
1319
-
1320
- At the moment only the `random` backend is supported.
1321
-
1322
- ## Watchdog device
1323
- A virtual hardware watchdog device can be added to the guest via the `libvirt.watchdog` element. The option `model` is mandatory and can have one of the following values.
1324
-
1325
- * `i6300esb` - the recommended device, emulating a PCI Intel 6300ESB
1326
- * `ib700` - emulating an ISA iBase IB700
1327
- * `diag288` - emulating an S390 DIAG288 device
1328
-
1329
- The optional `action` attribute describes what action to take when the watchdog expires. Valid values are specific to the underlying hypervisor. The default behavior is `reset`.
1330
-
1331
- * `reset` - default, forcefully reset the guest
1332
- * `shutdown` - gracefully shut down the guest (not recommended)
1333
- * `poweroff` - forcefully power off the guest
1334
- * `pause` - pause the guest
1335
- * `none` - do nothing
1336
- * `dump` - automatically dump the guest
1337
- * `inject-nmi` - inject a non-maskable interrupt into the guest
1338
-
1339
- ```ruby
1340
- Vagrant.configure("2") do |config|
1341
- config.vm.provider :libvirt do |libvirt|
1342
- # Add Libvirt watchdog device model i6300esb
1343
- libvirt.watchdog :model => 'i6300esb', :action => 'reset'
1344
- end
1345
- end
1346
- ```
1347
-
1348
- ## Smartcard device
1349
- A virtual smartcard device can be supplied to the guest via the `libvirt.smartcard` element. The option `mode` is mandatory and currently only the value `passthrough` is supported. The value `spicevmc` is the default for option `type` and can be omitted. When using `type = 'tcp'`, the options `source_mode`, `source_host` and `source_service` are mandatory.
1350
-
1351
- ```ruby
1352
- Vagrant.configure("2") do |config|
1353
- config.vm.provider :libvirt do |libvirt|
1354
- # Add smartcard device with type 'spicevmc'
1355
- libvirt.smartcard :mode => 'passthrough', :type => 'spicevmc'
1356
- end
1357
- end
1358
- ```
1359
-
1360
- ```ruby
1361
- Vagrant.configure("2") do |config|
1362
- config.vm.provider :libvirt do |libvirt|
1363
- # Add smartcard device with type 'tcp'
1364
- libvirt.smartcard :mode => 'passthrough', :type => 'tcp', :source_mode => 'bind', :source_host => '127.0.0.1', :source_service => '2001'
1365
- end
1366
- end
1367
- ```
1368
- ## Hypervisor Features
1369
-
1370
- Hypervisor features can be specified via `libvirt.features` as a list. The default
1371
- options that are enabled are `acpi`, `apic` and `pae`. If you define `libvirt.features`
1372
- you overwrite the defaults, so keep that in mind.
1373
-
1374
- An example:
1375
-
1376
- ```ruby
1377
- Vagrant.configure("2") do |config|
1378
- config.vm.provider :libvirt do |libvirt|
1379
- # Specify the default hypervisor features
1380
- libvirt.features = ['acpi', 'apic', 'pae' ]
1381
- end
1382
- end
1383
- ```
1384
-
1385
- A different example for ARM boards:
1386
-
1387
- ```ruby
1388
- Vagrant.configure("2") do |config|
1389
- config.vm.provider :libvirt do |libvirt|
1390
- # Specify the default hypervisor features
1391
- libvirt.features = ["apic", "gic version='2'" ]
1392
- end
1393
- end
1394
- ```
1395
-
1396
- You can also specify a special set of features that help improve the behavior of guests
1397
- running Microsoft Windows.
1398
-
1399
- You can specify HyperV features via `libvirt.hyperv_feature`. Available
1400
- options are listed below. Note that both options are required:
1401
-
1402
- * `name` - The name of the Hypervisor feature (see the Libvirt documentation)
1403
- * `state` - The state for this feature which can be either `on` or `off`.
1404
-
1405
- ```ruby
1406
- Vagrant.configure("2") do |config|
1407
- config.vm.provider :libvirt do |libvirt|
1408
- # Relax constraints on timers
1409
- libvirt.hyperv_feature :name => 'relaxed', :state => 'on'
1410
- # Enable virtual APIC
1411
- libvirt.hyperv_feature :name => 'vapic', :state => 'on'
1412
- # Enable spinlocks (requires retries to be specified)
1413
- libvirt.hyperv_feature :name => 'spinlocks', :state => 'on', :retries => '8191'
1414
- end
1415
- end
1416
- ```
1417
-
1418
- ## Clock
1419
-
1420
- The clock offset can be specified via `libvirt.clock_offset`. (The default is `utc`.)
1421
-
1422
- Additionally timers can be specified via `libvirt.clock_timer`.
1423
- Available options for timers are: `name`, `track`, `tickpolicy`, `frequency`, `mode` and `present`.
1424
-
1425
- ```ruby
1426
- Vagrant.configure("2") do |config|
1427
- config.vm.provider :libvirt do |libvirt|
1428
- # Set clock offset to localtime
1429
- libvirt.clock_offset = 'localtime'
1430
- # Timers ...
1431
- libvirt.clock_timer :name => 'rtc', :tickpolicy => 'catchup'
1432
- libvirt.clock_timer :name => 'pit', :tickpolicy => 'delay'
1433
- libvirt.clock_timer :name => 'hpet', :present => 'no'
1434
- libvirt.clock_timer :name => 'hypervclock', :present => 'yes'
1435
- end
1436
- end
1437
- ```
1438
-
1439
- ## CPU features
1440
-
1441
- You can specify CPU feature policies via `libvirt.cpu_feature`. Available
1442
- options are listed below. Note that both options are required:
1443
-
1444
- * `name` - The name of the feature for the chosen CPU (see Libvirt's
1445
- `cpu_map.xml`)
1446
- * `policy` - The policy for this feature (one of `force`, `require`,
1447
- `optional`, `disable` and `forbid` - see Libvirt documentation)
1448
-
1449
- ```ruby
1450
- Vagrant.configure("2") do |config|
1451
- config.vm.provider :libvirt do |libvirt|
1452
- # The feature will not be supported by virtual CPU.
1453
- libvirt.cpu_feature :name => 'hypervisor', :policy => 'disable'
1454
- # Guest creation will fail unless the feature is supported by host CPU.
1455
- libvirt.cpu_feature :name => 'vmx', :policy => 'require'
1456
- # The virtual CPU will claim the feature is supported regardless of it being supported by host CPU.
1457
- libvirt.cpu_feature :name => 'pdpe1gb', :policy => 'force'
1458
- end
1459
- end
1460
- ```
1461
-
1462
- ## Memory Backing
1463
-
1464
- You can specify memoryBacking options via `libvirt.memorybacking`. Available options are shown below. Full documentation is available at the [libvirt _memoryBacking_ section](https://libvirt.org/formatdomain.html#elementsMemoryBacking).
1465
-
1466
- NOTE: The hugepages `<page>` element is not yet supported
1467
-
1468
- ```ruby
1469
- Vagrant.configure("2") do |config|
1470
- config.vm.provider :libvirt do |libvirt|
1471
- libvirt.memorybacking :hugepages
1472
- libvirt.memorybacking :nosharepages
1473
- libvirt.memorybacking :locked
1474
- libvirt.memorybacking :source, :type => 'file'
1475
- libvirt.memorybacking :access, :mode => 'shared'
1476
- libvirt.memorybacking :allocation, :mode => 'immediate'
1477
- end
1478
- end
1479
- ```
1480
-
1481
- ## No box and PXE boot
1482
-
1483
- There is support for PXE booting VMs with no disks as well as PXE booting VMs
1484
- with blank disks. There are some limitations:
1485
-
1486
- * Requires Vagrant 1.6.0 or newer
1487
- * No provisioning scripts are run
1488
- * No network configuration is applied to the VM
1489
- * No SSH connection can be made
1490
- * `vagrant halt` will only work cleanly if the VM handles ACPI shutdown signals
1491
-
1492
- In short, VMs without a box can be created, halted and destroyed but all other
1493
- functionality cannot be used.
1494
-
1495
- An example for a PXE booted VM with no disks whatsoever:
1496
-
1497
- ```ruby
1498
- Vagrant.configure("2") do |config|
1499
- config.vm.define :pxeclient do |pxeclient|
1500
- pxeclient.vm.provider :libvirt do |domain|
1501
- domain.boot 'network'
1502
- end
1503
- end
1504
- end
1505
- ```
1506
-
1507
- And an example for a PXE booted VM with no box but a blank disk which will boot from this HD if the NICs fail to PXE boot:
1508
-
1509
- ```ruby
1510
- Vagrant.configure("2") do |config|
1511
- config.vm.define :pxeclient do |pxeclient|
1512
- pxeclient.vm.provider :libvirt do |domain|
1513
- domain.storage :file, :size => '100G', :type => 'qcow2'
1514
- domain.boot 'network'
1515
- domain.boot 'hd'
1516
- end
1517
- end
1518
- end
1519
- ```
1520
-
1521
- An example VM with two networks, where only one is bootable and has a DHCP server on its subnet (for example, Foreman with a DHCP server).
1522
- The network name "foreman_managed" is the key used to define the boot order:
1523
- ```ruby
1524
- config.vm.define :pxeclient do |pxeclient|
1525
- pxeclient.vm.network :private_network, ip: '10.0.0.5',
1526
- libvirt__network_name: "foreman_managed",
1527
- libvirt__dhcp_enabled: false,
1528
- libvirt__host_ip: '10.0.0.1'
1529
-
1530
- pxeclient.vm.provider :libvirt do |domain|
1531
- domain.memory = 1000
1532
- boot_network = {'network' => 'foreman_managed'}
1533
- domain.storage :file, :size => '100G', :type => 'qcow2'
1534
- domain.boot boot_network
1535
- domain.boot 'hd'
1536
- end
1537
- end
1538
- ```
1539
-
1540
- An example VM that is PXE booted from the `br1` device (which must already be configured in the host machine), and if that fails, is booted from the disk:
1541
-
1542
- ```ruby
1543
- Vagrant.configure("2") do |config|
1544
- config.vm.define :pxeclient do |pxeclient|
1545
- pxeclient.vm.network :public_network,
1546
- dev: 'br1',
1547
- auto_config: false
1548
- pxeclient.vm.provider :libvirt do |domain|
1549
- boot_network = {'dev' => 'br1'}
1550
- domain.storage :file, :size => '100G'
1551
- domain.boot boot_network
1552
- domain.boot 'hd'
1553
- end
1554
- end
1555
- end
1556
- ```
1557
-
1558
- ## SSH Access To VM
1559
-
1560
- vagrant-libvirt supports vagrant's [standard ssh
1561
- settings](https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html).
1562
-
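- For example, a minimal sketch overriding a few of the standard settings (values are illustrative):
-
- ```ruby
- Vagrant.configure("2") do |config|
-   # plain Vagrant ssh settings, nothing libvirt-specific
-   config.ssh.username = 'vagrant'
-   config.ssh.forward_agent = true
- end
- ```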
1563
- ## Forwarded Ports
1564
-
1565
- vagrant-libvirt supports Forwarded Ports via ssh port forwarding. Please note
1566
- that due to a well-known limitation only the TCP protocol is supported. For
1567
- each `forwarded_port` directive you specify in your Vagrantfile,
1568
- vagrant-libvirt will maintain an active ssh process for the lifetime of the VM.
1569
- If your VM should happen to be rebooted, the SSH session will need to be
1570
- re-established by halting the VM and bringing it back up.
1571
-
1572
- vagrant-libvirt supports an additional `forwarded_port` option `gateway_ports`
1573
- which defaults to `false`, but can be set to `true` if you want the forwarded
1574
- port to be accessible from outside the Vagrant host. In this case you should
1575
- also set the `host_ip` option to `'*'` since it defaults to `'localhost'`.
1576
-
1577
- You can also provide a custom adapter to forward from via the `adapter` option.
1578
- The default is `eth0`.
1579
-
1580
- **Internally Accessible Port Forward**
1581
-
1582
- `config.vm.network :forwarded_port, guest: 80, host: 2000`
1583
-
1584
- **Externally Accessible Port Forward**
1585
-
1586
- `config.vm.network :forwarded_port, guest: 80, host: 2000, host_ip: "0.0.0.0"`
1587
-
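- **Externally Accessible Port Forward via a Custom Adapter** (a sketch combining the options described above; the adapter name is illustrative)
-
- `config.vm.network :forwarded_port, guest: 80, host: 2000, host_ip: "*", gateway_ports: true, adapter: "eth1"`
-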
1588
- ### Forwarding the ssh-port
1589
-
1590
- Vagrant-libvirt now supports forwarding the standard ssh-port on port 2222 from
1591
- the localhost to allow for consistent provisioning steps/ports to be used when
1592
- defining across multiple providers.
1593
-
1594
- To enable, set the following:
1595
- ```ruby
1596
- Vagrant.configure("2") do |config|
1597
- config.vm.provider :libvirt do |libvirt|
1598
- # Enable forwarding of forwarded_port with id 'ssh'.
1599
- libvirt.forward_ssh_port = true
1600
- end
1601
- end
1602
- ```
1603
-
1604
- Previously, by default, forwarding of the ssh-port was skipped because
1605
- you can access the machine directly. In the future it is expected that this
1606
- will be enabled by default once autocorrect support is added to handle port
1607
- collisions for multi machine environments gracefully.
1608
-
1609
- ## Synced Folders
1610
-
1611
- Vagrant automatically syncs the project folder on the host to `/vagrant` in
1612
- the guest. You can also configure additional synced folders.
1613
-
1614
- **SECURITY NOTE:** for remote Libvirt, NFS synced folders require a bridged
1615
- public network interface and you must connect to Libvirt via ssh.
1616
-
1617
- **NFS**
1618
-
1619
- `vagrant-libvirt` supports
1620
- [NFS](https://www.vagrantup.com/docs/synced-folders/nfs) as the default, with
1621
- bidirectional synced folders.
1622
-
1623
- Example with NFS:
1624
-
1625
- ``` ruby
1626
- Vagrant.configure("2") do |config|
1627
- config.vm.synced_folder "./", "/vagrant"
1628
- end
1629
- ```
1630
-
1631
- **RSync**
1632
-
1633
- `vagrant-libvirt` supports
1634
- [rsync](https://www.vagrantup.com/docs/synced-folders/rsync) with
1635
- unidirectional synced folders.
1636
-
1637
- Example with rsync:
1638
-
1639
- ``` ruby
1640
- Vagrant.configure("2") do |config|
1641
- config.vm.synced_folder "./", "/vagrant", type: "rsync"
1642
- end
1643
- ```
1644
-
1645
- **9P**
1646
-
1647
- `vagrant-libvirt` supports [VirtFS](http://www.linux-kvm.org/page/VirtFS) ([9p
1648
- or Plan 9](https://en.wikipedia.org/wiki/9P_\(protocol\))) with bidirectional
1649
- synced folders.
1650
-
1651
- The difference between NFS and 9p is explained
1652
- [here](https://unix.stackexchange.com/questions/240281/virtfs-plan-9-vs-nfs-as-tool-for-share-folder-for-virtual-machine).
1653
-
1654
- For 9p shares, a `mount: false` option allows you to define synced folders without
1655
- mounting them at boot.
1656
-
1657
- Example for `accessmode: "squash"` with 9p:
1658
-
1659
- ``` ruby
1660
- Vagrant.configure("2") do |config|
1661
- config.vm.synced_folder "./", "/vagrant", type: "9p", disabled: false, accessmode: "squash", owner: "1000"
1662
- end
1663
- ```
1664
-
1665
- Example for `accessmode: "mapped"` with 9p:
1666
-
1667
- ``` ruby
1668
- Vagrant.configure("2") do |config|
1669
- config.vm.synced_folder "./", "/vagrant", type: "9p", disabled: false, accessmode: "mapped", mount: false
1670
- end
1671
- ```
1672
-
1673
- Further documentation on using 9p can be found in [kernel
1674
- docs](https://www.kernel.org/doc/Documentation/filesystems/9p.txt) and in
1675
- [QEMU
1676
- wiki](https://wiki.qemu.org/Documentation/9psetup#Starting_the_Guest_directly).
1677
-
1678
- Please do note that 9p depends on support in the guest and not all distros
1679
- come with the 9p module by default.
1680
-
1681
- **Virtio-fs**
1682
-
1683
- `vagrant-libvirt` supports [Virtio-fs](https://virtio-fs.gitlab.io/) with
1684
- bidirectional synced folders.
1685
-
1686
- For virtiofs shares, a `mount: false` option allows you to define synced folders
1687
- without mounting them at boot.
3
+ ![Vagrant Libvirt Logo](docs/assets/images/logo.png?raw=true "Vagrant Libvirt")
1688
4
 
1689
- So far, passthrough is the only supported access mode and it requires running
1690
- the virtiofsd daemon as root.
1691
-
1692
- QEMU needs to allocate the backing memory for all the guest RAM as shared
1693
- memory, e.g. [Use file-backed
1694
- memory](https://libvirt.org/kbase/virtiofs.html#host-setup) by enabling
1695
- the `memory_backing_dir` option in `/etc/libvirt/qemu.conf`:
1696
-
1697
- ``` shell
1698
- memory_backing_dir = "/dev/shm"
1699
- ```
1700
-
1701
- Example for Libvirt \>= 6.2.0 (e.g. Ubuntu 20.10 with Linux 5.8.0 + QEMU 5.0 +
1702
- Libvirt 6.6.0, i.e. NUMA nodes required) with virtiofs:
1703
-
1704
- ``` ruby
1705
- Vagrant.configure("2") do |config|
1706
- config.vm.provider :libvirt do |libvirt|
1707
- libvirt.cpus = 2
1708
- libvirt.numa_nodes = [{ :cpus => "0-1", :memory => 8192, :memAccess => "shared" }]
1709
- libvirt.memorybacking :access, :mode => "shared"
1710
- end
1711
- config.vm.synced_folder "./", "/vagrant", type: "virtiofs"
1712
- end
1713
- ```
1714
-
1715
- Example for Libvirt \>= 6.9.0 (e.g. Ubuntu 21.04 with Linux 5.11.0 + QEMU 5.2 +
1716
- Libvirt 7.0.0, or Ubuntu 20.04 + [PPA
1717
- enabled](https://launchpad.net/~savoury1/+archive/ubuntu/virtualisation)) with
1718
- virtiofs:
1719
-
1720
- ``` ruby
1721
- Vagrant.configure("2") do |config|
1722
- config.vm.provider :libvirt do |libvirt|
1723
- libvirt.cpus = 2
1724
- libvirt.memory = 8192
1725
- libvirt.memorybacking :access, :mode => "shared"
1726
- end
1727
- config.vm.synced_folder "./", "/vagrant", type: "virtiofs"
1728
- end
1729
- ```
1730
-
1731
- Further documentation on using virtiofs can be found in [official
1732
- HowTo](https://virtio-fs.gitlab.io/index.html#howto) and in [Libvirt
1733
- KB](https://libvirt.org/kbase/virtiofs.html).
1734
-
1735
- Please do note that virtiofs depends on:
1736
-
1737
- - Host: Linux \>= 5.4, QEMU \>= 4.2 and Libvirt \>= 6.2 (e.g. Ubuntu 20.10)
1738
- - Guest: Linux \>= 5.4 (e.g. Ubuntu 20.04)
1739
-
1740
- ## QEMU Session Support
1741
-
1742
- vagrant-libvirt supports using QEMU user sessions to maintain Vagrant VMs. As the session connection does not have root access to the system, features which require root will not work. Access to networks created by the system QEMU connection can be granted by using the [QEMU bridge helper](https://wiki.qemu.org/Features/HelperNetworking). The bridge helper is enabled by default on some distros but may need to be enabled/installed on others.
1743
-
1744
- There must be a virbr network defined in the QEMU system session. You can use the libvirt `default` network (which comes out of the box), the `vagrant-libvirt` network (which is generated if you run a Vagrantfile using the system session), or a manually defined network. These networks can be set to autostart with `sudo virsh net-autostart <net-name>`, which means no further root access is required, even after reboots.
1745
-
1746
- The QEMU bridge helper is configured via `/etc/qemu/bridge.conf`. This file must include the virbr you wish to use (e.g. virbr0, virbr1, etc). You can find this out via `sudo virsh net-dumpxml <net-name>`.
1747
- ```
1748
- allow virbr0
1749
- ```
1750
-
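- For example, to discover which virbr device a given libvirt network uses (the network name here is illustrative):
-
- ```shell
- # prints something like: <bridge name='virbr0' stp='on' delay='0'/>
- sudo virsh net-dumpxml default | grep "bridge name"
- ```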
1751
- An example configuration of a machine using the QEMU session connection:
1752
-
1753
- ```ruby
1754
- Vagrant.configure("2") do |config|
1755
- config.vm.provider :libvirt do |libvirt|
1756
- # Use QEMU session instead of system connection
1757
- libvirt.qemu_use_session = true
1758
- # URI of QEMU session connection, default is as below
1759
- libvirt.uri = 'qemu:///session'
1760
- # URI of QEMU system connection, use to obtain IP address for management, default is below
1761
- libvirt.system_uri = 'qemu:///system'
1762
- # Path to store Libvirt images for the virtual machine, default is ~/.local/share/libvirt/images
1763
- libvirt.storage_pool_path = '/home/user/.local/share/libvirt/images'
1764
- # Management network device, default is below
1765
- libvirt.management_network_device = 'virbr0'
1766
- end
1767
-
1768
- # Public network configuration using existing network device
1769
- # Note: Private networks do not work with QEMU session enabled as root access is required to create new network devices
1770
- config.vm.network :public_network, :dev => "virbr1",
1771
- :mode => "bridge",
1772
- :type => "bridge"
1773
- end
1774
- ```
1775
-
1776
- ## Customized Graphics
1777
-
1778
- vagrant-libvirt supports customizing the display and video settings of the
1779
- managed guest. This is probably most useful for VNC-type displays with
1780
- multiple guests. It lets you specify the exact port for each guest to use
1781
- deterministically.
1782
-
1783
- Here is an example of using custom display options:
1784
-
1785
- ```ruby
1786
- Vagrant.configure("2") do |config|
1787
- config.vm.provider :libvirt do |libvirt|
1788
- libvirt.graphics_port = 5901
1789
- libvirt.graphics_ip = '0.0.0.0'
1790
- libvirt.video_type = 'qxl'
1791
- end
1792
- end
1793
- ```
1794
-
1795
- ## TPM Devices
1796
-
1797
- Modern versions of Libvirt support connecting to TPM devices on the host
1798
- system. This allows you to enable Trusted Boot Extensions, among other
1799
- features, on your guest VMs.
1800
-
1801
- To pass through a hardware TPM, you will generally only need to modify the
1802
- `tpm_path` variable in your guest configuration. However, advanced usage,
1803
- such as the application of a Software TPM, may require modifying the
1804
- `tpm_model`, `tpm_type` and `tpm_version` variables.
1805
-
1806
- The TPM options will only be used if you specify a TPM path or version.
1807
- Declarations of any TPM options without specifying a path or version will
1808
- result in those options being ignored.
1809
-
1810
- Here is an example of using the TPM options:
1811
-
1812
- ```ruby
1813
- Vagrant.configure("2") do |config|
1814
- config.vm.provider :libvirt do |libvirt|
1815
- libvirt.tpm_model = 'tpm-tis'
1816
- libvirt.tpm_type = 'passthrough'
1817
- libvirt.tpm_path = '/dev/tpm0'
1818
- end
1819
- end
1820
- ```
1821
-
1822
- It's also possible for Libvirt to start an emulated TPM device on the host.
1823
- This requires `swtpm` and `swtpm-tools` to be installed on the host.
1824
-
1825
- ```ruby
1826
- Vagrant.configure("2") do |config|
1827
- config.vm.provider :libvirt do |libvirt|
1828
- libvirt.tpm_model = "tpm-crb"
1829
- libvirt.tpm_type = "emulator"
1830
- libvirt.tpm_version = "2.0"
1831
- end
1832
- end
1833
- ```
1834
-
1835
- ## Memory balloon
1836
-
1837
- The configuration of the memory balloon device can be overridden. By default,
1838
- libvirt will automatically attach a memory balloon; this behavior is preserved
1839
- by not configuring any memballoon-related options. The memory balloon can be
1840
- explicitly disabled by setting `memballoon_enabled` to `false`. Setting
1841
- `memballoon_enabled` to `true` will allow additional configuration of
1842
- memballoon-related options.
1843
-
1844
- Here is an example of using the memballoon options:
1845
-
1846
- ```ruby
1847
- Vagrant.configure("2") do |config|
1848
- config.vm.provider :libvirt do |libvirt|
1849
- libvirt.memballoon_enabled = true
1850
- libvirt.memballoon_model = 'virtio'
1851
- libvirt.memballoon_pci_bus = '0x00'
1852
- libvirt.memballoon_pci_slot = '0x0f'
1853
- end
1854
- end
1855
- ```
1856
-
1857
- ## Libvirt communication channels
1858
-
1859
- For certain functionality to be available within a guest, a private
1860
- communication channel must be established with the host. Two notable examples
1861
- of this are the QEMU guest agent, and the Spice/QXL graphics type.
1862
-
1863
- Below is a simple example which exposes a virtio serial channel to the guest.
1864
- Note: in a multi-VM environment, the channel would be created for all VMs.
1865
-
1866
- ```ruby
1867
- Vagrant.configure(2) do |config|
1868
- config.vm.provider :libvirt do |libvirt|
1869
- libvirt.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
1870
- end
1871
- end
1872
- ```
1873
-
1874
- Below is the syntax for creating a spicevmc channel for use by a qxl graphics
1875
- card.
1876
-
1877
- ```ruby
1878
- Vagrant.configure(2) do |config|
1879
- config.vm.provider :libvirt do |libvirt|
1880
- libvirt.channel :type => 'spicevmc', :target_name => 'com.redhat.spice.0', :target_type => 'virtio'
1881
- end
1882
- end
1883
- ```
1884
-
1885
- These settings can be specified on a per-VM basis; however, the per-guest
1886
- settings will OVERRIDE any global 'config' setting. In the following example,
1887
- we create 3 VMs with the following configuration:
1888
-
1889
- * **master**: No channel settings specified, so we default to the provider
1890
- setting of a single virtio guest agent channel.
1891
- * **node1**: Override the channel setting, setting both the guest agent
1892
- channel, and a spicevmc channel
1893
- * **node2**: Override the channel setting, setting both the guest agent
1894
- channel, and a 'guestfwd' channel. TCP traffic sent by the guest to the given
1895
- IP address and port is forwarded to the host socket `/tmp/foo`. Note: this
1896
- device must be unique for each VM.
1897
-
1898
- For example:
1899
-
1900
- ```ruby
1901
- Vagrant.configure(2) do |config|
1902
- config.vm.box = "fedora/32-cloud-base"
1903
- config.vm.provider :libvirt do |libvirt|
1904
- libvirt.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
1905
- end
1906
-
1907
- config.vm.define "master" do |master|
1908
- master.vm.provider :libvirt do |domain|
1909
- domain.memory = 1024
1910
- end
1911
- end
1912
- config.vm.define "node1" do |node1|
1913
- node1.vm.provider :libvirt do |domain|
1914
- domain.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
1915
- domain.channel :type => 'spicevmc', :target_name => 'com.redhat.spice.0', :target_type => 'virtio'
1916
- end
1917
- end
1918
- config.vm.define "node2" do |node2|
1919
- node2.vm.provider :libvirt do |domain|
1920
- domain.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
1921
- domain.channel :type => 'unix', :target_type => 'guestfwd', :target_address => '192.0.2.42', :target_port => '4242',
1922
- :source_path => '/tmp/foo'
1923
- end
1924
- end
1925
- end
1926
- ```
1927
-
1928
- ## Custom command line arguments and environment variables
1929
- You can also specify multiple `qemuargs` arguments or `qemuenv` environment variables to be passed to qemu-system.
1930
-
1931
- * `value` - The value of the argument to pass to qemu
1932
-
1933
- ```ruby
1934
- Vagrant.configure("2") do |config|
1935
- config.vm.provider :libvirt do |libvirt|
1936
- libvirt.qemuargs :value => "-device"
1937
- libvirt.qemuargs :value => "intel-iommu"
1938
- libvirt.qemuenv QEMU_AUDIO_DRV: 'pa'
1939
- libvirt.qemuenv QEMU_AUDIO_TIMER_PERIOD: '150'
1940
- libvirt.qemuenv QEMU_PA_SAMPLES: '1024', QEMU_PA_SERVER: '/run/user/1000/pulse/native'
1941
- end
1942
- end
1943
- ```
1944
-
1945
- ## Box Formats
1946
-
1947
- ### Version 1
1948
-
1949
- This is the original format that most boxes currently use.
1950
-
1951
- You can view an example box in the
1952
- [`example_box/directory`](https://github.com/vagrant-libvirt/vagrant-libvirt/tree/master/example_box).
1953
- That directory also contains instructions on how to build a box.
5
+ [![Join the chat at https://gitter.im/vagrant-libvirt/vagrant-libvirt](https://badges.gitter.im/vagrant-libvirt/vagrant-libvirt.svg)](https://gitter.im/vagrant-libvirt/vagrant-libvirt?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
6
+ [![Build Status](https://github.com/vagrant-libvirt/vagrant-libvirt/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/vagrant-libvirt/vagrant-libvirt/actions/workflows/unit-tests.yml)
7
+ [![Coverage Status](https://coveralls.io/repos/github/vagrant-libvirt/vagrant-libvirt/badge.svg?branch=master)](https://coveralls.io/github/vagrant-libvirt/vagrant-libvirt?branch=master)
8
+ [![Gem Version](https://badge.fury.io/rb/vagrant-libvirt.svg)](https://badge.fury.io/rb/vagrant-libvirt)
1954
9
 
1955
- The box is a tarball containing:
10
+ This is a [Vagrant](http://www.vagrantup.com) plugin that adds a
11
+ [Libvirt](http://libvirt.org) provider to Vagrant, allowing Vagrant to
12
+ control and provision machines via Libvirt toolkit.
1956
13
 
1957
- * qcow2 image file named `box.img`
1958
- * `metadata.json` file describing box image (`provider`, `virtual_size`,
1959
- `format`)
1960
- * `Vagrantfile` that does default settings for the provider-specific
1961
- configuration for this provider
14
+ **Note:** The current version is still a development one. Feedback is welcome and
15
+ can help a lot :-)
1962
16
 
17
+ Vagrant-libvirt Documentation is published at [https://vagrant-libvirt.github.io/vagrant-libvirt/](https://vagrant-libvirt.github.io/vagrant-libvirt/)
1963
18
 
1964
- ### Version 2 (Experimental)
19
+ ## Index
1965
20
 
1966
- Due to the limitation of only being able to handle a single disk with the version 1 format, a new
1967
- format was added to support boxes that need to specify multiple disks. This is still currently
1968
- experimental and as such support for packaging has yet to be added. There is a script in the tools
1969
- folder (tools/create_box_with_two_disks.sh) that should provide a guideline on how to create such
1970
- a box for those that wish to experiment and provide early feedback.
21
+ <!-- vim-markdown-toc GFM -->
1971
22
 
1972
- At its most basic, it expects an array of disks so that a specific order can be presented. Disks
1973
- will be attached in this order, and as such device names within the VM are assumed based on this. The
1974
- 'path' attribute is required, and is expected to be relative to the base of the box. This should
1975
- allow placing the disk images within a nested directory inside the box if that is useful for those
1976
- with a larger number of disks. The name allows overriding the target volume name that will be
1977
- used in the libvirt storage pool. Note that vagrant-libvirt will still prefix the volume name
1978
- with `#{box_name}_vagrant_box_image_#{box_version}_` to avoid accidental clashes with other boxes.
23
+ * [Installing](#installing)
24
+ * [Running](#running)
25
+ * [Development](#development)
26
+ * [Contributing](#contributing)
1979
27
 
1980
- Format and virtual size need no longer be specified as they are now retrieved directly from the
1981
- provided image using `qemu-img info ...`.
28
+ <!-- vim-markdown-toc -->
1982
29
 
1983
- Example format:
1984
- ```json
1985
- {
1986
-   "disks": [
1987
-     {
1988
-       "path": "disk1.img"
1989
-     },
1990
-     {
1991
-       "path": "disk2.img",
1992
-       "name": "secondary_disk"
1993
-     },
1994
-     {
1995
-       "path": "disk3.img"
1996
-     }
1997
-   ],
1998
-   "provider": "libvirt"
1999
- }
2000
- ```
30
+ ## Installing
2001
31
 
2002
- ## Create Box
32
+ Installation typically involves a number of distribution package dependencies to ensure that Libvirt is available.
33
+ We recommend that you follow the [installation guide](https://vagrant-libvirt.github.io/vagrant-libvirt/installation.html).
2003
34
 
2004
- If creating a box from a modified vagrant-libvirt machine, ensure that
2005
- you have set `config.ssh.insert_key = false` in the original Vagrantfile
2006
- as otherwise Vagrant will replace the default connection key-pair that is
2007
- required on first boot with one specific to the machine and prevent
2008
- the default key from working on the exported result.
2009
- ```ruby
2010
- Vagrant.configure("2") do |config|
2011
- # this setting is only recommended if planning to export the
2012
- # resulting machine
2013
- config.ssh.insert_key = false
35
+ ## Running
2014
36
 
2015
- config.vm.define :test_vm do |test_vm|
2016
- test_vm.vm.box = "fedora/32-cloud-base"
2017
- end
2018
- end
2019
- ```
37
+ Once installed, use vagrant-libvirt through vagrant.
2020
38
 
2021
- To create a vagrant-libvirt box from a qcow2 image, run `create_box.sh`
2022
- (located in the tools directory):
39
+ Locate a vagrant box containing the distribution you want to use at
40
+ [Vagrant Cloud](https://app.vagrantup.com/boxes/search?provider=libvirt) and
41
+ initialize.
2023
42
 
2024
43
  ```shell
2025
- $ create_box.sh ubuntu14.qcow2
44
+ vagrant init fedora/32-cloud-base
2026
45
  ```
2027
46
 
2028
- You can also create a box by using [Packer](https://packer.io). Packer
2029
- templates for use with vagrant-libvirt are available at
2030
- https://github.com/jakobadam/packer-qemu-templates. After cloning that project
2031
- you can build a vagrant-libvirt box by running:
47
+ Then run the following command:
2032
48
 
2033
49
  ```shell
2034
- $ cd packer-qemu-templates
2035
- $ packer build ubuntu-14.04-server-amd64-vagrant.json
50
+ vagrant up --provider=libvirt
2036
51
  ```
2037
52
 
2038
- ## Package Box from VM
2039
-
2040
- vagrant-libvirt has native support for [`vagrant
2041
- package`](https://www.vagrantup.com/docs/cli/package.html) via
2042
- libguestfs [virt-sysprep](http://libguestfs.org/virt-sysprep.1.html).
2043
- virt-sysprep operations can be customized via the
2044
- `VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS` environment variable; see the
2045
- [upstream
2046
- documentation](http://libguestfs.org/virt-sysprep.1.html#operations) for
2047
- further details especially on default sysprep operations enabled for
2048
- your system.
2049
-
2050
- Options to the virt-sysprep command call can be passed via
2051
- `VAGRANT_LIBVIRT_VIRT_SYSPREP_OPTIONS` environment variable.
53
+ Vagrant needs to know that we want to use Libvirt and not the default VirtualBox.
54
+ That's why the `--provider=libvirt` option is specified. Another way to tell
55
+ Vagrant to use the Libvirt provider is to set an environment variable:
2052
56
 
2053
57
  ```shell
2054
- $ export VAGRANT_LIBVIRT_VIRT_SYSPREP_OPTIONS="--delete /etc/hostname"
2055
- $ vagrant package
58
+ export VAGRANT_DEFAULT_PROVIDER=libvirt
2056
59
  ```
2057
60
 
2058
- For example, on Chef [bento](https://github.com/chef/bento) VMs that
2059
- require SSH hostkeys already set (e.g. bento/debian-7) as well as leave
2060
- existing LVM UUIDs untouched (e.g. bento/ubuntu-18.04), these can be
2061
- packaged into vagrant-libvirt boxes like so:
2062
-
61
+ Afterwards, to enter the VM simply use:
2063
62
  ```shell
2064
- $ export VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="defaults,-ssh-userdir,-ssh-hostkeys,-lvm-uuids"
2065
- $ vagrant package
63
+ vagrant ssh
2066
64
  ```
2067
65
 
2068
- ## Troubleshooting VMs
2069
-
2070
- The first step for troubleshooting a VM image that appears to not boot correctly,
2071
- or hangs waiting to get an IP, is to check it with a VNC viewer. A key thing
2072
- to remember is that if the VM doesn't get an IP, then vagrant can't communicate
2073
- with it to configure anything, so a problem at this stage is likely to come from
2074
- the VM, but we'll outline the tools and common problems to help you troubleshoot
2075
- that.
2076
-
2077
- By default, when you create a new VM, a vnc server will listen on `127.0.0.1` on
2078
- port `TCP5900`. If you connect with a vnc viewer you can see the boot process. If
2079
- your VM isn't listening on `5900` by default, you can use `virsh dumpxml` to find
2080
- out which port it's listening on, or can configure it with `graphics_port` and
2081
- `graphics_ip` (see 'Domain Specific Options' above).
2082
-
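- For example, assuming the domain is named `mydir_default` (domain names are usually `<directory>_<machine>`), something like:
-
- ```shell
- # list defined domains, then inspect the graphics (VNC) settings of one of them
- virsh list --all
- virsh dumpxml mydir_default | grep -i graphics
- ```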
2083
- Note: Connecting with the console (`virsh console`) requires additional config,
2084
- so some VMs may not show anything on the console at all, instead displaying it in
2085
- the VNC console. The issue with the text console is that the image also needs
2086
- to be built to tell the kernel to output to the console during boot, and typically
2087
- most do not have this built in.
2088
-
2089
- Problems we've seen in the past include:
2090
- - Forgetting to remove `/etc/udev/rules.d/70-persistent-net.rules` before packaging
2091
- the VM
2092
- - VMs expecting a specific disk device to be connected
2093
-
2094
- If you're still confused, check the Github Issues for this repo for anything that
2095
- looks similar to your problem.
2096
-
2097
- [Github Issue #1032](https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1032)
2098
- contains some historical troubleshooting for VMs that appeared to hang.
2099
-
2100
- Did you hit a problem that you'd like to note here to save time in the future?
2101
- Please do!
2102
-
66
+ If you can't find a box that works as you need, have a look at our documentation
67
+ on [creating boxes](https://vagrant-libvirt.github.io/vagrant-libvirt/boxes.html#creating-boxes)
68
+ on how to take existing boxes, customize them and repackage them.
2103
69
 
2104
70
  ## Development
2105
71
 
@@ -2107,16 +73,16 @@ To work on the `vagrant-libvirt` plugin, clone this repository out, and use
2107
73
  [Bundler](http://gembundler.com) to get the dependencies:
2108
74
 
2109
75
  ```shell
2110
- $ git clone https://github.com/vagrant-libvirt/vagrant-libvirt.git
2111
- $ cd vagrant-libvirt
2112
- $ bundle install
76
+ git clone https://github.com/vagrant-libvirt/vagrant-libvirt.git
77
+ cd vagrant-libvirt
78
+ bundle install
2113
79
  ```
2114
80
 
2115
81
  Once you have the dependencies, verify the unit tests pass with `rspec`:
2116
82
 
2117
83
  ```shell
2118
- $ export VAGRANT_HOME=$(mktemp -d)
2119
- $ bundle exec rspec --fail-fast --color --format documentation
84
+ export VAGRANT_HOME=$(mktemp -d)
85
+ bundle exec rspec --fail-fast --color --format documentation
2120
86
  ```
2121
87
 
2122
88
  If those pass, you're ready to start developing the plugin.
@@ -2128,14 +94,17 @@ Additionally if you wish to test against a specific version of vagrant you
2128
94
  can control the version using the following before running the tests:
2129
95
 
2130
96
  ```shell
2131
- $ export VAGRANT_VERSION=v2.2.14
97
+ export VAGRANT_VERSION=v2.2.14
98
+ bundle update && bundle exec rspec --fail-fast --color --format documentation
2132
99
  ```
2133
100
 
2134
101
  **Note** rvm is used by the maintainers to help provide an environment to test
2135
102
  against multiple ruby versions that align with the ones used by vagrant for
2136
103
  their embedded ruby depending on the release. You can see what version is used
2137
104
  by looking at the current [unit tests](.github/workflows/unit-tests.yml)
2138
- workflow.
105
+ workflow. By default, if you have rvm installed and enabled, this project looks
106
+ to use ruby 2.6.6 and configures a separate gemset, both of which will be switched
107
+ to each time you enter the project directory.
2139
108
 
2140
109
  You can test the plugin without installing it into your Vagrant environment by
2141
110
  just creating a `Vagrantfile` in the top level of this directory (it is
@@ -2169,6 +138,7 @@ $ bundle exec vagrant up --provider=libvirt
2169
138
  **IMPORTANT NOTE:** bundle is crucial. You need to use bundled Vagrant.
2170
139
 
2171
140
  ## Contributing
141
+ [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/vagrant-libvirt/vagrant-libvirt/issues)
2172
142
 
2173
143
  1. Fork it
2174
144
  2. Create your feature branch (`git checkout -b my-new-feature`)
@@ -2176,6 +146,8 @@ $ bundle exec vagrant up --provider=libvirt
2176
146
  4. Push to the branch (`git push origin my-new-feature`)
2177
147
  5. Create new Pull Request
2178
148
 
149
+ For future work take a look at [open issues](https://github.com/vagrant-libvirt/vagrant-libvirt/issues?state=open).
150
+
2179
151
  <!--
2180
152
  # styling for TOC
2181
153
  vim: expandtab shiftwidth=2