vagrant-libvirt 0.0.15 → 0.0.16

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 693a19302ccb1cda6b30a7be5b5d9b8c5e74b5e3
- data.tar.gz: 951b697e48245021c9cec5d8e392a44d7d0881ec
+ metadata.gz: e5475753167fabd89226aeb3801ae1122a07b0e9
+ data.tar.gz: 4e9db984dfcd0100c10a5e04d7a9b738a64d33ec
  SHA512:
- metadata.gz: 8d82a8e8257f289130dfc2fc84388adbaff56803012d5a8fdfd6b12395bb100102aa452c75fb98ce1579af2ba0b961d794fe6149e02ed67e8870384ff8c5b402
- data.tar.gz: 21f86a6f84414e3aeb92e801d5114bec30c5c0cac4a391c28142c73468c0825ae620ba527424a1fffac4970f8a75e7abe6119954c60180aef165dfc4baad8e89
+ metadata.gz: bdc9f497e2e357be510406e75e44e6638cdbd4e07830a96821c51cc698ceb21849f2216adfe829bbdc9a96396e46e0507b789b0741f8e996ea3319dbace2a418
+ data.tar.gz: 872af8fbd4e578d78a20f030a04ddf1a9aa60844f0bf6c87603b28b19c2850c5763266a47ec148d106731c15b69a08ddfd46d6a5996b5fe513a212b018e9b58e
@@ -1,3 +1,24 @@
+ # 0.0.15 (Feb 01, 2014)
+ * Minimum vagrant version supported is 1.4.3
+ * Add support for port forwarding (by Ryan Petrello <ryan@ryanpetrello.com>)
+ * Improve network device creation (by Brian Pitts <brian@polibyte.com>)
+ * Improvements to NFS sharing (by Matt Palmer and Jason DeTiberus <detiber@gmail.com>)
+ * Provisioning fixes for Vagrant 1.4 (by @keitwb)
+
+ # 0.0.14 (Jan. 21, 2014)
+ * Vagrant 1.4 compatibility fixes (by Dmitry Vasilets <pronix.service@gmail.com>)
+ * Improve how the VM's IP address is discovered (by Brian Pitts <brian@polibyte.com>)
+ * Add cpu_mode parameter (by Jordan Tardif <jordan.tardi@gmail.com>)
+ * Add disk_bus parameter (by Brian Pitts <brian@polibyte.com>)
+ * Fixes to some error output (by Matthieu Coudron <matthieu.coudron@lip6.fr>)
+ * Add parameters for booting a kernel file (by Matthieu Coudron <matthieu.coudron@lip6.fr>)
+ * Add default_prefix parameter (by James Shubin <purpleidea@gmail.com>)
+ * Improve network creation (by Brian Pitts <brian@polibyte.com>)
+ * Replace default_network parameter with management_network parameters (by Brian Pitts <brian@polibyte.com>)
+
+ # 0.0.13 (Dec. 12, 2013)
+ * Allow use of nested virtualization again (by Artem Chernikov <achernikov@suse.com>)
+
  # 0.0.12 (Dec. 03, 2013)
 
  * Proxy ssh through libvirt host, if libvirt is connected via ssh (by @erik-smit)
data/Gemfile CHANGED
@@ -10,3 +10,7 @@ group :development do
  gem "vagrant", :git => "git://github.com/mitchellh/vagrant.git"
  end
 
+ group :plugins do
+   gem "vagrant-libvirt", :path => '.'
+ end
+
data/README.md CHANGED
@@ -7,9 +7,9 @@ control and provision machines via Libvirt toolkit.
  **Note:** The current version (0.0.15) is still a development one. Feedback is
  welcome and can help a lot :-)
 
- ## Features (Version 0.0.13)
+ ## Features
 
- * Controll local Libvirt hypervisors.
+ * Control local Libvirt hypervisors.
  * Vagrant `up`, `destroy`, `suspend`, `resume`, `halt`, `ssh`, `reload` and `provision` commands.
  * Upload box image (qcow2 format) to Libvirt storage pool.
  * Create volume as COW diff image for domains.
@@ -18,9 +18,10 @@ welcome and can help a lot :-)
  * SSH into domains.
  * Setup hostname and network interfaces.
  * Provision domains with any built-in Vagrant provisioner.
- * Synced folder support via `rsync` or `nfs`.
- * Snapshots via [sahara](https://github.com/jedi4ever/sahara)
- * Use boxes from other Vagrant providers via [vagrant-mutate](https://github.com/sciurus/vagrant-mutate)
+ * Synced folder support via `rsync`, `nfs` or `9p`.
+ * Snapshots via [sahara](https://github.com/jedi4ever/sahara).
+ * Package caching via [vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier/).
+ * Use boxes from other Vagrant providers via [vagrant-mutate](https://github.com/sciurus/vagrant-mutate).
 
  ## Future work
 
@@ -79,11 +80,18 @@ Vagrant.configure("2") do |config|
  end
 
  config.vm.provider :libvirt do |libvirt|
-   libvirt.driver = "qemu"
+   libvirt.driver = "kvm"
    libvirt.host = "localhost"
    libvirt.connect_via_ssh = true
    libvirt.username = "root"
    libvirt.storage_pool_name = "default"
+
+   # include as many of these additional disks as you want to
+   libvirt.storage :file,
+     #:path => '',      # automatically chosen if unspecified!
+     #:device => 'vdb', # automatically chosen if unspecified!
+     #:size => '10G',   # defaults to 10G if unspecified!
+     :type => 'qcow2'   # defaults to 'qcow2' if unspecified!
  end
  end
  ```
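As a concrete, hypothetical use of the new `libvirt.storage` option introduced above, a Vagrantfile could request two extra qcow2 disks; device names and sizes below are illustrative, since both are chosen automatically when unspecified:

```ruby
# Hypothetical Vagrantfile fragment: two additional disks on top of the
# box image (vda). Device defaults to the next free vdX, size to 10G.
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    libvirt.storage :file                  # would become vdb, 10G, qcow2
    libvirt.storage :file, :size => '20G'  # would become vdc, 20G, qcow2
  end
end
```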
@@ -92,12 +100,13 @@ end
  This provider exposes quite a few provider-specific configuration options:
 
- * `driver` - A hypervisor name to access. For now only qemu is supported.
+ * `driver` - A hypervisor name to access. For now only kvm and qemu are supported.
  * `host` - The name of the server where libvirtd is running.
  * `connect_via_ssh` - Whether to use an ssh tunnel to connect to Libvirt.
  * `username` - Username to access Libvirt.
  * `password` - Password to access Libvirt.
  * `id_ssh_key_file` - The ssh key file name to access Libvirt (eg: id_dsa or id_rsa or ... in the user's .ssh directory)
+ * `socket` - Path to the libvirt unix socket (eg: /var/run/libvirt/libvirt-sock)
  * `storage_pool_name` - Libvirt storage pool name, where box image and instance snapshots will be stored.
 
  ### Domain Specific Options
@@ -108,9 +117,9 @@ This provider exposes quite a few provider-specific configuration options:
  * `nested` - [Enable nested virtualization](https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt). Default is false.
  * `cpu_mode` - What cpu mode to use for nested virtualization. Defaults to 'host-model' if not set.
  * `volume_cache` - Controls the cache mechanism. Possible values are "default", "none", "writethrough", "writeback", "directsync" and "unsafe". [See driver->cache in libvirt documentation](http://libvirt.org/formatdomain.html#elementsDisks).
- * `kernel` - To launch the guest with a kernel residing on host filesystems (Equivalent to qemu `-kernel`)
- * `initrd` - To specify the initramfs/initrd to use for the guest (Equivalent to qemu `-initrd`)
- * `cmd_line` - Arguments passed on to the guest kernel initramfs or initrd to use (Equivalent to qemu `-append`)
+ * `kernel` - To launch the guest with a kernel residing on host filesystems. Equivalent to qemu `-kernel`.
+ * `initrd` - To specify the initramfs/initrd to use for the guest. Equivalent to qemu `-initrd`.
+ * `cmd_line` - Arguments passed on to the guest kernel. Equivalent to qemu `-append`.
 
  Specific domain settings can be set for each domain separately in multi-VM
@@ -156,7 +165,7 @@ Vagrant goes through steps below when creating new project:
  4. Create and start new domain on Libvirt host.
  5. Check for DHCP lease from dnsmasq server.
  6. Wait till SSH is available.
- 7. Sync folders via `rsync` and run Vagrant provisioner on new domain if
+ 7. Sync folders and run Vagrant provisioner on new domain if
     setup in Vagrantfile.
 
  ## Networks
@@ -228,7 +237,7 @@ starts with 'libvirt__' string. Here is a list of those options:
  * `:mac` - MAC address for the interface.
 
  ### Public Network Options
- * `:dev` - Physical device that the public interface should use. Default is 'eth0'
+ * `:dev` - Physical device that the public interface should use. Default is 'eth0'.
  * `:mode` - The mode in which the public interface should operate. Supported
    modes are available from the [libvirt documentation](http://www.libvirt.org/formatdomain.html#elementsNICSDirect).
    Default mode is 'bridge'.
@@ -279,14 +288,16 @@ will maintain an active ssh process for the lifetime of the VM.
  ## Synced Folders
 
- There is minimal support for synced folders. Upon `vagrant up`, the Libvirt
- provider will use `rsync` (if available) to uni-directionally sync the folder
- to the remote machine over SSH.
+ vagrant-libvirt supports bidirectional synced folders via nfs or 9p and
+ unidirectional via rsync. The default is nfs. Vagrant automatically syncs
+ the project folder on the host to */vagrant* in the guest. You can also
+ configure additional synced folders.
+
+ You can change the synced folder type for */vagrant* by explicitly configuring
+ it and setting the type, e.g.
 
- This is good enough for all built-in Vagrant provisioners (shell,
- chef, and puppet) to work!
+     config.vm.synced_folder './', '/vagrant', type: 'rsync'
 
- If the `:nfs => true` option is used, the folder will be exported by nfs.
 
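Assuming the libvirt version check in this release passes (9p attach/detach needs libvirt 1.2.2 or newer, per the `usable?` method added below), the same setting can select the new 9p type; this one-liner is a hypothetical sketch, not taken verbatim from the README:

```ruby
# Hypothetical: mount the project folder over 9p instead of the nfs default.
config.vm.synced_folder './', '/vagrant', type: '9p'
```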
  ## Box Format
 
@@ -335,9 +346,9 @@ IMPORTANT NOTE: bundle is crucial. You need to use bundled vagrant.
  ## Contributing
 
- 1. Fork it
- 2. Create your feature branch (`git checkout -b my-new-feature`)
- 3. Commit your changes (`git commit -am 'Add some feature'`)
- 4. Push to the branch (`git push origin my-new-feature`)
- 5. Create new Pull Request
+ 1. Fork it.
+ 2. Create your feature branch (`git checkout -b my-new-feature`).
+ 3. Commit your changes (`git commit -am 'Add some feature'`).
+ 4. Push to the branch (`git push origin my-new-feature`).
+ 5. Create a new Pull Request.
 
@@ -33,10 +33,11 @@ Vagrant.configure("2") do |config|
 
  # A hypervisor name to access. Different drivers can be specified, but
  # this version of provider creates KVM machines only. Some examples of
- # drivers are qemu (KVM/qemu), xen (Xen hypervisor), lxc (Linux Containers),
+ # drivers are kvm (qemu hardware accelerated), qemu (qemu emulated),
+ # xen (Xen hypervisor), lxc (Linux Containers),
  # esx (VMware ESX), vmwarews (VMware Workstation) and more. Refer to
  # documentation for available drivers (http://libvirt.org/drivers.html).
- libvirt.driver = "qemu"
+ libvirt.driver = "kvm"
 
  # The name of the server, where libvirtd is running.
  libvirt.host = "localhost"
@@ -18,32 +18,35 @@ module VagrantPlugins
  if !env[:result]
    b2.use SetNameOfDomain
    b2.use HandleStoragePool
-   b2.use HandleBoxUrl
+   b2.use HandleBox
    b2.use HandleBoxImage
    b2.use CreateDomainVolume
    b2.use CreateDomain
 
-   b2.use TimedProvision
+   b2.use Provision
    b2.use CreateNetworks
    b2.use CreateNetworkInterfaces
 
+   b2.use PrepareNFSValidIds
+   b2.use SyncedFolderCleanup
+   b2.use SyncedFolders
+
    b2.use StartDomain
    b2.use WaitTillUp
 
-   if Vagrant::VERSION < "1.4.0"
-     b2.use NFS
-   else
-     b2.use PrepareNFSValidIds
-     b2.use SyncedFolderCleanup
-     b2.use SyncedFolders
-   end
+   b2.use StartDomain
+   b2.use WaitTillUp
 
    b2.use ForwardPorts
 
    b2.use PrepareNFSSettings
    b2.use ShareFolders
    b2.use SetHostname
-   b2.use SyncFolders
+   # b2.use SyncFolders
  else
    b2.use action_start
  end
@@ -74,6 +77,11 @@ module VagrantPlugins
  # Ensure networks are created and active
  b3.use CreateNetworks
 
+ b3.use PrepareNFSValidIds
+ b3.use SyncedFolderCleanup
+ b3.use SyncedFolders
+
  # Start it..
  b3.use StartDomain
@@ -81,14 +89,6 @@ module VagrantPlugins
  # so wait for dhcp lease and store IP into machines data_dir.
  b3.use WaitTillUp
 
- # Handle shared folders
- if Vagrant::VERSION < "1.4.0"
-   b3.use NFS
- else
-   b3.use PrepareNFSValidIds
-   b3.use SyncedFolderCleanup
-   b3.use SyncedFolders
- end
 
  b3.use ForwardPorts
  b3.use PrepareNFSSettings
@@ -156,7 +156,7 @@ module VagrantPlugins
 
  b2.use ConnectLibvirt
  b2.use ClearForwardedPorts
- b2.use PruneNFSExports
+ # b2.use PruneNFSExports
  b2.use DestroyDomain
  b2.use DestroyNetworks
  end
@@ -204,7 +204,7 @@ module VagrantPlugins
  end
 
  b3.use Provision
- b3.use SyncFolders
+ # b3.use SyncFolders
  end
  end
  end
@@ -321,25 +321,29 @@ module VagrantPlugins
  autoload :MessageNotCreated, action_root.join('message_not_created')
  autoload :MessageNotRunning, action_root.join('message_not_running')
  autoload :MessageNotSuspended, action_root.join('message_not_suspended')
+
  autoload :PrepareNFSSettings, action_root.join('prepare_nfs_settings')
  autoload :PrepareNFSValidIds, action_root.join('prepare_nfs_valid_ids')
  autoload :PruneNFSExports, action_root.join('prune_nfs_exports')
+
  autoload :ReadSSHInfo, action_root.join('read_ssh_info')
  autoload :ReadState, action_root.join('read_state')
  autoload :ResumeDomain, action_root.join('resume_domain')
  autoload :SetNameOfDomain, action_root.join('set_name_of_domain')
+
+ # I don't think we need it anymore
  autoload :ShareFolders, action_root.join('share_folders')
  autoload :StartDomain, action_root.join('start_domain')
  autoload :SuspendDomain, action_root.join('suspend_domain')
- autoload :SyncFolders, action_root.join('sync_folders')
  autoload :TimedProvision, action_root.join('timed_provision')
+
  autoload :WaitTillUp, action_root.join('wait_till_up')
+ autoload :PrepareNFSValidIds, action_root.join('prepare_nfs_valid_ids')
+
  autoload :SSHRun, 'vagrant/action/builtin/ssh_run'
- autoload :HandleBoxUrl, 'vagrant/action/builtin/handle_box_url'
- unless Vagrant::VERSION < "1.4.0"
-   autoload :SyncedFolders, 'vagrant/action/builtin/synced_folders'
-   autoload :SyncedFolderCleanup, 'vagrant/action/builtin/synced_folder_cleanup'
- end
+ autoload :HandleBox, 'vagrant/action/builtin/handle_box'
+ autoload :SyncedFolders, 'vagrant/action/builtin/synced_folders'
+ autoload :SyncedFolderCleanup, 'vagrant/action/builtin/synced_folder_cleanup'
  end
  end
  end
@@ -23,9 +23,9 @@ module VagrantPlugins
  config = env[:machine].provider_config
 
  # Setup connection uri.
- uri = config.driver
+ uri = config.driver.dup
  virt_path = case uri
- when 'qemu', 'openvz', 'uml', 'phyp', 'parallels'
+ when 'qemu', 'openvz', 'uml', 'phyp', 'parallels', 'kvm'
    '/system'
  when 'xen', 'esx'
    '/'
@@ -34,6 +34,9 @@ module VagrantPlugins
  else
    raise "Require specify driver #{uri}"
  end
+ if uri == 'kvm'
+   uri = 'qemu' # use qemu uri for kvm domain type
+ end
 
  if config.connect_via_ssh
    uri << '+ssh://'
@@ -57,8 +60,10 @@ module VagrantPlugins
  if config.id_ssh_key_file
    # set ssh key for access to libvirt host
    home_dir = `echo ${HOME}`.chomp
-   uri << "&keyfile=#{home_dir}/.ssh/"+config.id_ssh_key_file
+   uri << "\&keyfile=#{home_dir}/.ssh/"+config.id_ssh_key_file
  end
+ # set path to libvirt socket
+ uri << "\&socket="+config.socket if config.socket
 
  conn_attr = {}
  conn_attr[:provider] = 'libvirt'
@@ -68,8 +73,7 @@ module VagrantPlugins
 
  # Setup command for retrieving IP address for newly created machine
  # with some MAC address. Get it from dnsmasq leases table
- ip_command = %q[ LEASES=$(find /var/lib/libvirt/dnsmasq/ /var/lib/misc/ -name '*leases'); ]
- ip_command << %q[ [ -n "$LEASES" ] && grep $mac $LEASES | awk '{ print $3 }' ]
+ ip_command = %q[ find /var/lib/libvirt/dnsmasq/ /var/lib/misc/ -name '*leases' -exec grep $mac {} \; | cut -d' ' -f3 ]
  conn_attr[:libvirt_ip_command] = ip_command
 
  @logger.info("Connecting to Libvirt (#{uri}) ...")
@@ -12,6 +12,14 @@ module VagrantPlugins
  @app = app
  end
 
+ def _disk_name(name, disk)
+   return "#{name}-#{disk[:device]}.#{disk[:type]}" # disk name
+ end
+
+ def _disks_print(disks)
+   return disks.collect{ |x| x[:device]+'('+x[:type]+','+x[:size]+')' }.join(', ')
+ end
+
  def call(env)
    # Get config.
    config = env[:machine].provider_config
@@ -28,8 +36,12 @@ module VagrantPlugins
  @cmd_line = config.cmd_line
  @initrd = config.initrd
 
- # TODO get type from driver config option
- @domain_type = 'kvm'
+ # Storage
+ @storage_pool_name = config.storage_pool_name
+ @disks = config.disks
+
+ @domain_type = config.driver
 
  @os_type = 'hvm'
@@ -39,6 +51,32 @@ module VagrantPlugins
  raise Errors::DomainVolumeExists if domain_volume == nil
  @domain_volume_path = domain_volume.path
 
+ # the default storage prefix is typically /var/lib/libvirt/images/
+ storage_prefix = File.dirname(@domain_volume_path)+'/'
+
+ @disks.each do |disk|
+   disk[:name] = _disk_name(@name, disk)
+   if disk[:path].nil?
+     disk[:path] = "#{storage_prefix}#{_disk_name(@name, disk)}" # automatically chosen!
+   end
+
+   # make the disk. equivalent to:
+   # qemu-img create -f qcow2 <path> 10G
+   begin
+     domain_volume_disk = env[:libvirt_compute].volumes.create(
+       :name => disk[:name],
+       :format_type => disk[:type],
+       :path => disk[:path],
+       :capacity => disk[:size],
+       #:allocation => ?,
+       :pool_name => @storage_pool_name)
+   rescue Fog::Errors::Error => e
+     raise Errors::FogDomainVolumeCreateError,
+       :error_message => e.message
+   end
+ end
+
@@ -46,11 +84,17 @@ module VagrantPlugins
  env[:ui].info(" -- Cpus: #{@cpus}")
  env[:ui].info(" -- Memory: #{@memory_size/1024}M")
  env[:ui].info(" -- Base box: #{env[:machine].box.name}")
- env[:ui].info(" -- Storage pool: #{env[:machine].provider_config.storage_pool_name}")
+ env[:ui].info(" -- Storage pool: #{@storage_pool_name}")
  env[:ui].info(" -- Image: #{@domain_volume_path}")
  env[:ui].info(" -- Volume Cache: #{@domain_volume_cache}")
  env[:ui].info(" -- Kernel: #{@kernel}")
  env[:ui].info(" -- Initrd: #{@initrd}")
+ if @disks.length > 0
+   env[:ui].info(" -- Disks: #{_disks_print(@disks)}")
+ end
+ @disks.each do |disk|
+   env[:ui].info(" -- Disk(#{disk[:device]}): #{disk[:path]}")
+ end
  env[:ui].info(" -- Command line : #{@cmd_line}")
 
  # Create libvirt domain.
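The two helper methods this hunk adds are small enough to check in isolation. A standalone sketch (method bodies copied from the diff; the VM name and disk data are made up):

```ruby
# Sketch of the disk-naming helpers added to CreateDomain above.
def _disk_name(name, disk)
  "#{name}-#{disk[:device]}.#{disk[:type]}"
end

def _disks_print(disks)
  disks.collect { |x| x[:device] + '(' + x[:type] + ',' + x[:size] + ')' }.join(', ')
end

disks = [
  { :device => 'vdb', :type => 'qcow2', :size => '10G' },
  { :device => 'vdc', :type => 'qcow2', :size => '20G' },
]
puts _disk_name('test_vm', disks[0])  # => test_vm-vdb.qcow2
puts _disks_print(disks)              # => vdb(qcow2,10G), vdc(qcow2,20G)
```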
@@ -70,9 +70,11 @@ module VagrantPlugins
 
  # Configuration for public interfaces which use the macvtap driver
  if iface_configuration[:iface_type] == :public_network
-   template_name = 'public_interface'
    @device = iface_configuration.fetch(:dev, 'eth0')
+   @type = iface_configuration.fetch(:type, 'direct')
    @mode = iface_configuration.fetch(:mode, 'bridge')
+   @model_type = iface_configuration.fetch(:model_type, 'e1000')
+   template_name = 'public_interface'
    @logger.info("Setting up public interface using device #{@device} in mode #{@mode}")
  end
 
@@ -147,6 +149,10 @@ module VagrantPlugins
  # Get list of all (active and inactive) libvirt networks.
  available_networks = libvirt_networks(libvirt_client)
 
+ if options[:iface_type] == :public_network
+   return 'public'
+ end
+
  if options[:ip]
    address = network_address(options[:ip], options[:netmask])
    available_networks.each do |network|
@@ -157,7 +163,7 @@ module VagrantPlugins
  end
  end
 
- raise NetworkNotAvailableError, network_name: options[:ip]
+ raise Errors::NetworkNotAvailableError, network_name: options[:ip]
  end
  end
  end
@@ -47,8 +47,8 @@ module VagrantPlugins
  ))
 
  ssh_pid = redirect_port(
-   @env[:machine].name,
-   fp[:host_ip] || '0.0.0.0',
+   @env[:machine],
+   fp[:host_ip] || 'localhost',
    fp[:host],
    fp[:guest_ip] || @env[:machine].provider.ssh_info[:host],
    fp[:guest]
@@ -77,13 +77,24 @@ module VagrantPlugins
  end
 
  def redirect_port(machine, host_ip, host_port, guest_ip, guest_port)
+   ssh_info = machine.ssh_info
    params = %W(
-     #{machine}
-     -L #{host_ip}:#{host_port}:#{guest_ip}:#{guest_port}
+     "-L #{host_ip}:#{host_port}:#{guest_ip}:#{guest_port}"
      -N
+     #{ssh_info[:host]}
    ).join(' ')
-   # TODO get options without shelling out
-   options = `vagrant ssh-config #{machine} | awk '{printf " -o "$1"="$2}'`
+
+   options = (%W(
+     User=#{ssh_info[:username]}
+     Port=#{ssh_info[:port]}
+     UserKnownHostsFile=/dev/null
+     StrictHostKeyChecking=no
+     PasswordAuthentication=no
+     ForwardX11=#{ssh_info[:forward_x11] ? 'yes' : 'no'}
+   ) + ssh_info[:private_key_path].map do |pk|
+     "IdentityFile=#{pk}"
+   end).map { |s| s.prepend('-o ') }.join(' ')
+
    ssh_cmd = "ssh #{options} #{params}"
 
    @logger.debug "Forwarding port with `#{ssh_cmd}`"
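The new version builds the ssh options from `machine.ssh_info` instead of shelling out to `vagrant ssh-config`. The option-assembly step can be exercised on its own; the `ssh_info` values below are fabricated for illustration:

```ruby
# Standalone sketch of how redirect_port assembles the "-o" ssh options.
ssh_info = {
  :username => 'vagrant',
  :port => 22,
  :forward_x11 => false,
  :private_key_path => ['/home/user/.vagrant.d/insecure_private_key'],
}

options = (%W(
  User=#{ssh_info[:username]}
  Port=#{ssh_info[:port]}
  UserKnownHostsFile=/dev/null
  StrictHostKeyChecking=no
  PasswordAuthentication=no
  ForwardX11=#{ssh_info[:forward_x11] ? 'yes' : 'no'}
) + ssh_info[:private_key_path].map { |pk| "IdentityFile=#{pk}" }
).map { |s| '-o ' + s }.join(' ')

puts options
```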
@@ -15,7 +15,8 @@ module VagrantPlugins
  uuids = env[:libvirt_compute].servers.all.map(&:id)
  # ids not in this array will be removed from nfs
  uuids.delete(uuid)
- env[:host].nfs_prune(uuids)
+ env[:host].capability(
+   :nfs_prune, env[:machine].ui, uuids)
  end
 
  @app.call(env)
@@ -19,13 +19,29 @@ module VagrantPlugins
  def read_state(libvirt, machine)
    return :not_created if machine.id.nil?
 
+   begin
+     server = libvirt.servers.get(machine.id)
+   rescue Libvirt::RetrieveError => e
+     server = nil
+     @logger.debug("Machine not found #{e}.")
+   end
    # Find the machine
-   server = libvirt.servers.get(machine.id)
-   if server.nil? || [:'shutting-down', :terminated].include?(server.state.to_sym)
-     # The machine can't be found
-     @logger.info('Machine not found or terminated, assuming it got destroyed.')
-     machine.id = nil
-     return :not_created
+   begin
+     server = libvirt.servers.get(machine.id)
+     if server.nil? || [:'shutting-down', :terminated].include?(server.state.to_sym)
+       # The machine can't be found
+       @logger.info('Machine shutting down or terminated, assuming it got destroyed.')
+       machine.id = nil
+       return :not_created
+     end
+   rescue Libvirt::RetrieveError => e
+     if e.libvirt_code == ProviderLibvirt::Util::ErrorCodes::VIR_ERR_NO_DOMAIN
+       @logger.info("Machine #{machine.id} not found.")
+       machine.id = nil
+       return :not_created
+     else
+       raise e
+     end
  end
 
  # Return the state
@@ -21,10 +21,16 @@ module VagrantPlugins
  env[:domain_name] << '_'
  env[:domain_name] << env[:machine].name.to_s
 
+ begin
    @logger.info("Looking for domain #{env[:domain_name]} through list #{env[:libvirt_compute].servers.all}")
    # Check if the domain name is not already taken
-   domain = ProviderLibvirt::Util::Collection.find_matching(
+   domain = ProviderLibvirt::Util::Collection.find_matching(
      env[:libvirt_compute].servers.all, env[:domain_name])
+ rescue Fog::Errors::Error => e
+   @logger.info("#{e}")
+   domain = nil
+ end
 
  @logger.info("Looking for domain #{env[:domain_name]}")
@@ -0,0 +1,40 @@
+ require "vagrant/util/retryable"
+
+ module VagrantPlugins
+   module ProviderLibvirt
+     module Cap
+       class MountP9
+         extend Vagrant::Util::Retryable
+
+         def self.mount_p9_shared_folder(machine, folders, options)
+           folders.each do |name, opts|
+             # Expand the guest path so we can handle things like "~/vagrant"
+             expanded_guest_path = machine.guest.capability(
+               :shell_expand_guest_path, opts[:guestpath])
+
+             # Do the actual creating and mounting
+             machine.communicate.sudo("mkdir -p #{expanded_guest_path}")
+
+             # Mount
+             mount_tag = name.dup
+
+             mount_opts = "-o trans=virtio"
+             mount_opts += ",access=#{options[:owner]}" if options[:owner]
+             mount_opts += ",version=#{options[:version]}" if options[:version]
+             mount_opts += ",#{opts[:mount_options]}" if opts[:mount_options]
+
+             mount_command = "mount -t 9p #{mount_opts} '#{mount_tag}' #{expanded_guest_path}"
+             retryable(:on => Vagrant::Errors::LinuxMountFailed,
+                       :tries => 5,
+                       :sleep => 3) do
+               machine.communicate.sudo('modprobe 9p')
+               machine.communicate.sudo('modprobe 9pnet_virtio')
+               machine.communicate.sudo(mount_command,
+                 :error_class => Vagrant::Errors::LinuxMountFailed)
+             end
+           end
+         end
+       end
+     end
+   end
+ end
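The mount-command construction in `MountP9` above is pure string assembly and can be checked without a guest. A standalone sketch, with hypothetical folder name and options:

```ruby
# Sketch of how MountP9 composes the 9p mount command.
def p9_mount_command(name, guest_path, opts, options)
  mount_opts = "-o trans=virtio"
  mount_opts += ",access=#{options[:owner]}" if options[:owner]
  mount_opts += ",version=#{options[:version]}" if options[:version]
  mount_opts += ",#{opts[:mount_options]}" if opts[:mount_options]
  "mount -t 9p #{mount_opts} '#{name}' #{guest_path}"
end

cmd = p9_mount_command('vagrant-root', '/vagrant',
                       { :mount_options => 'rw' },
                       { :version => '9p2000.L' })
puts cmd  # => mount -t 9p -o trans=virtio,version=9p2000.L,rw 'vagrant-root' /vagrant
```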
@@ -0,0 +1,103 @@
+ require "log4r"
+ require 'ostruct'
+ require 'nokogiri'
+
+ require "vagrant/util/subprocess"
+ require "vagrant/errors"
+ require "vagrant-libvirt/errors"
+
+ module VagrantPlugins
+   module SyncedFolder9p
+     class SyncedFolder < Vagrant.plugin("2", :synced_folder)
+       include Vagrant::Util
+       include VagrantPlugins::ProviderLibvirt::Util::ErbTemplate
+
+       def initialize(*args)
+         super
+         @logger = Log4r::Logger.new("vagrant_libvirt::synced_folders::9p")
+       end
+
+       def usable?(machine, raise_error=false)
+         # bail now if not using libvirt, since checking the version would throw an error
+         return false unless machine.provider_name == :libvirt
+
+         # <filesystem/> support in device attach/detach introduced in 1.2.2
+         # version number format is major * 1,000,000 + minor * 1,000 + release
+         libvirt_version = ProviderLibvirt.libvirt_connection.client.libversion
+         libvirt_version >= 1002002
+       end
+
+       def prepare(machine, folders, opts)
+         raise Vagrant::Errors::Error("No libvirt connection") if ProviderLibvirt.libvirt_connection.nil?
+
+         @conn = ProviderLibvirt.libvirt_connection.client
+
+         begin
+           # loop through folders
+           folders.each do |id, folder_opts|
+             folder_opts.merge!({ :accessmode => "passthrough",
+                                  :readonly => nil })
+             machine.ui.info "Machine id: #{machine.id}\nShould be mounting folders\n #{id}, opts: #{folder_opts}"
+
+             xml = to_xml('filesystem', folder_opts)
+             @conn.lookup_domain_by_uuid(machine.id).attach_device(xml, 0)
+           end
+         rescue => e
+           machine.ui.error("could not attach device because: #{e}")
+           raise VagrantPlugins::ProviderLibvirt::Errors::AttachDeviceError,
+             :error_message => e.message
+         end
+       end
+
+       # TODO: once up, mount folders
+       def enable(machine, folders, _opts)
+         # Go through each folder and mount
+         machine.ui.info("mounting p9 share in guest")
+         # Only mount folders that have a guest path specified.
+         mount_folders = {}
+         folders.each do |id, opts|
+           mount_folders[id] = opts.dup if opts[:guestpath]
+         end
+         common_opts = {
+           :version => '9p2000.L',
+         }
+         # Mount the actual folder
+         machine.guest.capability(
+           :mount_p9_shared_folder, mount_folders, common_opts)
+       end
+
+       def cleanup(machine, _opts)
+         raise Vagrant::Errors::Error("No libvirt connection") if ProviderLibvirt.libvirt_connection.nil?
+
+         @conn = ProviderLibvirt.libvirt_connection.client
+
+         begin
+           if machine.id && machine.id != ""
+             dom = @conn.lookup_domain_by_uuid(machine.id)
+             Nokogiri::XML(dom.xml_desc).xpath('/domain/devices/filesystem').each do |xml|
+               dom.detach_device(xml.to_s)
+               machine.ui.info "Cleaned up shared folders"
+             end
+           end
+         rescue => e
+           machine.ui.error("could not detach device because: #{e}")
+           raise VagrantPlugins::ProviderLibvirt::Errors::DetachDeviceError,
+             :error_message => e.message
+         end
+       end
+     end
+   end
+ end
@@ -1,5 +1,14 @@
  require 'vagrant'
 
+ class Numeric
+   Alphabet = ('a'..'z').to_a
+   def vdev
+     s, q = '', self
+     (q, r = (q - 1).divmod(26)) && s.prepend(Alphabet[r]) until q.zero?
+     'vd' + s
+   end
+ end
+
  module VagrantPlugins
  module ProviderLibvirt
  class Config < Vagrant.plugin('2', :config)
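The `Numeric#vdev` monkey patch added above turns a 1-based index into a libvirt disk device name, spreadsheet-column style (bijective base 26). A standalone copy of the same method, runnable on its own:

```ruby
# Sketch of the Numeric#vdev helper: 1 => "vda", 26 => "vdz", 27 => "vdaa".
class Numeric
  Alphabet = ('a'..'z').to_a
  def vdev
    s, q = '', self
    (q, r = (q - 1).divmod(26)) && s.prepend(Alphabet[r]) until q.zero?
    'vd' + s
  end
end

puts 1.vdev   # => vda
puts 2.vdev   # => vdb
puts 26.vdev  # => vdz
puts 27.vdev  # => vdaa
```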
@@ -12,6 +21,9 @@ module VagrantPlugins
  # If use ssh tunnel to connect to Libvirt.
  attr_accessor :connect_via_ssh
 
+ # Path to the libvirt socket
+ attr_accessor :socket
+
  # The username to access Libvirt.
  attr_accessor :username
@@ -43,6 +55,9 @@ module VagrantPlugins
  attr_accessor :cmd_line
  attr_accessor :initrd
 
+ # Storage
+ attr_accessor :disks
+
  def initialize
    @driver = UNSET_VALUE
    @host = UNSET_VALUE
@@ -64,10 +79,51 @@ module VagrantPlugins
  @kernel = UNSET_VALUE
  @initrd = UNSET_VALUE
  @cmd_line = UNSET_VALUE
+
+ # Storage
+ @disks = UNSET_VALUE
+ end
+
+ def _get_device(disks)
+   disks = [] if disks == UNSET_VALUE
+   # skip existing devices and also the first one (vda)
+   exist = disks.collect { |x| x[:device] } + [1.vdev.to_s]
+   skip = 1 # we're 1 based, not 0 based...
+   while true do
+     dev = skip.vdev # get lettered device
+     if !exist.include?(dev)
+       return dev
+     end
+     skip += 1
+   end
+ end
+
+ # NOTE: this will run twice for each time it's needed, so keep it idempotent
+ def storage(storage_type, options={})
+   options = {
+     :device => _get_device(@disks),
+     :type => 'qcow2',
+     :size => '10G', # matches the fog default
+     :path => nil,
+   }.merge(options)
+
+   @disks = [] if @disks == UNSET_VALUE
+
+   disk = {
+     :device => options[:device],
+     :type => options[:type],
+     :size => options[:size],
+     :path => options[:path],
+   }
+
+   if storage_type == :file
+     @disks << disk # append
+   end
  end
 
  def finalize!
-   @driver = 'qemu' if @driver == UNSET_VALUE
+   @driver = 'kvm' if @driver == UNSET_VALUE
    @host = nil if @host == UNSET_VALUE
    @connect_via_ssh = false if @connect_via_ssh == UNSET_VALUE
    @username = nil if @username == UNSET_VALUE
@@ -87,6 +143,9 @@ module VagrantPlugins
  @kernel = nil if @kernel == UNSET_VALUE
  @cmd_line = '' if @cmd_line == UNSET_VALUE
  @initrd = '' if @initrd == UNSET_VALUE
+
+ # Storage
+ @disks = [] if @disks == UNSET_VALUE
  end
 
  def validate(machine)
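The `_get_device` helper added in this hunk picks the first free `vdX` name, skipping `vda` (the box image) and any device already configured. A self-contained sketch of the same logic (with the `vdev` helper inlined so it runs standalone):

```ruby
# Sketch of Config#_get_device: pick the first unused /dev/vdX name.
class Numeric
  Alphabet = ('a'..'z').to_a
  def vdev
    s, q = '', self
    (q, r = (q - 1).divmod(26)) && s.prepend(Alphabet[r]) until q.zero?
    'vd' + s
  end
end

def get_device(disks)
  exist = disks.collect { |x| x[:device] } + [1.vdev]  # vda is always taken
  skip = 1
  loop do
    dev = skip.vdev
    return dev unless exist.include?(dev)
    skip += 1
  end
end

puts get_device([])                      # => vdb
puts get_device([{ :device => 'vdb' }])  # => vdc
```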
@@ -122,6 +122,10 @@ module VagrantPlugins
  error_key(:attach_device_error)
  end
 
+ class DetachDeviceError < VagrantLibvirtError
+   error_key(:detach_device_error)
+ end
+
  class NoIpAddressError < VagrantLibvirtError
  error_key(:no_ip_address_error)
  end
@@ -6,8 +6,8 @@ end
 
  # This is a sanity check to make sure no one is attempting to install
  # this into an early Vagrant version.
- if Vagrant::VERSION < '1.3.0'
-   raise 'The Vagrant Libvirt plugin is only compatible with Vagrant 1.3+'
+ if Vagrant::VERSION < '1.5.0'
+   raise 'The Vagrant Libvirt plugin is only compatible with Vagrant 1.5+'
  end
 
  module VagrantPlugins
@@ -31,6 +31,18 @@ module VagrantPlugins
        require_relative 'provider'
        Provider
      end
+
+     guest_capability("linux", "mount_p9_shared_folder") do
+       require_relative "cap/mount_p9"
+       Cap::MountP9
+     end
+
+     # lower priority than nfs or rsync
+     # https://github.com/pradels/vagrant-libvirt/pull/170
+     synced_folder("9p", 4) do
+       require_relative "cap/synced_folder"
+       VagrantPlugins::SyncedFolder9p::SyncedFolder
+     end
 
      # This initializes the internationalization strings.
      def self.setup_i18n
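The `synced_folder("9p", 4)` registration above adds 9p as an opt-in synced-folder type at lower priority than NFS or rsync. A hedged Vagrantfile sketch of how a user would request it (only the `"9p"` type string comes from the registration; the box name and paths are placeholders, and any extra mount options should be checked against the plugin README):

```ruby
# Illustrative Vagrantfile fragment, not verified against the plugin docs.
# Only :type => "9p" is taken from the synced_folder registration above.
Vagrant.configure("2") do |config|
  config.vm.box = "some-libvirt-box" # placeholder box name
  config.vm.synced_folder "./src", "/srv/src", :type => "9p"
end
```

Because the priority is 4, Vagrant will still prefer NFS or rsync unless the user names 9p explicitly like this.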
@@ -31,6 +31,17 @@
     <%# we need to ensure a unique target dev -%>
     <target dev='vda' bus='<%= @disk_bus %>'/>
   </disk>
+  <%# additional disks -%>
+  <% @disks.each do |d| -%>
+  <disk type='file' device='disk'>
+    <driver name='qemu' type='<%= d[:type] %>'/>
+    <source file='<%= d[:path] %>'/>
+    <target dev='<%= d[:device] %>' bus='virtio'/>
+    <%# this will get auto generated by libvirt
+    <address type='pci' domain='0x0000' bus='0x00' slot='???' function='0x0'/>
+    -%>
+  </disk>
+  <% end -%>
   <serial type='pty'>
     <target port='0'/>
   </serial>
@@ -0,0 +1,8 @@
+<filesystem type='mount' accessmode='<%= accessmode %>'>
+  <driver type='path' wrpolicy='immediate'/>
+  <source dir='<%= hostpath %>'/>
+  <target dir='<%= guestpath %>'/>
+  <% unless readonly.nil? %>
+  <readonly />
+  <% end %>
+</filesystem>
@@ -1,7 +1,13 @@
-<interface type='direct'>
+<interface type='<%= @type %>'>
   <% if @mac %>
   <mac address='<%= @mac %>'/>
   <% end %>
+  <% if @type == 'direct' %>
   <source dev='<%= @device %>' mode='<%= @mode %>'/>
+  <% else %>
+  <source bridge='<%= @device %>'/>
+  <model type='<%= @model_type %>'/>
+  <% end %>
 </interface>
+
@@ -5,6 +5,7 @@ module VagrantPlugins
       autoload :Collection, 'vagrant-libvirt/util/collection'
       autoload :Timer, 'vagrant-libvirt/util/timer'
       autoload :NetworkUtil, 'vagrant-libvirt/util/network_util'
+      autoload :ErrorCodes, 'vagrant-libvirt/util/error_codes'
     end
   end
 end
@@ -1,17 +1,21 @@
-require 'erb'
+require 'erubis'
 
 module VagrantPlugins
   module ProviderLibvirt
     module Util
       module ErbTemplate
 
-        # Taken from fog source.
-        def to_xml template_name = nil
+
+        # TODO might be a chance to use vagrant template system according to https://github.com/mitchellh/vagrant/issues/3231
+        def to_xml template_name = nil, data = binding
           erb = template_name || self.class.to_s.split("::").last.downcase
           path = File.join(File.dirname(__FILE__), "..", "templates",
                            "#{erb}.xml.erb")
           template = File.read(path)
-          ERB.new(template, nil, '-').result(binding)
+
+          # TODO according to erubis documentation, we should rather use evaluate and forget about
+          # binding since the template may then change variables values
+          Erubis::Eruby.new(template, :trim => true).result(data)
         end
 
       end
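The key change in this hunk is that `to_xml` now accepts an explicit binding (`data = binding`) instead of always using its own, so callers can render a template against their local variables. The binding-passing pattern can be shown with stdlib ERB alone (Erubis is swapped for ERB here only so the sketch runs without the erubis gem; `render` is our name, not the plugin's):

```ruby
require 'erb'

# Render a template against a caller-supplied binding, mirroring the
# `to_xml template_name = nil, data = binding` signature above.
def render(template, data)
  ERB.new(template, trim_mode: '-').result(data)
end

name = 'test-vm'
# The caller's binding carries the local `name` into the template.
xml = render("<domain><name><%= name %></name></domain>", binding)
# xml == "<domain><name>test-vm</name></domain>"
```

Evaluating in the caller's binding is convenient but, as the TODO above notes, it lets the template mutate the caller's locals; Erubis's `evaluate` with an explicit context object avoids that.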
@@ -0,0 +1,101 @@
+# Ripped from http://libvirt.org/html/libvirt-virterror.html#virErrorNumber.
+module VagrantPlugins
+  module ProviderLibvirt
+    module Util
+      module ErrorCodes
+        VIR_ERR_OK = 0
+        VIR_ERR_INTERNAL_ERROR = 1 # internal error
+        VIR_ERR_NO_MEMORY = 2 # memory allocation failure
+        VIR_ERR_NO_SUPPORT = 3 # no support for this function
+        VIR_ERR_UNKNOWN_HOST = 4 # could not resolve hostname
+        VIR_ERR_NO_CONNECT = 5 # can't connect to hypervisor
+        VIR_ERR_INVALID_CONN = 6 # invalid connection object
+        VIR_ERR_INVALID_DOMAIN = 7 # invalid domain object
+        VIR_ERR_INVALID_ARG = 8 # invalid function argument
+        VIR_ERR_OPERATION_FAILED = 9 # a command to hypervisor failed
+        VIR_ERR_GET_FAILED = 10 # a HTTP GET command to failed
+        VIR_ERR_POST_FAILED = 11 # a HTTP POST command to failed
+        VIR_ERR_HTTP_ERROR = 12 # unexpected HTTP error code
+        VIR_ERR_SEXPR_SERIAL = 13 # failure to serialize an S-Expr
+        VIR_ERR_NO_XEN = 14 # could not open Xen hypervisor control
+        VIR_ERR_XEN_CALL = 15 # failure doing an hypervisor call
+        VIR_ERR_OS_TYPE = 16 # unknown OS type
+        VIR_ERR_NO_KERNEL = 17 # missing kernel information
+        VIR_ERR_NO_ROOT = 18 # missing root device information
+        VIR_ERR_NO_SOURCE = 19 # missing source device information
+        VIR_ERR_NO_TARGET = 20 # missing target device information
+        VIR_ERR_NO_NAME = 21 # missing domain name information
+        VIR_ERR_NO_OS = 22 # missing domain OS information
+        VIR_ERR_NO_DEVICE = 23 # missing domain devices information
+        VIR_ERR_NO_XENSTORE = 24 # could not open Xen Store control
+        VIR_ERR_DRIVER_FULL = 25 # too many drivers registered
+        VIR_ERR_CALL_FAILED = 26 # not supported by the drivers (DEPRECATED)
+        VIR_ERR_XML_ERROR = 27 # an XML description is not well formed or broken
+        VIR_ERR_DOM_EXIST = 28 # the domain already exist
+        VIR_ERR_OPERATION_DENIED = 29 # operation forbidden on read-only connections
+        VIR_ERR_OPEN_FAILED = 30 # failed to open a conf file
+        VIR_ERR_READ_FAILED = 31 # failed to read a conf file
+        VIR_ERR_PARSE_FAILED = 32 # failed to parse a conf file
+        VIR_ERR_CONF_SYNTAX = 33 # failed to parse the syntax of a conf file
+        VIR_ERR_WRITE_FAILED = 34 # failed to write a conf file
+        VIR_ERR_XML_DETAIL = 35 # detail of an XML error
+        VIR_ERR_INVALID_NETWORK = 36 # invalid network object
+        VIR_ERR_NETWORK_EXIST = 37 # the network already exist
+        VIR_ERR_SYSTEM_ERROR = 38 # general system call failure
+        VIR_ERR_RPC = 39 # some sort of RPC error
+        VIR_ERR_GNUTLS_ERROR = 40 # error from a GNUTLS call
+        VIR_WAR_NO_NETWORK = 41 # failed to start network
+        VIR_ERR_NO_DOMAIN = 42 # domain not found or unexpectedly disappeared
+        VIR_ERR_NO_NETWORK = 43 # network not found
+        VIR_ERR_INVALID_MAC = 44 # invalid MAC address
+        VIR_ERR_AUTH_FAILED = 45 # authentication failed
+        VIR_ERR_INVALID_STORAGE_POOL = 46 # invalid storage pool object
+        VIR_ERR_INVALID_STORAGE_VOL = 47 # invalid storage vol object
+        VIR_WAR_NO_STORAGE = 48 # failed to start storage
+        VIR_ERR_NO_STORAGE_POOL = 49 # storage pool not found
+        VIR_ERR_NO_STORAGE_VOL = 50 # storage volume not found
+        VIR_WAR_NO_NODE = 51 # failed to start node driver
+        VIR_ERR_INVALID_NODE_DEVICE = 52 # invalid node device object
+        VIR_ERR_NO_NODE_DEVICE = 53 # node device not found
+        VIR_ERR_NO_SECURITY_MODEL = 54 # security model not found
+        VIR_ERR_OPERATION_INVALID = 55 # operation is not applicable at this time
+        VIR_WAR_NO_INTERFACE = 56 # failed to start interface driver
+        VIR_ERR_NO_INTERFACE = 57 # interface driver not running
+        VIR_ERR_INVALID_INTERFACE = 58 # invalid interface object
+        VIR_ERR_MULTIPLE_INTERFACES = 59 # more than one matching interface found
+        VIR_WAR_NO_NWFILTER = 60 # failed to start nwfilter driver
+        VIR_ERR_INVALID_NWFILTER = 61 # invalid nwfilter object
+        VIR_ERR_NO_NWFILTER = 62 # nw filter pool not found
+        VIR_ERR_BUILD_FIREWALL = 63 # nw filter pool not found
+        VIR_WAR_NO_SECRET = 64 # failed to start secret storage
+        VIR_ERR_INVALID_SECRET = 65 # invalid secret
+        VIR_ERR_NO_SECRET = 66 # secret not found
+        VIR_ERR_CONFIG_UNSUPPORTED = 67 # unsupported configuration construct
+        VIR_ERR_OPERATION_TIMEOUT = 68 # timeout occurred during operation
+        VIR_ERR_MIGRATE_PERSIST_FAILED = 69 # a migration worked, but making the VM persist on the dest host failed
+        VIR_ERR_HOOK_SCRIPT_FAILED = 70 # a synchronous hook script failed
+        VIR_ERR_INVALID_DOMAIN_SNAPSHOT = 71 # invalid domain snapshot
+        VIR_ERR_NO_DOMAIN_SNAPSHOT = 72 # domain snapshot not found
+        VIR_ERR_INVALID_STREAM = 73 # stream pointer not valid
+        VIR_ERR_ARGUMENT_UNSUPPORTED = 74 # valid API use but unsupported by the given driver
+        VIR_ERR_STORAGE_PROBE_FAILED = 75 # storage pool probe failed
+        VIR_ERR_STORAGE_POOL_BUILT = 76 # storage pool already built
+        VIR_ERR_SNAPSHOT_REVERT_RISKY = 77 # force was not requested for a risky domain snapshot revert
+        VIR_ERR_OPERATION_ABORTED = 78 # operation on a domain was canceled/aborted by user
+        VIR_ERR_AUTH_CANCELLED = 79 # authentication cancelled
+        VIR_ERR_NO_DOMAIN_METADATA = 80 # The metadata is not present
+        VIR_ERR_MIGRATE_UNSAFE = 81 # Migration is not safe
+        VIR_ERR_OVERFLOW = 82 # integer overflow
+        VIR_ERR_BLOCK_COPY_ACTIVE = 83 # action prevented by block copy job
+        VIR_ERR_OPERATION_UNSUPPORTED = 84 # The requested operation is not supported
+        VIR_ERR_SSH = 85 # error in ssh transport driver
+        VIR_ERR_AGENT_UNRESPONSIVE = 86 # guest agent is unresponsive, not running or not usable
+        VIR_ERR_RESOURCE_BUSY = 87 # resource is already in use
+        VIR_ERR_ACCESS_DENIED = 88 # operation on the object/resource was denied
+        VIR_ERR_DBUS_SERVICE = 89 # error from a dbus service
+        VIR_ERR_STORAGE_VOL_EXIST = 90 # the storage vol already exists
+      end
+    end
+  end
+end
+
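These constants mirror libvirt's `virErrorNumber` enum so code can branch on symbolic names instead of magic integers. A self-contained sketch of the pattern (the two constants are copied from the module above; the `describe_libvirt_error` helper is purely illustrative and not part of the plugin):

```ruby
# Two constants duplicated from the ErrorCodes module above, just so this
# sketch runs standalone; the describe helper is an illustration only.
module ErrorCodes
  VIR_ERR_NO_DOMAIN  = 42 # domain not found or unexpectedly disappeared
  VIR_ERR_NO_NETWORK = 43 # network not found
end

def describe_libvirt_error(code)
  case code
  when ErrorCodes::VIR_ERR_NO_DOMAIN  then 'domain not found'
  when ErrorCodes::VIR_ERR_NO_NETWORK then 'network not found'
  else "libvirt error #{code}"
  end
end

describe_libvirt_error(42) # => "domain not found"
```

In the plugin, comparing a caught libvirt error's numeric code against these names makes rescue blocks self-documenting.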
@@ -1,5 +1,5 @@
 module VagrantPlugins
   module ProviderLibvirt
-    VERSION = '0.0.15'
+    VERSION = '0.0.16'
   end
 end
@@ -99,6 +99,8 @@ en:
       No domain found. %{error_message}
     attach_device_error: |-
       Error while attaching new device to domain. %{error_message}
+    detach_device_error: |-
+      Error while detaching device from domain. %{error_message}
     no_ip_address_error: |-
       No IP address found.
     management_network_error: |-
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: vagrant-libvirt
 version: !ruby/object:Gem::Version
-  version: 0.0.15
+  version: 0.0.16
 platform: ruby
 authors:
 - Lukas Stanek
@@ -10,7 +10,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2014-02-01 00:00:00.000000000 Z
+date: 2014-05-11 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: fog
@@ -116,15 +116,16 @@ files:
 - lib/vagrant-libvirt/action/share_folders.rb
 - lib/vagrant-libvirt/action/start_domain.rb
 - lib/vagrant-libvirt/action/suspend_domain.rb
-- lib/vagrant-libvirt/action/sync_folders.rb
-- lib/vagrant-libvirt/action/timed_provision.rb
 - lib/vagrant-libvirt/action/wait_till_up.rb
+- lib/vagrant-libvirt/cap/mount_p9.rb
+- lib/vagrant-libvirt/cap/synced_folder.rb
 - lib/vagrant-libvirt/config.rb
 - lib/vagrant-libvirt/errors.rb
 - lib/vagrant-libvirt/plugin.rb
 - lib/vagrant-libvirt/provider.rb
 - lib/vagrant-libvirt/templates/default_storage_pool.xml.erb
 - lib/vagrant-libvirt/templates/domain.xml.erb
+- lib/vagrant-libvirt/templates/filesystem.xml.erb
 - lib/vagrant-libvirt/templates/interface.xml.erb
 - lib/vagrant-libvirt/templates/private_network.xml.erb
 - lib/vagrant-libvirt/templates/public_interface.xml.erb
@@ -132,6 +133,7 @@ files:
 - lib/vagrant-libvirt/util.rb
 - lib/vagrant-libvirt/util/collection.rb
 - lib/vagrant-libvirt/util/erb_template.rb
+- lib/vagrant-libvirt/util/error_codes.rb
 - lib/vagrant-libvirt/util/network_util.rb
 - lib/vagrant-libvirt/util/timer.rb
 - lib/vagrant-libvirt/version.rb
@@ -158,7 +160,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
     version: '0'
 requirements: []
 rubyforge_project:
-rubygems_version: 2.0.14
+rubygems_version: 2.2.2
 signing_key:
 specification_version: 4
 summary: Vagrant provider for libvirt.
@@ -1,66 +0,0 @@
-require "log4r"
-require "vagrant/util/subprocess"
-
-module VagrantPlugins
-  module ProviderLibvirt
-    module Action
-      # This middleware uses `rsync` to sync the folders over to the
-      # libvirt domain.
-      class SyncFolders
-        def initialize(app, env)
-          @app = app
-          @logger = Log4r::Logger.new("vagrant_libvirt::action::sync_folders")
-        end
-
-        def call(env)
-          @app.call(env)
-
-          ssh_info = env[:machine].ssh_info
-
-          env[:machine].config.vm.synced_folders.each do |id, data|
-            next if data[:type] == :nfs
-            proxycommand = "-o ProxyCommand='#{ssh_info[:proxy_command]}'" if ssh_info[:proxy_command]
-            hostpath = File.expand_path(data[:hostpath], env[:root_path])
-            guestpath = data[:guestpath]
-
-            # Make sure there is a trailing slash on the host path to
-            # avoid creating an additional directory with rsync
-            hostpath = "#{hostpath}/" if hostpath !~ /\/$/
-
-            env[:ui].info(I18n.t('vagrant_libvirt.rsync_folder',
-                                 :hostpath => hostpath,
-                                 :guestpath => guestpath))
-
-            # Create the guest path
-            env[:machine].communicate.sudo("mkdir -p '#{guestpath}'")
-            env[:machine].communicate.sudo(
-              "chown #{ssh_info[:username]} '#{guestpath}'")
-
-            # Rsync over to the guest path using the SSH info
-            command = [
-              'rsync', '--del', '--verbose', '--archive', '-z',
-              '--exclude', '.vagrant/',
-              '-e', "ssh -p #{ssh_info[:port]} #{proxycommand} -o StrictHostKeyChecking=no #{ssh_key_options(ssh_info)}",
-              hostpath,
-              "#{ssh_info[:username]}@#{ssh_info[:host]}:#{guestpath}"]
-
-            r = Vagrant::Util::Subprocess.execute(*command)
-            if r.exit_code != 0
-              raise Errors::RsyncError,
-                :guestpath => guestpath,
-                :hostpath => hostpath,
-                :stderr => r.stderr
-            end
-          end
-        end
-
-        private
-
-        def ssh_key_options(ssh_info)
-          # Ensure that `private_key_path` is an Array (for Vagrant < 1.4)
-          Array(ssh_info[:private_key_path]).map { |path| "-i '#{path}' " }.join
-        end
-      end
-    end
-  end
-end
-
@@ -1,21 +0,0 @@
-require 'vagrant-libvirt/util/timer'
-
-module VagrantPlugins
-  module ProviderLibvirt
-    module Action
-      # This is the same as the builtin provision except it times the
-      # provisioner runs.
-      class TimedProvision < Vagrant::Action::Builtin::Provision
-        def run_provisioner(env)
-          timer = Util::Timer.time do
-            super
-          end
-
-          env[:metrics] ||= {}
-          env[:metrics]['provisioner_times'] ||= []
-          env[:metrics]['provisioner_times'] << [env[:provisioner].class.to_s, timer]
-        end
-      end
-    end
-  end
-end