lxd-common 0.9.8 → 0.9.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +5 -5
- data/.gitattributes +3 -0
- data/.gitignore +15 -15
- data/.rubocop.yml +2 -0
- data/.travis.yml +35 -34
- data/Gemfile +1 -1
- data/LICENSE +203 -202
- data/README.md +200 -11
- data/Rakefile +3 -3
- data/Vagrantfile +4 -4
- data/bin/console +6 -6
- data/lib/nexussw/lxd.rb +9 -4
- data/lib/nexussw/lxd/driver.rb +33 -17
- data/lib/nexussw/lxd/driver/cli.rb +2 -2
- data/lib/nexussw/lxd/driver/mixins/cli.rb +99 -47
- data/lib/nexussw/lxd/driver/mixins/helpers/wait.rb +4 -4
- data/lib/nexussw/lxd/driver/mixins/rest.rb +21 -27
- data/lib/nexussw/lxd/driver/rest.rb +2 -2
- data/lib/nexussw/lxd/rest_api.rb +49 -35
- data/lib/nexussw/lxd/rest_api/connection.rb +18 -14
- data/lib/nexussw/lxd/transport.rb +6 -6
- data/lib/nexussw/lxd/transport/cli.rb +2 -2
- data/lib/nexussw/lxd/transport/local.rb +2 -2
- data/lib/nexussw/lxd/transport/mixins/cli.rb +31 -15
- data/lib/nexussw/lxd/transport/mixins/helpers/execute.rb +2 -2
- data/lib/nexussw/lxd/transport/mixins/helpers/folder_txfr.rb +9 -9
- data/lib/nexussw/lxd/transport/mixins/helpers/users.rb +3 -3
- data/lib/nexussw/lxd/transport/mixins/local.rb +22 -18
- data/lib/nexussw/lxd/transport/mixins/rest.rb +51 -43
- data/lib/nexussw/lxd/transport/rest.rb +2 -2
- data/lib/nexussw/lxd/version.rb +1 -1
- data/lxd-common.gemspec +17 -17
- metadata +9 -7
data/README.md
CHANGED
@@ -1,34 +1,223 @@
-#
+# lxd-common [](https://travis-ci.org/NexusSW/lxd-common) [](https://gemnasium.com/github.com/NexusSW/lxd-common)
 
 [](https://codeclimate.com/github/NexusSW/lxd-common/maintainability)
 [](https://codeclimate.com/github/NexusSW/lxd-common/test_coverage)
 
 ## Installation
 
+> **NOTE:** Versions < 0.10 are considered pre-release, while version 0.9.8 is considered the first stable pre-release. When filing issues, make sure you are on at least 0.9.8, and upgrade to 0.10+ as soon as it is available (imminent).
+
 Add this line to your application's Gemfile:
 
 ```ruby
-gem 'lxd-common'
+gem 'lxd-common', '~> 0.9', '>= 0.9.8'
 ```
 
-
+Next execute:
 
-> bundle
+> bundle install
 
-Or install it yourself
+Or install it yourself with:
 
 > gem install lxd-common
 
+## Background
+
+This gem intends to abstract away the complexities of communicating with an LXD host. LXD exposes two methods of interaction: a legacy LXC-compatible CLI interface, and a newer REST API. Both behave a bit differently, but both remain equally valid today and will remain available for the foreseeable future. This gem exposes an interface that will remain consistent, no matter the underlying communication mechanism.
+
+Do you want to:
+
+* Control a local LXD host via the CLI?
+* Control a local/remote LXD host via its REST API?
+* Control an LXD host without caring where it's located, or without dealing with the API variances?
+* Oh! and do you want to control an LXD host that's buried under 2-3+ layers of nesting? _(some scenarios still under dev at LXD & upstream)_
+
+You're covered. You provide the credentials and we'll provide the agnostic API.
+
 ## Usage
 
-
+This gem is split into 2 functional areas: Driver and Transport. Constructing a driver object requires some environment-specific information, but once you have the driver object constructed, all else is generic.
+
+Drivers communicate with the LXD host directly and perform all command and control operations: creating a container, starting/stopping/deleting it, and setting and querying container configuration.
+
+Transports let you make use of a container. They can execute commands inside a container, and transfer files in and out of it. A transport can be obtained by executing `transport = driver.transport_for 'containername'` on any driver.
+
+### Drivers
+
+There are 2 different drivers at your disposal:
+
+* NexusSW::LXD::Driver::Rest
+* NexusSW::LXD::Driver::CLI
+
+Once constructed, they both respond to the same API calls in the same way, so you no longer need to deal with API variances. The next sections tell you how to construct these drivers; the final 'Driver methods' section, and everything after it, is completely generic.
+
+#### REST Driver
+
+ex: `driver = NexusSW::LXD::Driver::Rest.new 'https://someserver:8443', verify_ssl: false`
+
+The first parameter is the REST endpoint of the LXD host. SSL verification is enabled by default and can be disabled with the shortcut option `verify_ssl: false`. The other available option is `ssl:`, which has the following subkeys:
+
+key | default | description
+---|:---:|---
+verify | true | overrides `verify_ssl`. `verify_ssl` is just a shortcut to this option for syntactic sugar when no other ssl options are needed
+client_cert | ~/.config/lxc/client.crt | client certificate file used to authorize your connection to the LXD host
+client_key | ~/.config/lxc/client.key | key file associated with the above certificate
+
+ex2: `driver = NexusSW::LXD::Driver::Rest.new 'https://someserver:8443', ssl: { verify: false, client_cert: '/someother.crt', client_key: '/someother.key' }`
+
+#### CLI Driver
+
+The CLI driver is a different beast. All it knows is 'how' to construct `lxc` commands, just as you would type them into your shell if you were managing LXD manually. By default it doesn't know 'where' to execute those commands, so you have to give it a transport. You'll see why in the next section.
+
+ex: `driver = NexusSW::LXD::Driver::CLI.new(NexusSW::LXD::Transport::Local.new)`
+
+There are no options at present. You only need to pass in a transport telling the CLI driver 'where' to run its commands. Above I've demonstrated using the Local transport, which executes the `lxc` commands on the local machine. Using the CLI driver with the Local transport is usually synonymous with using the REST driver pointed at localhost (barring any environmental modifications).
+
+##### CLI Driver _magic_: nested containers
+
+I briefly mentioned that you can call `transport_for` on any driver to gain a transport instance, and I alluded to the CLI driver working with any transport. By extension, this means you can use the CLI driver to execute `lxc` commands almost anywhere. The CLI driver only cares that the transport it is given implements the NexusSW::LXD::Transport interface.
+
+So what if we did this?
+
+```ruby
+1> outerdriver = NexusSW::LXD::Driver::Rest.new 'https://someserver:8443'
+2> resttransport = outerdriver.transport_for 'somecontainer'
+
+3> middledriver = NexusSW::LXD::Driver::CLI.new resttransport
+4> middletransport = middledriver.transport_for 'nestedcontainer'
+
+5> innerdriver = NexusSW::LXD::Driver::CLI.new middletransport
+6> innertransport = innerdriver.transport_for 'some-very-nested-container'
+
+7> contents = innertransport.read_file '/tmp/something_interesting'
+8> puts innertransport.execute('dmesg').error!.stdout
+```
+
+That would totally work!!! The final innertransport object is a transport for 'some-very-nested-container', which is itself running within a container named 'nestedcontainer', which in turn runs inside 'somecontainer', remotely located on 'someserver'.
+
+On line 3 you can see the CLI driver accepting a transport produced by the REST API. And on line 5 you see the CLI driver consuming a transport produced by a different instance of itself. It all works, with the caveat that you've set up nested LXD hosts on both 'somecontainer' and 'nestedcontainer' (an exercise for the reader).
+
+#### Driver methods
+
+To add one more point of emphasis: drivers talk to an LXD host, while transports talk to individual containers.
+
+**NOTE:** Due to the behavior of some of the underlying long-running tasks inherent in LXD (it can take a minute to create, start, or stop a container), these driver methods are implemented with a convergent philosophy rather than being classically imperative. This means that, for example, a call to delete_container will NOT fail if the container does not exist (where one might otherwise expect a 404). Similarly, start_container will not fail if the container is already started. The driver just ensures that the container state is what you asked for; if it already is, then great, and if it is not, it sends the necessary commands. If this is not your desired behavior, ample status check methods are available for you to handle these conditions in your own way.
+
+Here are some of the things that you can do while talking to an LXD host: (driver methods)
+
+method name | parameters | options | notes
+---|---|---|---
+create_container | container_name | _see next section_ | _see next section_
+update_container | container_name | _see next section_ | _see next section_
+start_container | container_name
+stop_container | container_name | force: false<br />timeout: 0<br />retry_interval: _unset_<br />retry_count: _unset_ | the default behavior is no force, no timeout, and no retries. This is analogous to sending a SIGPWR to the init system within the container and waiting for a graceful shutdown. Specifying `force: true` instead sends a SIGKILL to the container
+delete_container | container_name | | This will force stop the container, if it is currently running, prior to deleting. If you need a graceful shutdown, send the stop_container command on your own.
+container_status | container_name | | returns a simple container status string such as 'running' or 'stopped'. There are many other intermediate states, but these 2 are the ones you are most likely to be interested in.
+container | container_name | | returns the current container configuration
+container_state | container_name | | returns various runtime info regarding the running state of the container (e.g. IP address, memory utilization, etc.)
+wait_for | container_name,<br />what,<br />timeout = 60 | | 'what' currently supports `:ip` and `:cloud_init`. After container start, calling this method will wait for an IP to be assigned, or for cloud-init to complete, respectively
+transport_for | container_name | | returns a transport used to communicate directly with a container. Note that the container itself does not need to be directly routable. If you have access to the host, then this transport will just plain work.
+
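The convergent behavior described in the NOTE above can be illustrated with a generic, self-contained sketch. The class and state names here are hypothetical (this is not the gem's internal code); it only demonstrates the check-state-first pattern the driver methods follow:

```ruby
# Hypothetical illustration of the convergent pattern: check the current
# state first, and only issue commands when the state actually differs.
class ConvergentHost
  def initialize(states = {})
    @states = states # container_name => "running" | "stopped"
  end

  def container_status(name)
    @states[name]
  end

  def start_container(name)
    return if container_status(name) == "running" # already converged: silently succeed
    @states[name] = "running"
  end

  def delete_container(name)
    return unless @states.key?(name) # deleting a missing container is not an error
    @states.delete(name)
  end
end

host = ConvergentHost.new("web" => "stopped")
host.start_container("web")
host.start_container("web")    # repeat call is a harmless no-op
host.delete_container("ghost") # no 404-style failure for a nonexistent container
```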
+##### Driver.create_container and Driver.update_container
+
+_(Create only)_ Source image options (one of alias, fingerprint, or properties MUST be specified):
+
+option | default | notes
+---|---|---
+:server | none | server:port from which to obtain a base image<br />e.g. `https://images.linuxcontainers.org`,<br />`https://cloud-images.ubuntu.com/releases`,<br />`https://cloud-images.ubuntu.com/daily`
+:protocol | 'lxd' | Use 'lxd' or 'simplestreams' to communicate with the source server. Use 'simplestreams' for the above public servers
+:alias | none | server dependent, e.g. `ubuntu:16.04` or `ubuntu/xenial`. Refer to the image catalog of each server for valid values.
+:fingerprint | none | partial matches accepted. Each individual image gets a fingerprint.
+:properties | none | A Hash with image search parameters. Refer to the LXD documentation for valid key/value pairs. (Use of :properties is discouraged due to its non-deterministic nature)
+:autostart | true | Start the container immediately upon creation. This is *_not_* the same as config["boot.autostart"]
+
+The above is how you locate a base image for your new container. The next options allow you to specify last-minute configuration details.
+
+option | default | notes
+---|---|---
+:profiles | ['default'] | An ordered array of profiles to apply to your new container. If you specify ANY profiles and still want the 'default' profile applied, then you must explicitly include 'default' in your array
+:config | none | A Hash of configuration options to apply to this specific container. Overrides configs specified within profiles. Refer to the LXD documentation for valid key/value pairs: https://github.com/lxc/lxd/blob/master/doc/containers.md
+:devices | | Refer to the above link for supported device types and options
+
+In the case of `update_container`, we merely apply any options that are sent. If you wish to unset a configuration key, call update_container with an opposing value for that key.
+
+###### Examples
+
+minimum:
+
+```ruby
+driver.create_container 'some-container', alias: 'ubuntu:18.04', \
+  server: 'https://cloud-images.ubuntu.com/daily', protocol: 'simplestreams'
+```
+
+more options:
+
+```ruby
+driver.create_container 'dev-cattle-01', profiles: ['default', 'cattle'], \
+  config: { 'security.nesting': true, 'security.privileged': true }, \
+  server: 'https://images.linuxcontainers.org', protocol: 'simplestreams', \
+  alias: 'ubuntu/bionic'
+```
+
+```ruby
+driver.update_container 'dev-cattle-01', \
+  config: { 'security.nesting': false, 'security.privileged': false, \
+    'security.idmap.isolated': true }
+```
+
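Internally, the CLI driver flattens these options into a single `lxc launch` command line. A simplified standalone sketch of that assembly, mirroring the option handling shown in the `driver/mixins/cli.rb` portion of this diff (`launch_command` itself is a hypothetical name):

```ruby
# Simplified sketch of how create_container options become an `lxc launch`
# command line (see data/lib/nexussw/lxd/driver/mixins/cli.rb in this diff).
def launch_command(image, container_name, container_options = {})
  cline = "lxc launch #{image} #{container_name}"
  profiles = container_options[:profiles] || []
  profiles.each { |p| cline += " -p #{p}" }
  configs = container_options[:config] || {}
  configs.each { |k, v| cline += " -c #{k}=#{v}" }
  cline
end

puts launch_command("images:ubuntu/bionic", "dev-cattle-01",
                    profiles: ["default", "cattle"],
                    config: { "security.nesting" => true })
# → lxc launch images:ubuntu/bionic dev-cattle-01 -p default -p cattle -c security.nesting=true
```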
+### Transports
+
+Having navigated all of the above, you can now get your transport instance with<br />`transport = driver.transport_for 'some-container'`. Here's what you can do with it:
+
+#### Transport methods
+
+method name | parameters | options | notes
+---|---|---|---
+user | _user | _options = {}
+read_file | _path
+write_file | _path, _content | _options = {}
+download_file | _path, _local_path
+download_folder | _path, _local_path | _options = {}
+upload_file | _local_path, _path | _options = {}
+upload_folder | _local_path, _path | _options = {}
+execute | command | _see next section_ | _see next section_
+
+##### Transport.execute
+
+This one merits a section of its own.
+
+## Contributing: Development and Testing
+
+Bug reports and pull requests are welcome on GitHub at <https://github.com/NexusSW/lxd-common>. DCO signoffs are required on your commits.
+
+After checking out this repo and installing the development dependencies via `bundle install`, you can run some quick smoke tests that exercise most code paths via `rake mock`. This exercises the gem without talking to an actual LXD host.
+
+The full integration test suite `rake spec` requires:
+
+* a local LXD host (> 2.0) with the REST API enabled on port 8443
+* your user account to be in the lxd group
+* a client certificate and key located in ~/.config/lxc/ named client.crt and client.key
+* that certificate to be trusted by the LXD host via `lxc config trust add ~/.config/lxc/client.crt`
+* _recommended_: a decent internet connection. Most of the test time is spent downloading images and spinning up containers, so the quicker your filesystem and internet, the quicker the tests will run. As a benchmark, Travis runs the integration tests in (presently) ~6-7 minutes, which includes installation and setup time. If feasible, set your LXD up to use BTRFS to speed up the nesting tests.
+
+Refer to [spec/provision_recipe.rb](https://github.com/NexusSW/lxd-common/blob/master/spec/provision_recipe.rb) (Chef) if you need hints on how to accomplish this setup.
+
+### TDD Cycle _(suggested)_
+
+**NOTE:** In contrast to normal expectations, these tests are not isolated, and state persists between test cases. Pay attention to tests that expect a container to exist, or a file to exist within a container. (Apologies: integration test times would be exponentially higher if I didn't make this concession)
 
-
+1. run `rake mock` to verify your dependencies. Expect it to pass. Fix whatever is causing failures.
+2. **Write your failing test case(s)**
+3. run `rake mock`. If there are compilation errors or exceptions generated by spec/support/mock_transport.rb that cause pre-existing tests to fail, fix them so that past tests pass and your new tests compile (if feasible) but fail.
+4. **Create your new functionality**
+5. run `rake mock` again. Code the mock_transport (and/or potentially the mock_hk) to return the results that you expect the LXD host to return.
+6. Repeat the above until `rake mock` passes.
+7. run `rake spec` to see if your changes work for real. Or, if you can't set your workstation up for integration tests as described above, submit a PR and let Travis test it.
 
-
+`rake mock` and `rake spec` must both pass before Travis will pass.
 
-
+#### Development Goals
 
-
+When developing your new functionality, keep in mind that this gem intends to abstract away the differences in behavior between LXD's CLI and REST interfaces. The test suite is designed to expose such differences, and it may become necessary to create completely separate implementations in order to make them behave identically.
 
-
+Whether to expose the behavior of the CLI, or that of the REST interface, or something in between, will be up for debate on a case-by-case basis. But they do need to have the same behavior, and that should be in line with the behavior of other pre-existing functions, should they fall within the same category or otherwise interact.
data/Rakefile
CHANGED
@@ -1,10 +1,10 @@
 
-require
-require
+require "bundler/gem_tasks"
+require "rspec/core/rake_task"
 
 RSpec::Core::RakeTask.new(:spec)
 RSpec::Core::RakeTask.new(:mock) do |task|
-  task.pattern =
+  task.pattern = "spec/**{,/*/**}/*_mock.rb"
 end
 
 task default: :spec
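The `*_mock.rb` pattern above uses Ruby's brace-glob syntax. You can check which spec files it would pick up with `Dir.glob`; the file names below are hypothetical, chosen only to show that `_mock.rb` files match at any depth while `_spec.rb` files are excluded:

```ruby
# A quick check (hypothetical file names) of which files the Rakefile's
# mock pattern selects: only *_mock.rb files, at any depth under spec/.
require "fileutils"
require "tmpdir"

pattern = "spec/**{,/*/**}/*_mock.rb"

Dir.mktmpdir do |root|
  Dir.chdir(root) do
    FileUtils.mkdir_p("spec/support/deep")
    FileUtils.touch("spec/driver_mock.rb")
    FileUtils.touch("spec/support/deep/cli_mock.rb")
    FileUtils.touch("spec/driver_spec.rb") # integration spec: excluded from `rake mock`
    puts Dir.glob(pattern).uniq.sort
    # → spec/driver_mock.rb
    #   spec/support/deep/cli_mock.rb
  end
end
```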
data/Vagrantfile
CHANGED
@@ -1,11 +1,11 @@
 # -*- mode: ruby -*-
 # vi: set ft=ruby :
 
-Vagrant.configure(
-config.vm.box =
+Vagrant.configure("2") do |config|
+  config.vm.box = "ubuntu/xenial64"
 # config.vm.box = 'ubuntu/trusty64'
 
-config.vm.provision
-chef.recipe = File.read
+  config.vm.provision "chef_apply" do |chef|
+    chef.recipe = File.read "spec/provision_recipe.rb"
   end
 end
data/bin/console
CHANGED
@@ -1,10 +1,10 @@
 #!/usr/bin/env ruby
 
-require
-require
-require
-require
-require
+require "bundler/setup"
+require "nexussw/lxd/driver/cli"
+require "nexussw/lxd/driver/rest"
+require "nexussw/lxd/transport/cli"
+require "nexussw/lxd/transport/rest"
 
 # You can add fixtures and/or initialization code here to make experimenting
 # with your gem easier. You can also use a different console, if you like.
@@ -13,5 +13,5 @@ require 'nexussw/lxd/transport/rest'
 # require 'pry'
 # Pry.start
 
-require
+require "irb"
 IRB.start
data/lib/nexussw/lxd.rb
CHANGED
@@ -1,4 +1,4 @@
-require
+require "timeout"
 
 module NexusSW
   module LXD
@@ -32,9 +32,14 @@ module NexusSW
     def self.symbolize_keys(hash)
       {}.tap do |retval|
         hash.each do |k, v|
-
-
-
+          if %w{config expanded_config}.include? k
+            retval[k.to_sym] = v
+            next
+          elsif v.is_a?(Array)
+            v.map! do |a|
+              a.is_a?(Hash) ? symbolize_keys(a) : a
+            end
+          end
           retval[k.to_sym] = v.is_a?(Hash) ? symbolize_keys(v) : v
         end
       end
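The change above makes `symbolize_keys` leave the values under `config`/`expanded_config` untouched (LXD config keys like `security.nesting` should stay strings) while recursing into arrays and nested hashes everywhere else. Extracted standalone, lightly adapted to use a non-mutating `map` in the Array branch:

```ruby
# Standalone copy of the updated symbolize_keys from data/lib/nexussw/lxd.rb
# (adapted: non-mutating map instead of map! for the Array branch).
def symbolize_keys(hash)
  {}.tap do |retval|
    hash.each do |k, v|
      if %w{config expanded_config}.include? k
        retval[k.to_sym] = v # LXD config keys like 'security.nesting' stay strings
        next
      elsif v.is_a?(Array)
        v = v.map { |a| a.is_a?(Hash) ? symbolize_keys(a) : a }
      end
      retval[k.to_sym] = v.is_a?(Hash) ? symbolize_keys(v) : v
    end
  end
end

p symbolize_keys(
  "name" => "web",
  "config" => { "security.nesting" => "true" },
  "profiles" => ["default"]
)
```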
data/lib/nexussw/lxd/driver.rb
CHANGED
@@ -1,24 +1,24 @@
-require
+require "nexussw/lxd"
 
 module NexusSW
   module LXD
     class Driver
       STATUS_CODES = {
-100
-101
-102
-103
-104
-105
-106
-107
-108
-109
-110
-111
-200
-400
-401
+        100 => "created",
+        101 => "started",
+        102 => "stopped",
+        103 => "running",
+        104 => "cancelling",
+        105 => "pending",
+        106 => "starting",
+        107 => "stopping",
+        108 => "aborting",
+        109 => "freezing",
+        110 => "frozen",
+        111 => "thawed",
+        200 => "success",
+        400 => "failure",
+        401 => "cancelled",
       }.freeze
 
       def create_container(_container_name, _container_options)
@@ -37,6 +37,10 @@ module NexusSW
         raise "#{self.class}#delete_container not implemented"
       end
 
+      def update_container(_container_name, _container_options)
+        raise "#{self.class}#update_container not implemented"
+      end
+
       def container_status(_container_id)
         raise "#{self.class}#container_status not implemented"
       end
@@ -49,13 +53,25 @@ module NexusSW
         raise "#{self.class}#container_state not implemented"
       end
 
-      def wait_for(_what)
+      def wait_for(_container_name, _what, _timeout = 60)
         raise "#{self.class}#wait_for not implemented"
       end
 
       def transport_for(_container_name)
         raise "#{self.class}#transport_for not implemented"
       end
+
+      def self.convert_bools(oldhash)
+        {}.tap do |retval|
+          oldhash.each do |k, v|
+            retval[k] = case v
+                        when "true" then true
+                        when "false" then false
+                        else v.is_a?(Hash) ? convert_bools(v) : v
+                        end
+          end
+        end
+      end
     end
   end
 end
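The new `Driver.convert_bools` helper exists because LXD reports config values as strings; `container` (in the mixins/cli.rb section below) now runs query results through it so `"true"`/`"false"` come back as real booleans. Standalone:

```ruby
# Standalone copy of the new Driver.convert_bools helper from this release:
# maps the strings "true"/"false" to booleans, recursing into nested hashes,
# and leaves every other value untouched.
def convert_bools(oldhash)
  {}.tap do |retval|
    oldhash.each do |k, v|
      retval[k] = case v
                  when "true" then true
                  when "false" then false
                  else v.is_a?(Hash) ? convert_bools(v) : v
                  end
    end
  end
end

p convert_bools(config: { "security.nesting" => "true", "limits.cpu" => "2" })
```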
data/lib/nexussw/lxd/driver/mixins/cli.rb
CHANGED
@@ -1,8 +1,8 @@
-require
-require
-require
-require
-require
+require "nexussw/lxd/driver/mixins/helpers/wait"
+require "nexussw/lxd/transport/cli"
+require "tempfile"
+require "yaml"
+require "json"
 
 module NexusSW
   module LXD
@@ -17,12 +17,13 @@ module NexusSW
       attr_reader :inner_transport, :driver_options
 
       def transport_for(container_name)
-        Transport::CLI.new inner_transport, container_name, info: YAML.load(inner_transport.execute(
+        Transport::CLI.new inner_transport, container_name, info: YAML.load(inner_transport.execute("lxc info").error!.stdout)
       end
 
       def create_container(container_name, container_options = {})
+        autostart = (container_options.delete(:autostart) != false)
         if container_exists? container_name
-          start_container
+          start_container(container_name) if autostart
           return container_name
         end
         cline = "lxc launch #{image_alias(container_options)} #{container_name}"
@@ -30,36 +31,101 @@ module NexusSW
         profiles.each { |p| cline += " -p #{p}" }
         configs = container_options[:config] || {}
         configs.each { |k, v| cline += " -c #{k}=#{v}" }
+        if !autostart || container_options[:devices] # append to the cline to avoid potential lag between create & stop
+          cline += " && lxc stop -f #{container_name}"
+          cline = ["sh", "-c", cline] # There's no guarantee that inner_transport is running a shell for the && operator
+        end
         inner_transport.execute(cline).error!
-
+        if container_options[:devices]
+          update_container(container_name, devices: container_options[:devices])
+          start_container(container_name) if autostart
+        else
+          wait_for_status container_name, "running" if autostart
+        end
         container_name
       end
 
+      def update_container(container_name, container_options)
+        raise NexusSW::LXD::RestAPI::Error::NotFound, "Container (#{container_name}) does not exist" unless container_exists? container_name
+        configs = container_options[:config]
+        devices = container_options[:devices]
+        profiles = container_options[:profiles]
+        existing = container(container_name)
+
+        if configs
+          configs.each do |k, v|
+            if v.nil?
+              next unless existing[:config][k]
+              inner_transport.execute("lxc config unset #{container_name} #{k}").error!
+            else
+              next if existing[:config][k] == v
+              inner_transport.execute("lxc config set #{container_name} #{k} #{v}").error!
+            end
+          end
+        end
+
+        if devices
+          devices.each do |name, device|
+            cmd = "add"
+            if device.nil?
+              next unless existing[:devices].include? name
+              inner_transport.execute("lxc config device remove #{container_name} #{name}").error!
+              next
+            elsif existing[:devices].include?(name)
+              cmd = "set"
+              if existing[:devices][name][:type] != device[:type]
+                inner_transport.execute("lxc config device remove #{container_name} #{name}").error!
+                cmd = "add"
+              end
+            end
+            if cmd == "add"
+              cline = "lxc config device add #{container_name} #{name} #{device[:type]}"
+              device.each do |k, v|
+                cline << " #{k}=#{v}"
+              end
+              inner_transport.execute(cline).error!
+            else
+              device.each do |k, v|
+                next if k == :type
+                next if v == existing[:devices][name][k]
+                inner_transport.execute("lxc config device set #{container_name} #{name} #{k} #{v}").error!
+              end
+            end
+          end
+        end
+
+        if profiles
+          inner_transport.execute("lxc profile assign #{container_name} #{profiles.join(",")}").error! unless profiles == existing[:profiles]
+        end
+
+        container container_name
+      end
+
       def start_container(container_id)
-        return if container_status(container_id) ==
+        return if container_status(container_id) == "running"
         inner_transport.execute("lxc start #{container_id}").error!
-        wait_for_status container_id,
+        wait_for_status container_id, "running"
       end
 
       def stop_container(container_id, options = {})
         options ||= {} # default behavior: no timeout or retries. These functions are up to the consumer's context and not really 'sane' defaults
-        return if container_status(container_id) ==
+        return if container_status(container_id) == "stopped"
         return inner_transport.execute("lxc stop #{container_id} --force", capture: false).error! if options[:force]
         LXD.with_timeout_and_retries(options) do
-          return if container_status(container_id) ==
+          return if container_status(container_id) == "stopped"
           timeout = " --timeout=#{options[:retry_interval]}" if options[:retry_interval]
           retval = inner_transport.execute("lxc stop #{container_id}#{timeout || ''}", capture: false)
           begin
             retval.error!
           rescue => e
-            return if container_status(container_id) ==
+            return if container_status(container_id) == "stopped"
             # can't distinguish between timeout, or other error.
             # but if the status call is not popping a 404, and we're not stopped, then a retry is worth it
             raise Timeout::Retry.new(e) if timeout # rubocop:disable Style/RaiseArgs
             raise
           end
         end
-        wait_for_status container_id,
+        wait_for_status container_id, "stopped"
       end
 
       def delete_container(container_id)
@@ -71,20 +137,6 @@ module NexusSW
         STATUS_CODES[container(container_id)[:status_code].to_i]
       end
 
-      def convert_keys(oldhash)
-        return oldhash unless oldhash.is_a?(Hash) || oldhash.is_a?(Array)
-        retval = {}
-        if oldhash.is_a? Array
-          retval = []
-          oldhash.each { |v| retval << convert_keys(v) }
-        else
-          oldhash.each do |k, v|
-            retval[k.to_sym] = convert_keys(v)
-          end
-        end
-        retval
-      end
-
       # YAML is not supported until somewhere in the feature branch
       # the YAML return has :state and :container at the root level
       # the JSON return has no :container (:container is root)
@@ -94,7 +146,7 @@ module NexusSW
         res = inner_transport.execute("lxc list #{container_id} --format=json")
         res.error!
         JSON.parse(res.stdout).each do |c|
-          return
+          return LXD.symbolize_keys(c["state"]) if c["name"] == container_id
         end
         nil
       end
@@ -103,14 +155,14 @@ module NexusSW
         res = inner_transport.execute("lxc list #{container_id} --format=json")
         res.error!
         JSON.parse(res.stdout).each do |c|
-          return
+          return Driver.convert_bools(LXD.symbolize_keys(c.reject { |k, _| k == "state" })) if c["name"] == container_id
         end
         nil
       end
 
       def container_exists?(container_id)
         return true if container_status(container_id)
-
+        false
       rescue
         false
       end
@@ -130,32 +182,32 @@ module NexusSW
 
       private
 
-      def remote_for!(url, protocol =
-        raise
+      def remote_for!(url, protocol = "lxd")
+        raise "Protocol is required" unless protocol # protect me from accidentally slipping in a nil
         # normalize the url and 'require' protocol to protect against a scenario:
         # 1) user only specifies https://someimageserver.org without specifying the protocol
         # 2) the rest of this function would blindly add that without saying the protocol
         # 3) 'lxc remote add' would add that remote, but defaults to the lxd protocol and appends ':8443' to the saved url
         # 4) the next time this function is called we would not match that same entry due to the ':8443'
         # 5) ultimately resulting in us adding a new remote EVERY time this function is called
-        port = url.split(
-        url +=
+        port = url.split(":", 3)[2]
+        url += ":8443" unless port || protocol != "lxd"
         remotes = begin
-          YAML.load(inner_transport.read_file(
+          YAML.load(inner_transport.read_file("~/.config/lxc/config.yml")) || {}
         rescue
           {}
         end
         # make sure these default entries are available to us even if config.yml isn't created yet
        # and i've seen instances where these defaults don't live in the config.yml
-        remotes = {
-
-
-
+        remotes = { "remotes" => {
+          "images" => { "addr" => "https://images.linuxcontainers.org" },
+          "ubuntu" => { "addr" => "https://cloud-images.ubuntu.com/releases" },
+          "ubuntu-daily" => { "addr" => "https://cloud-images.ubuntu.com/daily" },
         } }.merge remotes
         max = 0
-        remotes[
-        return remote.to_s if data[
-        num = remote.to_s.split(
+        remotes["remotes"].each do |remote, data|
+          return remote.to_s if data["addr"] == url
+          num = remote.to_s.split("-", 2)[1] if remote.to_s.start_with? "images-"
           max = num.to_i if num && num.to_i > max
         end
         remote = "images-#{max + 1}"
@@ -163,22 +215,22 @@ module NexusSW
         remote
       end
 
-      def image(properties, remote =
+      def image(properties, remote = "")
         return nil unless properties && properties.any?
         cline = "lxc image list #{remote} --format=json"
         properties.each { |k, v| cline += " #{k}=#{v}" }
         res = inner_transport.execute cline
         res.error!
         res = JSON.parse(res.stdout)
-        return res[0][
+        return res[0]["fingerprint"] if res.any?
       end
 
       def image_alias(container_options)
-        remote = container_options[:server] ? remote_for!(container_options[:server], container_options[:protocol] ||
+        remote = container_options[:server] ? remote_for!(container_options[:server], container_options[:protocol] || "lxd") + ":" : ""
         name = container_options[:alias]
         name ||= container_options[:fingerprint]
         name ||= image(container_options[:properties], remote)
-        raise
+        raise "No image parameters. One of alias, fingerprint, or properties must be specified (The CLI interface does not support empty containers)" unless name
         "#{remote}#{name}"
       end
     end
   end
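The port-normalization step in `remote_for!` above can be exercised standalone: it appends `:8443` only when the URL carries no explicit port and the protocol is `lxd` (matching the default `lxc remote add` would apply). The helper name below is hypothetical; the two highlighted lines are lifted directly from the diff:

```ruby
# Standalone sketch of remote_for!'s URL normalization (hypothetical wrapper
# around the two lines shown in the diff above).
def normalize_remote_url(url, protocol = "lxd")
  raise "Protocol is required" unless protocol
  port = url.split(":", 3)[2] # "https://host:8443" -> ["https", "//host", "8443"]
  url += ":8443" unless port || protocol != "lxd"
  url
end

puts normalize_remote_url("https://someserver")       # https://someserver:8443
puts normalize_remote_url("https://someserver:9999")  # https://someserver:9999
puts normalize_remote_url("https://images.linuxcontainers.org", "simplestreams") # unchanged
```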