vagrant-compose 0.2.4 → 0.7.0
- checksums.yaml +4 -4
- data/.gitignore +2 -1
- data/CHANGELOG.md +4 -0
- data/README.md +15 -408
- data/doc/declarative.md +431 -0
- data/doc/programmatic.md +435 -0
- data/lib/locales/en.yml +7 -4
- data/lib/vagrant/compose/config.rb +89 -20
- data/lib/vagrant/compose/declarative/cluster.rb +139 -0
- data/lib/vagrant/compose/errors.rb +5 -1
- data/lib/vagrant/compose/{util/node.rb → node.rb} +0 -0
- data/lib/vagrant/compose/programmatic/cluster.rb +227 -0
- data/lib/vagrant/compose/programmatic/node_group.rb +104 -0
- data/lib/vagrant/compose/version.rb +1 -1
- data/vagrant-compose.gemspec +3 -2
- metadata +22 -5
- data/lib/vagrant/compose/util/cluster.rb +0 -265
- data/lib/vagrant/compose/util/node_group.rb +0 -102
data/doc/declarative.md
ADDED
@@ -0,0 +1,431 @@
# Declarative Approach

Vagrant requires some Ruby knowledge, because the Vagrantfile itself is based on Ruby, and this fact is sometimes an obstacle for people with a limited programming background.

This cannot be avoided entirely, but with the declarative approach even people with a limited programming background can use vagrant-compose to easily define a cluster composed of many VMs.

With the declarative approach, the cluster is defined in YAML, and the Ruby programming within the Vagrantfile is reduced to the minimum.

## Quick start

Create a YAML file containing the definition of a cluster named `kubernetes` with one `master` node and three `minions` nodes.

```yaml
kubernetes:
  master:
    instances: 1
  minions:
    instances: 3
```

Then create the following `Vagrantfile` for parsing the above YAML file:

```ruby
Vagrant.configure(2) do |config|
  # load cluster definition
  config.cluster.from("mycluster.yaml")

  # cluster creation
  config.cluster.nodes.each do |node|
    config.vm.define "#{node.boxname}" do |node_vm|
      node_vm.vm.box = "#{node.box}"
    end
  end
end
```

The first part of the `Vagrantfile` contains the command for parsing the cluster definition:

```ruby
config.cluster.from("mycluster.yaml")
```

The second part of the `Vagrantfile` creates the cluster by defining a VM in VirtualBox for each node in the cluster:

```ruby
config.cluster.nodes.each do |node|
  config.vm.define "#{node.boxname}" do |node_vm|
    node_vm.vm.box = "#{node.box}"
  end
end
```

If you run `vagrant up` you will get a 4 node cluster with the following machines, based on the `ubuntu/trusty64` base box (the default):

- `master1`
- `minion1`
- `minion2`
- `minion3`

Done!

Of course, real-world scenarios are more complex; it is necessary to get more control in configuring the cluster topology and machine attributes, and finally you also need to implement automatic provisioning of the software stack installed in the machines.

See the following chapters for more details.

## Configuring the cluster

For instance, if you are setting up an environment for testing [Kubernetes](http://kubernetes.io/), your cluster will be composed of:

- a group of nodes for the Kubernetes master roles (masters)
- a group of nodes for deploying pods (minions)

Using the declarative approach, the above cluster can be defined in a YAML file, and with the vagrant-compose plugin it is very easy to instruct Vagrant to create a separate VM for each of the above nodes.

### Defining cluster and cluster attributes

The outer element of the YAML file should be an object with the cluster name:

```yaml
kubernetes:
  ...
```

Inside the cluster element, cluster attributes can be defined:

```yaml
kubernetes:
  box: ubuntu/trusty64
  domain: test
  ...
```

Valid cluster attributes are defined in the following list; if an attribute is not provided, the default value applies.

- **`box`**
  The value/value generator to be used for assigning a base box to nodes in the cluster; it defaults to `ubuntu/trusty64`.
  NB. this attribute acts as a default for all node groups, but each node group can override it.
- **`node_prefix`**
  A prefix to be added before each node name / box name; it defaults to the empty string (no prefix).
- **`domain`**
  The network domain to which the cluster belongs. It will be used for computing node fqdns; it defaults to `vagrant`.
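
As a sketch of how these cluster attributes interact with the default value generators described below (attribute values here are illustrative):

```yaml
kubernetes:
  node_prefix: test    # node names become test-<group_name><n>
  domain: test.local   # fqdns become <hostname>.test.local
  masters:
    instances: 1
```

With these settings and the default generators, the single node in `masters` would get the boxname `test-masters1` and the fqdn `test-masters1.test.local`.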

### Defining sets of nodes

A cluster can be composed of one or more sets of nodes; each set of nodes represents a group of one or more nodes with similar characteristics.

Node groups are defined as elements within the cluster element:

```yaml
kubernetes:
  ...
  masters:
    ...
  minions:
    ...
  ...
```

Each node group element can contain a list of attributes; attributes defined at the node group level act as a template/value generator for the same attribute of each node within the group.

Available node group attributes are defined in the following list; if an attribute is not provided, the default value applies.

- **`instances`**
  The number of nodes in the node group; the default value is 1.

- **`box`**
  The value/value generator to be used for assigning a base box to nodes in the node group; the default value is the **`cluster.box`** attribute.

- **`boxname`**
  The value/value generator to be used for assigning a boxname to nodes in the node group; the default value is the following expression:

  **`"{% if cluster_node_prefix %}{{cluster_node_prefix}}-{% endif %}{{group_name}}{{node_index + 1}}"`**

- **`hostname`**
  The value/value generator to be used for assigning a hostname to nodes in the node group; the default value is the following expression:

  **`"{{boxname}}"`**

- **`fqdn`**
  The value/value generator to be used for assigning a fqdn to nodes in the node group; the default value is the following expression:

  **`"{{hostname}}{% if cluster_domain %}.{{cluster_domain}}{% endif %}"`**

- **`aliases`**
  The value/value generator to be used for assigning an alias/a list of aliases to nodes in the node group; the default value is an empty list.

- **`ip`**
  The value/value generator to be used for assigning an ip address to nodes in the node group; the default value is the following expression:

  **`"172.31.{{group_index}}.{{100 + node_index + 1}}"`**

- **`cpus`**
  The value/value generator to be used for defining the number of vCPUs for nodes in the node group; the default value is 1.

- **`memory`**
  The value/value generator to be used for defining the quantity of memory assigned to nodes in the node group; the default value is 256.

- **`attributes`**
  The value/value generator to be used for defining additional attributes for nodes in the node group; the default value is an empty dictionary.

Please note that each attribute can be set to:

- A literal value, for instance `"ubuntu/trusty64"` or 256. Such a value will be inherited - without changes - by all nodes in the node group.
- A [Jinja2](http://jinja.pocoo.org/docs/dev/) expression, hereafter a value generator, that will be executed when building the nodes in the node group.

Jinja2 expressions are described in http://jinja.pocoo.org/docs/dev/templates/ ; on top of the out-of-the-box functions/filters defined in Jinja2, the use of functions/filters defined in Ansible is allowed, as documented in http://docs.ansible.com/ansible/playbooks_filters.html.

Each expression will be executed within an execution context where a set of variables is made available by the vagrant-playbook processors:

- cluster_name
- cluster_node_prefix
- cluster_domain
- group_index (the index of the node group within the cluster, zero based)
- group_name
- node_index (the index of the node within the node group, zero based)
- additionally, all attributes already computed for this node group are presented as variables (attributes are computed in the following order: box, boxname, hostname, aliases, fqdn, ip, cpus, memory, ansible_groups, attributes; for instance, when executing the expression for computing the hostname attribute, the already computed box and boxname values are available).
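
For example, any of the default generators above can be overridden per node group; the following fragment (box name and values are illustrative) mixes literal values with Jinja2 value generators:

```yaml
kubernetes:
  minions:
    instances: 3
    box: ubuntu/xenial64                     # literal: same box for every minion
    ip: "192.168.100.{{ 10 + node_index }}"  # generator: .10, .11, .12
    memory: "{{ 512 * (node_index + 1) }}"   # generator: grows with node_index
```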
## Composing nodes

The YAML file containing the cluster definition can be used within a Vagrantfile as a recipe for building a cluster with a VM for each node.

```ruby
Vagrant.configure(2) do |config|
  ...
  config.cluster.from("mycluster.yaml")
  ...
end
```

The above command will compose the cluster, transforming node groups into nodes, and store them in the `config.cluster.nodes` variable; each node has the following attributes assigned according to the value/value generators defined at the node group level in the YAML file:

- **box**
- **boxname**
- **hostname**
- **fqdn**
- **aliases**
- **ip**
- **cpus**
- **memory**
- **attributes**

Two additional attributes will be automatically set for each node:

- **index**, [integer (zero based)], uniquely assigned to each node in the cluster
- **group_index**, [integer (zero based)], uniquely assigned to each node in a set of nodes
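
To make the composition concrete, here is a plain-Ruby sketch (not plugin code; the plugin evaluates the equivalent Jinja2 expressions) of the values the default `boxname` and `ip` generators would produce for a cluster similar to the quick-start one, assuming groups named `master` and `minion`:

```ruby
# Sketch of the default boxname and ip value generators:
# "{% if cluster_node_prefix %}{{cluster_node_prefix}}-{% endif %}{{group_name}}{{node_index + 1}}"
# "172.31.{{group_index}}.{{100 + node_index + 1}}"
cluster_node_prefix = "" # cluster default: no prefix

groups = { "master" => 1, "minion" => 3 } # group name => instances

nodes = []
groups.each_with_index do |(group_name, instances), group_index|
  instances.times do |node_index|
    prefix = cluster_node_prefix.empty? ? "" : "#{cluster_node_prefix}-"
    nodes << {
      boxname: "#{prefix}#{group_name}#{node_index + 1}",
      ip:      "172.31.#{group_index}.#{100 + node_index + 1}"
    }
  end
end

nodes.each { |n| puts "#{n[:boxname]} -> #{n[:ip]}" }
# master1 -> 172.31.0.101
# minion1 -> 172.31.1.101
# minion2 -> 172.31.1.102
# minion3 -> 172.31.1.103
```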
## Creating nodes

Given the list of nodes stored in the `config.cluster.nodes` variable, it is possible to create a multi-machine environment by iterating over the list:

```ruby
config.cluster.nodes.each do |node|
  ...
end
```

Within the cycle you can instruct Vagrant to create machines based on the attributes of the current node; for instance, you can define a VM in VirtualBox (the default Vagrant provider) and use the [vagrant-hostmanager](https://github.com/smdahlen/vagrant-hostmanager) plugin to set the hostname into the guest machine:

```ruby
config.cluster.nodes.each do |node|
  config.vm.define "#{node.boxname}" do |node_vm|
    node_vm.vm.box = "#{node.box}"
    node_vm.vm.network :private_network, ip: "#{node.ip}"
    node_vm.vm.hostname = "#{node.fqdn}"
    node_vm.hostmanager.aliases = node.aliases unless node.aliases.empty?
    node_vm.vm.provision :hostmanager

    node_vm.vm.provider "virtualbox" do |vb|
      vb.name = "#{node.boxname}"
      vb.memory = node.memory
      vb.cpus = node.cpus
    end
  end
end
```

> In order to increase the performance of node creation, you can leverage the support for linked clones introduced in Vagrant 1.8.1. Add the following line to the above script:
>
> vb.linked_clone = true if Vagrant::VERSION =~ /^1.8/

[vagrant-hostmanager](https://github.com/smdahlen/vagrant-hostmanager) requires the following additional settings before the `config.cluster.nodes.each` command:

```ruby
config.hostmanager.enabled = false
config.hostmanager.manage_host = true
config.hostmanager.include_offline = true
```
## Configuring ansible provisioning

The vagrant-compose plugin provides support for straightforward provisioning of the nodes in the cluster implemented with Ansible.

### Defining ansible_groups

Each set of nodes, and therefore all the nodes within the set, can be assigned to one or more ansible_groups.

In the following example, `masters` nodes will be part of the `etcd` and `docker` ansible_groups.

```yaml
kubernetes:
  ...
  masters:
    ...
    ansible_groups:
      - etcd
      - docker
    ...
  minions:
    ...
  ...
```

This configuration is used by the `config.cluster.from(…)` method in order to define an **inventory file** with all nodes; the resulting list of ansible_groups, each with its own list of hosts, is stored in the `config.cluster.ansible_groups` variable.

Please note that the possibility to assign a node to one or more groups introduces a high degree of flexibility, as well as the capability to add nodes from different node groups to the same ansible_groups.

Ansible can leverage ansible_groups for providing machines with the required software stacks.
NB. you can see the resulting ansible_groups by using the `debug` command with `verbose` equal to `true`.
### Defining group vars

In Ansible, the inventory file is usually integrated with a set of variables containing settings that will influence playbook behaviour for all the hosts in a group.

The vagrant-compose plugin allows you to define one or more group_vars generators for each ansible_group:

```yaml
kubernetes:
  ansible_playbook_path: ...
  ...
  masters:
    ansible_groups:
      - etcd
      - docker
    ...
  minions:
    ...
  ...
  ansible_group_vars:
    etcd:
      var1: ...
    docker:
      var2: ...
      var3: ...
  ...
```

Group vars can be set to literal values or to Jinja2 value generators, which will be executed during the parsing of the YAML file; each Jinja2 expression will be executed within an execution context where a set of variables is made available by the vagrant-playbook processors:

- **context_vars**, see below
- **nodes**, the list of nodes in the ansible_group to which the group_vars belong

Ansible group vars will be stored in YAML files saved in the `{cluster.ansible_playbook_path}/group_vars` folder.

The variable `cluster.ansible_playbook_path` defaults to the current directory (the directory of the Vagrantfile) + `/provisioning`; this value can be changed like any other cluster attribute (see Defining cluster and cluster attributes).
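
For example (the variable name `etcd_initial_cluster` is hypothetical), a group var generator can derive a value from the `nodes` list using standard Jinja2 filters:

```yaml
ansible_group_vars:
  etcd:
    # hypothetical variable; joins the fqdn of every node in the etcd group
    etcd_initial_cluster: "{{ nodes | map(attribute='fqdn') | join(',') }}"
```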
### Defining host vars

While group vars will influence playbook behaviour for all hosts in a group, in Ansible host vars will influence playbook behaviour for a specific host.

The vagrant-compose plugin allows you to define one or more host_vars generators for each ansible_group:

```yaml
kubernetes:
  ansible_playbook_path: ...
  ...
  masters:
    ansible_groups:
      - etcd
      - docker
    ...
  minions:
    ...
  ...
  ansible_group_vars:
    ...
  ansible_host_vars:
    etcd:
      var5: ...
    docker:
      var6: ...
      var7: ...
  ...
```

Host vars can be set to literal values or to Jinja2 value generators, which will be executed during the parsing of the YAML file; each Jinja2 expression will be executed within an execution context where a set of variables is made available by the vagrant-playbook processors:

- **context_vars**, see below
- **node**, the node in the ansible_group to which the host_vars belong

Ansible host vars will be stored in YAML files saved in the `{cluster.ansible_playbook_path}/host_vars` folder.
### Context vars

Group vars and host vars generation by design can operate only on the set of information that comes with a group of nodes or a single node.

However, sometimes it is necessary to share some information across groups of nodes.
This can be achieved by setting one or more context_vars generators for each ansible_group.

```yaml
kubernetes:
  ansible_playbook_path: ...
  ...
  masters:
    ansible_groups:
      - etcd
      - docker
    ...
  minions:
    ...
  ...
  context_vars:
    etcd:
      var8: ...
    docker:
      var9: ...
      var10: ...
  ansible_group_vars:
    ...
  ansible_host_vars:
    ...
  ...
```

Context vars can be set to literal values or to Jinja2 value generators, which will be executed during the parsing of the YAML file; each Jinja2 expression will be executed within an execution context where a set of variables is made available by the vagrant-playbook processors:

- **nodes**, the list of nodes in the ansible_group to which the context_vars belong

> Context_vars generators are always executed before group_vars and host_vars generators; the resulting context is given as input to the group_vars and host_vars generators.

Then, you can use the above context vars when generating group_vars or host_vars.
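
As a sketch (the variable names are hypothetical, and the exact lookup syntax for context_vars is an assumption), a context var computed once from the group's nodes can then be referenced by a group var generator:

```yaml
context_vars:
  etcd:
    # hypothetical: computed before the group_vars/host_vars generators run
    etcd_ips: "{{ nodes | map(attribute='ip') | join(',') }}"
ansible_group_vars:
  etcd:
    etcd_peers: "{{ context_vars['etcd_ips'] }}"
```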
### Group of groups

A useful Ansible inventory feature is [group of groups](http://docs.ansible.com/ansible/intro_inventory.html#hosts-and-groups).

By default vagrant-compose will generate a group named `[all_groups:children]` containing all the ansible_groups defined in the cluster configuration; however:

- you cannot rename all_groups
- you cannot exclude any ansible_group from all_groups.

If you need greater control over groups of groups you can simply add a new item to the `config.cluster.ansible_groups` variable before creating the nodes.

For instance:

```ruby
config.cluster.ansible_groups['k8s-cluster:children'] = ['kube-master', 'kube-nodes']
```

Please note that you can also use this approach for setting group variables directly in the inventory file using `:vars` (see the Ansible documentation).
## Creating nodes (with provisioning)

Given the `config.cluster.ansible_groups` variable, the generated group_vars and host_vars files, and of course an Ansible playbook, it is possible to integrate provisioning into the node creation sequence.

NB. The example uses Ansible parallel execution (all nodes are provisioned together in parallel after completing node creation).

```ruby
config.cluster.from("mycluster.yaml")
...
config.cluster.nodes.each do |node|
  config.vm.define "#{node.boxname}" do |node_vm|
    ...
    if node.index == config.cluster.nodes.size - 1
      node_vm.vm.provision "ansible" do |ansible|
        ansible.limit = 'all' # enable parallel provisioning
        ansible.playbook = "provisioning/playbook.yml"
        ansible.groups = config.cluster.ansible_groups
      end
    end
  end
end
```