dev-lxc 3.2.0 → 3.3.0

# dev-lxc 2.0 is Available

Here are some of the new features, which significantly simplify and streamline usage.

* The mixlib-install library is used to automatically manage a cache of product packages
* Genuine container snapshot management (make as many snapshots as you want)
* New "nodes" server type which auto-configures nodes for a Chef Server in the same cluster
* Removed all xc-... bash functions because the new "nodes" server type replaces this functionality
* Ability to build a Chef Server HA 2.0 cluster using chef-backend
* Updated and simplified READMEs

### Maintain Uniqueness Across Multiple Clusters

The default cluster configs are already designed to be unique from each other, but as you build
more clusters you have to maintain uniqueness across the YAML config files for the following items.

* Server names, `api_fqdn` and `analytics_fqdn`

Server names should really be unique across all clusters.

Even when cluster A is shut down, if cluster B uses the same server names when it is created it
will use the already existing servers from cluster A.

`api_fqdn` and `analytics_fqdn` uniqueness only matters when clusters with the same `api_fqdn`
and `analytics_fqdn` are running.

If cluster B is started with the same `api_fqdn` or `analytics_fqdn` as an already running cluster A,
then cluster B will overwrite cluster A's DNS resolution of `api_fqdn` or `analytics_fqdn`.

* IP Addresses

IP address uniqueness only matters when clusters with the same IPs are running.

If cluster B is started with the same IPs as an already running cluster A, then cluster B
will overwrite cluster A's DHCP reservations of the IPs, but dnsmasq will still refuse to
assign the IPs to cluster B because they are already in use by cluster A. dnsmasq then assigns
random IPs from the DHCP pool to cluster B, leaving it in an unexpected state.

The `dev-lxc-platform` creates the IP range 10.0.3.150 - 10.0.3.254 for DHCP reserved IPs.

Use unique IPs from that range when configuring clusters.

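As a quick sanity check, the fqdn and IP values across cluster configs can be scanned for duplicates with standard tools. This is only a sketch; the `~/clusters/*/dev-lxc.yml` layout is an assumption about where your cluster configs live.

```shell
# Print any api_fqdn/analytics_fqdn line or 10.0.3.x address that appears in
# more than one cluster config (hypothetical ~/clusters/<name>/dev-lxc.yml layout).
grep -h -E 'api_fqdn|analytics_fqdn|10\.0\.3\.' ~/clusters/*/dev-lxc.yml | sort | uniq -d
```

An empty result means no value is shared between configs.
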
data/docs/mitmproxy.md CHANGED
### Use mitmproxy to view HTTP traffic

Run `mitmproxy` in a terminal on the host instance.

Uncomment the `https_proxy` line in the chef-repo's `.chef/knife.rb` or in a node's `/etc/chef/client.rb` so traffic from knife commands or chef-client runs will be proxied through mitmproxy, making the HTTP requests visible in the mitmproxy console.

If you enabled local port forwarding for port 8080 in your workstation's SSH config file and configured your web browser to use `127.0.0.1:8080` for HTTP and HTTPS proxies, as described in the [dev-lxc-platform README.md](https://github.com/jeremiahsnapp/dev-lxc-platform), then you should be able to see the HTTP requests appear in the mitmproxy console.

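Uncommenting can be done by hand in an editor, or with a one-liner like the following sketch (it assumes the proxy line is commented out with a leading `#`, and GNU sed):

```shell
# Enable the proxy line in the chef-repo's knife.rb; adjust the path to
# /etc/chef/client.rb when doing this on a node instead.
sed -i 's/^# *https_proxy/https_proxy/' .chef/knife.rb
```
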
data/docs/usage.md CHANGED
## Usage

### dl Command and Subcommands

`dl` is the dev-lxc command line tool.

`dev-lxc` subcommands and some options can be auto-completed by pressing the `Tab` key.

You only have to type enough of a `dev-lxc` subcommand to make it unique.

For example, the following commands are equivalent:

```
dl help
dl he
```

### Display dev-lxc help

```
dl help

dl help <subcommand>
```

### Configure a cluster

See the [configuration docs](docs/configuration.md) to learn how to use the `dl init` command to create and configure a `dev-lxc.yml` file.

### cluster-view, tks, tls commands

The dev-lxc-platform comes with some commands that create and manage helpful
tmux/byobu sessions to more easily see the state of a cluster.

Running the `cluster-view` command in the same directory as a `dev-lxc.yml` file
creates a tmux/byobu session with the same name as the cluster's directory.

`cluster-view` can also be run with the parent directory of a `dev-lxc.yml` file
as the first argument; `cluster-view` will change to that directory before
creating the tmux/byobu session.

The session's first window is named "cluster".

The left side is for running dev-lxc commands.

The right side updates every 0.5 seconds with the cluster's status provided by `dl status`.

The session's second window is named "shell". It opens in the same directory as the
cluster's `dev-lxc.yml` file.

The `tls` and `tks` commands are really aliases.

`tls` is an alias for `tmux list-sessions` and is used to see which tmux/byobu sessions
are running.

`tks` is an alias for `tmux kill-session -t` and is used to kill tmux/byobu sessions.
When specifying the session to be killed you only need as many characters of the session
name as are required to make the name unique among the list of running sessions.

I recommend switching to a different running tmux/byobu session before killing the current
tmux/byobu session. Otherwise you will need to reattach to the remaining tmux/byobu session.
Use the keyboard shortcuts Alt-Up/Down to easily switch between tmux/byobu sessions.

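The two aliases amount to the following plain shell definitions (dev-lxc-platform sets them up for you; this is just what they expand to):

```shell
# The aliases described above, written out as shell alias definitions.
alias tls='tmux list-sessions'
alias tks='tmux kill-session -t'
```

For example, `tks chef` would kill a running session whose name starts uniquely with `chef`.
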
### Cluster status

Run the following command to see the status of the cluster.

```
dl status
```

This is an example of the output.

```
chef.lxc           NOT_CREATED

analytics.lxc      NOT_CREATED

supermarket.lxc    NOT_CREATED

node-1.lxc         NOT_CREATED
```

### Specifying a Subset of Servers

Many dev-lxc subcommands can act on a subset of the cluster's servers by specifying a regular expression that matches the desired server names.

For example, the following command will show the status of the Chef Server.

```
dl status chef
```

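Because the selector is a regular expression, one pattern can match several servers. As a sketch of how such a pattern selects names, here it is applied with `grep -E` to the example server list above; `dl status 'chef|node-'` would act on the same subset.

```shell
# Which servers from the example cluster would the pattern 'chef|node-' select?
printf 'chef.lxc\nanalytics.lxc\nsupermarket.lxc\nnode-1.lxc\n' | grep -E 'chef|node-'
# prints:
# chef.lxc
# node-1.lxc
```
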
### Start cluster

Starting the cluster the first time takes a while since it has a lot to download and build.

```
dl up
```

A test org, users, knife.rb and keys are automatically created in
the bootstrap backend server in `/root/chef-repo/.chef` for testing purposes.

The `knife-opc` plugin is installed in the embedded Ruby environment of the
Private Chef and Enterprise Chef servers to facilitate the creation of the test
org and user.

Note: You also have the option of running the `prepare-product-cache` subcommand, which downloads the required product packages to the cache.
This can be helpful when you don't want to start building the cluster yet but you want the package cache ready for when you build the cluster later.

```
dl prepare-product-cache
```

### Print Chef Automate Credentials

If the cluster has a Chef Automate server, you can use the `print-automate-credentials` subcommand to see what the login credentials are.

```
dl print-automate-credentials
```

### Create chef-repo

Create a `.chef` directory in the current directory with appropriate knife.rb and pem files.

Use the `-p` option to also get pivotal.pem and pivotal.rb files.

Use the `-f` option to overwrite existing knife.rb and pivotal.rb files.

```
dl chef-repo
```

Now you can easily use knife to access the cluster.

```
knife client list
```

### Stop and start the cluster

```
dl halt
dl up
```

### Run arbitrary commands in each server

```
dl run-command chef 'uptime'
```

### Attach the terminal to a server

Attach the terminal to a server in the cluster that matches the REGEX pattern given.

```
dl attach chef
```

### Create a snapshot of the servers

Save the changes in the servers to snapshots with a comment.

```
dl halt
dl snapshot -c 'this is a snapshot comment'
```

### List snapshots

```
dl snapshot -l
```

### Restore snapshots

Restore snapshots by name.

Leave out the snapshot name or specify `LAST` to restore the most recent snapshot.

```
dl snapshot -r
dl up
```

### Destroy snapshots

Destroy snapshots by name, or destroy all snapshots by specifying `ALL`.

Leave out the snapshot name or specify `LAST` to destroy the most recent snapshot.

```
dl snapshot -d
```

### Destroy cluster

Use the following command to destroy the cluster's servers.

```
dl destroy
```

### Show Calculated Configuration

Mostly for debugging purposes, you can print the calculated cluster configuration.

```
dl show-config
```