hetzner-k3s 0.5.7 → 0.5.8
- checksums.yaml +4 -4
- data/Gemfile.lock +4 -4
- data/README.md +26 -13
- data/bin/build.sh +3 -3
- data/lib/hetzner/infra/server.rb +49 -36
- data/lib/hetzner/k3s/cli.rb +14 -392
- data/lib/hetzner/k3s/cluster.rb +16 -17
- data/lib/hetzner/k3s/configuration.rb +454 -0
- data/lib/hetzner/k3s/version.rb +1 -1
- metadata +3 -2
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c0d855f62ab9e222986d220edcdde203f7eb363ab725c8aa4e1f1389f2b251e2
+  data.tar.gz: 19d6a1ff6769cbec2207539d615d6a873eaede07aec9b15a43e6ef9d79101731
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: d8fdc127e71f790e530d3abf97ca30d22b28d75eff687e436d1351336595ae7ba912c0c947c8b1738ab9d50447e577c4be753db3c20a2bfbcca195fcc6a0d193
+  data.tar.gz: 5cb4203c6270f0e82b66049fefd7244aaeb42e1dd89e257a0e1ba4b126db04ed8adf27b715b10874248f6661b6fed3126e562913ec2342f086bbff4c717c329b
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    hetzner-k3s (0.5.
+    hetzner-k3s (0.5.7)
       bcrypt_pbkdf
       ed25519
       http
@@ -30,7 +30,7 @@ GEM
       http-cookie (~> 1.0)
       http-form_data (~> 2.2)
       http-parser (~> 1.2.0)
-    http-cookie (1.0.
+    http-cookie (1.0.5)
       domain_name (~> 0.5)
     http-form_data (2.3.0)
     http-parser (1.2.3)
@@ -39,7 +39,7 @@ GEM
     parallel (1.21.0)
     parser (3.1.0.0)
       ast (~> 2.4.1)
-    public_suffix (4.0.
+    public_suffix (4.0.7)
     rainbow (3.1.1)
     rake (12.3.3)
     regexp_parser (2.2.0)
@@ -74,7 +74,7 @@ GEM
     thor (1.2.1)
     unf (0.1.4)
       unf_ext
-    unf_ext (0.0.8)
+    unf_ext (0.0.8.2)
     unicode-display_width (2.1.0)
 
 PLATFORMS
data/README.md CHANGED
@@ -16,6 +16,8 @@ Using this tool, creating a highly available k3s cluster with 3 masters for the
 
 See roadmap [here](https://github.com/vitobotta/hetzner-k3s/projects/1) for the features planned or in progress.
 
+Also see this [wiki page](https://github.com/vitobotta/hetzner-k3s/wiki/Tutorial:---Setting-up-a-cluster) for a tutorial on how to set up a cluster with the most common setup to get you started.
+
 ## Requirements
 
 All that is needed to use this tool is
@@ -39,7 +41,12 @@ This will install the `hetzner-k3s` executable in your PATH.
 Alternatively, if you don't want to set up a Ruby runtime but have Docker installed, you can use a container. Run the following from inside the directory where you have the config file for the cluster (described in the next section):
 
 ```bash
-docker run --rm -it
+docker run --rm -it \
+  -v ${PWD}:/cluster \
+  -v ${HOME}/.ssh:/tmp/.ssh \
+  vitobotta/hetzner-k3s:v0.5.8 \
+  create-cluster \
+  --config-file /cluster/test.yaml
 ```
 
 Replace `test.yaml` with the name of your config file.
@@ -164,6 +171,8 @@ The `create-cluster` command can be run any number of times with the same config
 
 To add one or more nodes to a node pool, just change the instance count in the configuration file for that node pool and re-run the create command.
 
+**Important**: if you are increasing the size of a node pool created prior to v0.5.7, please see [this thread](https://github.com/vitobotta/hetzner-k3s/issues/80).
+
 ### Scaling down a node pool
 
 To make a node pool smaller:
@@ -199,16 +208,6 @@ Note that the API server will briefly be unavailable during the upgrade of the c
 
 To check the upgrade progress, run `watch kubectl get nodes -owide`. You will see the masters being upgraded one at a time, followed by the worker nodes.
 
-## Upgrade the OS on nodes
-
-The easiest way to upgrade the OS on existing nodes is actually to replace them, as it happens with managed Kubernetes service. To do this:
-
-- drain one node
-- delete the node from Kubernetes
-- delete the node from the Hetzner console
-- re-run the script to recreate the deleted node with an updated OS
-- proceed with the next node
-
 ### What to do if the upgrade doesn't go smoothly
 
 If the upgrade gets stuck for some reason, or it doesn't upgrade all the nodes:
@@ -234,7 +233,8 @@ I have noticed that sometimes I need to re-run the upgrade command a couple of t
 You can also check the logs of the system upgrade controller's pod:
 
 ```bash
-kubectl -n system-upgrade
+kubectl -n system-upgrade \
+  logs -f $(kubectl -n system-upgrade get pod -l pod-template-hash -o jsonpath="{.items[0].metadata.name}")
 ```
 
 A final note about upgrades is that if for some reason the upgrade gets stuck after upgrading the masters and before upgrading the worker nodes, just cleaning up the resources as described above might not be enough. In that case also try running the following to tell the upgrade job for the workers that the masters have already been upgraded, so the upgrade can continue for the workers:
@@ -243,6 +243,15 @@ A final note about upgrades is that if for some reason the upgrade gets stuck af
 kubectl label node <master1> <master2> <master3> plan.upgrade.cattle.io/k3s-server=upgraded
 ```
 
+## Upgrading the OS on nodes
+
+- consider adding a temporary node during the process if you don't have enough spare capacity in the cluster
+- drain one node
+- update etc
+- reboot
+- uncordon
+- proceed with the next node
+
 ## Deleting a cluster
 
 To delete a cluster, running
@@ -277,7 +286,7 @@ I set `load-balancer.hetzner.cloud/hostname` to a valid hostname that I configur
 
 The annotation `load-balancer.hetzner.cloud/use-private-ip: "true"` ensures that the communication between the load balancer and the nodes happens through the private network, so we don't have to open any ports on the nodes (other than the port 6443 for the Kubernetes API server).
 
-The other annotations should be self explanatory. You can find a list of the available annotations here.
+The other annotations should be self explanatory. You can find a list of the available annotations [here](https://pkg.go.dev/github.com/hetznercloud/hcloud-cloud-controller-manager/internal/annotation).
 
 ## Persistent volumes
 
@@ -293,6 +302,10 @@ I recommend that you create a separate Hetzner project for each cluster, because
 
 Please create a PR if you want to propose any changes, or open an issue if you are having trouble with the tool - I will do my best to help if I can.
 
+Contributors:
+
+- [TitanFighter](https://github.com/TitanFighter) for [this awesome tutorial](https://github.com/vitobotta/hetzner-k3s/wiki/Tutorial:---Setting-up-a-cluster)
+
 ## License
 
 The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
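The new "Upgrading the OS on nodes" steps in the README diff are terse; as a sketch in shell (the node name, SSH user, and the apt-based update and reboot are assumptions, adapt them to your OS image and cluster):

```shell
NODE=mycluster-cpx21-worker1   # hypothetical node name

# Make the node unschedulable and evict its pods
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# Update packages on the node and reboot it (assumes a Debian/Ubuntu image)
ssh root@"$NODE" 'apt-get update && apt-get -y upgrade && reboot'

# Once the node is back up, make it schedulable again
kubectl uncordon "$NODE"

# Then proceed with the next node
```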
data/bin/build.sh CHANGED
@@ -4,9 +4,9 @@ set -e
 
 IMAGE="vitobotta/hetzner-k3s"
 
-docker build -t ${IMAGE}:v0.5.
+docker build -t ${IMAGE}:v0.5.8 \
   --platform=linux/amd64 \
-  --cache-from ${IMAGE}:v0.5.
+  --cache-from ${IMAGE}:v0.5.7 \
   --build-arg BUILDKIT_INLINE_CACHE=1 .
 
-docker push vitobotta/hetzner-k3s:v0.5.
+docker push vitobotta/hetzner-k3s:v0.5.8
data/lib/hetzner/infra/server.rb CHANGED
@@ -8,13 +8,19 @@ module Hetzner
     end
 
     def create(location:, instance_type:, instance_id:, firewall_id:, network_id:, ssh_key_id:, placement_group_id:, image:, additional_packages: [], additional_post_create_commands: [])
+      @location = location
+      @instance_type = instance_type
+      @instance_id = instance_id
+      @firewall_id = firewall_id
+      @network_id = network_id
+      @ssh_key_id = ssh_key_id
+      @placement_group_id = placement_group_id
+      @image = image
       @additional_packages = additional_packages
       @additional_post_create_commands = additional_post_create_commands
 
       puts
 
-      server_name = "#{cluster_name}-#{instance_type}-#{instance_id}"
-
       if (server = find_server(server_name))
         puts "Server #{server_name} already exists, skipping."
         puts
@@ -23,44 +29,16 @@ module Hetzner
 
       puts "Creating server #{server_name}..."
 
-
-
-
-        image:,
-        firewalls: [
-          { firewall: firewall_id }
-        ],
-        networks: [
-          network_id
-        ],
-        server_type: instance_type,
-        ssh_keys: [
-          ssh_key_id
-        ],
-        user_data:,
-        labels: {
-          cluster: cluster_name,
-          role: (server_name =~ /master/ ? 'master' : 'worker')
-        },
-        placement_group: placement_group_id
-      }
-
-      response = hetzner_client.post('/servers', server_config)
-      response_body = response.body
-
-      server = JSON.parse(response_body)['server']
+      if (server = make_request)
+        puts "...server #{server_name} created."
+        puts
 
-
+        server
+      else
         puts "Error creating server #{server_name}. Response details below:"
         puts
         p response
-        return
       end
-
-      puts "...server #{server_name} created."
-      puts
-
-      server
     end
 
     def delete(server_name:)
@@ -75,7 +53,7 @@ module Hetzner
 
     private
 
-    attr_reader :hetzner_client, :cluster_name, :additional_packages, :additional_post_create_commands
+    attr_reader :hetzner_client, :cluster_name, :location, :instance_type, :instance_id, :firewall_id, :network_id, :ssh_key_id, :placement_group_id, :image, :additional_packages, :additional_post_create_commands
 
     def find_server(server_name)
       hetzner_client.get('/servers?sort=created:desc')['servers'].detect { |network| network['name'] == server_name }
@@ -113,5 +91,40 @@ module Hetzner
       #{post_create_commands}
       YAML
     end
+
+    def server_name
+      @server_name ||= "#{cluster_name}-#{instance_type}-#{instance_id}"
+    end
+
+    def server_config
+      @server_config ||= {
+        name: server_name,
+        location:,
+        image:,
+        firewalls: [
+          { firewall: firewall_id }
+        ],
+        networks: [
+          network_id
+        ],
+        server_type: instance_type,
+        ssh_keys: [
+          ssh_key_id
+        ],
+        user_data:,
+        labels: {
+          cluster: cluster_name,
+          role: (server_name =~ /master/ ? 'master' : 'worker')
+        },
+        placement_group: placement_group_id
+      }
+    end
+
+    def make_request
+      response = hetzner_client.post('/servers', server_config)
+      response_body = response.body
+
+      JSON.parse(response_body)['server']
+    end
   end
 end
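The server.rb refactor above replaces an inline request hash with memoized `server_name` and `server_config` helpers plus a `make_request` method. A self-contained sketch of that pattern (the client below is a stub, not the real `Hetzner::Client`, and the config is trimmed to two keys):

```ruby
require 'json'

# Stub standing in for Hetzner::Client; the real one performs HTTP calls.
class FakeClient
  def post(_path, config)
    Struct.new(:body).new(JSON.generate({ 'server' => { 'name' => config[:name] } }))
  end
end

class Server
  def initialize(hetzner_client:, cluster_name:)
    @hetzner_client = hetzner_client
    @cluster_name = cluster_name
  end

  def create(instance_type:, instance_id:)
    @instance_type = instance_type
    @instance_id = instance_id
    make_request
  end

  private

  attr_reader :hetzner_client, :cluster_name, :instance_type, :instance_id

  # Memoized: computed once, then reused by every method that needs it.
  def server_name
    @server_name ||= "#{cluster_name}-#{instance_type}-#{instance_id}"
  end

  def server_config
    @server_config ||= { name: server_name, server_type: instance_type }
  end

  def make_request
    response = hetzner_client.post('/servers', server_config)
    JSON.parse(response.body)['server']
  end
end

server = Server.new(hetzner_client: FakeClient.new, cluster_name: 'test')
             .create(instance_type: 'cpx21', instance_id: 'master1')
puts server['name'] # => "test-cpx21-master1"
```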
data/lib/hetzner/k3s/cli.rb CHANGED
@@ -8,6 +8,7 @@ require 'open-uri'
 require 'yaml'
 
 require_relative 'cluster'
+require_relative 'configuration'
 require_relative 'version'
 
 module Hetzner
@@ -17,13 +18,6 @@ module Hetzner
         true
       end
 
-      def initialize(*args)
-        @errors = []
-        @used_server_types = []
-
-        super
-      end
-
       desc 'version', 'Print the version'
       def version
         puts Hetzner::K3s::VERSION
@@ -32,15 +26,15 @@ module Hetzner
       desc 'create-cluster', 'Create a k3s cluster in Hetzner Cloud'
       option :config_file, required: true
       def create_cluster
-
-        Cluster.new(
+        configuration.validate action: :create
+        Cluster.new(configuration:).create
       end
 
       desc 'delete-cluster', 'Delete an existing k3s cluster in Hetzner Cloud'
       option :config_file, required: true
       def delete_cluster
-
-        Cluster.new(
+        configuration.validate action: :delete
+        Cluster.new(configuration:).delete
      end
 
       desc 'upgrade-cluster', 'Upgrade an existing k3s cluster in Hetzner Cloud to a new version'
@@ -48,399 +42,27 @@ module Hetzner
       option :new_k3s_version, required: true
       option :force, default: 'false'
       def upgrade_cluster
-
-
-        Cluster.new(hetzner_client:, hetzner_token:)
-          .upgrade(configuration:, new_k3s_version: options[:new_k3s_version], config_file: options[:config_file])
+        configuration.validate action: :upgrade
+        Cluster.new(configuration:).upgrade(new_k3s_version: options[:new_k3s_version], config_file: options[:config_file])
       end
 
       desc 'releases', 'List available k3s releases'
       def releases
-        available_releases.each do |release|
+        Hetzner::Configuration.available_releases.each do |release|
           puts release
         end
       end
 
       private
 
-      attr_reader :
-      attr_accessor :errors, :used_server_types
-
-      def validate_configuration(action)
-        validate_configuration_file
-        validate_token
-        validate_cluster_name
-        validate_kubeconfig_path
-
-        case action
-        when :create
-          validate_create
-        when :delete
-          validate_kubeconfig_path_must_exist
-        when :upgrade
-          validate_upgrade
-        end
-
-        errors.flatten!
-
-        return if errors.empty?
-
-        puts 'Some information in the configuration file requires your attention:'
-
-        errors.each do |error|
-          puts " - #{error}"
-        end
-
-        exit 1
-      end
-
-      def valid_token?
-        return @valid unless @valid.nil?
-
-        begin
-          token = hetzner_token
-          @hetzner_client = Hetzner::Client.new(token:)
-          response = hetzner_client.get('/locations')
-          error_code = response.dig('error', 'code')
-          @valid = error_code&.size != 0
-        rescue StandardError
-          @valid = false
-        end
-      end
-
-      def validate_token
-        errors << 'Invalid Hetzner Cloud token' unless valid_token?
-      end
-
-      def validate_cluster_name
-        errors << 'Cluster name is an invalid format (only lowercase letters, digits and dashes are allowed)' unless configuration['cluster_name'] =~ /\A[a-z\d-]+\z/
-
-        return if configuration['cluster_name'] =~ /\A[a-z]+.*\z/
-
-        errors << 'Ensure that the cluster name starts with a normal letter'
-      end
-
-      def validate_kubeconfig_path
-        path = File.expand_path(configuration['kubeconfig_path'])
-        errors << 'kubeconfig path cannot be a directory' and return if File.directory? path
-
-        directory = File.dirname(path)
-        errors << "Directory #{directory} doesn't exist" unless File.exist? directory
-      rescue StandardError
-        errors << 'Invalid path for the kubeconfig'
-      end
-
-      def validate_public_ssh_key
-        path = File.expand_path(configuration['public_ssh_key_path'])
-        errors << 'Invalid Public SSH key path' and return unless File.exist? path
-
-        key = File.read(path)
-        errors << 'Public SSH key is invalid' unless ::SSHKey.valid_ssh_public_key?(key)
-      rescue StandardError
-        errors << 'Invalid Public SSH key path'
-      end
-
-      def validate_private_ssh_key
-        private_ssh_key_path = configuration['private_ssh_key_path']
-
-        return unless private_ssh_key_path
-
-        path = File.expand_path(private_ssh_key_path)
-        errors << 'Invalid Private SSH key path' and return unless File.exist?(path)
-      rescue StandardError
-        errors << 'Invalid Private SSH key path'
-      end
-
-      def validate_kubeconfig_path_must_exist
-        path = File.expand_path configuration['kubeconfig_path']
-        errors << 'kubeconfig path is invalid' and return unless File.exist? path
-
-        errors << 'kubeconfig path cannot be a directory' if File.directory? path
-      rescue StandardError
-        errors << 'Invalid kubeconfig path'
-      end
-
-      def server_types
-        return [] unless valid_token?
-
-        @server_types ||= hetzner_client.get('/server_types')['server_types'].map { |server_type| server_type['name'] }
-      rescue StandardError
-        @errors << 'Cannot fetch server types with Hetzner API, please try again later'
-        false
-      end
-
-      def locations
-        return [] unless valid_token?
-
-        @locations ||= hetzner_client.get('/locations')['locations'].map { |location| location['name'] }
-      rescue StandardError
-        @errors << 'Cannot fetch locations with Hetzner API, please try again later'
-        []
-      end
-
-      def valid_location?(location)
-        return if locations.empty? && !valid_token?
-
-        locations.include? location
-      end
-
-      def validate_masters_location
-        return if valid_location?(configuration['location'])
-
-        errors << 'Invalid location for master nodes - valid locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland) or ash (Ashburn, Virginia, USA)'
-      end
-
-      def available_releases
-        @available_releases ||= begin
-          response = HTTP.get('https://api.github.com/repos/k3s-io/k3s/tags?per_page=999').body
-          JSON.parse(response).map { |hash| hash['name'] }
-        end
-      rescue StandardError
-        errors << 'Cannot fetch the releases with Hetzner API, please try again later'
-      end
-
-      def validate_k3s_version
-        k3s_version = configuration['k3s_version']
-        errors << 'Invalid k3s version' unless available_releases.include? k3s_version
-      end
-
-      def validate_new_k3s_version
-        new_k3s_version = options[:new_k3s_version]
-        errors << 'The new k3s version is invalid' unless available_releases.include? new_k3s_version
-      end
+      attr_reader :hetzner_token, :hetzner_client
 
-      def
-
-
-
-
-      rescue StandardError
-        errors << 'Invalid masters configuration'
-        return
-      end
-
-        if masters_pool.nil?
-          errors << 'Invalid masters configuration'
-          return
+      def configuration
+        @configuration ||= begin
+          config = ::Hetzner::Configuration.new(options:)
+          @hetzner_token = config.hetzner_token
+          config
         end
-
-        validate_instance_group masters_pool, workers: false
-      end
-
-      def validate_worker_node_pools
-        worker_node_pools = configuration['worker_node_pools'] || []
-
-        unless worker_node_pools.size.positive? || schedule_workloads_on_masters?
-          errors << 'Invalid node pools configuration'
-          return
-        end
-
-        return if worker_node_pools.size.zero? && schedule_workloads_on_masters?
-
-        if !worker_node_pools.is_a? Array
-          errors << 'Invalid node pools configuration'
-        elsif worker_node_pools.size.zero?
-          errors << 'At least one node pool is required in order to schedule workloads' unless schedule_workloads_on_masters?
-        elsif worker_node_pools.map { |worker_node_pool| worker_node_pool['name'] }.uniq.size != worker_node_pools.size
-          errors << 'Each node pool must have an unique name'
-        elsif server_types
-          worker_node_pools.each do |worker_node_pool|
-            validate_instance_group worker_node_pool
-          end
-        end
-      end
-
-      def schedule_workloads_on_masters?
-        schedule_workloads_on_masters = configuration['schedule_workloads_on_masters']
-        schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
-      end
-
-      def validate_instance_group(instance_group, workers: true)
-        instance_group_errors = []
-
-        instance_group_type = workers ? "Worker mode pool '#{instance_group['name']}'" : 'Masters pool'
-
-        instance_group_errors << "#{instance_group_type} has an invalid name" unless !workers || instance_group['name'] =~ /\A([A-Za-z0-9\-_]+)\Z/
-
-        instance_group_errors << "#{instance_group_type} is in an invalid format" unless instance_group.is_a? Hash
-
-        instance_group_errors << "#{instance_group_type} has an invalid instance type" unless !valid_token? || server_types.include?(instance_group['instance_type'])
-
-        if workers
-          location = instance_group.fetch('location', configuration['location'])
-          instance_group_errors << "#{instance_group_type} has an invalid location - valid locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland) or ash (Ashburn, Virginia, USA)" unless valid_location?(location)
-
-          in_network_zone = configuration['location'] == 'ash' ? location == 'ash' : location != 'ash'
-          instance_group_errors << "#{instance_group_type} must be in the same network zone as the masters. If the masters are located in Ashburn, all the node pools must be located in Ashburn too, otherwise none of the node pools should be located in Ashburn." unless in_network_zone
-        end
-
-        if instance_group['instance_count'].is_a? Integer
-          if instance_group['instance_count'] < 1
-            instance_group_errors << "#{instance_group_type} must have at least one node"
-          elsif instance_group['instance_count'] > 10
-            instance_group_errors << "#{instance_group_type} cannot have more than 10 nodes due to a limitation with the Hetzner placement groups. You can add more node pools if you need more nodes."
-          elsif !workers
-            instance_group_errors << 'Masters count must equal to 1 for non-HA clusters or an odd number (recommended 3) for an HA cluster' unless instance_group['instance_count'].odd?
-          end
-        else
-          instance_group_errors << "#{instance_group_type} has an invalid instance count"
-        end
-
-        used_server_types << instance_group['instance_type']
-
-        errors << instance_group_errors
-      end
-
-      def validate_verify_host_key
-        return unless [true, false].include?(configuration.fetch('public_ssh_key_path', false))
-
-        errors << 'Please set the verify_host_key option to either true or false'
-      end
-
-      def hetzner_token
-        @token = ENV.fetch('HCLOUD_TOKEN', nil)
-        return @token unless @token.nil?
-
-        @token = configuration['hetzner_token']
-      end
-
-      def validate_ssh_allowed_networks
-        networks ||= configuration['ssh_allowed_networks']
-
-        if networks.nil? || networks.empty?
-          errors << 'At least one network/IP range must be specified for SSH access'
-          return
-        end
-
-        invalid_networks = networks.reject do |network|
-          IPAddr.new(network)
-        rescue StandardError
-          false
-        end
-
-        unless invalid_networks.empty?
-          invalid_networks.each do |network|
-            errors << "The network #{network} is an invalid range"
-          end
-        end
-
-        invalid_ranges = networks.reject do |network|
-          network.include? '/'
-        end
-
-        unless invalid_ranges.empty?
-          invalid_ranges.each do |_network|
-            errors << 'Please use the CIDR notation for the networks to avoid ambiguity'
-          end
-        end
-
-        return unless invalid_networks.empty?
-
-        current_ip = URI.open('http://whatismyip.akamai.com').read
-
-        current_ip_networks = networks.detect do |network|
-          IPAddr.new(network).include?(current_ip)
-        rescue StandardError
-          false
-        end
-
-        errors << "Your current IP #{current_ip} is not included into any of the networks you've specified, so we won't be able to SSH into the nodes" unless current_ip_networks
-      end
-
-      def validate_additional_packages
-        additional_packages = configuration['additional_packages']
-        errors << 'Invalid additional packages configuration - it should be an array' if additional_packages && !additional_packages.is_a?(Array)
-      end
-
-      def validate_post_create_commands
-        post_create_commands = configuration['post_create_commands']
-        errors << 'Invalid post create commands configuration - it should be an array' if post_create_commands && !post_create_commands.is_a?(Array)
-      end
-
-      def validate_create
-        validate_public_ssh_key
-        validate_private_ssh_key
-        validate_ssh_allowed_networks
-        validate_masters_location
-        validate_k3s_version
-        validate_masters
-        validate_worker_node_pools
-        validate_verify_host_key
-        validate_additional_packages
-        validate_post_create_commands
-        validate_kube_api_server_args
-        validate_kube_scheduler_args
-        validate_kube_controller_manager_args
-        validate_kube_cloud_controller_manager_args
-        validate_kubelet_args
-        validate_kube_proxy_args
-      end
-
-      def validate_upgrade
-        validate_kubeconfig_path_must_exist
-        validate_new_k3s_version
-      end
-
-      def validate_configuration_file
-        config_file_path = options[:config_file]
-
-        if File.exist?(config_file_path)
-          begin
-            @configuration = YAML.load_file(options[:config_file])
-            unless configuration.is_a? Hash
-              puts 'Configuration is invalid'
-              exit 1
-            end
-          rescue StandardError
-            puts 'Please ensure that the config file is a correct YAML manifest.'
-            exit 1
-          end
-        else
-          puts 'Please specify a correct path for the config file.'
-          exit 1
-        end
-      end
-
-      def validate_kube_api_server_args
-        kube_api_server_args = configuration['kube_api_server_args']
-        return unless kube_api_server_args
-
-        errors << 'kube_api_server_args must be an array of arguments' unless kube_api_server_args.is_a? Array
-      end
-
-      def validate_kube_scheduler_args
-        kube_scheduler_args = configuration['kube_scheduler_args']
-        return unless kube_scheduler_args
-
-        errors << 'kube_scheduler_args must be an array of arguments' unless kube_scheduler_args.is_a? Array
-      end
-
-      def validate_kube_controller_manager_args
-        kube_controller_manager_args = configuration['kube_controller_manager_args']
-        return unless kube_controller_manager_args
-
-        errors << 'kube_controller_manager_args must be an array of arguments' unless kube_controller_manager_args.is_a? Array
-      end
-
-      def validate_kube_cloud_controller_manager_args
-        kube_cloud_controller_manager_args = configuration['kube_cloud_controller_manager_args']
-        return unless kube_cloud_controller_manager_args
-
-        errors << 'kube_cloud_controller_manager_args must be an array of arguments' unless kube_cloud_controller_manager_args.is_a? Array
-      end
-
-      def validate_kubelet_args
-        kubelet_args = configuration['kubelet_args']
-        return unless kubelet_args
-
-        errors << 'kubelet_args must be an array of arguments' unless kubelet_args.is_a? Array
-      end
-
-      def validate_kube_proxy_args
-        kube_proxy_args = configuration['kube_proxy_args']
-        return unless kube_proxy_args
-
-        errors << 'kube_proxy_args must be an array of arguments' unless kube_proxy_args.is_a? Array
       end
     end
   end
 end
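The net effect of the cli.rb change is that the CLI now memoizes a single configuration object and delegates all validation to it instead of carrying the validators itself. A stubbed sketch of that shape (these are simplified stand-ins, not the real `Hetzner::Configuration` and CLI classes):

```ruby
# Simplified stand-in for Hetzner::Configuration.
class Configuration
  def initialize(options:)
    @options = options
  end

  def validate(action:)
    # The real class runs the per-action validators that were moved out of the CLI.
    raise ArgumentError, 'unknown action' unless %i[create delete upgrade].include?(action)
    true
  end
end

class CLI
  def initialize(options)
    @options = options
  end

  def create_cluster
    configuration.validate action: :create
    :created # the real CLI builds a Cluster.new(configuration:) here
  end

  private

  # Memoized so every command shares one parsed configuration.
  def configuration
    @configuration ||= Configuration.new(options: @options)
  end
end

puts CLI.new({ config_file: 'test.yaml' }).create_cluster
```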
data/lib/hetzner/k3s/cluster.rb
CHANGED
@@ -19,13 +19,11 @@ require_relative '../utils'
|
|
19
19
|
class Cluster
|
20
20
|
include Utils
|
21
21
|
|
22
|
-
def initialize(
|
23
|
-
@
|
24
|
-
@hetzner_token = hetzner_token
|
22
|
+
def initialize(configuration:)
|
23
|
+
@configuration = configuration
|
25
24
|
end
|
26
25
|
|
27
|
-
def create
|
28
|
-
@configuration = configuration
|
26
|
+
def create
|
29
27
|
@cluster_name = configuration['cluster_name']
|
30
28
|
@kubeconfig_path = File.expand_path(configuration['kubeconfig_path'])
|
31
29
|
@public_ssh_key_path = File.expand_path(configuration['public_ssh_key_path'])
|
@@ -57,8 +55,7 @@ class Cluster
|
|
57
55
|
deploy_system_upgrade_controller
|
58
56
|
end
|
59
57
|
|
60
|
-
def delete
|
61
|
-
@configuration = configuration
|
58
|
+
def delete
|
62
59
|
@cluster_name = configuration['cluster_name']
|
63
60
|
@kubeconfig_path = File.expand_path(configuration['kubeconfig_path'])
|
64
61
|
@public_ssh_key_path = File.expand_path(configuration['public_ssh_key_path'])
|
@@ -68,8 +65,7 @@ class Cluster
|
|
68
65
|
delete_resources
|
69
66
|
end
|
70
67
|
|
71
|
-
def upgrade(
|
72
|
-
@configuration = configuration
|
68
|
+
def upgrade(new_k3s_version:, config_file:)
|
73
69
|
@cluster_name = configuration['cluster_name']
|
74
70
|
@kubeconfig_path = File.expand_path(configuration['kubeconfig_path'])
|
75
71
|
@new_k3s_version = new_k3s_version
|
@@ -82,10 +78,10 @@ class Cluster
|
|
82
78
|
|
83
79
|
attr_accessor :servers
|
84
80
|
|
85
|
-
attr_reader :
|
81
|
+
attr_reader :configuration, :cluster_name, :kubeconfig_path, :k3s_version,
|
86
82
|
:masters_config, :worker_node_pools,
|
87
83
|
:masters_location, :public_ssh_key_path,
|
88
|
-
:hetzner_token, :new_k3s_version,
|
84
|
+
:hetzner_token, :new_k3s_version,
|
89
85
|
:config_file, :verify_host_key, :networks, :private_ssh_key_path,
|
90
86
|
:enable_encryption, :kube_api_server_args, :kube_scheduler_args,
|
91
87
|
:kube_controller_manager_args, :kube_cloud_controller_manager_args,
|
@@ -190,9 +186,10 @@ class Cluster
     puts 'Upgrade will now start. Run `watch kubectl get nodes` to see the nodes being upgraded. This should take a few minutes for a small cluster.'
     puts 'The API server may be briefly unavailable during the upgrade of the controlplane.'
 
-
+    updated_configuration = configuration.raw
+    updated_configuration['k3s_version'] = new_k3s_version
 
-    File.write(config_file,
+    File.write(config_file, updated_configuration.to_yaml)
   end
 
   def master_script(master)
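The `upgrade` hunk above persists the new k3s version by mutating the raw configuration hash and serializing it back to the config file with `to_yaml`. A minimal standalone sketch of that round-trip (the hash contents and the temp file are made up for illustration):

```ruby
require 'yaml'
require 'tempfile'

# Hypothetical stand-in for the parsed cluster configuration.
config = { 'cluster_name' => 'demo', 'k3s_version' => 'v1.21.3+k3s1' }

# Mirror of the upgrade step: update the hash, write it back as YAML.
config['k3s_version'] = 'v1.23.6+k3s1'

file = Tempfile.new(['cluster_config', '.yaml'])
File.write(file.path, config.to_yaml)

reloaded = YAML.load_file(file.path)
puts reloaded['k3s_version'] # => v1.23.6+k3s1
```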
@@ -214,10 +211,8 @@ class Cluster
       --cluster-cidr=10.244.0.0/16 \
       --etcd-expose-metrics=true \
       #{flannel_wireguard} \
-      --kube-controller-manager-arg="address=0.0.0.0" \
       --kube-controller-manager-arg="bind-address=0.0.0.0" \
       --kube-proxy-arg="metrics-bind-address=0.0.0.0" \
-      --kube-scheduler-arg="address=0.0.0.0" \
       --kube-scheduler-arg="bind-address=0.0.0.0" \
       #{taint} #{extra_args} \
       --kubelet-arg="cloud-provider=external" \
@@ -299,7 +294,7 @@ class Cluster
           name: 'hcloud'
         stringData:
           network: "#{cluster_name}"
-          token: "#{hetzner_token}"
+          token: "#{configuration.hetzner_token}"
       EOF
     BASH
 
@@ -339,7 +334,7 @@ class Cluster
           namespace: 'kube-system'
           name: 'hcloud-csi'
         stringData:
-          token: "#{hetzner_token}"
+          token: "#{configuration.hetzner_token}"
       EOF
     BASH
 
@@ -643,4 +638,8 @@ class Cluster
       " --kube-proxy-arg=\"#{arg}\" "
     end.join
   end
+
+  def hetzner_client
+    configuration.hetzner_client
+  end
 end
@@ -0,0 +1,454 @@
+# frozen_string_literal: true
+
+module Hetzner
+  class Configuration
+    GITHUB_DELIM_LINKS = ','.freeze
+    GITHUB_LINK_REGEX = /<([^>]+)>; rel="([^"]+)"/
+
+    attr_reader :hetzner_client
+
+    def initialize(options:)
+      @options = options
+      @errors = []
+
+      validate_configuration_file
+    end
+
+    def validate(action:)
+      validate_token
+
+      if valid_token?
+        validate_cluster_name
+        validate_kubeconfig_path
+
+        case action
+        when :create
+          validate_create
+        when :delete
+          validate_kubeconfig_path_must_exist
+        when :upgrade
+          validate_upgrade
+        end
+      end
+
+      errors.flatten!
+
+      return if errors.empty?
+
+      puts 'Some information in the configuration file requires your attention:'
+
+      errors.each do |error|
+        puts " - #{error}"
+      end
+
+      exit 1
+    end
+
+    def self.available_releases
+      @available_releases ||= begin
+        releases = []
+
+        response, page_releases = fetch_releases('https://api.github.com/repos/k3s-io/k3s/tags?per_page=100')
+        releases = page_releases
+        link_header = response.headers['link']
+
+        while !link_header.nil?
+          next_page_url = extract_next_github_page_url(link_header)
+
+          break if next_page_url.nil?
+
+          response, page_releases = fetch_releases(next_page_url)
+
+          releases += page_releases
+
+          link_header = response.headers['link']
+        end
+
+        releases.sort
+      end
+    rescue StandardError
+      if defined?(errors)
+        errors << 'Cannot fetch the releases with Hetzner API, please try again later'
+      else
+        puts 'Cannot fetch the releases with Hetzner API, please try again later'
+      end
+    end
+
+    def hetzner_token
+      return @token unless @token.nil?
+
+      @token = ENV.fetch('HCLOUD_TOKEN', configuration['hetzner_token'])
+    end
+
+    def [](key)
+      configuration[key]
+    end
+
+    def fetch(key, default)
+      configuration.fetch(key, default)
+    end
+
+    def raw
+      configuration
+    end
+
+    private
+
+    attr_reader :configuration, :errors, :options
+
+    def self.fetch_releases(url)
+      response = HTTP.get(url)
+      [response, JSON.parse(response.body).map { |hash| hash['name'] }]
+    end
+
+    def self.extract_next_github_page_url(link_header)
+      link_header.split(GITHUB_DELIM_LINKS).each do |link|
+        GITHUB_LINK_REGEX.match(link.strip) do |match|
+          url_part, meta_part = match[1], match[2]
+          next if !url_part || !meta_part
+          return url_part if meta_part == 'next'
+        end
+      end
+
+      nil
+    end
+
+    def self.assign_url_part(meta_part, url_part)
+      case meta_part
+      when 'next'
+        url_part
+      end
+    end
+
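`available_releases` above pages through the GitHub tags API by following the `Link` response header until no `rel="next"` entry remains. A standalone sketch of that header parsing, using the same regex shape as `GITHUB_LINK_REGEX`; the header value below is a made-up example:

```ruby
# Same pattern as GITHUB_LINK_REGEX in the class above.
LINK_REGEX = /<([^>]+)>; rel="([^"]+)"/

def next_page_url(link_header)
  return nil if link_header.nil?

  link_header.split(',').each do |link|
    LINK_REGEX.match(link.strip) do |match|
      # match[1] is the URL, match[2] the rel type.
      return match[1] if match[2] == 'next'
    end
  end

  nil
end

header = '<https://api.github.com/repos/k3s-io/k3s/tags?per_page=100&page=2>; rel="next", ' \
         '<https://api.github.com/repos/k3s-io/k3s/tags?per_page=100&page=5>; rel="last"'

puts next_page_url(header) # => https://api.github.com/repos/k3s-io/k3s/tags?per_page=100&page=2
```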
+    def validate_create
+      validate_public_ssh_key
+      validate_private_ssh_key
+      validate_ssh_allowed_networks
+      validate_masters_location
+      validate_k3s_version
+      validate_masters
+      validate_worker_node_pools
+      validate_verify_host_key
+      validate_additional_packages
+      validate_post_create_commands
+      validate_kube_api_server_args
+      validate_kube_scheduler_args
+      validate_kube_controller_manager_args
+      validate_kube_cloud_controller_manager_args
+      validate_kubelet_args
+      validate_kube_proxy_args
+    end
+
+    def validate_upgrade
+      validate_kubeconfig_path_must_exist
+      validate_new_k3s_version
+    end
+
+    def validate_public_ssh_key
+      path = File.expand_path(configuration['public_ssh_key_path'])
+      errors << 'Invalid Public SSH key path' and return unless File.exist? path
+
+      key = File.read(path)
+      errors << 'Public SSH key is invalid' unless ::SSHKey.valid_ssh_public_key?(key)
+    rescue StandardError
+      errors << 'Invalid Public SSH key path'
+    end
+
+    def validate_private_ssh_key
+      private_ssh_key_path = configuration['private_ssh_key_path']
+
+      return unless private_ssh_key_path
+
+      path = File.expand_path(private_ssh_key_path)
+      errors << 'Invalid Private SSH key path' and return unless File.exist?(path)
+    rescue StandardError
+      errors << 'Invalid Private SSH key path'
+    end
+
+    def validate_ssh_allowed_networks
+      networks ||= configuration['ssh_allowed_networks']
+
+      if networks.nil? || networks.empty?
+        errors << 'At least one network/IP range must be specified for SSH access'
+        return
+      end
+
+      invalid_networks = networks.reject do |network|
+        IPAddr.new(network)
+      rescue StandardError
+        false
+      end
+
+      unless invalid_networks.empty?
+        invalid_networks.each do |network|
+          errors << "The network #{network} is an invalid range"
+        end
+      end
+
+      invalid_ranges = networks.reject do |network|
+        network.include? '/'
+      end
+
+      unless invalid_ranges.empty?
+        invalid_ranges.each do |_network|
+          errors << 'Please use the CIDR notation for the networks to avoid ambiguity'
+        end
+      end
+
+      return unless invalid_networks.empty?
+
+      current_ip = URI.open('http://whatismyip.akamai.com').read
+
+      current_ip_networks = networks.detect do |network|
+        IPAddr.new(network).include?(current_ip)
+      rescue StandardError
+        false
+      end
+
+      errors << "Your current IP #{current_ip} is not included in any of the networks you've specified, so we won't be able to SSH into the nodes" unless current_ip_networks
+    end
+
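The SSH network validation above leans on Ruby's `IPAddr`: a network must parse, must use explicit CIDR notation, and should cover the current IP. A standalone sketch of those checks with made-up networks:

```ruby
require 'ipaddr'

# A network passes only if it uses CIDR notation and IPAddr can parse it.
def valid_cidr?(network)
  return false unless network.include?('/')

  IPAddr.new(network)
  true
rescue StandardError
  false
end

# Find the first network that covers a given IP, as the current-IP check does.
def network_covering(networks, ip)
  networks.detect do |network|
    IPAddr.new(network).include?(ip)
  rescue StandardError
    false
  end
end

puts valid_cidr?('10.0.0.0/16')                    # => true
puts valid_cidr?('10.0.0.1')                       # => false (no CIDR suffix)
puts network_covering(['10.0.0.0/16'], '10.0.1.5') # => 10.0.0.0/16
```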
+    def validate_masters_location
+      return if valid_location?(configuration['location'])
+
+      errors << 'Invalid location for master nodes - valid locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland) or ash (Ashburn, Virginia, USA)'
+    end
+
+    def validate_k3s_version
+      k3s_version = configuration['k3s_version']
+      errors << 'Invalid k3s version' unless Hetzner::Configuration.available_releases.include? k3s_version
+    end
+
+    def validate_masters
+      masters_pool = nil
+
+      begin
+        masters_pool = configuration['masters']
+      rescue StandardError
+        errors << 'Invalid masters configuration'
+        return
+      end
+
+      if masters_pool.nil?
+        errors << 'Invalid masters configuration'
+        return
+      end
+
+      validate_instance_group masters_pool, workers: false
+    end
+
+    def validate_worker_node_pools
+      worker_node_pools = configuration['worker_node_pools'] || []
+
+      unless worker_node_pools.size.positive? || schedule_workloads_on_masters?
+        errors << 'Invalid node pools configuration'
+        return
+      end
+
+      return if worker_node_pools.size.zero? && schedule_workloads_on_masters?
+
+      if !worker_node_pools.is_a? Array
+        errors << 'Invalid node pools configuration'
+      elsif worker_node_pools.size.zero?
+        errors << 'At least one node pool is required in order to schedule workloads' unless schedule_workloads_on_masters?
+      elsif worker_node_pools.map { |worker_node_pool| worker_node_pool['name'] }.uniq.size != worker_node_pools.size
+        errors << 'Each node pool must have a unique name'
+      elsif server_types
+        worker_node_pools.each do |worker_node_pool|
+          validate_instance_group worker_node_pool
+        end
+      end
+    end
+
+    def validate_verify_host_key
+      return unless [true, false].include?(configuration.fetch('public_ssh_key_path', false))
+
+      errors << 'Please set the verify_host_key option to either true or false'
+    end
+
+    def validate_additional_packages
+      additional_packages = configuration['additional_packages']
+      errors << 'Invalid additional packages configuration - it should be an array' if additional_packages && !additional_packages.is_a?(Array)
+    end
+
+    def validate_post_create_commands
+      post_create_commands = configuration['post_create_commands']
+      errors << 'Invalid post create commands configuration - it should be an array' if post_create_commands && !post_create_commands.is_a?(Array)
+    end
+
+    def validate_kube_api_server_args
+      kube_api_server_args = configuration['kube_api_server_args']
+      return unless kube_api_server_args
+
+      errors << 'kube_api_server_args must be an array of arguments' unless kube_api_server_args.is_a? Array
+    end
+
+    def validate_kube_scheduler_args
+      kube_scheduler_args = configuration['kube_scheduler_args']
+      return unless kube_scheduler_args
+
+      errors << 'kube_scheduler_args must be an array of arguments' unless kube_scheduler_args.is_a? Array
+    end
+
+    def validate_kube_controller_manager_args
+      kube_controller_manager_args = configuration['kube_controller_manager_args']
+      return unless kube_controller_manager_args
+
+      errors << 'kube_controller_manager_args must be an array of arguments' unless kube_controller_manager_args.is_a? Array
+    end
+
+    def validate_kube_cloud_controller_manager_args
+      kube_cloud_controller_manager_args = configuration['kube_cloud_controller_manager_args']
+      return unless kube_cloud_controller_manager_args
+
+      errors << 'kube_cloud_controller_manager_args must be an array of arguments' unless kube_cloud_controller_manager_args.is_a? Array
+    end
+
+    def validate_kubelet_args
+      kubelet_args = configuration['kubelet_args']
+      return unless kubelet_args
+
+      errors << 'kubelet_args must be an array of arguments' unless kubelet_args.is_a? Array
+    end
+
+    def validate_kube_proxy_args
+      kube_proxy_args = configuration['kube_proxy_args']
+      return unless kube_proxy_args
+
+      errors << 'kube_proxy_args must be an array of arguments' unless kube_proxy_args.is_a? Array
+    end
+
+    def validate_configuration_file
+      config_file_path = options[:config_file]
+
+      if File.exist?(config_file_path)
+        begin
+          @configuration = YAML.load_file(options[:config_file])
+          unless configuration.is_a? Hash
+            puts 'Configuration is invalid'
+            exit 1
+          end
+        rescue StandardError
+          puts 'Please ensure that the config file is a correct YAML manifest.'
+          exit 1
+        end
+      else
+        puts 'Please specify a correct path for the config file.'
+        exit 1
+      end
+    end
+
+    def validate_token
+      errors << 'Invalid Hetzner Cloud token' unless valid_token?
+    end
+
+    def validate_kubeconfig_path
+      path = File.expand_path(configuration['kubeconfig_path'])
+      errors << 'kubeconfig path cannot be a directory' and return if File.directory? path
+
+      directory = File.dirname(path)
+      errors << "Directory #{directory} doesn't exist" unless File.exist? directory
+    rescue StandardError
+      errors << 'Invalid path for the kubeconfig'
+    end
+
+    def validate_kubeconfig_path_must_exist
+      path = File.expand_path configuration['kubeconfig_path']
+      errors << 'kubeconfig path is invalid' and return unless File.exist? path
+
+      errors << 'kubeconfig path cannot be a directory' if File.directory? path
+    rescue StandardError
+      errors << 'Invalid kubeconfig path'
+    end
+
+    def validate_cluster_name
+      errors << 'Cluster name is an invalid format (only lowercase letters, digits and dashes are allowed)' unless configuration['cluster_name'] =~ /\A[a-z\d-]+\z/
+
+      return if configuration['cluster_name'] =~ /\A[a-z]+.*([a-z]|\d)+\z/
+
+      errors << 'Ensure that the cluster name starts and ends with a normal letter'
+    end
+
+    def validate_new_k3s_version
+      new_k3s_version = options[:new_k3s_version]
+      errors << 'The new k3s version is invalid' unless Hetzner::Configuration.available_releases.include? new_k3s_version
+    end
+
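`validate_cluster_name` above combines two patterns: one restricting the alphabet to lowercase letters, digits and dashes, and one requiring the name to start with a letter and end with a letter or digit. A standalone sketch exercising the same regexes on made-up names:

```ruby
# The same two patterns used by validate_cluster_name above.
NAME_CHARS = /\A[a-z\d-]+\z/
NAME_EDGES = /\A[a-z]+.*([a-z]|\d)+\z/

def valid_cluster_name?(name)
  !!(name =~ NAME_CHARS && name =~ NAME_EDGES)
end

puts valid_cluster_name?('test-cluster1') # => true
puts valid_cluster_name?('Test_Cluster')  # => false (uppercase and underscore)
puts valid_cluster_name?('1cluster')      # => false (must start with a letter)
puts valid_cluster_name?('cluster-')      # => false (must end with a letter or digit)
```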
+    def valid_token?
+      return @valid unless @valid.nil?
+
+      begin
+        token = hetzner_token
+        @hetzner_client = Hetzner::Client.new(token:)
+        response = hetzner_client.get('/locations')
+        error_code = response.dig('error', 'code')
+        @valid = error_code != 'unauthorized'
+      rescue StandardError
+        @valid = false
+      end
+    end
+
+    def validate_instance_group(instance_group, workers: true)
+      instance_group_errors = []
+
+      instance_group_type = workers ? "Worker node pool '#{instance_group['name']}'" : 'Masters pool'
+
+      instance_group_errors << "#{instance_group_type} has an invalid name" unless !workers || instance_group['name'] =~ /\A([A-Za-z0-9\-_]+)\Z/
+
+      instance_group_errors << "#{instance_group_type} is in an invalid format" unless instance_group.is_a? Hash
+
+      instance_group_errors << "#{instance_group_type} has an invalid instance type" unless !valid_token? || server_types.include?(instance_group['instance_type'])
+
+      if workers
+        location = instance_group.fetch('location', configuration['location'])
+        instance_group_errors << "#{instance_group_type} has an invalid location - valid locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland) or ash (Ashburn, Virginia, USA)" unless valid_location?(location)
+
+        in_network_zone = configuration['location'] == 'ash' ? location == 'ash' : location != 'ash'
+        instance_group_errors << "#{instance_group_type} must be in the same network zone as the masters. If the masters are located in Ashburn, all the node pools must be located in Ashburn too, otherwise none of the node pools should be located in Ashburn." unless in_network_zone
+      end
+
+      if instance_group['instance_count'].is_a? Integer
+        if instance_group['instance_count'] < 1
+          instance_group_errors << "#{instance_group_type} must have at least one node"
+        elsif instance_group['instance_count'] > 10
+          instance_group_errors << "#{instance_group_type} cannot have more than 10 nodes due to a limitation with the Hetzner placement groups. You can add more node pools if you need more nodes."
+        elsif !workers
+          instance_group_errors << 'Masters count must equal 1 for non-HA clusters or an odd number (recommended 3) for an HA cluster' unless instance_group['instance_count'].odd?
+        end
+      else
+        instance_group_errors << "#{instance_group_type} has an invalid instance count"
+      end
+
+      errors << instance_group_errors
+    end
+
+    def valid_location?(location)
+      return if locations.empty? && !valid_token?
+
+      locations.include? location
+    end
+
+    def locations
+      return [] unless valid_token?
+
+      @locations ||= hetzner_client.get('/locations')['locations'].map { |location| location['name'] }
+    rescue StandardError
+      @errors << 'Cannot fetch locations with Hetzner API, please try again later'
+      []
+    end
+
+    def schedule_workloads_on_masters?
+      schedule_workloads_on_masters = configuration['schedule_workloads_on_masters']
+      schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
+    end
+
+    def server_types
+      return [] unless valid_token?
+
+      @server_types ||= hetzner_client.get('/server_types')['server_types'].map { |server_type| server_type['name'] }
+    rescue StandardError
+      @errors << 'Cannot fetch server types with Hetzner API, please try again later'
+      false
+    end
+  end
+end
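One detail worth noting in the new class: `hetzner_token` resolves the API token from the `HCLOUD_TOKEN` environment variable first, and only falls back to the `hetzner_token` key in the config file, via `ENV.fetch` with a default. A standalone sketch of that precedence (the hash stands in for the parsed YAML):

```ruby
# ENV wins over the config file, mirroring Configuration#hetzner_token.
def resolve_token(configuration)
  ENV.fetch('HCLOUD_TOKEN', configuration['hetzner_token'])
end

ENV.delete('HCLOUD_TOKEN')
puts resolve_token({ 'hetzner_token' => 'from-file' }) # => from-file

ENV['HCLOUD_TOKEN'] = 'from-env'
puts resolve_token({ 'hetzner_token' => 'from-file' }) # => from-env
```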
data/lib/hetzner/k3s/version.rb
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: hetzner-k3s
 version: !ruby/object:Gem::Version
-  version: 0.5.
+  version: 0.5.8
 platform: ruby
 authors:
 - Vito Botta
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2022-
+date: 2022-08-11 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bcrypt_pbkdf
@@ -161,6 +161,7 @@ files:
 - lib/hetzner/infra/ssh_key.rb
 - lib/hetzner/k3s/cli.rb
 - lib/hetzner/k3s/cluster.rb
+- lib/hetzner/k3s/configuration.rb
 - lib/hetzner/k3s/version.rb
 - lib/hetzner/utils.rb
 homepage: https://github.com/vitobotta/hetzner-k3s