hetzner-k3s 0.3.7 → 0.4.1
- checksums.yaml +4 -4
- data/Dockerfile +1 -2
- data/Gemfile.lock +1 -1
- data/README.md +35 -2
- data/bin/build.sh +12 -0
- data/lib/hetzner/infra/firewall.rb +79 -57
- data/lib/hetzner/infra/load_balancer.rb +14 -2
- data/lib/hetzner/infra/ssh_key.rb +15 -15
- data/lib/hetzner/k3s/cli.rb +114 -19
- data/lib/hetzner/k3s/cluster.rb +67 -53
- data/lib/hetzner/k3s/version.rb +1 -1
- metadata +3 -2
checksums.yaml
CHANGED
```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: cb83104df3f0724108d93046e10e5889be57a54d941549d2b8f2400344448ce6
+  data.tar.gz: 2f3a5069910608a299b611bd7ccfdcae4e82ac1d9d0e98ad4b21542173297662
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 400792543d20abaa5a6b57b26bdabc1ab475d9f5e991ee1808c77c9027e17039036bd1e80b292aeb9982275e05b93ec58b9986bbf7c689cbf568b3f558d23f8c
+  data.tar.gz: 71ef14f3b9d8c86590a11afe260ce68e69bc6bda532328b4d7f3b4064192bf6a2df587ea9107b441102ed564bcb3dcb11f661926185d7ef54f93d3c0a7c90f44
```
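The digests above can be checked locally with Ruby's standard library. A minimal sketch, assuming you have already unpacked the gem (`digests_for` and the file name are illustrative, not part of the gem; the recorded checksums are taken over the `metadata.gz` and `data.tar.gz` entries inside the `.gem` archive):

```ruby
require "digest"

# Compute the two digests that checksums.yaml records for a file.
def digests_for(path)
  data = File.binread(path)
  {
    "SHA256" => Digest::SHA256.hexdigest(data),
    "SHA512" => Digest::SHA512.hexdigest(data)
  }
end

# A SHA256 hex digest is always 64 characters long, a SHA512 one 128.
File.write("example.bin", "hello")
d = digests_for("example.bin")
puts d["SHA256"].length # => 64
puts d["SHA512"].length # => 128
```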
data/Dockerfile
CHANGED
data/Gemfile.lock
CHANGED
data/README.md
CHANGED
````diff
@@ -38,7 +38,7 @@ This will install the `hetzner-k3s` executable in your PATH.
 Alternatively, if you don't want to set up a Ruby runtime but have Docker installed, you can use a container. Run the following from inside the directory where you have the config file for the cluster (described in the next section):
 
 ```bash
-docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.
+docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.1 create-cluster --config-file /cluster/test.yaml
 ```
 
 Replace `test.yaml` with the name of your config file.
@@ -53,7 +53,10 @@ hetzner_token: <your token>
 cluster_name: test
 kubeconfig_path: "./kubeconfig"
 k3s_version: v1.21.3+k3s1
-
+public_ssh_key_path: "~/.ssh/id_rsa.pub"
+private_ssh_key_path: "~/.ssh/id_rsa"
+ssh_allowed_networks:
+  - 0.0.0.0/0
 verify_host_key: false
 location: nbg1
 masters:
@@ -72,6 +75,11 @@ It should hopefully be self explanatory; you can run `hetzner-k3s releases` to s
 
 If you are using Docker, then set `kubeconfig_path` to `/cluster/kubeconfig` so that the kubeconfig is created in the same directory where your config file is.
 
+If you don't want to specify the Hetzner token in the config file (for example if you want to use the tool with CI), then you can use the `HCLOUD_TOKEN` environment variable instead, which takes precedence.
+
+**Important**: The tool assigns the label `cluster` to each server it creates, with the cluster name you specify in the config file as the value. So please ensure you don't create unrelated servers in the same project having the label `cluster=<cluster name>`, because otherwise they will be deleted if you delete the cluster. I recommend you create a separate Hetzner project for each cluster, see note at the end of this README for more details.
+
 If you set `masters.instance_count` to 1 then the tool will create a non highly available control plane; for production clusters you may want to set it to a number greater than 1. This number must be odd to avoid split brain issues with etcd and the recommended number is 3.
@@ -225,8 +233,33 @@ The other annotations should be self explanatory. You can find a list of the ava
 Once the cluster is ready you can create persistent volumes out of the box with the default storage class `hcloud-volumes`, since the Hetzner CSI driver is installed automatically. This will use Hetzner's block storage (based on Ceph so it's replicated and highly available) for your persistent volumes. Note that the minimum size of a volume is 10Gi. If you specify a smaller size for a volume, the volume will be created with a capacity of 10Gi anyway.
 
+## Keeping a project per cluster
+
+I recommend that you create a separate Hetzner project for each cluster, because otherwise multiple clusters will attempt to create overlapping routes. I will make the pod cidr configurable in the future to avoid this, but I still recommend keeping clusters separated from each other. This way, if you want to delete a cluster with all the resources created for it, you can just delete the project.
+
 ## changelog
 
+- 0.4.1
+  - Allow to optionally specify the path of the private SSH key
+  - Set correct permissions for the kubeconfig file
+  - Retry fetching manifests a few times to allow for temporary network issues
+  - Allow to optionally schedule workloads on masters
+  - Allow clusters with no worker node pools if scheduling is enabled for the masters
+
+- 0.4.0
+  - Ensure the masters are removed from the API load balancer before deleting the load balancer
+  - Ensure the servers are removed from the firewall before deleting it
+  - Allow using an environment variable to specify the Hetzner token
+  - Allow restricting SSH access to the nodes to specific networks
+  - Do not open the port 6443 on the nodes if a load balancer is created for an HA cluster
+
+- 0.3.9
+  - Add command "version" to print the version of the tool in use
+
+- 0.3.8
+  - Fix: added a check on a label to ensure that only servers that belong to the cluster are deleted from the project
+
 - 0.3.7
   - Ensure that the cluster name only contains lowercase letters, digits and dashes for compatibility with the cloud controller manager
````
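The `HCLOUD_TOKEN` precedence described above matches the `find_hetzner_token` helper added to the CLI later in this diff. A standalone sketch of that lookup order:

```ruby
# The HCLOUD_TOKEN environment variable, when set, takes precedence
# over the hetzner_token key in the config file.
def find_hetzner_token(configuration)
  ENV["HCLOUD_TOKEN"] || configuration["hetzner_token"]
end

configuration = { "hetzner_token" => "token-from-config" }

ENV.delete("HCLOUD_TOKEN")
puts find_hetzner_token(configuration) # => token-from-config

ENV["HCLOUD_TOKEN"] = "token-from-env"
puts find_hetzner_token(configuration) # => token-from-env
```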
data/bin/build.sh
ADDED

data/lib/hetzner/infra/firewall.rb
CHANGED

```diff
@@ -5,7 +5,9 @@ module Hetzner
       @cluster_name = cluster_name
     end
 
-    def create
+    def create(ha:, networks:)
+      @ha = ha
+      @networks = networks
       puts
 
       if firewall = find_firewall
@@ -16,16 +18,21 @@ module Hetzner
 
       puts "Creating firewall..."
 
-      response = hetzner_client.post("/firewalls",
+      response = hetzner_client.post("/firewalls", create_firewall_config).body
       puts "...firewall created."
       puts
 
       JSON.parse(response)["firewall"]["id"]
     end
 
-    def delete
+    def delete(servers)
       if firewall = find_firewall
         puts "Deleting firewall..."
+
+        servers.each do |server|
+          hetzner_client.post("/firewalls/#{firewall["id"]}/actions/remove_from_resources", remove_targets_config(server["id"]))
+        end
+
         hetzner_client.delete("/firewalls", firewall["id"])
         puts "...firewall deleted."
       else
@@ -37,64 +44,79 @@ module Hetzner
 
     private
 
-    attr_reader :hetzner_client, :cluster_name, :firewall
+    attr_reader :hetzner_client, :cluster_name, :firewall, :ha, :networks
+
+    def create_firewall_config
+      rules = [
+        {
+          "description": "Allow port 22 (SSH)",
+          "direction": "in",
+          "protocol": "tcp",
+          "port": "22",
+          "source_ips": networks,
+          "destination_ips": []
+        },
+        {
+          "description": "Allow ICMP (ping)",
+          "direction": "in",
+          "protocol": "icmp",
+          "port": nil,
+          "source_ips": [
+            "0.0.0.0/0",
+            "::/0"
+          ],
+          "destination_ips": []
+        },
+        {
+          "description": "Allow all TCP traffic between nodes on the private network",
+          "direction": "in",
+          "protocol": "tcp",
+          "port": "any",
+          "source_ips": [
+            "10.0.0.0/16"
+          ],
+          "destination_ips": []
+        },
+        {
+          "description": "Allow all UDP traffic between nodes on the private network",
+          "direction": "in",
+          "protocol": "udp",
+          "port": "any",
+          "source_ips": [
+            "10.0.0.0/16"
+          ],
+          "destination_ips": []
+        }
+      ]
+
+      unless ha
+        rules << {
+          "description": "Allow port 6443 (Kubernetes API server)",
+          "direction": "in",
+          "protocol": "tcp",
+          "port": "6443",
+          "source_ips": [
+            "0.0.0.0/0",
+            "::/0"
+          ],
+          "destination_ips": []
+        }
+      end
 
-    def firewall_config
       {
         name: cluster_name,
-        rules:
-
-
-
-
-
-
-            "0.0.0.0/0",
-            "::/0"
-          ],
-          "destination_ips": []
-        },
-        {
-          "description": "Allow ICMP (ping)",
-          "direction": "in",
-          "protocol": "icmp",
-          "port": nil,
-          "source_ips": [
-            "0.0.0.0/0",
-            "::/0"
-          ],
-          "destination_ips": []
-        },
-        {
-          "description": "Allow port 6443 (Kubernetes API server)",
-          "direction": "in",
-          "protocol": "tcp",
-          "port": "6443",
-          "source_ips": [
-            "0.0.0.0/0",
-            "::/0"
-          ],
-          "destination_ips": []
-        },
-        {
-          "description": "Allow all TCP traffic between nodes on the private network",
-          "direction": "in",
-          "protocol": "tcp",
-          "port": "any",
-          "source_ips": [
-            "10.0.0.0/16"
-          ],
-          "destination_ips": []
-        },
+        rules: rules
+      }
+    end
+
+    def remove_targets_config(server_id)
+      {
+        "remove_from": [
           {
-            "
-
-
-            "
-            "source_ips": [
-              "10.0.0.0/16"
-            ],
-            "destination_ips": []
+            "server": {
+              "id": server_id
+            },
+            "type": "server"
           }
         ]
       }
```
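The key behavioural change in `create_firewall_config` above is that SSH is now restricted to the configured networks, and port 6443 is only opened when the cluster is not HA (an HA cluster reaches the API through the load balancer instead). A condensed sketch of that branch, with the rule bodies trimmed to the fields that matter here:

```ruby
# Condensed version of create_firewall_config: SSH is limited to the
# given networks, and 6443 is only exposed for single-master clusters.
def firewall_rules(ha:, networks:)
  rules = [
    { description: "Allow port 22 (SSH)", protocol: "tcp", port: "22", source_ips: networks },
    { description: "Allow ICMP (ping)", protocol: "icmp", source_ips: ["0.0.0.0/0", "::/0"] },
    { description: "Allow all TCP traffic between nodes on the private network", protocol: "tcp", port: "any", source_ips: ["10.0.0.0/16"] },
    { description: "Allow all UDP traffic between nodes on the private network", protocol: "udp", port: "any", source_ips: ["10.0.0.0/16"] }
  ]

  unless ha
    rules << { description: "Allow port 6443 (Kubernetes API server)", protocol: "tcp", port: "6443", source_ips: ["0.0.0.0/0", "::/0"] }
  end

  rules
end

puts firewall_rules(ha: true,  networks: ["1.2.3.0/24"]).size # => 4
puts firewall_rules(ha: false, networks: ["1.2.3.0/24"]).size # => 5
```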
data/lib/hetzner/infra/load_balancer.rb
CHANGED

```diff
@@ -19,7 +19,7 @@ module Hetzner
 
       puts "Creating API load_balancer..."
 
-      response = hetzner_client.post("/load_balancers",
+      response = hetzner_client.post("/load_balancers", create_load_balancer_config).body
       puts "...API load balancer created."
       puts
 
@@ -29,6 +29,9 @@ module Hetzner
     def delete(ha:)
       if load_balancer = find_load_balancer
         puts "Deleting API load balancer..." unless ha
+
+        hetzner_client.post("/load_balancers/#{load_balancer["id"]}/actions/remove_target", remove_targets_config)
+
         hetzner_client.delete("/load_balancers", load_balancer["id"])
         puts "...API load balancer deleted." unless ha
       elsif ha
@@ -46,7 +49,7 @@ module Hetzner
       "#{cluster_name}-api"
     end
 
-    def
+    def create_load_balancer_config
       {
         "algorithm": {
           "type": "round_robin"
@@ -76,6 +79,15 @@ module Hetzner
       }
     end
 
+    def remove_targets_config
+      {
+        "label_selector": {
+          "selector": "cluster=#{cluster_name},role=master"
+        },
+        "type": "label_selector"
+      }
+    end
+
     def find_load_balancer
       hetzner_client.get("/load_balancers")["load_balancers"].detect{ |load_balancer| load_balancer["name"] == load_balancer_name }
     end
```
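The new `remove_targets_config` detaches the masters from the load balancer with a single label-selector payload rather than one call per server. A sketch of the payload it builds:

```ruby
# Builds the remove_target payload: all servers labelled as masters
# of this cluster are detached from the load balancer in one call.
def remove_targets_config(cluster_name)
  {
    label_selector: {
      selector: "cluster=#{cluster_name},role=master"
    },
    type: "label_selector"
  }
end

payload = remove_targets_config("test")
puts payload[:label_selector][:selector] # => cluster=test,role=master
```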
data/lib/hetzner/infra/ssh_key.rb
CHANGED

```diff
@@ -5,15 +5,15 @@ module Hetzner
       @cluster_name = cluster_name
     end
 
-    def create(
-      @
+    def create(public_ssh_key_path:)
+      @public_ssh_key_path = public_ssh_key_path
 
       puts
 
-      if
+      if (public_ssh_key = find_public_ssh_key)
         puts "SSH key already exists, skipping."
         puts
-        return
+        return public_ssh_key["id"]
       end
 
       puts "Creating SSH key..."
@@ -26,13 +26,13 @@ module Hetzner
       JSON.parse(response)["ssh_key"]["id"]
     end
 
-    def delete(
-      @
+    def delete(public_ssh_key_path:)
+      @public_ssh_key_path = public_ssh_key_path
 
-      if
-        if
+      if (public_ssh_key = find_public_ssh_key)
+        if public_ssh_key["name"] == cluster_name
           puts "Deleting ssh_key..."
-          hetzner_client.delete("/ssh_keys",
+          hetzner_client.delete("/ssh_keys", public_ssh_key["id"])
           puts "...ssh_key deleted."
         else
           puts "The SSH key existed before creating the cluster, so I won't delete it."
@@ -46,24 +46,24 @@ module Hetzner
 
     private
 
-    attr_reader :hetzner_client, :cluster_name, :
+    attr_reader :hetzner_client, :cluster_name, :public_ssh_key_path
 
-    def
-      @
+    def public_ssh_key
+      @public_ssh_key ||= File.read(public_ssh_key_path).chop
     end
 
     def ssh_key_config
       {
         name: cluster_name,
-
+        public_ssh_key: public_ssh_key
       }
     end
 
     def fingerprint
-      @fingerprint ||= ::SSHKey.fingerprint(
+      @fingerprint ||= ::SSHKey.fingerprint(public_ssh_key)
     end
 
-    def
+    def find_public_ssh_key
       key = hetzner_client.get("/ssh_keys")["ssh_keys"].detect do |ssh_key|
         ssh_key["fingerprint"] == fingerprint
       end
```
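With this change `create` is idempotent in a useful way: if a key with the same fingerprint already exists in the project, it returns that key's id instead of returning nothing. A sketch with a stubbed key list (the `existing_keys` array stands in for the `/ssh_keys` API response):

```ruby
# Returns the id of an existing key with a matching fingerprint,
# or nil when the key still needs to be created.
def find_public_ssh_key_id(existing_keys, fingerprint)
  key = existing_keys.detect { |ssh_key| ssh_key["fingerprint"] == fingerprint }
  key && key["id"]
end

existing_keys = [
  { "id" => 42, "name" => "test", "fingerprint" => "aa:bb:cc" }
]

puts find_public_ssh_key_id(existing_keys, "aa:bb:cc").inspect # => 42
puts find_public_ssh_key_id(existing_keys, "dd:ee:ff").inspect # => nil
```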
data/lib/hetzner/k3s/cli.rb
CHANGED
```diff
@@ -1,8 +1,11 @@
 require "thor"
 require "http"
 require "sshkey"
+require 'ipaddr'
+require 'open-uri'
 
 require_relative "cluster"
+require_relative "version"
 
 module Hetzner
   module K3s
@@ -11,13 +14,18 @@ module Hetzner
       true
     end
 
+    desc "version", "Print the version"
+    def version
+      puts Hetzner::K3s::VERSION
+    end
+
     desc "create-cluster", "Create a k3s cluster in Hetzner Cloud"
     option :config_file, required: true
 
     def create_cluster
       validate_config_file :create
 
-      Cluster.new(hetzner_client: hetzner_client).create configuration: configuration
+      Cluster.new(hetzner_client: hetzner_client, hetzner_token: find_hetzner_token).create configuration: configuration
     end
 
     desc "delete-cluster", "Delete an existing k3s cluster in Hetzner Cloud"
@@ -25,7 +33,7 @@ module Hetzner
 
     def delete_cluster
       validate_config_file :delete
-      Cluster.new(hetzner_client: hetzner_client).delete configuration: configuration
+      Cluster.new(hetzner_client: hetzner_client, hetzner_token: find_hetzner_token).delete configuration: configuration
     end
 
     desc "upgrade-cluster", "Upgrade an existing k3s cluster in Hetzner Cloud to a new version"
@@ -35,7 +43,7 @@ module Hetzner
 
     def upgrade_cluster
       validate_config_file :upgrade
-      Cluster.new(hetzner_client: hetzner_client).upgrade configuration: configuration, new_k3s_version: options[:new_k3s_version], config_file: options[:config_file]
+      Cluster.new(hetzner_client: hetzner_client, hetzner_token: find_hetzner_token).upgrade configuration: configuration, new_k3s_version: options[:new_k3s_version], config_file: options[:config_file]
     end
 
     desc "releases", "List available k3s releases"
@@ -75,7 +83,9 @@ module Hetzner
 
       case action
       when :create
-
+        validate_public_ssh_key
+        validate_private_ssh_key
+        validate_ssh_allowed_networks
         validate_location
         validate_k3s_version
         validate_masters
@@ -101,12 +111,26 @@ module Hetzner
       end
     end
 
+    def valid_token?
+      return @valid unless @valid.nil?
+
+      begin
+        token = find_hetzner_token
+        @hetzner_client = Hetzner::Client.new(token: token)
+        response = hetzner_client.get("/locations")
+        error_code = response.dig("error", "code")
+        @valid = if error_code and error_code.size > 0
+          false
+        else
+          true
+        end
+      rescue
+        @valid = false
+      end
+    end
+
     def validate_token
-
-      @hetzner_client = Hetzner::Client.new(token: token)
-      hetzner_client.get("/locations")
-    rescue
-      errors << "Invalid Hetzner Cloid token"
+      errors << "Invalid Hetzner Cloud token" unless valid_token?
     end
 
     def validate_cluster_name
@@ -124,16 +148,25 @@ module Hetzner
       errors << "Invalid path for the kubeconfig"
     end
 
-    def
-      path = File.expand_path(configuration.dig("
+    def validate_public_ssh_key
+      path = File.expand_path(configuration.dig("public_ssh_key_path"))
       errors << "Invalid Public SSH key path" and return unless File.exists? path
 
       key = File.read(path)
-      errors << "Public SSH key is invalid" unless ::SSHKey.valid_ssh_public_key?
+      errors << "Public SSH key is invalid" unless ::SSHKey.valid_ssh_public_key?(key)
     rescue
       errors << "Invalid Public SSH key path"
     end
 
+    def validate_private_ssh_key
+      return unless (private_ssh_key_path = configuration.dig("private_ssh_key_path"))
+
+      path = File.expand_path(private_ssh_key_path)
+      errors << "Invalid Private SSH key path" and return unless File.exists?(path)
+    rescue
+      errors << "Invalid Private SSH key path"
+    end
+
     def validate_kubeconfig_path_must_exist
       path = File.expand_path configuration.dig("kubeconfig_path")
       errors << "kubeconfig path is invalid" and return unless File.exists? path
@@ -143,6 +176,7 @@ module Hetzner
     end
 
     def server_types
+      return [] unless valid_token?
       @server_types ||= hetzner_client.get("/server_types")["server_types"].map{ |server_type| server_type["name"] }
     rescue
       @errors << "Cannot fetch server types with Hetzner API, please try again later"
@@ -150,13 +184,15 @@ module Hetzner
     end
 
     def locations
+      return [] unless valid_token?
       @locations ||= hetzner_client.get("/locations")["locations"].map{ |location| location["name"] }
     rescue
       @errors << "Cannot fetch locations with Hetzner API, please try again later"
-
+      []
     end
 
     def validate_location
+      return if locations.empty? && !valid_token?
       errors << "Invalid location - available locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland)" unless locations.include? configuration.dig("location")
     end
 
@@ -205,14 +241,22 @@ module Hetzner
       begin
         worker_node_pools = configuration.dig("worker_node_pools")
       rescue
-
+        unless schedule_workloads_on_masters?
+          errors << "Invalid node pools configuration"
+          return
+        end
+      end
+
+      if worker_node_pools.nil? && schedule_workloads_on_masters?
         return
       end
 
       if !worker_node_pools.is_a? Array
         errors << "Invalid node pools configuration"
       elsif worker_node_pools.size == 0
-
+        unless schedule_workloads_on_masters?
+          errors << "At least one node pool is required in order to schedule workloads"
+        end
       elsif worker_node_pools.map{ |worker_node_pool| worker_node_pool["name"]}.uniq.size != worker_node_pools.size
         errors << "Each node pool must have an unique name"
       elsif server_types
@@ -222,6 +266,11 @@ module Hetzner
       end
     end
 
+    def schedule_workloads_on_masters?
+      schedule_workloads_on_masters = configuration.dig("schedule_workloads_on_masters")
+      schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
+    end
+
     def validate_new_k3s_version_must_be_more_recent
       return if options[:force] == "true"
       return unless kubernetes_client
@@ -265,7 +314,7 @@ module Hetzner
         instance_group_errors << "#{instance_group_type} is in an invalid format"
       end
 
-      unless server_types.include?(instance_group["instance_type"])
+      unless !valid_token? or server_types.include?(instance_group["instance_type"])
         instance_group_errors << "#{instance_group_type} has an invalid instance type"
       end
 
@@ -290,16 +339,62 @@ module Hetzner
       config_hash = YAML.load_file(File.expand_path(configuration["kubeconfig_path"]))
       config_hash['current-context'] = configuration["cluster_name"]
       @kubernetes_client = K8s::Client.config(K8s::Config.new(config_hash))
-    rescue
       errors << "Cannot connect to the Kubernetes cluster"
       false
     end
 
-
     def validate_verify_host_key
-      return unless [true, false].include?(configuration.fetch("
+      return unless [true, false].include?(configuration.fetch("public_ssh_key_path", false))
       errors << "Please set the verify_host_key option to either true or false"
     end
+
+    def find_hetzner_token
+      @token = ENV["HCLOUD_TOKEN"]
+      return @token if @token
+      @token = configuration.dig("hetzner_token")
+    end
+
+    def validate_ssh_allowed_networks
+      networks ||= configuration.dig("ssh_allowed_networks")
+
+      if networks.nil? or networks.empty?
+        errors << "At least one network/IP range must be specified for SSH access"
+        return
+      end
+
+      invalid_networks = networks.reject do |network|
+        IPAddr.new(network) rescue false
+      end
+
+      unless invalid_networks.empty?
+        invalid_networks.each do |network|
+          errors << "The network #{network} is an invalid range"
+        end
+      end
+
+      invalid_ranges = networks.reject do |network|
+        network.include? "/"
+      end
+
+      unless invalid_ranges.empty?
+        invalid_ranges.each do |network|
+          errors << "Please use the CIDR notation for the networks to avoid ambiguity"
+        end
+      end
+
+      return unless invalid_networks.empty?
+
+      current_ip = URI.open('http://whatismyip.akamai.com').read
+
+      current_ip_networks = networks.detect do |network|
+        IPAddr.new(network).include?(current_ip) rescue false
+      end
+
+      unless current_ip_networks
+        errors << "Your current IP #{current_ip} is not included into any of the networks you've specified, so we won't be able to SSH into the nodes"
+      end
+    end
+
   end
 end
 end
```
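`validate_ssh_allowed_networks` above performs three checks: at least one entry must be present, each entry must parse with `IPAddr`, and CIDR notation is required. A trimmed, standalone version of that logic (without the current-IP lookup, which needs network access):

```ruby
require "ipaddr"

# Returns the validation errors for the ssh_allowed_networks setting.
def ssh_network_errors(networks)
  return ["At least one network/IP range must be specified for SSH access"] if networks.nil? || networks.empty?

  errors = []

  networks.each do |network|
    valid = begin
      IPAddr.new(network)
      true
    rescue IPAddr::InvalidAddressError
      false
    end

    errors << "The network #{network} is an invalid range" unless valid
    errors << "Please use the CIDR notation for the networks to avoid ambiguity" unless network.include?("/")
  end

  errors
end

puts ssh_network_errors(["0.0.0.0/0"]).inspect # => []
puts ssh_network_errors(["1.2.3.4"]).inspect   # flagged: valid IP, but no CIDR suffix
puts ssh_network_errors([]).size               # => 1
```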
data/lib/hetzner/k3s/cluster.rb
CHANGED
```diff
@@ -16,21 +16,25 @@ require_relative "../k3s/client_patch"
 
 
 class Cluster
-  def initialize(hetzner_client:)
+  def initialize(hetzner_client:, hetzner_token:)
     @hetzner_client = hetzner_client
+    @hetzner_token = hetzner_token
   end
 
   def create(configuration:)
-    @
+    @configuration = configuration
     @cluster_name = configuration.dig("cluster_name")
     @kubeconfig_path = File.expand_path(configuration.dig("kubeconfig_path"))
-    @
+    @public_ssh_key_path = File.expand_path(configuration.dig("public_ssh_key_path"))
+    private_ssh_key_path = configuration.dig("private_ssh_key_path")
+    @private_ssh_key_path = File.expand_path(private_ssh_key_path) if private_ssh_key_path
     @k3s_version = configuration.dig("k3s_version")
     @masters_config = configuration.dig("masters")
-    @worker_node_pools = configuration
+    @worker_node_pools = find_worker_node_pools(configuration)
     @location = configuration.dig("location")
     @verify_host_key = configuration.fetch("verify_host_key", false)
     @servers = []
+    @networks = configuration.dig("ssh_allowed_networks")
 
     create_resources
 
@@ -46,7 +50,7 @@ class Cluster
   def delete(configuration:)
     @cluster_name = configuration.dig("cluster_name")
     @kubeconfig_path = File.expand_path(configuration.dig("kubeconfig_path"))
-    @
+    @public_ssh_key_path = File.expand_path(configuration.dig("public_ssh_key_path"))
 
     delete_resources
   end
@@ -63,13 +67,17 @@ class Cluster
 
   private
 
+  def find_worker_node_pools(configuration)
+    configuration.fetch("worker_node_pools", [])
+  end
+
   attr_accessor :servers
 
   attr_reader :hetzner_client, :cluster_name, :kubeconfig_path, :k3s_version,
               :masters_config, :worker_node_pools,
-              :location, :
+              :location, :public_ssh_key_path, :kubernetes_client,
               :hetzner_token, :tls_sans, :new_k3s_version, :configuration,
-              :config_file, :verify_host_key
+              :config_file, :verify_host_key, :networks, :private_ssh_key_path, :configuration
 
 
   def latest_k3s_version
@@ -78,10 +86,13 @@ class Cluster
   end
 
   def create_resources
+    master_instance_type = masters_config["instance_type"]
+    masters_count = masters_config["instance_count"]
+
     firewall_id = Hetzner::Firewall.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).create
+    ).create(ha: (masters_count > 1), networks: networks)
 
     network_id = Hetzner::Network.new(
       hetzner_client: hetzner_client,
@@ -91,13 +102,10 @@ class Cluster
     ssh_key_id = Hetzner::SSHKey.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).create(
+    ).create(public_ssh_key_path: public_ssh_key_path)
 
     server_configs = []
 
-    master_instance_type = masters_config["instance_type"]
-    masters_count = masters_config["instance_count"]
-
     masters_count.times do |i|
       server_configs << {
         location: location,
@@ -150,42 +158,15 @@ class Cluster
   end
 
   def delete_resources
-
-
-
-
-
-    threads = servers.map do |node|
-      Thread.new do
-        Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).delete(server_name: node.metadata[:name])
-      end
-    end
-
-    threads.each(&:join) unless threads.empty?
-    end
-  rescue Timeout::Error, Excon::Error::Socket
-    puts "Unable to fetch nodes from Kubernetes API. Is the cluster online?"
-  end
-
-  # Deleting nodes defined in the config file just in case there are leftovers i.e. nodes that
-  # were not part of the cluster for some reason
-
-    threads = all_servers.map do |server|
-      Thread.new do
-        Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).delete(server_name: server["name"])
-      end
-    end
-
-    threads.each(&:join) unless threads.empty?
-
-    puts
-
-    sleep 5 # give time for the servers to actually be deleted
+    Hetzner::LoadBalancer.new(
+      hetzner_client: hetzner_client,
+      cluster_name: cluster_name
+    ).delete(ha: (masters.size > 1))
 
     Hetzner::Firewall.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).delete
+    ).delete(all_servers)
 
     Hetzner::Network.new(
       hetzner_client: hetzner_client,
@@ -195,13 +176,15 @@ class Cluster
     Hetzner::SSHKey.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).delete(
+    ).delete(public_ssh_key_path: public_ssh_key_path)
 
-
-
-
-
+    threads = all_servers.map do |server|
+      Thread.new do
+        Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).delete(server_name: server["name"])
+      end
+    end
 
+    threads.each(&:join) unless threads.empty?
   end
 
   def upgrade_cluster
@@ -231,6 +214,8 @@ class Cluster
     server = master == first_master ? " --cluster-init " : " --server https://#{first_master_private_ip}:6443 "
     flannel_interface = find_flannel_interface(master)
 
+    taint = schedule_workloads_on_masters? ? " " : " --node-taint CriticalAddonsOnly=true:NoExecute "
+
     <<~EOF
     curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="#{k3s_version}" K3S_TOKEN="#{k3s_token}" INSTALL_K3S_EXEC="server \
     --disable-cloud-controller \
@@ -247,8 +232,9 @@ class Cluster
     --kube-proxy-arg="metrics-bind-address=0.0.0.0" \
     --kube-scheduler-arg="address=0.0.0.0" \
     --kube-scheduler-arg="bind-address=0.0.0.0" \
-
+    #{taint} \
     --kubelet-arg="cloud-provider=external" \
+    --advertise-address=$(hostname -I | awk '{print $2}') \
     --node-ip=$(hostname -I | awk '{print $2}') \
     --node-external-ip=$(hostname -I | awk '{print $1}') \
     --flannel-iface=#{flannel_interface} \
@@ -336,7 +322,7 @@ class Cluster
   end
 
 
-    manifest =
+    manifest = fetch_manifest("https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml")
 
     File.write("/tmp/cloud-controller-manager.yaml", manifest)
 
@@ -361,6 +347,13 @@ class Cluster
     retry
   end
 
+  def fetch_manifest(url)
+    retries ||= 1
+    HTTP.follow.get(url).body
+  rescue
+    retry if (retries += 1) <= 10
+  end
+
   def deploy_system_upgrade_controller
     puts
     puts "Deploying k3s System Upgrade Controller..."
@@ -465,7 +458,13 @@ class Cluster
     public_ip = server.dig("public_net", "ipv4", "ip")
     output = ""
 
-
+    params = { verify_host_key: (verify_host_key ? :always : :never) }
+
+    if private_ssh_key_path
+      params[:keys] = [private_ssh_key_path]
+    end
+
+    Net::SSH.start(public_ip, "root", params) do |session|
       session.exec!(command) do |channel, stream, data|
         output << data
         puts data if print_output
@@ -476,6 +475,10 @@ class Cluster
     retry unless e.message =~ /Too many authentication failures/
   rescue Net::SSH::ConnectionTimeout, Errno::ECONNREFUSED, Errno::ENETUNREACH, Errno::EHOSTUNREACH
     retry
+  rescue Net::SSH::AuthenticationFailed
+    puts
+    puts "Cannot continue: SSH authentication failed. Please ensure that the private SSH key is correct."
+    exit 1
   rescue Net::SSH::HostKeyMismatch
     puts
     puts "Cannot continue: Unable to SSH into server with IP #{public_ip} because the existing fingerprint in the known_hosts file does not match that of the actual host key."
@@ -501,7 +504,7 @@ class Cluster
   end
 
   def all_servers
-    @all_servers ||= hetzner_client.get("/servers")["servers"]
+    @all_servers ||= hetzner_client.get("/servers")["servers"].select{ |server| belongs_to_cluster?(server) == true }
   end
 
   def masters
@@ -565,6 +568,8 @@ class Cluster
     gsub("default", cluster_name)
 
     File.write(kubeconfig_path, kubeconfig)
+
+    FileUtils.chmod "go-r", kubeconfig_path
   end
 
   def ugrade_plan_manifest_path
@@ -624,4 +629,13 @@ class Cluster
     temp_file_path
   end
 
+  def belongs_to_cluster?(server)
+    server.dig("labels", "cluster") == cluster_name
+  end
+
+  def schedule_workloads_on_masters?
+    schedule_workloads_on_masters = configuration.dig("schedule_workloads_on_masters")
+    schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
+  end
+
 end
```
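The new `fetch_manifest` above retries up to 10 times to ride out transient network errors. The same retry pattern in isolation, with a stubbed flaky fetch so the behaviour is visible without network access (`fetch_with_retries` and the lambda are illustrative, not part of the gem):

```ruby
# Retry pattern used by fetch_manifest: rescue, bump a counter,
# and retry the whole body until the attempt limit is reached.
def fetch_with_retries(max_attempts: 10)
  retries ||= 1
  yield
rescue
  retry if (retries += 1) <= max_attempts
end

attempts = 0
result = fetch_with_retries do
  attempts += 1
  raise "temporary network issue" if attempts < 3
  "manifest body"
end

puts result   # => manifest body
puts attempts # => 3
```

When the attempt limit is exhausted, the rescue clause falls through and the method returns `nil` instead of raising, which mirrors the behaviour of the method in the diff.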
data/lib/hetzner/k3s/version.rb
CHANGED
metadata
CHANGED
```diff
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: hetzner-k3s
 version: !ruby/object:Gem::Version
-  version: 0.
+  version: 0.4.1
 platform: ruby
 authors:
 - Vito Botta
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2021-
+date: 2021-10-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: thor
@@ -127,6 +127,7 @@ files:
 - LICENSE.txt
 - README.md
 - Rakefile
+- bin/build.sh
 - bin/console
 - bin/setup
 - cluster_config.yaml.example
```