hetzner-k3s 0.3.9 → 0.4.3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 9c3b95ba8775783388acc881cbffd928c3fb00d92b6e6a5369b2bbd47f163aae
-  data.tar.gz: 495ea16d040b3808cb069ef6b03cec04bd7e6dd8f3fe7e623e7e49a1e3dc6eb3
+  metadata.gz: c3e7bedd3a695b38522a25598d9d0523bee9e5120aea6e76c462ba91dc8f3f12
+  data.tar.gz: d54f24ebe59f8b0354c74fdacff6f747169b288eddb99f070456f147523a07b4
 SHA512:
-  metadata.gz: 01a31eca33e328550f1583ff036c9002f8deb784bb76e0cb1e01df0677be9d678bfae1f619354bdb538ced6ba3a95bb5f03180429536c245e822372b452e82a7
-  data.tar.gz: aa92fef9440c4e85afe30bbecb86091f44d0cd693c71b44b141432199f1c710488f6b7dfbf0dd59658d8a9d341414c5c75ea5d162cff12b10949f22cd0ef1cda
+  metadata.gz: 36c460db4001ba080ef27dec517c4c44ce853a75935fee7d0fc5e4a13d689a5e812d762ec842a22503ac07d555a24be545293db9bb838ddcb7d27cd1866d9a1e
+  data.tar.gz: 2e8f67ca3c74292b7ad3c3a2098b1a4c8d096b1cb01677e8cc94ae199ee4b0df25eb869d2400b32451da6475a7518c7265cd1b70ec6bb0090e25b76047a03e4e
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    hetzner-k3s (0.3.9)
+    hetzner-k3s (0.4.0)
       bcrypt_pbkdf
       ed25519
       http
data/README.md CHANGED
@@ -25,7 +25,7 @@ All that is needed to use this tool is
 
 ## Installation
 
-Once you have the Ruby runtime up and running, you just need to install the gem:
+Once you have the Ruby runtime up and running (2.7.2 or newer in the 2.7 series is recommended at this stage), you just need to install the gem:
 
 ```bash
 gem install hetzner-k3s
@@ -38,7 +38,7 @@ This will install the `hetzner-k3s` executable in your PATH.
 Alternatively, if you don't want to set up a Ruby runtime but have Docker installed, you can use a container. Run the following from inside the directory where you have the config file for the cluster (described in the next section):
 
 ```bash
-docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.3.8 create-cluster --config-file /cluster/test.yaml
+docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.2 create-cluster --config-file /cluster/test.yaml
 ```
 
 Replace `test.yaml` with the name of your config file.
@@ -53,9 +53,13 @@ hetzner_token: <your token>
 cluster_name: test
 kubeconfig_path: "./kubeconfig"
 k3s_version: v1.21.3+k3s1
-ssh_key_path: "~/.ssh/id_rsa.pub"
+public_ssh_key_path: "~/.ssh/id_rsa.pub"
+private_ssh_key_path: "~/.ssh/id_rsa"
+ssh_allowed_networks:
+  - 0.0.0.0/0
 verify_host_key: false
 location: nbg1
+schedule_workloads_on_masters: false
 masters:
   instance_type: cpx21
   instance_count: 3
@@ -72,7 +76,9 @@ It should hopefully be self explanatory; you can run `hetzner-k3s releases` to s
 
 If you are using Docker, then set `kubeconfig_path` to `/cluster/kubeconfig` so that the kubeconfig is created in the same directory where your config file is.
 
-**Important**: The tool assignes the label `cluster` to each server it creates, with the clsuter name you specify in the config file, as the value. So please ensure you don't create unrelated servers in the same project having
+If you don't want to specify the Hetzner token in the config file (for example if you want to use the tool with CI), you can use the `HCLOUD_TOKEN` environment variable instead, which takes precedence.
+
+**Important**: The tool assigns the label `cluster` to each server it creates, with the cluster name you specify in the config file as the value. So please ensure you don't create unrelated servers in the same project having
 the label `cluster=<cluster name>`, because otherwise they will be deleted if you delete the cluster. I recommend you create a separate Hetzner project for each cluster, see note at the end of this README for more details.
 
 
@@ -235,6 +241,27 @@ I recommend that you create a separate Hetzner project for each cluster, because
 
 ## changelog
 
+- 0.4.3
+  - Fix an issue with SSH key creation
+
+- 0.4.2
+  - Update Hetzner CSI driver to v1.6.0
+  - Update System Upgrade Controller to v0.8.0
+
+- 0.4.1
+  - Allow to optionally specify the path of the private SSH key
+  - Set correct permissions for the kubeconfig file
+  - Retry fetching manifests a few times to allow for temporary network issues
+  - Allow to optionally schedule workloads on masters
+  - Allow clusters with no worker node pools if scheduling is enabled for the masters
+
+- 0.4.0
+  - Ensure the masters are removed from the API load balancer before deleting the load balancer
+  - Ensure the servers are removed from the firewall before deleting it
+  - Allow using an environment variable to specify the Hetzner token
+  - Allow restricting SSH access to the nodes to specific networks
+  - Do not open port 6443 on the nodes if a load balancer is created for an HA cluster
+
 - 0.3.9
   - Add command "version" to print the version of the tool in use
 
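The `HCLOUD_TOKEN` precedence described in the README change above can be sketched as a one-line lookup (a standalone illustration; `find_token` is a hypothetical name — the gem implements this as `find_hetzner_token` in the CLI, shown further down):

```ruby
# Token lookup: the HCLOUD_TOKEN environment variable, when set, takes
# precedence over the hetzner_token key from the YAML config file.
def find_token(configuration)
  ENV["HCLOUD_TOKEN"] || configuration["hetzner_token"]
end
```

This is what makes the tool usable in CI, where the token is injected as a secret instead of being committed to the config file.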
data/bin/build.sh ADDED
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+set -e
+
+
+
+IMAGE="vitobotta/hetzner-k3s"
+
+docker build -t ${IMAGE}:v0.4.3 \
+  --platform=linux/amd64 \
+  --cache-from ${IMAGE}:v0.4.2 \
+  --build-arg BUILDKIT_INLINE_CACHE=1 .
+
+docker push ${IMAGE}:v0.4.3
@@ -5,7 +5,9 @@ module Hetzner
     @cluster_name = cluster_name
   end
 
-  def create
+  def create(ha:, networks:)
+    @ha = ha
+    @networks = networks
     puts
 
     if firewall = find_firewall
@@ -16,16 +18,21 @@ module Hetzner
 
     puts "Creating firewall..."
 
-    response = hetzner_client.post("/firewalls", firewall_config).body
+    response = hetzner_client.post("/firewalls", create_firewall_config).body
     puts "...firewall created."
     puts
 
     JSON.parse(response)["firewall"]["id"]
   end
 
-  def delete
+  def delete(servers)
     if firewall = find_firewall
       puts "Deleting firewall..."
+
+      servers.each do |server|
+        hetzner_client.post("/firewalls/#{firewall["id"]}/actions/remove_from_resources", remove_targets_config(server["id"]))
+      end
+
       hetzner_client.delete("/firewalls", firewall["id"])
       puts "...firewall deleted."
     else
@@ -37,64 +44,79 @@ module Hetzner
 
   private
 
-  attr_reader :hetzner_client, :cluster_name, :firewall
+  attr_reader :hetzner_client, :cluster_name, :firewall, :ha, :networks
+
+  def create_firewall_config
+    rules = [
+      {
+        "description": "Allow port 22 (SSH)",
+        "direction": "in",
+        "protocol": "tcp",
+        "port": "22",
+        "source_ips": networks,
+        "destination_ips": []
+      },
+      {
+        "description": "Allow ICMP (ping)",
+        "direction": "in",
+        "protocol": "icmp",
+        "port": nil,
+        "source_ips": [
+          "0.0.0.0/0",
+          "::/0"
+        ],
+        "destination_ips": []
+      },
+      {
+        "description": "Allow all TCP traffic between nodes on the private network",
+        "direction": "in",
+        "protocol": "tcp",
+        "port": "any",
+        "source_ips": [
+          "10.0.0.0/16"
+        ],
+        "destination_ips": []
+      },
+      {
+        "description": "Allow all UDP traffic between nodes on the private network",
+        "direction": "in",
+        "protocol": "udp",
+        "port": "any",
+        "source_ips": [
+          "10.0.0.0/16"
+        ],
+        "destination_ips": []
+      }
+    ]
+
+    unless ha
+      rules << {
+        "description": "Allow port 6443 (Kubernetes API server)",
+        "direction": "in",
+        "protocol": "tcp",
+        "port": "6443",
+        "source_ips": [
+          "0.0.0.0/0",
+          "::/0"
+        ],
+        "destination_ips": []
+      }
+    end
 
-  def firewall_config
     {
       name: cluster_name,
-      rules: [
-        {
-          "description": "Allow port 22 (SSH)",
-          "direction": "in",
-          "protocol": "tcp",
-          "port": "22",
-          "source_ips": [
-            "0.0.0.0/0",
-            "::/0"
-          ],
-          "destination_ips": []
-        },
-        {
-          "description": "Allow ICMP (ping)",
-          "direction": "in",
-          "protocol": "icmp",
-          "port": nil,
-          "source_ips": [
-            "0.0.0.0/0",
-            "::/0"
-          ],
-          "destination_ips": []
-        },
-        {
-          "description": "Allow port 6443 (Kubernetes API server)",
-          "direction": "in",
-          "protocol": "tcp",
-          "port": "6443",
-          "source_ips": [
-            "0.0.0.0/0",
-            "::/0"
-          ],
-          "destination_ips": []
-        },
-        {
-          "description": "Allow all TCP traffic between nodes on the private network",
-          "direction": "in",
-          "protocol": "tcp",
-          "port": "any",
-          "source_ips": [
-            "10.0.0.0/16"
-          ],
-          "destination_ips": []
-        },
+      rules: rules
+    }
+  end
+
+  def remove_targets_config(server_id)
+    {
+      "remove_from": [
         {
-          "description": "Allow all UDP traffic between nodes on the private network",
-          "direction": "in",
-          "protocol": "udp",
-          "port": "any",
-          "source_ips": [
-            "10.0.0.0/16"
-          ],
-          "destination_ips": []
+          "server": {
+            "id": server_id
+          },
+          "type": "server"
         }
       ]
     }
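The refactored firewall code above builds the rule list dynamically: SSH is restricted to the configured networks, and port 6443 is added only when there is no HA load balancer in front of the API. A simplified standalone sketch of that conditional rule-building (hypothetical `firewall_rules` helper, rule hashes trimmed to the fields that matter here):

```ruby
# Build firewall rules: SSH is limited to the allowed networks; the Kubernetes
# API port 6443 is opened on the nodes only for non-HA clusters, since HA
# clusters reach the API through the load balancer instead.
def firewall_rules(networks:, ha:)
  rules = [
    {
      "description" => "Allow port 22 (SSH)",
      "protocol"    => "tcp",
      "port"        => "22",
      "source_ips"  => networks
    }
  ]

  unless ha
    rules << {
      "description" => "Allow port 6443 (Kubernetes API server)",
      "protocol"    => "tcp",
      "port"        => "6443",
      "source_ips"  => ["0.0.0.0/0", "::/0"]
    }
  end

  rules
end
```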
@@ -19,7 +19,7 @@ module Hetzner
 
     puts "Creating API load_balancer..."
 
-    response = hetzner_client.post("/load_balancers", load_balancer_config).body
+    response = hetzner_client.post("/load_balancers", create_load_balancer_config).body
     puts "...API load balancer created."
     puts
 
@@ -29,6 +29,9 @@ module Hetzner
   def delete(ha:)
     if load_balancer = find_load_balancer
       puts "Deleting API load balancer..." unless ha
+
+      hetzner_client.post("/load_balancers/#{load_balancer["id"]}/actions/remove_target", remove_targets_config)
+
       hetzner_client.delete("/load_balancers", load_balancer["id"])
       puts "...API load balancer deleted." unless ha
     elsif ha
@@ -46,7 +49,7 @@ module Hetzner
     "#{cluster_name}-api"
   end
 
-  def load_balancer_config
+  def create_load_balancer_config
     {
       "algorithm": {
         "type": "round_robin"
@@ -76,6 +79,15 @@ module Hetzner
     }
   end
 
+  def remove_targets_config
+    {
+      "label_selector": {
+        "selector": "cluster=#{cluster_name},role=master"
+      },
+      "type": "label_selector"
+    }
+  end
+
   def find_load_balancer
     hetzner_client.get("/load_balancers")["load_balancers"].detect{ |load_balancer| load_balancer["name"] == load_balancer_name }
   end
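Note that the `remove_target` call above detaches the masters from the load balancer by label selector rather than server id, so one API call covers all masters. A standalone sketch of the payload construction (string-keyed variant of the method in the diff):

```ruby
# Build the remove_target payload: a label selector matching every master of
# this cluster, instead of one call per server id.
def remove_targets_config(cluster_name)
  {
    "label_selector" => {
      "selector" => "cluster=#{cluster_name},role=master"
    },
    "type" => "label_selector"
  }
end
```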
@@ -5,15 +5,15 @@ module Hetzner
     @cluster_name = cluster_name
   end
 
-  def create(ssh_key_path:)
-    @ssh_key_path = ssh_key_path
+  def create(public_ssh_key_path:)
+    @public_ssh_key_path = public_ssh_key_path
 
     puts
 
-    if ssh_key = find_ssh_key
+    if (public_ssh_key = find_public_ssh_key)
       puts "SSH key already exists, skipping."
       puts
-      return ssh_key["id"]
+      return public_ssh_key["id"]
     end
 
     puts "Creating SSH key..."
@@ -26,13 +26,13 @@ module Hetzner
     JSON.parse(response)["ssh_key"]["id"]
   end
 
-  def delete(ssh_key_path:)
-    @ssh_key_path = ssh_key_path
+  def delete(public_ssh_key_path:)
+    @public_ssh_key_path = public_ssh_key_path
 
-    if ssh_key = find_ssh_key
-      if ssh_key["name"] == cluster_name
+    if (public_ssh_key = find_public_ssh_key)
+      if public_ssh_key["name"] == cluster_name
         puts "Deleting ssh_key..."
-        hetzner_client.delete("/ssh_keys", ssh_key["id"])
+        hetzner_client.delete("/ssh_keys", public_ssh_key["id"])
         puts "...ssh_key deleted."
       else
         puts "The SSH key existed before creating the cluster, so I won't delete it."
@@ -46,24 +46,24 @@ module Hetzner
 
   private
 
-  attr_reader :hetzner_client, :cluster_name, :ssh_key_path
+  attr_reader :hetzner_client, :cluster_name, :public_ssh_key_path
 
-  def public_key
-    @public_key ||= File.read(ssh_key_path).chop
+  def public_ssh_key
+    @public_ssh_key ||= File.read(public_ssh_key_path).chop
   end
 
   def ssh_key_config
     {
       name: cluster_name,
-      public_key: public_key
+      public_key: public_ssh_key
     }
   end
 
   def fingerprint
-    @fingerprint ||= ::SSHKey.fingerprint(public_key)
+    @fingerprint ||= ::SSHKey.fingerprint(public_ssh_key)
   end
 
-  def find_ssh_key
+  def find_public_ssh_key
     key = hetzner_client.get("/ssh_keys")["ssh_keys"].detect do |ssh_key|
       ssh_key["fingerprint"] == fingerprint
     end
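One detail worth noting in the code above: the key file is read with `String#chop`, which drops the final character unconditionally — correct only when the file ends with exactly one newline. `chomp` removes only a trailing newline and is a no-op otherwise, so it is the safer choice for this kind of cleanup. A quick comparison:

```ruby
# chop vs chomp on an SSH public key line (placeholder key material):
with_newline    = "ssh-ed25519 AAAA... user@host\n"
without_newline = "ssh-ed25519 AAAA... user@host"

# chop drops the last character no matter what it is...
chopped_ok  = with_newline.chop       # newline removed, as intended
chopped_bad = without_newline.chop    # last character of the key lost!

# ...while chomp only strips a trailing newline.
chomped_ok   = with_newline.chomp     # newline removed
chomped_noop = without_newline.chomp  # unchanged
```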
@@ -1,6 +1,8 @@
 require "thor"
 require "http"
 require "sshkey"
+require 'ipaddr'
+require 'open-uri'
 
 require_relative "cluster"
 require_relative "version"
@@ -23,7 +25,7 @@ module Hetzner
   def create_cluster
     validate_config_file :create
 
-    Cluster.new(hetzner_client: hetzner_client).create configuration: configuration
+    Cluster.new(hetzner_client: hetzner_client, hetzner_token: find_hetzner_token).create configuration: configuration
   end
 
   desc "delete-cluster", "Delete an existing k3s cluster in Hetzner Cloud"
@@ -31,7 +33,7 @@ module Hetzner
 
   def delete_cluster
     validate_config_file :delete
-    Cluster.new(hetzner_client: hetzner_client).delete configuration: configuration
+    Cluster.new(hetzner_client: hetzner_client, hetzner_token: find_hetzner_token).delete configuration: configuration
   end
 
   desc "upgrade-cluster", "Upgrade an existing k3s cluster in Hetzner Cloud to a new version"
@@ -41,7 +43,7 @@ module Hetzner
 
   def upgrade_cluster
     validate_config_file :upgrade
-    Cluster.new(hetzner_client: hetzner_client).upgrade configuration: configuration, new_k3s_version: options[:new_k3s_version], config_file: options[:config_file]
+    Cluster.new(hetzner_client: hetzner_client, hetzner_token: find_hetzner_token).upgrade configuration: configuration, new_k3s_version: options[:new_k3s_version], config_file: options[:config_file]
   end
 
   desc "releases", "List available k3s releases"
@@ -81,7 +83,9 @@ module Hetzner
 
     case action
     when :create
-      validate_ssh_key
+      validate_public_ssh_key
+      validate_private_ssh_key
+      validate_ssh_allowed_networks
       validate_location
       validate_k3s_version
       validate_masters
@@ -107,12 +111,26 @@ module Hetzner
     end
   end
 
+  def valid_token?
+    return @valid unless @valid.nil?
+
+    begin
+      token = find_hetzner_token
+      @hetzner_client = Hetzner::Client.new(token: token)
+      response = hetzner_client.get("/locations")
+      error_code = response.dig("error", "code")
+      @valid = if error_code and error_code.size > 0
+        false
+      else
+        true
+      end
+    rescue
+      @valid = false
+    end
+  end
+
   def validate_token
-    token = configuration.dig("hetzner_token")
-    @hetzner_client = Hetzner::Client.new(token: token)
-    hetzner_client.get("/locations")
-  rescue
-    errors << "Invalid Hetzner Cloid token"
+    errors << "Invalid Hetzner Cloud token" unless valid_token?
   end
 
   def validate_cluster_name
@@ -130,16 +148,25 @@ module Hetzner
     errors << "Invalid path for the kubeconfig"
   end
 
-  def validate_ssh_key
-    path = File.expand_path(configuration.dig("ssh_key_path"))
+  def validate_public_ssh_key
+    path = File.expand_path(configuration.dig("public_ssh_key_path"))
     errors << "Invalid Public SSH key path" and return unless File.exists? path
 
     key = File.read(path)
-    errors << "Public SSH key is invalid" unless ::SSHKey.valid_ssh_public_key? key
+    errors << "Public SSH key is invalid" unless ::SSHKey.valid_ssh_public_key?(key)
   rescue
     errors << "Invalid Public SSH key path"
   end
 
+  def validate_private_ssh_key
+    return unless (private_ssh_key_path = configuration.dig("private_ssh_key_path"))
+
+    path = File.expand_path(private_ssh_key_path)
+    errors << "Invalid Private SSH key path" and return unless File.exists?(path)
+  rescue
+    errors << "Invalid Private SSH key path"
+  end
+
   def validate_kubeconfig_path_must_exist
     path = File.expand_path configuration.dig("kubeconfig_path")
     errors << "kubeconfig path is invalid" and return unless File.exists? path
@@ -149,6 +176,7 @@ module Hetzner
   end
 
   def server_types
+    return [] unless valid_token?
     @server_types ||= hetzner_client.get("/server_types")["server_types"].map{ |server_type| server_type["name"] }
   rescue
     @errors << "Cannot fetch server types with Hetzner API, please try again later"
@@ -156,13 +184,15 @@ module Hetzner
   end
 
   def locations
+    return [] unless valid_token?
     @locations ||= hetzner_client.get("/locations")["locations"].map{ |location| location["name"] }
   rescue
     @errors << "Cannot fetch locations with Hetzner API, please try again later"
-    false
+    []
   end
 
   def validate_location
+    return if locations.empty? && !valid_token?
     errors << "Invalid location - available locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland)" unless locations.include? configuration.dig("location")
   end
 
@@ -211,14 +241,22 @@ module Hetzner
     begin
       worker_node_pools = configuration.dig("worker_node_pools")
     rescue
-      errors << "Invalid node pools configuration"
+      unless schedule_workloads_on_masters?
+        errors << "Invalid node pools configuration"
+        return
+      end
+    end
+
+    if worker_node_pools.nil? && schedule_workloads_on_masters?
       return
     end
 
     if !worker_node_pools.is_a? Array
       errors << "Invalid node pools configuration"
     elsif worker_node_pools.size == 0
-      errors << "At least one node pool is required in order to schedule workloads"
+      unless schedule_workloads_on_masters?
+        errors << "At least one node pool is required in order to schedule workloads"
+      end
     elsif worker_node_pools.map{ |worker_node_pool| worker_node_pool["name"]}.uniq.size != worker_node_pools.size
       errors << "Each node pool must have an unique name"
     elsif server_types
@@ -228,6 +266,11 @@ module Hetzner
     end
   end
 
+  def schedule_workloads_on_masters?
+    schedule_workloads_on_masters = configuration.dig("schedule_workloads_on_masters")
+    schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
+  end
+
   def validate_new_k3s_version_must_be_more_recent
     return if options[:force] == "true"
     return unless kubernetes_client
@@ -271,7 +314,7 @@ module Hetzner
       instance_group_errors << "#{instance_group_type} is in an invalid format"
     end
 
-    unless server_types.include?(instance_group["instance_type"])
+    unless !valid_token? or server_types.include?(instance_group["instance_type"])
       instance_group_errors << "#{instance_group_type} has an invalid instance type"
     end
 
@@ -301,11 +344,58 @@ module Hetzner
     false
   end
 
-
   def validate_verify_host_key
-    return unless [true, false].include?(configuration.fetch("ssh_key_path", false))
+    return unless [true, false].include?(configuration.fetch("public_ssh_key_path", false))
     errors << "Please set the verify_host_key option to either true or false"
   end
+
+  def find_hetzner_token
+    @token = ENV["HCLOUD_TOKEN"]
+    return @token if @token
+    @token = configuration.dig("hetzner_token")
+  end
+
+  def validate_ssh_allowed_networks
+    networks ||= configuration.dig("ssh_allowed_networks")
+
+    if networks.nil? or networks.empty?
+      errors << "At least one network/IP range must be specified for SSH access"
+      return
+    end
+
+    invalid_networks = networks.reject do |network|
+      IPAddr.new(network) rescue false
+    end
+
+    unless invalid_networks.empty?
+      invalid_networks.each do |network|
+        errors << "The network #{network} is an invalid range"
+      end
+    end
+
+    invalid_ranges = networks.reject do |network|
+      network.include? "/"
+    end
+
+    unless invalid_ranges.empty?
+      invalid_ranges.each do |network|
+        errors << "Please use the CIDR notation for the networks to avoid ambiguity"
+      end
+    end
+
+    return unless invalid_networks.empty?
+
+    current_ip = URI.open('http://whatismyip.akamai.com').read
+
+    current_ip_networks = networks.detect do |network|
+      IPAddr.new(network).include?(current_ip) rescue false
+    end
+
+    unless current_ip_networks
+      errors << "Your current IP #{current_ip} is not included in any of the networks you've specified, so we won't be able to SSH into the nodes"
+    end
+  end
+
 end
 end
 end
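The `ssh_allowed_networks` validation added above relies on Ruby's stdlib `IPAddr`: an entry must parse as an IP range, and entries without a `/` are rejected to force unambiguous CIDR notation. A minimal standalone sketch of those two checks (hypothetical helper names, same `reject`-based logic as the diff):

```ruby
require "ipaddr"

# Entries that do not parse as an IP address or range at all.
def invalid_networks(networks)
  networks.reject { |network| IPAddr.new(network) rescue false }
end

# Entries that parse but lack a /prefix, e.g. "1.2.3.4" instead of "1.2.3.4/32".
def non_cidr_networks(networks)
  networks.reject { |network| network.include?("/") }
end
```

Note why both checks are needed: `IPAddr.new("1.2.3.4")` is perfectly valid, so a bare address passes the first check and is only caught by the CIDR-notation check.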
@@ -16,21 +16,25 @@ require_relative "../k3s/client_patch"
 
 
 class Cluster
-  def initialize(hetzner_client:)
+  def initialize(hetzner_client:, hetzner_token:)
     @hetzner_client = hetzner_client
+    @hetzner_token = hetzner_token
   end
 
   def create(configuration:)
-    @hetzner_token = configuration.dig("hetzner_token")
+    @configuration = configuration
     @cluster_name = configuration.dig("cluster_name")
     @kubeconfig_path = File.expand_path(configuration.dig("kubeconfig_path"))
-    @ssh_key_path = File.expand_path(configuration.dig("ssh_key_path"))
+    @public_ssh_key_path = File.expand_path(configuration.dig("public_ssh_key_path"))
+    private_ssh_key_path = configuration.dig("private_ssh_key_path")
+    @private_ssh_key_path = File.expand_path(private_ssh_key_path) if private_ssh_key_path
     @k3s_version = configuration.dig("k3s_version")
     @masters_config = configuration.dig("masters")
-    @worker_node_pools = configuration.dig("worker_node_pools")
+    @worker_node_pools = find_worker_node_pools(configuration)
     @location = configuration.dig("location")
     @verify_host_key = configuration.fetch("verify_host_key", false)
     @servers = []
+    @networks = configuration.dig("ssh_allowed_networks")
 
     create_resources
 
@@ -46,7 +50,7 @@ class Cluster
   def delete(configuration:)
     @cluster_name = configuration.dig("cluster_name")
     @kubeconfig_path = File.expand_path(configuration.dig("kubeconfig_path"))
-    @ssh_key_path = File.expand_path(configuration.dig("ssh_key_path"))
+    @public_ssh_key_path = File.expand_path(configuration.dig("public_ssh_key_path"))
 
     delete_resources
   end
@@ -63,13 +67,17 @@ class Cluster
 
   private
 
+  def find_worker_node_pools(configuration)
+    configuration.fetch("worker_node_pools", [])
+  end
+
   attr_accessor :servers
 
   attr_reader :hetzner_client, :cluster_name, :kubeconfig_path, :k3s_version,
               :masters_config, :worker_node_pools,
-              :location, :ssh_key_path, :kubernetes_client,
+              :location, :public_ssh_key_path, :kubernetes_client,
               :hetzner_token, :tls_sans, :new_k3s_version, :configuration,
-              :config_file, :verify_host_key
+              :config_file, :verify_host_key, :networks, :private_ssh_key_path
 
 
   def latest_k3s_version
@@ -78,10 +86,13 @@ class Cluster
   end
 
   def create_resources
+    master_instance_type = masters_config["instance_type"]
+    masters_count = masters_config["instance_count"]
+
     firewall_id = Hetzner::Firewall.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).create
+    ).create(ha: (masters_count > 1), networks: networks)
 
     network_id = Hetzner::Network.new(
       hetzner_client: hetzner_client,
@@ -91,13 +102,10 @@ class Cluster
     ssh_key_id = Hetzner::SSHKey.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).create(ssh_key_path: ssh_key_path)
+    ).create(public_ssh_key_path: public_ssh_key_path)
 
     server_configs = []
 
-    master_instance_type = masters_config["instance_type"]
-    masters_count = masters_config["instance_count"]
-
     masters_count.times do |i|
       server_configs << {
         location: location,
@@ -150,42 +158,15 @@ class Cluster
   end
 
   def delete_resources
-    # Deleting nodes defined according to Kubernetes first
-    begin
-      Timeout::timeout(5) do
-        servers = kubernetes_client.api("v1").resource("nodes").list
-
-        threads = servers.map do |node|
-          Thread.new do
-            Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).delete(server_name: node.metadata[:name])
-          end
-        end
-
-        threads.each(&:join) unless threads.empty?
-      end
-    rescue Timeout::Error, Excon::Error::Socket
-      puts "Unable to fetch nodes from Kubernetes API. Is the cluster online?"
-    end
-
-    # Deleting nodes defined in the config file just in case there are leftovers i.e. nodes that
-    # were not part of the cluster for some reason
-
-    threads = all_servers.map do |server|
-      Thread.new do
-        Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).delete(server_name: server["name"])
-      end
-    end
-
-    threads.each(&:join) unless threads.empty?
-
-    puts
-
-    sleep 5 # give time for the servers to actually be deleted
+    Hetzner::LoadBalancer.new(
+      hetzner_client: hetzner_client,
+      cluster_name: cluster_name
+    ).delete(ha: (masters.size > 1))
 
     Hetzner::Firewall.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).delete
+    ).delete(all_servers)
 
     Hetzner::Network.new(
       hetzner_client: hetzner_client,
@@ -195,13 +176,15 @@ class Cluster
     Hetzner::SSHKey.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).delete(ssh_key_path: ssh_key_path)
+    ).delete(public_ssh_key_path: public_ssh_key_path)
 
-    Hetzner::LoadBalancer.new(
-      hetzner_client: hetzner_client,
-      cluster_name: cluster_name
-    ).delete(ha: (masters.size > 1))
+    threads = all_servers.map do |server|
+      Thread.new do
+        Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).delete(server_name: server["name"])
+      end
+    end
 
+    threads.each(&:join) unless threads.empty?
   end
 
   def upgrade_cluster
@@ -231,6 +214,8 @@ class Cluster
     server = master == first_master ? " --cluster-init " : " --server https://#{first_master_private_ip}:6443 "
     flannel_interface = find_flannel_interface(master)
 
+    taint = schedule_workloads_on_masters? ? " " : " --node-taint CriticalAddonsOnly=true:NoExecute "
+
     <<~EOF
     curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="#{k3s_version}" K3S_TOKEN="#{k3s_token}" INSTALL_K3S_EXEC="server \
     --disable-cloud-controller \
@@ -247,8 +232,9 @@ class Cluster
     --kube-proxy-arg="metrics-bind-address=0.0.0.0" \
     --kube-scheduler-arg="address=0.0.0.0" \
     --kube-scheduler-arg="bind-address=0.0.0.0" \
-    --node-taint CriticalAddonsOnly=true:NoExecute \
+    #{taint} \
     --kubelet-arg="cloud-provider=external" \
+    --advertise-address=$(hostname -I | awk '{print $2}') \
    --node-ip=$(hostname -I | awk '{print $2}') \
     --node-external-ip=$(hostname -I | awk '{print $1}') \
     --flannel-iface=#{flannel_interface} \
@@ -336,7 +322,7 @@ class Cluster
   end
 
 
-    manifest = HTTP.follow.get("https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml").body
+    manifest = fetch_manifest("https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml")
 
     File.write("/tmp/cloud-controller-manager.yaml", manifest)
 
@@ -361,11 +347,18 @@ class Cluster
     retry
   end
 
+  def fetch_manifest(url)
+    retries ||= 1
+    HTTP.follow.get(url).body
+  rescue
+    retry if (retries += 1) <= 10
+  end
+
   def deploy_system_upgrade_controller
     puts
     puts "Deploying k3s System Upgrade Controller..."
 
-    manifest = HTTP.follow.get("https://github.com/rancher/system-upgrade-controller/releases/download/v0.7.3/system-upgrade-controller.yaml").body
+    manifest = HTTP.follow.get("https://github.com/rancher/system-upgrade-controller/releases/download/v0.8.0/system-upgrade-controller.yaml").body
 
     File.write("/tmp/system-upgrade-controller.yaml", manifest)
 
@@ -414,7 +407,7 @@ class Cluster
   end
 
 
-    manifest = HTTP.follow.get("https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.5.3/deploy/kubernetes/hcloud-csi.yml").body
+    manifest = HTTP.follow.get("https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.6.0/deploy/kubernetes/hcloud-csi.yml").body
 
     File.write("/tmp/csi-driver.yaml", manifest)
 
@@ -465,7 +458,13 @@ class Cluster
     public_ip = server.dig("public_net", "ipv4", "ip")
     output = ""
 
-    Net::SSH.start(public_ip, "root", verify_host_key: (verify_host_key ? :always : :never)) do |session|
+    params = { verify_host_key: (verify_host_key ? :always : :never) }
+
+    if private_ssh_key_path
+      params[:keys] = [private_ssh_key_path]
+    end
+
+    Net::SSH.start(public_ip, "root", params) do |session|
       session.exec!(command) do |channel, stream, data|
         output << data
         puts data if print_output
@@ -476,6 +475,10 @@ class Cluster
     retry unless e.message =~ /Too many authentication failures/
   rescue Net::SSH::ConnectionTimeout, Errno::ECONNREFUSED, Errno::ENETUNREACH, Errno::EHOSTUNREACH
     retry
+  rescue Net::SSH::AuthenticationFailed
+    puts
+    puts "Cannot continue: SSH authentication failed. Please ensure that the private SSH key is correct."
+    exit 1
   rescue Net::SSH::HostKeyMismatch
     puts
     puts "Cannot continue: Unable to SSH into server with IP #{public_ip} because the existing fingerprint in the known_hosts file does not match that of the actual host key."
@@ -565,6 +568,8 @@ class Cluster
       gsub("default", cluster_name)
 
     File.write(kubeconfig_path, kubeconfig)
+
+    FileUtils.chmod "go-r", kubeconfig_path
   end
 
   def ugrade_plan_manifest_path
@@ -628,4 +633,9 @@ class Cluster
     server.dig("labels", "cluster") == cluster_name
   end
 
+  def schedule_workloads_on_masters?
+    schedule_workloads_on_masters = configuration.dig("schedule_workloads_on_masters")
+    schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
+  end
+
 end
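The `fetch_manifest` method added above retries a failed download up to ten times before giving up. The same bounded-retry pattern can be sketched in isolation (a generic variant that re-raises after exhausting its attempts, unlike the gem's method, which returns nil on final failure):

```ruby
# Run the given block, retrying on any StandardError up to `attempts` times.
# Useful for transient network failures such as manifest downloads.
def with_retries(attempts: 10)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError
    retry if tries < attempts
    raise # out of attempts: surface the last error to the caller
  end
end
```

Usage mirrors `fetch_manifest`: `with_retries { HTTP.follow.get(url).body }`.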
@@ -1,5 +1,5 @@
 module Hetzner
   module K3s
-    VERSION = "0.3.9"
+    VERSION = "0.4.3"
   end
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: hetzner-k3s
 version: !ruby/object:Gem::Version
-  version: 0.3.9
+  version: 0.4.3
 platform: ruby
 authors:
 - Vito Botta
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2021-08-20 00:00:00.000000000 Z
+date: 2021-10-17 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: thor
@@ -127,6 +127,7 @@ files:
 - LICENSE.txt
 - README.md
 - Rakefile
+- bin/build.sh
 - bin/console
 - bin/setup
 - cluster_config.yaml.example