hetzner-k3s 0.4.0 → 0.4.4
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 6ee4a4ac2c31ebff805ee20edc3658ffe64be32e50b524ee4af3646e3ffc3a3c
-  data.tar.gz: 8cbc33a2a696b19c8e614932d1daa7fa9beddaf9d69dd8377d909cb382e40f87
+  metadata.gz: 153385c9fce84159b90d6b77bed3e3afebd3cb0739fbd6de1e4cc91e5f1e130f
+  data.tar.gz: '08a51842b854c2a438012c6fde115520e32fb782524a3ddd4048a772db8aeaa3'
 SHA512:
-  metadata.gz: ff2ca466abbd198b3bc76c8854113d90033fb606e9f11152ecf6d079564ee4dcbdab359a5b17229770a7dc531a9674b211d079a2204596efe3ec5b67157bf82e
-  data.tar.gz: a6a16c64b0ada5c4d1a740894df09a9f41ed0110b06cfd5629b1971c64836a5fd38bceebb7898432b275cb677779606ecd780b917d4095eb161987d26c0eecc0
+  metadata.gz: 9d6ac1e71d783a6b01d77863e30fae972c0983cda9e85dfcaac5fda8da0ea7a33565404aaa8efdd0594185f31c85ebc91cf92cdfe06fd9b92422aaf8c158feee
+  data.tar.gz: 4f93a9eb6635d2c757dfbc1205023e563bfbb564ca0b0f9b839ea4b722f9fc2d7db7978ce4ab7da70d2ce23fd3559a301342f3b222522ac4df0554a9f66317bd
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    hetzner-k3s (0.4.0)
+    hetzner-k3s (0.4.3)
       bcrypt_pbkdf
       ed25519
       http
data/README.md CHANGED
@@ -2,7 +2,7 @@
 
 This is a CLI tool - based on a Ruby gem - to quickly create and manage Kubernetes clusters in [Hetzner Cloud](https://www.hetzner.com/cloud) using the lightweight Kubernetes distribution [k3s](https://k3s.io/) from [Rancher](https://rancher.com/).
 
-Hetzner Cloud is an awesome cloud provider which offers a truly great service with the best performance/cost ratio in the market. I highly recommend them if European locations (Germany and Finland) are OK for your projects (the Nuremberg data center has decent latency for US users as well). With Hetzner's Cloud Controller Manager and CSI driver you can provision load balancers and persistent volumes very easily.
+Hetzner Cloud is an awesome cloud provider which offers a truly great service with the best performance/cost ratio in the market. With Hetzner's Cloud Controller Manager and CSI driver you can provision load balancers and persistent volumes very easily.
 
 k3s is my favorite Kubernetes distribution now because it uses much less memory and CPU, leaving more resources to workloads. It is also super quick to deploy because it's a single binary.
 
@@ -25,7 +25,7 @@ All that is needed to use this tool is
 
 ## Installation
 
-Once you have the Ruby runtime up and running, you just need to install the gem:
+Once you have the Ruby runtime up and running (2.7.2 or newer in the 2.7 series is recommended at this stage), you just need to install the gem:
 
 ```bash
 gem install hetzner-k3s
@@ -38,7 +38,7 @@ This will install the `hetzner-k3s` executable in your PATH.
 
 Alternatively, if you don't want to set up a Ruby runtime but have Docker installed, you can use a container. Run the following from inside the directory where you have the config file for the cluster (described in the next section):
 
 ```bash
-docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.3.8 create-cluster --config-file /cluster/test.yaml
+docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.4 create-cluster --config-file /cluster/test.yaml
 ```
 
 Replace `test.yaml` with the name of your config file.
@@ -53,11 +53,13 @@ hetzner_token: <your token>
 cluster_name: test
 kubeconfig_path: "./kubeconfig"
 k3s_version: v1.21.3+k3s1
-ssh_key_path: "~/.ssh/id_rsa.pub"
+public_ssh_key_path: "~/.ssh/id_rsa.pub"
+private_ssh_key_path: "~/.ssh/id_rsa"
 ssh_allowed_networks:
   - 0.0.0.0/0
 verify_host_key: false
 location: nbg1
+schedule_workloads_on_masters: false
 masters:
   instance_type: cpx21
   instance_count: 3
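The new `schedule_workloads_on_masters` setting defaults to off when the key is omitted from the config. A minimal sketch of reading such a flag from the YAML config (the helper here is illustrative, not the gem's own code):

```ruby
require "yaml"

# Illustrative helper (not the gem's code): a missing or false
# schedule_workloads_on_masters is treated as false, true as true.
def schedule_workloads_on_masters?(configuration)
  configuration["schedule_workloads_on_masters"] ? true : false
end

with_flag    = YAML.safe_load("cluster_name: test\nschedule_workloads_on_masters: true")
without_flag = YAML.safe_load("cluster_name: test")

puts schedule_workloads_on_masters?(with_flag)    # true
puts schedule_workloads_on_masters?(without_flag) # false
```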
@@ -76,7 +78,7 @@ If you are using Docker, then set `kubeconfig_path` to `/cluster/kubeconfig` so
 
 If you don't want to specify the Hetzner token in the config file (for example if you want to use the tool with CI), then you can use the `HCLOUD_TOKEN` environment variable instead, which takes precedence.
 
-**Important**: The tool assignes the label `cluster` to each server it creates, with the clsuter name you specify in the config file, as the value. So please ensure you don't create unrelated servers in the same project having
+**Important**: The tool assigns the label `cluster` to each server it creates, with the cluster name you specify in the config file as the value. So please ensure you don't create unrelated servers in the same project having
 the label `cluster=<cluster name>`, because otherwise they will be deleted if you delete the cluster. I recommend you create a separate Hetzner project for each cluster; see the note at the end of this README for more details.
 
 
@@ -84,7 +86,8 @@ If you set `masters.instance_count` to 1 then the tool will create a non highly
 
 You can specify any number of worker node pools, for example to have mixed nodes with different specs for different workloads.
 
-At the moment Hetzner Cloud has three locations: two in Germany (`nbg1`, Nuremberg and `fsn1`, Falkensteing) and one in Finland (`hel1`, Helsinki).
+At the moment Hetzner Cloud has four locations: two in Germany (`nbg1`, Nuremberg and `fsn1`, Falkenstein), one in Finland (`hel1`, Helsinki) and one in the USA (`ash`, Ashburn, Virginia). Please note that the Ashburn, Virginia location has just
+been announced and is limited to AMD instances for now.
 
 For the available instance types and their specs, either check from inside a project when adding a server manually or run the following with your Hetzner token:
 
@@ -239,11 +242,30 @@ I recommend that you create a separate Hetzner project for each cluster, because
 
 ## changelog
 
+- 0.4.4
+  - Add support for the new Ashburn, Virginia (USA) location
+  - Automatically use a placement group so that the instances are all created on different physical hosts for high availability
+
+- 0.4.3
+  - Fix an issue with SSH key creation
+
+- 0.4.2
+  - Update Hetzner CSI driver to v1.6.0
+  - Update System Upgrade Controller to v0.8.0
+
+- 0.4.1
+  - Allow to optionally specify the path of the private SSH key
+  - Set correct permissions for the kubeconfig file
+  - Retry fetching manifests a few times to allow for temporary network issues
+  - Allow to optionally schedule workloads on masters
+  - Allow clusters with no worker node pools if scheduling is enabled for the masters
+
 - 0.4.0
   - Ensure the masters are removed from the API load balancer before deleting the load balancer
   - Ensure the servers are removed from the firewall before deleting it
   - Allow using an environment variable to specify the Hetzner token
   - Allow restricting SSH access to the nodes to specific networks
+  - Do not open the port 6443 on the nodes if a load balancer is created for an HA cluster
 
 - 0.3.9
   - Add command "version" to print the version of the tool in use
data/bin/build.sh ADDED
@@ -0,0 +1,14 @@
+#!/bin/bash
+
+set -e
+
+
+
+IMAGE="vitobotta/hetzner-k3s"
+
+docker build -t ${IMAGE}:v0.4.4 \
+  --platform=linux/amd64 \
+  --cache-from ${IMAGE}:v0.4.3 \
+  --build-arg BUILDKIT_INLINE_CACHE=1 .
+
+docker push vitobotta/hetzner-k3s:v0.4.4
@@ -5,7 +5,8 @@ module Hetzner
       @cluster_name = cluster_name
     end
 
-    def create
+    def create(location:)
+      @location = location
       puts
 
       if network = find_network
@@ -38,7 +39,7 @@ module Hetzner
 
     private
 
-    attr_reader :hetzner_client, :cluster_name
+    attr_reader :hetzner_client, :cluster_name, :location
 
     def network_config
       {
@@ -47,7 +48,7 @@ module Hetzner
         subnets: [
           {
             ip_range: "10.0.0.0/16",
-            network_zone: "eu-central",
+            network_zone: (location == "ash" ? "us-east" : "eu-central"),
             type: "cloud"
          }
         ]
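The `ash` location lives in Hetzner's `us-east` network zone, while the German and Finnish locations share `eu-central`; note that a bare truthiness check on `location` would always pick `us-east`, since a location is always configured, so the zone has to be derived by comparing against `"ash"`. The mapping, isolated as a sketch:

```ruby
# Map a Hetzner Cloud location to its network zone; at the time of this
# release "ash" is the only location in the us-east zone.
def network_zone_for(location)
  location == "ash" ? "us-east" : "eu-central"
end

puts network_zone_for("ash")  # us-east
puts network_zone_for("nbg1") # eu-central
```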
@@ -0,0 +1,55 @@
+module Hetzner
+  class PlacementGroup
+    def initialize(hetzner_client:, cluster_name:)
+      @hetzner_client = hetzner_client
+      @cluster_name = cluster_name
+    end
+
+    def create
+      puts
+
+      if (placement_group = find_placement_group)
+        puts "Placement group already exists, skipping."
+        puts
+        return placement_group["id"]
+      end
+
+      puts "Creating placement group..."
+
+      response = hetzner_client.post("/placement_groups", placement_group_config).body
+
+      puts "...placement group created."
+      puts
+
+      JSON.parse(response)["placement_group"]["id"]
+    end
+
+    def delete
+      if (placement_group = find_placement_group)
+        puts "Deleting placement group..."
+        hetzner_client.delete("/placement_groups", placement_group["id"])
+        puts "...placement group deleted."
+      else
+        puts "Placement group no longer exists, skipping."
+      end
+
+      puts
+    end
+
+    private
+
+    attr_reader :hetzner_client, :cluster_name
+
+    def placement_group_config
+      {
+        name: cluster_name,
+        type: "spread"
+      }
+    end
+
+    def find_placement_group
+      hetzner_client.get("/placement_groups")["placement_groups"].detect { |placement_group| placement_group["name"] == cluster_name }
+    end
+
+  end
+end
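`PlacementGroup#create` above is idempotent: it looks the group up by name first and only issues a POST when nothing matches, so re-running the tool against an existing cluster is safe. The same find-or-create pattern, sketched against a hypothetical in-memory client (the stub is not part of the gem):

```ruby
require "json"

# Hypothetical in-memory stand-in for the Hetzner API client, just enough
# to exercise the find-or-create flow used by PlacementGroup#create.
class StubClient
  Response = Struct.new(:body)

  def initialize
    @groups = []
  end

  def get(_path)
    { "placement_groups" => @groups }
  end

  def post(_path, config)
    group = { "name" => config[:name], "id" => @groups.size + 1 }
    @groups << group
    Response.new(JSON.dump("placement_group" => group))
  end
end

def find_or_create(client, name)
  existing = client.get("/placement_groups")["placement_groups"]
                   .detect { |group| group["name"] == name }
  return existing["id"] if existing

  JSON.parse(client.post("/placement_groups", { name: name, type: "spread" }).body)
      .dig("placement_group", "id")
end

client = StubClient.new
puts find_or_create(client, "test") # 1 (created)
puts find_or_create(client, "test") # 1 (found by name, no second POST)
```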
@@ -5,7 +5,7 @@ module Hetzner
       @cluster_name = cluster_name
     end
 
-    def create(location:, instance_type:, instance_id:, firewall_id:, network_id:, ssh_key_id:)
+    def create(location:, instance_type:, instance_id:, firewall_id:, network_id:, ssh_key_id:, placement_group_id:)
       puts
 
       server_name = "#{cluster_name}-#{instance_type}-#{instance_id}"
@@ -36,7 +36,8 @@ module Hetzner
         labels: {
           cluster: cluster_name,
           role: (server_name =~ /master/ ? "master" : "worker")
-        }
+        },
+        placement_group: placement_group_id
       }
 
       response = hetzner_client.post("/servers", server_config).body
@@ -5,15 +5,15 @@ module Hetzner
       @cluster_name = cluster_name
     end
 
-    def create(ssh_key_path:)
-      @ssh_key_path = ssh_key_path
+    def create(public_ssh_key_path:)
+      @public_ssh_key_path = public_ssh_key_path
 
       puts
 
-      if ssh_key = find_ssh_key
+      if (public_ssh_key = find_public_ssh_key)
         puts "SSH key already exists, skipping."
         puts
-        return ssh_key["id"]
+        return public_ssh_key["id"]
       end
 
       puts "Creating SSH key..."
@@ -26,13 +26,13 @@ module Hetzner
       JSON.parse(response)["ssh_key"]["id"]
     end
 
-    def delete(ssh_key_path:)
-      @ssh_key_path = ssh_key_path
+    def delete(public_ssh_key_path:)
+      @public_ssh_key_path = public_ssh_key_path
 
-      if ssh_key = find_ssh_key
-        if ssh_key["name"] == cluster_name
+      if (public_ssh_key = find_public_ssh_key)
+        if public_ssh_key["name"] == cluster_name
           puts "Deleting ssh_key..."
-          hetzner_client.delete("/ssh_keys", ssh_key["id"])
+          hetzner_client.delete("/ssh_keys", public_ssh_key["id"])
           puts "...ssh_key deleted."
         else
           puts "The SSH key existed before creating the cluster, so I won't delete it."
@@ -46,24 +46,24 @@ module Hetzner
 
     private
 
-    attr_reader :hetzner_client, :cluster_name, :ssh_key_path
+    attr_reader :hetzner_client, :cluster_name, :public_ssh_key_path
 
-    def public_key
-      @public_key ||= File.read(ssh_key_path).chop
+    def public_ssh_key
+      @public_ssh_key ||= File.read(public_ssh_key_path).chop
     end
 
     def ssh_key_config
       {
         name: cluster_name,
-        public_key: public_key
+        public_key: public_ssh_key
       }
     end
 
     def fingerprint
-      @fingerprint ||= ::SSHKey.fingerprint(public_key)
+      @fingerprint ||= ::SSHKey.fingerprint(public_ssh_key)
     end
 
-    def find_ssh_key
+    def find_public_ssh_key
       key = hetzner_client.get("/ssh_keys")["ssh_keys"].detect do |ssh_key|
         ssh_key["fingerprint"] == fingerprint
       end
@@ -83,7 +83,8 @@ module Hetzner
 
         case action
         when :create
-          validate_ssh_key
+          validate_public_ssh_key
+          validate_private_ssh_key
           validate_ssh_allowed_networks
           validate_location
           validate_k3s_version
@@ -147,16 +148,25 @@ module Hetzner
         errors << "Invalid path for the kubeconfig"
       end
 
-      def validate_ssh_key
-        path = File.expand_path(configuration.dig("ssh_key_path"))
+      def validate_public_ssh_key
+        path = File.expand_path(configuration.dig("public_ssh_key_path"))
         errors << "Invalid Public SSH key path" and return unless File.exists? path
 
         key = File.read(path)
-        errors << "Public SSH key is invalid" unless ::SSHKey.valid_ssh_public_key? key
+        errors << "Public SSH key is invalid" unless ::SSHKey.valid_ssh_public_key?(key)
       rescue
         errors << "Invalid Public SSH key path"
       end
 
+      def validate_private_ssh_key
+        return unless (private_ssh_key_path = configuration.dig("private_ssh_key_path"))
+
+        path = File.expand_path(private_ssh_key_path)
+        errors << "Invalid Private SSH key path" and return unless File.exists?(path)
+      rescue
+        errors << "Invalid Private SSH key path"
+      end
+
       def validate_kubeconfig_path_must_exist
         path = File.expand_path configuration.dig("kubeconfig_path")
         errors << "kubeconfig path is invalid" and return unless File.exists? path
@@ -183,7 +193,7 @@ module Hetzner
 
       def validate_location
         return if locations.empty? && !valid_token?
-        errors << "Invalid location - available locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland)" unless locations.include? configuration.dig("location")
+        errors << "Invalid location - available locations: nbg1 (Nuremberg, Germany), fsn1 (Falkenstein, Germany), hel1 (Helsinki, Finland) or ash (Ashburn, Virginia, USA)" unless locations.include? configuration.dig("location")
       end
 
       def find_available_releases
@@ -231,14 +241,22 @@ module Hetzner
        begin
           worker_node_pools = configuration.dig("worker_node_pools")
         rescue
-          errors << "Invalid node pools configuration"
+          unless schedule_workloads_on_masters?
+            errors << "Invalid node pools configuration"
+            return
+          end
+        end
+
+        if worker_node_pools.nil? && schedule_workloads_on_masters?
           return
         end
 
         if !worker_node_pools.is_a? Array
           errors << "Invalid node pools configuration"
         elsif worker_node_pools.size == 0
-          errors << "At least one node pool is required in order to schedule workloads"
+          unless schedule_workloads_on_masters?
+            errors << "At least one node pool is required in order to schedule workloads"
+          end
         elsif worker_node_pools.map{ |worker_node_pool| worker_node_pool["name"]}.uniq.size != worker_node_pools.size
           errors << "Each node pool must have an unique name"
         elsif server_types
  elsif server_types
@@ -248,6 +266,11 @@ module Hetzner
248
266
  end
249
267
  end
250
268
 
269
+ def schedule_workloads_on_masters?
270
+ schedule_workloads_on_masters = configuration.dig("schedule_workloads_on_masters")
271
+ schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
272
+ end
273
+
251
274
  def validate_new_k3s_version_must_be_more_recent
252
275
  return if options[:force] == "true"
253
276
  return unless kubernetes_client
@@ -316,12 +339,13 @@ module Hetzner
         config_hash = YAML.load_file(File.expand_path(configuration["kubeconfig_path"]))
         config_hash['current-context'] = configuration["cluster_name"]
         @kubernetes_client = K8s::Client.config(K8s::Config.new(config_hash))
+      rescue
         errors << "Cannot connect to the Kubernetes cluster"
         false
       end
 
       def validate_verify_host_key
-        return unless [true, false].include?(configuration.fetch("ssh_key_path", false))
+        return unless [true, false].include?(configuration.fetch("public_ssh_key_path", false))
         errors << "Please set the verify_host_key option to either true or false"
       end
 
@@ -11,6 +11,7 @@ require_relative "../infra/network"
 require_relative "../infra/ssh_key"
 require_relative "../infra/server"
 require_relative "../infra/load_balancer"
+require_relative "../infra/placement_group"
 
 require_relative "../k3s/client_patch"
 
@@ -22,12 +23,15 @@ class Cluster
   end
 
   def create(configuration:)
+    @configuration = configuration
     @cluster_name = configuration.dig("cluster_name")
     @kubeconfig_path = File.expand_path(configuration.dig("kubeconfig_path"))
-    @ssh_key_path = File.expand_path(configuration.dig("ssh_key_path"))
+    @public_ssh_key_path = File.expand_path(configuration.dig("public_ssh_key_path"))
+    private_ssh_key_path = configuration.dig("private_ssh_key_path")
+    @private_ssh_key_path = File.expand_path(private_ssh_key_path) if private_ssh_key_path
     @k3s_version = configuration.dig("k3s_version")
     @masters_config = configuration.dig("masters")
-    @worker_node_pools = configuration.dig("worker_node_pools")
+    @worker_node_pools = find_worker_node_pools(configuration)
     @location = configuration.dig("location")
     @verify_host_key = configuration.fetch("verify_host_key", false)
     @servers = []
@@ -47,7 +51,7 @@ class Cluster
   def delete(configuration:)
     @cluster_name = configuration.dig("cluster_name")
     @kubeconfig_path = File.expand_path(configuration.dig("kubeconfig_path"))
-    @ssh_key_path = File.expand_path(configuration.dig("ssh_key_path"))
+    @public_ssh_key_path = File.expand_path(configuration.dig("public_ssh_key_path"))
 
     delete_resources
   end
@@ -64,13 +68,17 @@ class Cluster
 
   private
 
+  def find_worker_node_pools(configuration)
+    configuration.fetch("worker_node_pools", [])
+  end
+
   attr_accessor :servers
 
   attr_reader :hetzner_client, :cluster_name, :kubeconfig_path, :k3s_version,
               :masters_config, :worker_node_pools,
-              :location, :ssh_key_path, :kubernetes_client,
+              :location, :public_ssh_key_path, :kubernetes_client,
               :hetzner_token, :tls_sans, :new_k3s_version, :configuration,
-              :config_file, :verify_host_key, :networks
+              :config_file, :verify_host_key, :networks, :private_ssh_key_path
 
 
   def latest_k3s_version
@@ -82,6 +90,11 @@ class Cluster
     master_instance_type = masters_config["instance_type"]
     masters_count = masters_config["instance_count"]
 
+    placement_group_id = Hetzner::PlacementGroup.new(
+      hetzner_client: hetzner_client,
+      cluster_name: cluster_name
+    ).create
+
     firewall_id = Hetzner::Firewall.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
@@ -90,12 +103,12 @@ class Cluster
     network_id = Hetzner::Network.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).create
+    ).create(location: location)
 
     ssh_key_id = Hetzner::SSHKey.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
-    ).create(ssh_key_path: ssh_key_path)
+    ).create(public_ssh_key_path: public_ssh_key_path)
 
     server_configs = []
 
@@ -106,7 +119,8 @@ class Cluster
         instance_id: "master#{i+1}",
         firewall_id: firewall_id,
         network_id: network_id,
-        ssh_key_id: ssh_key_id
+        ssh_key_id: ssh_key_id,
+        placement_group_id: placement_group_id
       }
     end
 
@@ -129,7 +143,8 @@ class Cluster
           instance_id: "pool-#{worker_node_pool_name}-worker#{i+1}",
           firewall_id: firewall_id,
           network_id: network_id,
-          ssh_key_id: ssh_key_id
+          ssh_key_id: ssh_key_id,
+          placement_group_id: placement_group_id
         }
       end
     end
@@ -151,6 +166,11 @@ class Cluster
   end
 
   def delete_resources
+    Hetzner::PlacementGroup.new(
+      hetzner_client: hetzner_client,
+      cluster_name: cluster_name
+    ).delete
+
     Hetzner::LoadBalancer.new(
       hetzner_client: hetzner_client,
       cluster_name: cluster_name
@@ -169,7 +189,7 @@ class Cluster
     Hetzner::SSHKey.new(
       hetzner_client: hetzner_client,
      cluster_name: cluster_name
-    ).delete(ssh_key_path: ssh_key_path)
+    ).delete(public_ssh_key_path: public_ssh_key_path)
 
     threads = all_servers.map do |server|
       Thread.new do
@@ -207,6 +227,8 @@ class Cluster
     server = master == first_master ? " --cluster-init " : " --server https://#{first_master_private_ip}:6443 "
     flannel_interface = find_flannel_interface(master)
 
+    taint = schedule_workloads_on_masters? ? " " : " --node-taint CriticalAddonsOnly=true:NoExecute "
+
     <<~EOF
     curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="#{k3s_version}" K3S_TOKEN="#{k3s_token}" INSTALL_K3S_EXEC="server \
     --disable-cloud-controller \
@@ -223,7 +245,7 @@ class Cluster
     --kube-proxy-arg="metrics-bind-address=0.0.0.0" \
     --kube-scheduler-arg="address=0.0.0.0" \
     --kube-scheduler-arg="bind-address=0.0.0.0" \
-    --node-taint CriticalAddonsOnly=true:NoExecute \
+    #{taint} \
     --kubelet-arg="cloud-provider=external" \
     --advertise-address=$(hostname -I | awk '{print $2}') \
     --node-ip=$(hostname -I | awk '{print $2}') \
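The `#{taint}` interpolation swaps the `CriticalAddonsOnly=true:NoExecute` taint, which keeps regular workloads off the masters, for a plain space when scheduling on masters is enabled. The selection, isolated:

```ruby
# Choose the master taint fragment interpolated into the k3s install command.
def master_taint(schedule_workloads_on_masters)
  schedule_workloads_on_masters ? " " : " --node-taint CriticalAddonsOnly=true:NoExecute "
end

puts master_taint(false).strip # --node-taint CriticalAddonsOnly=true:NoExecute
puts master_taint(true).strip.empty? # true
```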
@@ -313,7 +335,7 @@ class Cluster
   end
 
 
-    manifest = HTTP.follow.get("https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml").body
+    manifest = fetch_manifest("https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml")
 
     File.write("/tmp/cloud-controller-manager.yaml", manifest)
 
@@ -338,11 +360,18 @@ class Cluster
     retry
   end
 
+  def fetch_manifest(url)
+    retries ||= 1
+    HTTP.follow.get(url).body
+  rescue
+    retry if (retries += 1) <= 10
+  end
+
   def deploy_system_upgrade_controller
     puts
     puts "Deploying k3s System Upgrade Controller..."
 
-    manifest = HTTP.follow.get("https://github.com/rancher/system-upgrade-controller/releases/download/v0.7.3/system-upgrade-controller.yaml").body
+    manifest = HTTP.follow.get("https://github.com/rancher/system-upgrade-controller/releases/download/v0.8.0/system-upgrade-controller.yaml").body
 
     File.write("/tmp/system-upgrade-controller.yaml", manifest)
 
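`fetch_manifest` wraps the download in a bounded retry to ride out transient network issues; note that once the attempts are exhausted the `rescue` clause returns `nil` rather than raising. The same pattern against a stubbed flaky operation (helper names here are hypothetical):

```ruby
# Bounded retry: re-run the block on failure, up to max_attempts tries,
# returning nil (like fetch_manifest) when every attempt fails.
def with_retries(max_attempts: 10)
  attempts ||= 0
  attempts += 1
  yield attempts
rescue StandardError
  retry if attempts < max_attempts
  nil
end

result = with_retries do |attempt|
  raise "transient network error" if attempt < 3
  "manifest contents"
end
puts result # manifest contents

puts with_retries { raise "always down" }.inspect # nil
```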
@@ -391,7 +420,7 @@ class Cluster
   end
 
 
-    manifest = HTTP.follow.get("https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.5.3/deploy/kubernetes/hcloud-csi.yml").body
+    manifest = HTTP.follow.get("https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.6.0/deploy/kubernetes/hcloud-csi.yml").body
 
     File.write("/tmp/csi-driver.yaml", manifest)
 
@@ -442,7 +471,13 @@ class Cluster
     public_ip = server.dig("public_net", "ipv4", "ip")
     output = ""
 
-    Net::SSH.start(public_ip, "root", verify_host_key: (verify_host_key ? :always : :never)) do |session|
+    params = { verify_host_key: (verify_host_key ? :always : :never) }
+
+    if private_ssh_key_path
+      params[:keys] = [private_ssh_key_path]
+    end
+
+    Net::SSH.start(public_ip, "root", params) do |session|
       session.exec!(command) do |channel, stream, data|
         output << data
         puts data if print_output
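Building the options hash first means `:keys` is passed to `Net::SSH.start` only when a private key path was configured; otherwise Net::SSH keeps its default key lookup (SSH agent and the standard `~/.ssh` locations). Just the hash assembly, as a sketch:

```ruby
# Sketch: assemble Net::SSH options, adding :keys only when a path is set.
def ssh_options(verify_host_key:, private_ssh_key_path: nil)
  params = { verify_host_key: (verify_host_key ? :always : :never) }
  params[:keys] = [private_ssh_key_path] if private_ssh_key_path
  params
end

puts ssh_options(verify_host_key: false).inspect
puts ssh_options(verify_host_key: true, private_ssh_key_path: "~/.ssh/id_rsa").inspect
```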
@@ -453,6 +488,10 @@ class Cluster
     retry unless e.message =~ /Too many authentication failures/
   rescue Net::SSH::ConnectionTimeout, Errno::ECONNREFUSED, Errno::ENETUNREACH, Errno::EHOSTUNREACH
     retry
+  rescue Net::SSH::AuthenticationFailed
+    puts
+    puts "Cannot continue: SSH authentication failed. Please ensure that the private SSH key is correct."
+    exit 1
   rescue Net::SSH::HostKeyMismatch
     puts
     puts "Cannot continue: Unable to SSH into server with IP #{public_ip} because the existing fingerprint in the known_hosts file does not match that of the actual host key."
@@ -542,6 +581,8 @@ class Cluster
       gsub("default", cluster_name)
 
     File.write(kubeconfig_path, kubeconfig)
+
+    FileUtils.chmod "go-r", kubeconfig_path
   end
 
   def ugrade_plan_manifest_path
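`FileUtils.chmod "go-r"` removes group and other read permission, so the kubeconfig, which contains cluster credentials, stays readable by its owner only. A quick sketch on a temporary file:

```ruby
require "fileutils"
require "tempfile"

file = Tempfile.new("kubeconfig")
File.write(file.path, "apiVersion: v1\n")

# Strip read access for group and other, as the tool does for the kubeconfig.
FileUtils.chmod "go-r", file.path

# The group/other read bits (0o044) are now cleared.
puts (File.stat(file.path).mode & 0o044).zero? # true
```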
@@ -605,4 +646,9 @@ class Cluster
605
646
  server.dig("labels", "cluster") == cluster_name
606
647
  end
607
648
 
649
+ def schedule_workloads_on_masters?
650
+ schedule_workloads_on_masters = configuration.dig("schedule_workloads_on_masters")
651
+ schedule_workloads_on_masters ? !!schedule_workloads_on_masters : false
652
+ end
653
+
608
654
  end
@@ -1,5 +1,5 @@
 module Hetzner
   module K3s
-    VERSION = "0.4.0"
+    VERSION = "0.4.4"
   end
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: hetzner-k3s
 version: !ruby/object:Gem::Version
-  version: 0.4.0
+  version: 0.4.4
 platform: ruby
 authors:
 - Vito Botta
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2021-08-24 00:00:00.000000000 Z
+date: 2021-11-03 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: thor
@@ -127,6 +127,7 @@ files:
 - LICENSE.txt
 - README.md
 - Rakefile
+- bin/build.sh
 - bin/console
 - bin/setup
 - cluster_config.yaml.example
@@ -139,6 +140,7 @@ files:
 - lib/hetzner/infra/firewall.rb
 - lib/hetzner/infra/load_balancer.rb
 - lib/hetzner/infra/network.rb
+- lib/hetzner/infra/placement_group.rb
 - lib/hetzner/infra/server.rb
 - lib/hetzner/infra/ssh_key.rb
 - lib/hetzner/k3s/cli.rb