hetzner-k3s 0.6.0 → 0.6.2
- checksums.yaml +4 -4
- data/Dockerfile +1 -1
- data/Gemfile.lock +1 -1
- data/README.md +77 -25
- data/cluster_config.yaml.example +39 -0
- data/lib/hetzner/k3s/cluster.rb +98 -3
- data/lib/hetzner/k3s/configuration.rb +12 -13
- data/lib/hetzner/k3s/version.rb +1 -1
- data/lib/hetzner/utils.rb +0 -3
- metadata +2 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 873c76ec7a993a8c890f72c8158daa82597eb994e3f5e70c9b53d98604903f38
+  data.tar.gz: d0b1f622ff21728d1bb6b41b2a373eca693121a0eb619776adf9590ef926f80a
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 4bdc7f2fa5f6ef40bcd1e089d89b26c228b77fe3c06fb4af884910e4fe69fb6ad67951160a4ecdbcb6f076adbc16521bfd45b661644e7f6b927ba002e5a3ba67
+  data.tar.gz: 94a8f6b3df49db94d7412d5f350ce3f996980f0741e694a8b91a7ff0c71783cdaa12b4263cab31395288497dd3c790be45ca62e00f844d9df863a13b3623de79
data/Dockerfile
CHANGED
data/Gemfile.lock
CHANGED
data/README.md
CHANGED
@@ -1,6 +1,20 @@
 # Create production grade Kubernetes clusters in Hetzner Cloud in a couple of minutes or less
 
-
+![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/vitobotta/hetzner-k3s)
+![GitHub Release Date](https://img.shields.io/github/release-date/vitobotta/hetzner-k3s)
+![GitHub last commit](https://img.shields.io/github/last-commit/vitobotta/hetzner-k3s)
+![GitHub Workflow Status](https://img.shields.io/github/workflow/status/vitobotta/hetzner-k3s/Create%20Release)
+![GitHub issues](https://img.shields.io/github/issues-raw/vitobotta/hetzner-k3s)
+![GitHub pull requests](https://img.shields.io/github/issues-pr-raw/vitobotta/hetzner-k3s)
+![GitHub](https://img.shields.io/github/license/vitobotta/hetzner-k3s)
+![GitHub Discussions](https://img.shields.io/github/discussions/vitobotta/hetzner-k3s)
+![GitHub top language](https://img.shields.io/github/languages/top/vitobotta/hetzner-k3s)
+
+![GitHub forks](https://img.shields.io/github/forks/vitobotta/hetzner-k3s?style=social)
+![GitHub Repo stars](https://img.shields.io/github/stars/vitobotta/hetzner-k3s?style=social)
+
+## What is this?
+
 This is a CLI tool to quickly create and manage Kubernetes clusters in [Hetzner Cloud](https://www.hetzner.com/cloud) using the lightweight Kubernetes distribution [k3s](https://k3s.io/) from [Rancher](https://rancher.com/).
 
 Hetzner Cloud is an awesome cloud provider which offers a truly great service with the best performance/cost ratio in the market. With Hetzner's Cloud Controller Manager and CSI driver you can provision load balancers and persistent volumes very easily.
@@ -8,7 +22,7 @@ k3s is my favorite Kubernetes distribution now because it uses much less memory
 
 Using this tool, creating a highly available k3s cluster with 3 masters for the control plane and 3 worker nodes takes about **a couple of minutes** only. This includes
 
-- creating the
+- creating the infrastructure resources (servers, private network, firewall, load balancer for the API server for HA clusters)
 - deploying k3s to the nodes
 - installing the [Hetzner Cloud Controller Manager](https://github.com/hetznercloud/hcloud-cloud-controller-manager) to provision load balancers right away
 - installing the [Hetzner CSI Driver](https://github.com/hetznercloud/csi-driver) to provision persistent volumes using Hetzner's block storage
@@ -18,28 +32,47 @@ See roadmap [here](https://github.com/vitobotta/hetzner-k3s/projects/1) for the
 
 Also see this [wiki page](https://github.com/vitobotta/hetzner-k3s/wiki/Tutorial:---Setting-up-a-cluster) for a tutorial on how to set up a cluster with the most common setup to get you started.
 
-
+___
+## Who am I?
+
+I'm a Senior Backend Engineer and DevOps based in Finland and working for event management platform [Brella](https://www.brella.io/).
+
+I also write a [technical blog](https://vitobotta.com/) on programming, DevOps and related technologies.
+
+___
+## Prerequisites
 
 All that is needed to use this tool is
 
 - an Hetzner Cloud account
-- an Hetzner Cloud token: for this you need to create a project from the cloud console, and then an API token with **both read and write permissions** (sidebar > Security > API Tokens); you will see the token only once, so
-- a recent Ruby runtime installed (see [this page](https://www.ruby-lang.org/en/documentation/installation/) for instructions if you are not familiar with Ruby). I
+- an Hetzner Cloud token: for this you need to create a project from the cloud console, and then an API token with **both read and write permissions** (sidebar > Security > API Tokens); you will see the token only once, so be sure to take note of it somewhere safe
+- a recent Ruby runtime installed if you install the tool as Ruby gem (see [this page](https://www.ruby-lang.org/en/documentation/installation/) for instructions if you are not familiar with Ruby). I recommend you use the standalone binaries either downloaded directly or installed with Homebrew though, since it's easier and you don't have to set up Ruby.
+
+___
+## Getting Started - Installation
+
+Before using the tool, be sure to have kubectl installed as it's required to install some software in the cluster to provision load balancers/persistent volumes and perform k3s upgrades.
 
-
+### macOS
+
+#### With Homebrew
+
+```bash
+brew install vitobotta/tap/hetzner-k3s
+```
 
-
+#### Binary installation (Intel)
 
 ```bash
-wget https://github.com/vitobotta/hetzner-k3s/releases/download/v0.6.
+wget https://github.com/vitobotta/hetzner-k3s/releases/download/v0.6.1/hetzner-k3s-mac-amd64
 chmod +x hetzner-k3s-mac-x64
 sudo mv hetzner-k3s-mac-x64 /usr/local/bin/hetzner-k3s
 ```
 
-
+#### Binary installation (Apple Silicon/M1)
 
 ```bash
-wget https://github.com/vitobotta/hetzner-k3s/releases/download/v0.6.
+wget https://github.com/vitobotta/hetzner-k3s/releases/download/v0.6.1/hetzner-k3s-mac-arm64
 chmod +x hetzner-k3s-mac-arm
 sudo mv hetzner-k3s-mac-arm /usr/local/bin/hetzner-k3s
 ```
@@ -49,12 +82,14 @@ NOTE: currently the ARM version still requires [Rosetta](https://support.apple.c
 ### Linux
 
 ```bash
-wget https://github.com/vitobotta/hetzner-k3s/releases/download/v0.6.
-chmod +x hetzner-k3s-linux-
-sudo mv hetzner-k3s-linux-
+wget https://github.com/vitobotta/hetzner-k3s/releases/download/v0.6.1/hetzner-k3s-linux-x86_64
+chmod +x hetzner-k3s-linux-x86_64
+sudo mv hetzner-k3s-linux-x86_64 /usr/local/bin/hetzner-k3s
 ```
 
-###
+### macOS, Linux, Windows
+
+#### As Ruby gem executable
 
 Once you have the Ruby runtime up and running (2.7.1 required), you just need to install the gem:
 
@@ -64,7 +99,7 @@ gem install hetzner-k3s
 
 This will install the `hetzner-k3s` executable in your PATH.
 
-
+#### With Docker
 
 Alternatively, if you don't want to set up a Ruby runtime but have Docker installed, you can use a container. Run the following from inside the directory where you have the config file for the cluster (described in the next section):
 
@@ -72,13 +107,15 @@ Alternatively, if you don't want to set up a Ruby runtime but have Docker instal
 docker run --rm -it \
   -v ${PWD}:/cluster \
   -v ${HOME}/.ssh:/tmp/.ssh \
-  vitobotta/hetzner-k3s:v0.
+  vitobotta/hetzner-k3s:v0.6.1 \
   create-cluster \
   --config-file /cluster/test.yaml
 ```
 
 Replace `test.yaml` with the name of your config file.
 
+___
+
 ## Creating a cluster
 
 The tool requires a simple configuration file in order to create/upgrade/delete clusters, in the YAML format like in the example below:
@@ -101,10 +138,20 @@ schedule_workloads_on_masters: false
 masters:
   instance_type: cpx21
   instance_count: 3
+  # labels:
+  #   purpose: master
+  #   size: cpx21
+  # taints:
+  #   something: value1:NoSchedule
 worker_node_pools:
 - name: small
   instance_type: cpx21
   instance_count: 4
+  # labels:
+  #   purpose: worker
+  #   size: cpx21
+  # taints:
+  #   something: GpuWorkloadsOnly:NoSchedule
 - name: big
   instance_type: cpx31
   instance_count: 2
@@ -135,10 +182,9 @@ enable_encryption: true
 # - arg1
 # - ...
 # existing_network: <specify if you want to use an existing network, otherwise one will be created for this cluster>
-
 ```
 
-It should hopefully be self explanatory; you can run `hetzner-k3s releases` to see a list of the available releases
+It should hopefully be self explanatory; you can run `hetzner-k3s releases` to see a list of the available k3s releases.
 
 If you are using Docker, then set `kubeconfig_path` to `/cluster/kubeconfig` so that the kubeconfig is created in the same directory where your config file is. Also set the config file path to `/cluster/<filename>`.
 
@@ -147,7 +193,6 @@ If you don't want to specify the Hetzner token in the config file (for example i
 **Important**: The tool assigns the label `cluster` to each server it creates, with the cluster name you specify in the config file, as the value. So please ensure you don't create unrelated servers in the same project having
 the label `cluster=<cluster name>`, because otherwise they will be deleted if you delete the cluster. I recommend you create a separate Hetzner project for each cluster, see note at the end of this README for more details.
 
-
 If you set `masters.instance_count` to 1 then the tool will create a non highly available control plane; for production clusters you may want to set it to a number greater than 1. This number must be odd to avoid split brain issues with etcd and the recommended number is 3.
 
 You can specify any number of worker node pools for example to have mixed nodes with different specs for different workloads.
@@ -186,7 +231,7 @@ Finally, to create the cluster run:
 hetzner-k3s create-cluster --config-file cluster_config.yaml
 ```
 
-This will take a
+This will take a few minutes depending on the number of masters and worker nodes.
 
 If you are creating an HA cluster and see the following in the output you can safely ignore it - it happens when additional masters are joining the first one:
 
@@ -227,6 +272,7 @@ In a future release I will add some automation for the cleanup.
 
 It's easy to convert a non-HA with a single master cluster to HA with multiple masters. Just change the masters instance count and re-run the create command. This will create a load balancer for the API server and update the kubeconfig so that all the API requests go through the load balancer.
 
+___
 ## Upgrading to a new version of k3s
 
 If it's the first time you upgrade the cluster, all you need to do to upgrade it to a newer version of k3s is run the following command:
@@ -275,7 +321,7 @@ A final note about upgrades is that if for some reason the upgrade gets stuck af
 ```bash
 kubectl label node <master1> <master2> <master3> plan.upgrade.cattle.io/k3s-server=upgraded
 ```
-
+___
 ## Upgrading the OS on nodes
 
 - consider adding a temporary node during the process if you don't have enough spare capacity in the cluster
@@ -285,6 +331,7 @@ kubectl label node <master1> <master2> <master3> plan.upgrade.cattle.io/k3s-serv
 - uncordon
 - proceed with the next node
 
+___
 ## Deleting a cluster
 
 To delete a cluster, running
@@ -295,7 +342,11 @@ hetzner-k3s delete-cluster --config-file cluster_config.yaml
 
 This will delete all the resources in the Hetzner Cloud project for the cluster being deleted.
 
+## Troubleshooting
+
+See [this page](https://github.com/vitobotta/hetzner-k3s/wiki/Troubleshooting) for solutions to common issues.
 
+___
 ## Additional info
 
 ### Load balancers
@@ -321,16 +372,15 @@ The annotation `load-balancer.hetzner.cloud/use-private-ip: "true"` ensures that
 
 The other annotations should be self explanatory. You can find a list of the available annotations [here](https://pkg.go.dev/github.com/hetznercloud/hcloud-cloud-controller-manager/internal/annotation).
 
-
+### Persistent volumes
 
 Once the cluster is ready you can create persistent volumes out of the box with the default storage class `hcloud-volumes`, since the Hetzner CSI driver is installed automatically. This will use Hetzner's block storage (based on Ceph so it's replicated and highly available) for your persistent volumes. Note that the minimum size of a volume is 10Gi. If you specify a smaller size for a volume, the volume will be created with a capacity of 10Gi anyway.
 
-
-## Keeping a project per cluster
+### Keeping a project per cluster
 
 I recommend that you create a separate Hetzner project for each cluster, because otherwise multiple clusters will attempt to create overlapping routes. I will make the pod cidr configurable in the future to avoid this, but I still recommend keeping clusters separated from each other. This way, if you want to delete a cluster with all the resources created for it, you can just delete the project.
 
-
+___
 ## Contributing and support
 
 Please create a PR if you want to propose any changes, or open an issue if you are having trouble with the tool - I will do my best to help if I can.
@@ -339,10 +389,12 @@ Contributors:
 
 - [TitanFighter](https://github.com/TitanFighter) for [this awesome tutorial](https://github.com/vitobotta/hetzner-k3s/wiki/Tutorial:---Setting-up-a-cluster)
 
+___
 ## License
 
 The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
 
+___
 ## Code of Conduct
 
 Everyone interacting in the hetzner-k3s project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/vitobotta/hetzner-k3s/blob/main/CODE_OF_CONDUCT.md).
data/cluster_config.yaml.example
CHANGED
@@ -7,16 +7,55 @@ public_ssh_key_path: "~/.ssh/id_rsa.pub"
 private_ssh_key_path: "~/.ssh/id_rsa"
 ssh_allowed_networks:
 - 0.0.0.0/0
+api_allowed_networks:
+- 0.0.0.0/0
 verify_host_key: false
 location: nbg1
 schedule_workloads_on_masters: false
 masters:
   instance_type: cpx21
   instance_count: 3
+  # labels:
+  #   purpose: master
+  #   size: cpx21
+  # taints:
+  #   something: value1:NoSchedule
 worker_node_pools:
 - name: small
   instance_type: cpx21
   instance_count: 4
+  # labels:
+  #   purpose: worker
+  #   size: cpx21
+  # taints:
+  #   something: GpuWorkloadsOnly:NoSchedule
 - name: big
   instance_type: cpx31
   instance_count: 2
+additional_packages:
+- somepackage
+post_create_commands:
+- apt update
+- apt upgrade -y
+- apt autoremove -y
+- shutdown -r now
+enable_encryption: true
+# kube_api_server_args:
+# - arg1
+# - ...
+# kube_scheduler_args:
+# - arg1
+# - ...
+# kube_controller_manager_args:
+# - arg1
+# - ...
+# kube_cloud_controller_manager_args:
+# - arg1
+# - ...
+# kubelet_args:
+# - arg1
+# - ...
+# kube_proxy_args:
+# - arg1
+# - ...
+# existing_network: <specify if you want to use an existing network, otherwise one will be created for this cluster>
data/lib/hetzner/k3s/cluster.rb
CHANGED
@@ -51,6 +51,9 @@ class Cluster
 
     sleep 10
 
+    label_nodes
+    taint_nodes
+
     deploy_cloud_controller_manager
     deploy_csi_driver
     deploy_system_upgrade_controller
@@ -294,6 +297,82 @@ class Cluster
     threads.each(&:join) unless threads.empty?
   end
 
+  def label_nodes
+    check_kubectl
+
+    if master_definitions_for_create.first[:labels]
+      master_labels = master_definitions_for_create.first[:labels].map{ |k, v| "#{k}=#{v}" }.join(' ')
+      master_node_names = []
+
+      master_definitions_for_create.each do |master|
+        master_node_names << "#{configuration['cluster_name']}-#{master[:instance_type]}-#{master[:instance_id]}"
+      end
+
+      master_node_names = master_node_names.join(' ')
+
+      cmd = "kubectl label --overwrite nodes #{master_node_names} #{master_labels}"
+
+      run cmd, kubeconfig_path: kubeconfig_path
+    end
+
+    workers = []
+
+    worker_node_pools.each do |worker_node_pool|
+      workers += worker_node_pool_definitions(worker_node_pool)
+    end
+
+    return unless workers.any?
+
+    workers.each do |worker|
+      next unless worker[:labels]
+
+      worker_labels = worker[:labels].map{ |k, v| "#{k}=#{v}" }.join(' ')
+      worker_node_name = "#{configuration['cluster_name']}-#{worker[:instance_type]}-#{worker[:instance_id]}"
+
+      cmd = "kubectl label --overwrite nodes #{worker_node_name} #{worker_labels}"
+
+      run cmd, kubeconfig_path: kubeconfig_path
+    end
+  end
+
+  def taint_nodes
+    check_kubectl
+
+    if master_definitions_for_create.first[:taints]
+      master_taints = master_definitions_for_create.first[:taints].map{ |k, v| "#{k}=#{v}" }.join(' ')
+      master_node_names = []
+
+      master_definitions_for_create.each do |master|
+        master_node_names << "#{configuration['cluster_name']}-#{master[:instance_type]}-#{master[:instance_id]}"
+      end
+
+      master_node_names = master_node_names.join(' ')
+
+      cmd = "kubectl taint --overwrite nodes #{master_node_names} #{master_taints}"
+
+      run cmd, kubeconfig_path: kubeconfig_path
+    end
+
+    workers = []
+
+    worker_node_pools.each do |worker_node_pool|
+      workers += worker_node_pool_definitions(worker_node_pool)
+    end
+
+    return unless workers.any?
+
+    workers.each do |worker|
+      next unless worker[:taints]
+
+      worker_taints = worker[:taints].map{ |k, v| "#{k}=#{v}" }.join(' ')
+      worker_node_name = "#{configuration['cluster_name']}-#{worker[:instance_type]}-#{worker[:instance_id]}"
+
+      cmd = "kubectl taint --overwrite nodes #{worker_node_name} #{worker_taints}"
+
+      run cmd, kubeconfig_path: kubeconfig_path
+    end
+  end
+
   def deploy_cloud_controller_manager
     check_kubectl
 
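The map/join pattern used by the new `label_nodes` and `taint_nodes` methods above can be sketched in isolation. The hash and node name below are made up for illustration; in the real code they come from the cluster config and the instance definitions:

```ruby
# Standalone sketch (made-up values) of how label_nodes flattens a
# labels hash from the config into a single kubectl argument string.
labels = { 'purpose' => 'worker', 'size' => 'cpx21' }

# Same map/join as in label_nodes: each key/value pair becomes "key=value".
label_args = labels.map { |k, v| "#{k}=#{v}" }.join(' ')
# label_args => "purpose=worker size=cpx21"

# The node name is assembled from cluster name, instance type and instance id.
node_name = 'mycluster-cpx21-1'

cmd = "kubectl label --overwrite nodes #{node_name} #{label_args}"
```

`--overwrite` makes the command idempotent, so re-running `create-cluster` against an existing cluster simply reapplies the labels.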
@@ -480,6 +559,14 @@ class Cluster
     @master_instance_type ||= masters_config['instance_type']
   end
 
+  def master_labels
+    @master_labels ||= masters_config['labels']
+  end
+
+  def master_taints
+    @master_taints ||= masters_config['taints']
+  end
+
   def masters_count
     @masters_count ||= masters_config['instance_count']
   end
@@ -510,7 +597,9 @@ class Cluster
         ssh_key_id: ssh_key_id,
         image: image,
         additional_packages: additional_packages,
-        additional_post_create_commands: additional_post_create_commands
+        additional_post_create_commands: additional_post_create_commands,
+        labels: master_labels,
+        taints: master_taints
       }
     end
 
@@ -535,6 +624,8 @@ class Cluster
     worker_instance_type = worker_node_pool['instance_type']
     worker_count = worker_node_pool['instance_count']
     worker_location = worker_node_pool['location'] || masters_location
+    labels = worker_node_pool['labels']
+    taints = worker_node_pool['taints']
 
     definitions = []
 
@@ -549,7 +640,9 @@ class Cluster
         ssh_key_id: ssh_key_id,
         image: image,
         additional_packages: additional_packages,
-        additional_post_create_commands: additional_post_create_commands
+        additional_post_create_commands: additional_post_create_commands,
+        labels: labels,
+        taints: taints
       }
     end
 
@@ -576,8 +669,10 @@ class Cluster
     servers = []
 
     threads = server_configs.map do |server_config|
+      config = server_config.reject! { |k, _v| %i[labels taints].include?(k) }
+
      Thread.new do
-        servers << Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).create(**
+        servers << Hetzner::Server.new(hetzner_client: hetzner_client, cluster_name: cluster_name).create(**config)
       end
     end
 
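The `reject!` call above strips the `labels` and `taints` keys before the remaining options are splatted into `Hetzner::Server#create`. One subtlety worth a sketch (toy hash, not the real server config): `Hash#reject!` mutates the receiver and returns `nil` when nothing was removed, whereas `Hash#reject` always returns a new hash:

```ruby
# Toy illustration of Hash#reject! vs Hash#reject; these are made-up
# keys, not the real server_config.
server_config = { name: 'node1', labels: { 'a' => '1' }, taints: nil }

# Keys were removed, so reject! returns the (mutated) hash itself.
config = server_config.reject! { |k, _v| %i[labels taints].include?(k) }
# config => { name: 'node1' }

# A second reject! with the same block removes nothing and returns nil.
second = server_config.reject! { |k, _v| %i[labels taints].include?(k) }
# second => nil
```

This only works in the diff above because the server config hashes always include both keys (even with nil values); a non-destructive `reject` would avoid relying on that.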
data/lib/hetzner/k3s/configuration.rb
CHANGED
@@ -3,7 +3,7 @@
 module Hetzner
   class Configuration
     GITHUB_DELIM_LINKS = ','
-    GITHUB_LINK_REGEX = /<([^>]+)>; rel="([^"]+)"
+    GITHUB_LINK_REGEX = /<([^>]+)>; rel="([^"]+)"/.freeze
 
     attr_reader :hetzner_client
 
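The fix above terminates the previously unterminated regex literal and freezes it. A quick sketch of how this pair of constants parses a GitHub-style `Link` pagination header (the header value below is a made-up example):

```ruby
GITHUB_DELIM_LINKS = ','
GITHUB_LINK_REGEX = /<([^>]+)>; rel="([^"]+)"/.freeze

# Made-up Link header in GitHub's pagination format.
header = '<https://api.github.com/releases?page=2>; rel="next", ' \
         '<https://api.github.com/releases?page=5>; rel="last"'

links = {}
header.split(GITHUB_DELIM_LINKS).each do |part|
  if (m = part.strip.match(GITHUB_LINK_REGEX))
    links[m[2]] = m[1] # capture 2 is the rel name, capture 1 the URL
  end
end
# links => { "next" => "https://api.github.com/releases?page=2",
#            "last" => "https://api.github.com/releases?page=5" }
```

This is how the tool can walk paginated k3s release listings: follow `links['next']` until it is absent.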
@@ -92,8 +92,6 @@ module Hetzner
       configuration
     end
 
-    private_class_method
-
     def self.fetch_releases(url)
       response = HTTParty.get(url)
       [response, JSON.parse(response.body).map { |hash| hash['name'] }]
@@ -196,7 +194,7 @@ module Hetzner
 
       unless invalid_ranges.empty?
         invalid_ranges.each do |_network|
-          errors <<
+          errors << "Please use the CIDR notation for the #{access_type} networks to avoid ambiguity"
         end
       end
 
@@ -210,19 +208,17 @@ module Hetzner
       false
     end
 
-
-
-
-
-
-
-
+      return if current_ip_network
+
+      case access_type
+      when 'SSH'
+        errors << "Your current IP #{current_ip} is not included into any of the #{access_type} networks you've specified, so we won't be able to SSH into the nodes "
+      when 'API'
+        errors << "Your current IP #{current_ip} is not included into any of the #{access_type} networks you've specified, so we won't be able to connect to the Kubernetes API"
       end
     end
 
-
     def validate_ssh_allowed_networks
-      return
       validate_networks('ssh_allowed_networks', 'SSH')
     end
 
@@ -441,6 +437,9 @@ module Hetzner
         instance_group_errors << "#{instance_group_type} has an invalid instance count"
       end
 
+      instance_group_errors << "#{instance_group_type} has an invalid labels format - a hash is expected" if !instance_group['labels'].nil? && !instance_group['labels'].is_a?(Hash)
+      instance_group_errors << "#{instance_group_type} has an invalid taints format - a hash is expected" if !instance_group['taints'].nil? && !instance_group['taints'].is_a?(Hash)
+
       errors << instance_group_errors
     end
 
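The two validations added above only check the shape of the new options: `labels` and `taints` may be omitted entirely, but when present they must be YAML mappings (Ruby hashes), not lists. A minimal standalone sketch of the same guard, with invented inputs and a hypothetical helper name:

```ruby
# Mirrors the added check: nil (option omitted) is fine, a Hash is fine,
# anything else (e.g. an Array of "key=value" strings) is invalid.
def invalid_hash_option?(value)
  !value.nil? && !value.is_a?(Hash)
end

invalid_hash_option?(nil)                        # option omitted
invalid_hash_option?({ 'purpose' => 'worker' })  # valid mapping
invalid_hash_option?(['purpose=worker'])         # invalid: array, not hash
```

Catching this at config-validation time is cheaper than letting a malformed value reach the `map{ |k, v| ... }` calls in `label_nodes`/`taint_nodes`, which assume key/value pairs.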
data/lib/hetzner/k3s/version.rb
CHANGED
data/lib/hetzner/utils.rb
CHANGED
@@ -1,8 +1,5 @@
 # frozen_string_literal: true
 
-Net::SSH::Transport::Algorithms::ALGORITHMS.values.each { |algs| algs.reject! { |a| a =~ /^ecd(sa|h)-sha2/ } }
-Net::SSH::KnownHosts::SUPPORTED_TYPE.reject! { |t| t =~ /^ecd(sa|h)-sha2/ }
-
 require 'childprocess'
 
 module Utils
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: hetzner-k3s
 version: !ruby/object:Gem::Version
-  version: 0.6.
+  version: 0.6.2
 platform: ruby
 authors:
 - Vito Botta
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2022-08-
+date: 2022-08-30 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bcrypt_pbkdf