hetzner-k3s 0.1.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 600b092e02f2bca4fd0be7e830a13913d2bbf82d2e3b98226ab52a2b5df4e859
+   data.tar.gz: f595dda56d1ca9aeaa611a77d82c168a63d300a83c5f3e8edc44b3da5790d46a
+ SHA512:
+   metadata.gz: 6024a0b99ecc6d97d56e50f39e9e89f9cd2d92db7261604162ec37628df6bd4b942f50be64c4f05d9a745e55a3e18e567a759f20626671e81746c4160a280a9a
+   data.tar.gz: 17f2095befd0035555adc902dc52baed71f77bd82ef26ecb9bff3df5e7b5ec14068cdd16923fd6a2297e22badaab922b3fbc745939285e41fed8699922e26017
data/.gitignore ADDED
@@ -0,0 +1,13 @@
+ /.bundle/
+ /.yardoc
+ /_yardoc/
+ /coverage/
+ /doc/
+ /pkg/
+ /spec/reports/
+ /tmp/
+
+ # rspec failure tracking
+ .rspec_status
+ /kubeconfig
+ /cluster_config.yaml
data/.rspec ADDED
@@ -0,0 +1,3 @@
+ --format documentation
+ --color
+ --require spec_helper
data/.travis.yml ADDED
@@ -0,0 +1,6 @@
+ ---
+ language: ruby
+ cache: bundler
+ rvm:
+   - 2.7.2
+ before_install: gem install bundler -v 2.1.4
data/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,74 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ In the interest of fostering an open and welcoming environment, we as
+ contributors and maintainers pledge to making participation in our project and
+ our community a harassment-free experience for everyone, regardless of age, body
+ size, disability, ethnicity, gender identity and expression, level of experience,
+ nationality, personal appearance, race, religion, or sexual identity and
+ orientation.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to creating a positive environment
+ include:
+
+ * Using welcoming and inclusive language
+ * Being respectful of differing viewpoints and experiences
+ * Gracefully accepting constructive criticism
+ * Focusing on what is best for the community
+ * Showing empathy towards other community members
+
+ Examples of unacceptable behavior by participants include:
+
+ * The use of sexualized language or imagery and unwelcome sexual attention or
+   advances
+ * Trolling, insulting/derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or electronic
+   address, without explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Our Responsibilities
+
+ Project maintainers are responsible for clarifying the standards of acceptable
+ behavior and are expected to take appropriate and fair corrective action in
+ response to any instances of unacceptable behavior.
+
+ Project maintainers have the right and responsibility to remove, edit, or
+ reject comments, commits, code, wiki edits, issues, and other contributions
+ that are not aligned to this Code of Conduct, or to ban temporarily or
+ permanently any contributor for other behaviors that they deem inappropriate,
+ threatening, offensive, or harmful.
+
+ ## Scope
+
+ This Code of Conduct applies both within project spaces and in public spaces
+ when an individual is representing the project or its community. Examples of
+ representing a project or community include using an official project e-mail
+ address, posting via an official social media account, or acting as an appointed
+ representative at an online or offline event. Representation of a project may be
+ further defined and clarified by project maintainers.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported by contacting the project team at vito@botta.me. All
+ complaints will be reviewed and investigated and will result in a response that
+ is deemed necessary and appropriate to the circumstances. The project team is
+ obligated to maintain confidentiality with regard to the reporter of an incident.
+ Further details of specific enforcement policies may be posted separately.
+
+ Project maintainers who do not follow or enforce the Code of Conduct in good
+ faith may face temporary or permanent repercussions as determined by other
+ members of the project's leadership.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+ available at [https://contributor-covenant.org/version/1/4][version]
+
+ [homepage]: https://contributor-covenant.org
+ [version]: https://contributor-covenant.org/version/1/4/
data/Gemfile ADDED
@@ -0,0 +1,7 @@
+ source "https://rubygems.org"
+
+ # Specify your gem's dependencies in k3s.gemspec
+ gemspec
+
+ gem "rake", "~> 12.0"
+ gem "rspec", "~> 3.0"
data/Gemfile.lock ADDED
@@ -0,0 +1,111 @@
+ PATH
+   remote: .
+   specs:
+     hetzner-k3s (0.1.0)
+       http
+       k8s-ruby
+       net-ssh
+       sshkey
+       thor
+
+ GEM
+   remote: https://rubygems.org/
+   specs:
+     addressable (2.8.0)
+       public_suffix (>= 2.0.2, < 5.0)
+     concurrent-ruby (1.1.9)
+     diff-lcs (1.4.4)
+     domain_name (0.5.20190701)
+       unf (>= 0.0.5, < 1.0.0)
+     dry-configurable (0.12.1)
+       concurrent-ruby (~> 1.0)
+       dry-core (~> 0.5, >= 0.5.0)
+     dry-container (0.8.0)
+       concurrent-ruby (~> 1.0)
+       dry-configurable (~> 0.1, >= 0.1.3)
+     dry-core (0.7.1)
+       concurrent-ruby (~> 1.0)
+     dry-equalizer (0.3.0)
+     dry-inflector (0.2.1)
+     dry-logic (0.6.1)
+       concurrent-ruby (~> 1.0)
+       dry-core (~> 0.2)
+       dry-equalizer (~> 0.2)
+     dry-struct (0.5.1)
+       dry-core (~> 0.4, >= 0.4.3)
+       dry-equalizer (~> 0.2)
+       dry-types (~> 0.13)
+       ice_nine (~> 0.11)
+     dry-types (0.13.4)
+       concurrent-ruby (~> 1.0)
+       dry-container (~> 0.3)
+       dry-core (~> 0.4, >= 0.4.4)
+       dry-equalizer (~> 0.2)
+       dry-inflector (~> 0.1, >= 0.1.2)
+       dry-logic (~> 0.4, >= 0.4.2)
+     excon (0.85.0)
+     ffi (1.15.3)
+     ffi-compiler (1.0.1)
+       ffi (>= 1.0.0)
+       rake
+     hashdiff (1.0.1)
+     http (4.4.1)
+       addressable (~> 2.3)
+       http-cookie (~> 1.0)
+       http-form_data (~> 2.2)
+       http-parser (~> 1.2.0)
+     http-cookie (1.0.4)
+       domain_name (~> 0.5)
+     http-form_data (2.3.0)
+     http-parser (1.2.3)
+       ffi-compiler (>= 1.0, < 2.0)
+     ice_nine (0.11.2)
+     jsonpath (0.9.9)
+       multi_json
+       to_regexp (~> 0.2.1)
+     k8s-ruby (0.10.5)
+       dry-struct (~> 0.5.0)
+       dry-types (~> 0.13.0)
+       excon (~> 0.71)
+       hashdiff (~> 1.0.0)
+       jsonpath (~> 0.9.5)
+       recursive-open-struct (~> 1.1.0)
+       yajl-ruby (~> 1.4.0)
+       yaml-safe_load_stream (~> 0.1)
+     multi_json (1.15.0)
+     net-ssh (6.1.0)
+     public_suffix (4.0.6)
+     rake (12.3.3)
+     recursive-open-struct (1.1.3)
+     rspec (3.10.0)
+       rspec-core (~> 3.10.0)
+       rspec-expectations (~> 3.10.0)
+       rspec-mocks (~> 3.10.0)
+     rspec-core (3.10.1)
+       rspec-support (~> 3.10.0)
+     rspec-expectations (3.10.1)
+       diff-lcs (>= 1.2.0, < 2.0)
+       rspec-support (~> 3.10.0)
+     rspec-mocks (3.10.2)
+       diff-lcs (>= 1.2.0, < 2.0)
+       rspec-support (~> 3.10.0)
+     rspec-support (3.10.2)
+     sshkey (2.0.0)
+     thor (1.1.0)
+     to_regexp (0.2.1)
+     unf (0.1.4)
+       unf_ext
+     unf_ext (0.0.7.7)
+     yajl-ruby (1.4.1)
+     yaml-safe_load_stream (0.1.1)
+
+ PLATFORMS
+   ruby
+
+ DEPENDENCIES
+   hetzner-k3s!
+   rake (~> 12.0)
+   rspec (~> 3.0)
+
+ BUNDLED WITH
+    2.1.4
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2021 Vito Botta
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,221 @@
+ # Create production-grade Kubernetes clusters in Hetzner Cloud in a couple of minutes or less
+
+ This is a CLI tool - based on a Ruby gem - to quickly create and manage Kubernetes clusters in [Hetzner Cloud](https://www.hetzner.com/cloud) using the lightweight Kubernetes distribution [k3s](https://k3s.io/) from [Rancher](https://rancher.com/).
+
+ Hetzner Cloud is an awesome cloud provider that offers a truly great service with the best performance/cost ratio on the market. I highly recommend them if European locations (Germany and Finland) are OK for your projects (the Nuremberg data center has decent latency for US users as well). With Hetzner's Cloud Controller Manager and CSI driver you can provision load balancers and persistent volumes very easily.
+
+ k3s is my favorite Kubernetes distribution now because it uses much less memory and CPU, leaving more resources to workloads. It is also super quick to deploy because it's a single binary.
+
+ Using this tool, creating a highly available k3s cluster with 3 masters for the control plane and 3 worker nodes takes only **a couple of minutes**. This includes:
+
+ - creating the infrastructure resources (servers, private network, firewall, load balancer for the API server for HA clusters)
+ - deploying k3s to the nodes
+ - installing the [Hetzner Cloud Controller Manager](https://github.com/hetznercloud/hcloud-cloud-controller-manager) to provision load balancers right away
+ - installing the [Hetzner CSI Driver](https://github.com/hetznercloud/csi-driver) to provision persistent volumes using Hetzner's block storage
+ - installing the [Rancher System Upgrade Controller](https://github.com/rancher/system-upgrade-controller) to make upgrades to a newer version of k3s easy and quick
+
+
+ ## Requirements
+
+ All you need to use this tool is:
+
+ - a Hetzner Cloud account
+ - a Hetzner Cloud token: for this you need to create a project from the cloud console, and then an API token with **both read and write permissions** (sidebar > Security > API Tokens); the token is shown only once, so make sure you take note of it somewhere safe
+ - a recent Ruby runtime installed (see [this page](https://www.ruby-lang.org/en/documentation/installation/) for instructions if you are not familiar with Ruby). I am also going to try to create single binaries for this tool that include the Ruby runtime, for easier installation.
+
+ ## Installation
+
+ Once you have the Ruby runtime up and running, you just need to install the gem:
+
+ ```bash
+ gem install hetzner-k3s
+ ```
+
+ This will install the `hetzner-k3s` executable in your PATH.
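+
+ To quickly verify that the installation worked, you can check that the executable is reachable and list the k3s releases the tool knows about (just a sanity check, not a required step):
+
+ ```bash
+ # confirm the executable is on the PATH
+ which hetzner-k3s
+
+ # list the available k3s releases, from the most recent to the oldest
+ hetzner-k3s releases
+ ```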
+
+ ## Creating a cluster
+
+ The tool requires a simple configuration file in YAML format in order to create/upgrade/delete clusters, like in the example below:
+
+ ```yaml
+ ---
+ hetzner_token: <your token>
+ cluster_name: test
+ kubeconfig_path: "./kubeconfig"
+ k3s_version: v1.21.3+k3s1
+ ssh_key_path: "~/.ssh/id_rsa.pub"
+ location: nbg1
+ masters:
+   instance_type: cpx21
+   instance_count: 3
+ worker_node_pools:
+ - name: small
+   instance_type: cpx21
+   instance_count: 4
+ - name: big
+   instance_type: cpx31
+   instance_count: 2
+ ```
+
+ It should hopefully be self-explanatory; you can run `hetzner-k3s releases` to see a list of the available k3s releases, from the most recent to the oldest.
+
+ If you set `masters.instance_count` to 1, the tool will create a control plane that is not highly available; for production clusters you may want to set it to a number greater than 1. This number must be odd to avoid split-brain issues with etcd, and the recommended number is 3.
+
+ You can specify any number of worker node pools, for example to have mixed nodes with different specs for different workloads.
+
+ At the moment Hetzner Cloud has three locations: two in Germany (`nbg1`, Nuremberg, and `fsn1`, Falkenstein) and one in Finland (`hel1`, Helsinki).
+
+ For the available instance types and their specs, either check from inside a project when adding a server manually, or run the following with your Hetzner token:
+
+ ```bash
+ curl \
+   -H "Authorization: Bearer $API_TOKEN" \
+   'https://api.hetzner.cloud/v1/server_types'
+ ```
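+
+ If you have `jq` installed, you can filter the response down to the specs you usually care about. This is just a convenience sketch; it assumes the response contains a `server_types` array with `name`, `cores`, `memory` and `disk` fields, so double check against the actual API output:
+
+ ```bash
+ # print one line per instance type with its basic specs
+ curl -s \
+   -H "Authorization: Bearer $API_TOKEN" \
+   'https://api.hetzner.cloud/v1/server_types' \
+   | jq -r '.server_types[] | "\(.name)  cores: \(.cores)  RAM: \(.memory)GB  disk: \(.disk)GB"'
+ ```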
+
+
+ Finally, to create the cluster run:
+
+ ```bash
+ hetzner-k3s create-cluster --config-file cluster_config.yaml
+ ```
+
+ This will take a couple of minutes or less, depending on the number of masters and worker nodes.
+
+ If you are creating an HA cluster and see the following in the output, you can safely ignore it - it happens when additional masters are joining the first one:
+
+ ```
+ Job for k3s.service failed because the control process exited with error code.
+ See "systemctl status k3s.service" and "journalctl -xe" for details.
+ ```
+
+
+ ### Idempotency
+
+ The `create-cluster` command can be run any number of times with the same configuration without causing any issue, since the process is idempotent. This means that if for some reason the create process gets stuck or throws errors (for example if the Hetzner API is unavailable or there are timeouts, etc.), you can just stop the current command and re-run it with the same configuration to continue from where it left off.
+
+ ### Adding nodes
+
+ To add one or more nodes to a node pool, just change the instance count in the configuration file for that node pool and re-run the create command.
+
+ ### Scaling down a node pool
+
+ To make a node pool smaller:
+
+ - decrease the instance count for the node pool in the configuration file so that those extra nodes are not recreated in the future
+ - delete the nodes from Kubernetes (`kubectl delete node <name>`; see the sketch after this list)
+ - delete the instances from the cloud console (make sure you delete the correct ones :p)
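+
+ A hedged sketch of the Kubernetes side of this cleanup is shown below. The node name `small-worker-3` is hypothetical (use the actual names from `kubectl get nodes`), and draining first is optional but avoids killing pods abruptly; the servers themselves still have to be deleted from the Hetzner Cloud console:
+
+ ```bash
+ # move workloads off the node first (optional, but gives pods a graceful shutdown)
+ kubectl drain small-worker-3 --ignore-daemonsets
+
+ # remove the node object from the cluster
+ kubectl delete node small-worker-3
+ ```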
+
+ In a future release I will add some automation for the cleanup.
+
+ ### Replacing a problematic node
+
+ - delete the node from Kubernetes (`kubectl delete node <name>`)
+ - delete the correct instance from the cloud console
+ - re-run the create command. This will re-create the missing node and have it join the cluster
+
+
+ ### Converting a non-HA cluster to HA
+
+ It's easy to convert a non-HA cluster with a single master to an HA cluster with multiple masters. Just change the masters' instance count and re-run the create command. This will create a load balancer for the API server and update the kubeconfig so that all API requests go through the load balancer.
+
+ ## Upgrading to a new version of k3s
+
+ If this is the first time you upgrade the cluster, all you need to do to upgrade it to a newer version of k3s is run the following command:
+
+ ```bash
+ hetzner-k3s upgrade-cluster --config-file cluster_config.yaml --new-k3s-version v1.21.3+k3s1
+ ```
+
+ So you just need to specify the new k3s version as an additional parameter, and the configuration file will be updated with the new version automatically during the upgrade. To see the list of available k3s releases, run the command `hetzner-k3s releases`.
+
+ Note that the API server will briefly be unavailable during the upgrade of the control plane.
+
+ To check the upgrade progress, run `watch kubectl get nodes -owide`. You will see the masters being upgraded one at a time, followed by the worker nodes.
+
+
+ ### What to do if the upgrade doesn't go smoothly
+
+ If the upgrade gets stuck for some reason, or it doesn't upgrade all the nodes:
+
+ 1. Clean up the existing upgrade plans and jobs, and restart the upgrade controller
+
+ ```bash
+ kubectl -n system-upgrade delete job --all
+ kubectl -n system-upgrade delete plan --all
+
+ kubectl label node --all plan.upgrade.cattle.io/k3s-server- plan.upgrade.cattle.io/k3s-agent-
+
+ kubectl -n system-upgrade rollout restart deployment system-upgrade-controller
+ kubectl -n system-upgrade rollout status deployment system-upgrade-controller
+ ```
+
+ I recommend also running the above commands when upgrading a cluster that has already been upgraded at least once before, since the upgrade leaves behind some resources that need to be cleaned up.
+
+ 2. Re-run the `upgrade-cluster` command with the additional parameter `--force true`.
+
+ I have noticed that sometimes I need to re-run the upgrade command a couple of times to complete an upgrade successfully. It must be some bug in the System Upgrade Controller, but I haven't investigated further.
+
+ You can also check the logs of the System Upgrade Controller's pod:
+
+ ```bash
+ kubectl -n system-upgrade logs -f $(kubectl -n system-upgrade get pod -l pod-template-hash -o jsonpath="{.items[0].metadata.name}")
+ ```
+
+ A final note about upgrades: if for some reason the upgrade gets stuck after upgrading the masters and before upgrading the worker nodes, just cleaning up the resources as described above might not be enough. In that case, also try running the following to tell the upgrade job for the workers that the masters have already been upgraded, so the upgrade can continue for the workers:
+
+ ```bash
+ kubectl label node <master1> <master2> <master3> plan.upgrade.cattle.io/k3s-server=upgraded
+ ```
+
+ ## Deleting a cluster
+
+ To delete a cluster, run:
+
+ ```bash
+ hetzner-k3s delete-cluster --config-file cluster_config.yaml
+ ```
+
+ This will delete all the resources in the Hetzner Cloud project that belong to the cluster being deleted.
+
+
+ ## Additional info
+
+ ### Load balancers
+
+ Once the cluster is ready, you can already provision services of type LoadBalancer for your workloads (such as the Nginx ingress controller, for example) thanks to the Hetzner Cloud Controller Manager that is installed automatically.
+
+ There are some annotations that you can add to your services to configure the load balancers. I personally use the following:
+
+ ```yaml
+ service:
+   annotations:
+     load-balancer.hetzner.cloud/hostname: <a valid fqdn>
+     load-balancer.hetzner.cloud/http-redirect-https: 'false'
+     load-balancer.hetzner.cloud/location: nbg1
+     load-balancer.hetzner.cloud/name: <lb name>
+     load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'
+     load-balancer.hetzner.cloud/use-private-ip: "true"
+ ```
+
+ I set `load-balancer.hetzner.cloud/hostname` to a valid hostname that I configure (after creating the load balancer) with the IP of the load balancer, and I use it together with the annotation `load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'` to enable the proxy protocol. The reason: I enable the proxy protocol on the load balancers so that my ingress controller and applications can "see" the real IP address of the client. However, when this is enabled there is a problem where [cert-manager](https://cert-manager.io/docs/) fails http01 challenges; you can find an explanation of why [here](https://github.com/compumike/hairpin-proxy), but the easy fix offered by some providers - including Hetzner - is to configure the load balancer so that it uses a hostname instead of an IP. Again, read the explanation for the details, but if you care about seeing the actual IP of the client then I recommend you use these two annotations.
+
+ The annotation `load-balancer.hetzner.cloud/use-private-ip: "true"` ensures that the communication between the load balancer and the nodes happens through the private network, so we don't have to open any ports on the nodes (other than port 6443 for the Kubernetes API server).
+
+ The other annotations should be self-explanatory. You can find a list of the available annotations in the Hetzner Cloud Controller Manager documentation.
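+
+ To make this more concrete, below is a minimal sketch of a `LoadBalancer` Service that uses these annotations, applied with a heredoc. The Service name, namespace, selector, ports and hostname are hypothetical placeholders - adapt them to your ingress controller or application:
+
+ ```bash
+ kubectl apply -f - <<'EOF'
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: nginx-ingress                  # hypothetical name, adapt to your workload
+   namespace: default
+   annotations:
+     load-balancer.hetzner.cloud/hostname: lb.example.com   # a valid FQDN you control
+     load-balancer.hetzner.cloud/http-redirect-https: 'false'
+     load-balancer.hetzner.cloud/location: nbg1
+     load-balancer.hetzner.cloud/name: my-lb
+     load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'
+     load-balancer.hetzner.cloud/use-private-ip: "true"
+ spec:
+   type: LoadBalancer
+   selector:
+     app: nginx-ingress                 # must match your pods' labels
+   ports:
+     - name: http
+       port: 80
+       targetPort: 80
+     - name: https
+       port: 443
+       targetPort: 443
+ EOF
+ ```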
+
+ ## Persistent volumes
+
+ Once the cluster is ready, you can create persistent volumes out of the box with the default storage class `hcloud-volumes`, since the Hetzner CSI Driver is installed automatically. This will use Hetzner's block storage (based on Ceph, so it's replicated and highly available) for your persistent volumes. Note that the minimum size of a volume is 10Gi; if you specify a smaller size, the volume will be created with a capacity of 10Gi anyway.
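+
+ As an illustration, a minimal PersistentVolumeClaim using this storage class might look like the sketch below (the claim name is a hypothetical placeholder, and the size respects the 10Gi minimum mentioned above):
+
+ ```bash
+ kubectl apply -f - <<'EOF'
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+   name: data-volume                    # hypothetical name
+ spec:
+   accessModes:
+     - ReadWriteOnce                    # block storage volumes attach to a single node
+   storageClassName: hcloud-volumes
+   resources:
+     requests:
+       storage: 10Gi                    # minimum size for Hetzner volumes
+ EOF
+ ```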
+
+ ## Contributing and support
+
+ Please create a PR if you want to propose any changes, or open an issue if you are having trouble with the tool - I will do my best to help if I can.
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
+ ## Code of Conduct
+
+ Everyone interacting in the hetzner-k3s project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/vitobotta/k3s/blob/master/CODE_OF_CONDUCT.md).