krane 2.1.10 → 2.3.2
- checksums.yaml +4 -4
- data/.github/CODEOWNERS +1 -1
- data/.rubocop.yml +3 -0
- data/CHANGELOG.md +18 -0
- data/README.md +5 -1
- data/dev.yml +1 -1
- data/lib/krane/cli/deploy_command.rb +10 -0
- data/lib/krane/cli/global_deploy_command.rb +10 -0
- data/lib/krane/cli/restart_command.rb +7 -1
- data/lib/krane/deploy_task.rb +5 -1
- data/lib/krane/global_deploy_task.rb +5 -1
- data/lib/krane/kubernetes_resource.rb +4 -0
- data/lib/krane/renderer.rb +1 -1
- data/lib/krane/restart_task.rb +152 -48
- data/lib/krane/version.rb +1 -1
- metadata +3 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: fbfdb40df63a2759718235ad79d484ba0b142e60bbf7ab5d660c0b41be2a15c3
+  data.tar.gz: 60530f0c9a7e5952ab441779da15f8d50b0018b181059a4294449bf55fabecfb
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e26defe201edb72da12aeea056a31652da6f1dd4bf8be78af4ed5d3d78ada303a56093179432a2a390a13a9beadf8d09166bca7b8162f710c5c9058bed78a7d1
+  data.tar.gz: abc57bf64f5f0f9ba94cb03d5c8464f86ed426b66c3dd20bfdca56afb9a7d68a3b158b49513199856a65f1da27b7302760701bce734b0285920bcc365e0e99e2
data/.github/CODEOWNERS
CHANGED
@@ -1 +1 @@
-* @Shopify/
+* @Shopify/app-lifecycle
data/.rubocop.yml
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,23 @@
 ## next
 
+## 2.3.2
+
+- Fix compatibility with Psych 4.0 [#843](https://github.com/Shopify/krane/pull/843)
+
+## 2.3.1
+
+- Fix a bug in RestartTask where a NoMethodError is thrown if any of the target resources do not have annotations [#841](https://github.com/Shopify/krane/pull/841)
+
+## 2.3.0
+
+- Restart tasks now support restarting StatefulSets and DaemonSets, in addition to Deployments [#836](https://github.com/Shopify/krane/pull/836)
+
+## 2.2.0
+
+*Enhancements*
+
+- Add a new option `--selector-as-filter` to command `krane deploy` and `krane global-deploy` [#831](https://github.com/Shopify/krane/pull/831)
+
 ## 2.1.10
 
 *Bug Fixes*
data/README.md
CHANGED
@@ -117,6 +117,7 @@ Refer to `krane help` for the authoritative set of options.
 - `--global-timeout=duration`: Raise a timeout error if it takes longer than _duration_ for any
   resource to deploy.
 - `--selector`: Instructs krane to only prune resources which match the specified label selector, such as `environment=staging`. If you use this option, all resource templates must specify matching labels. See [Sharing a namespace](#sharing-a-namespace) below.
+- `--selector-as-filter`: Instructs krane to only deploy resources that are filtered by the specified labels in `--selector`. The deploy will not fail if not all resources match the labels. This is useful if you only want to deploy a subset of resources within a given YAML file. See [Sharing a namespace](#sharing-a-namespace) below.
 - `--no-verify-result`: Skip verification that workloads correctly deployed.
 - `--protected-namespaces=default kube-system kube-public`: Fail validation if a deploy is targeted at a protected namespace.
 - `--verbose-log-prefix`: Add [context][namespace] to the log prefix
@@ -132,6 +133,8 @@ If you need to, you may specify `--no-prune` to disable all pruning behaviour, b
 
 If you need to share a namespace with resources which are managed by other tools or indeed other krane deployments, you can supply the `--selector` option, such that only resources with labels matching the selector are considered for pruning.
 
+If you need to share a namespace with a different set of resources using the same YAML file, you can supply the `--selector` and `--selector-as-filter` options, such that only the resources that match the labels will be deployed. In each run of deploy, you can use different labels in `--selector` to deploy a different set of resources. Only the deployed resources in each run are considered for pruning.
+
 ### Using templates
 
 All templates must be YAML formatted.
@@ -441,13 +444,14 @@ Refer to `krane global-deploy help` for the authoritative set of options.
 - `--filenames / -f [PATHS]`: Accepts a list of directories and/or filenames to specify the set of directories/files that will be deployed. Use `-` to specify STDIN.
 - `--no-prune`: Skips pruning of resources that are no longer in your Kubernetes template set. Not recommended, as it allows your namespace to accumulate cruft that is not reflected in your deploy directory.
 - `--selector`: Instructs krane to only prune resources which match the specified label selector, such as `environment=staging`. By using this option, all resource templates must specify matching labels. See [Sharing a namespace](#sharing-a-namespace) below.
+- `--selector-as-filter`: Instructs krane to only deploy resources that are filtered by the specified labels in `--selector`. The deploy will not fail if not all resources match the labels. This is useful if you only want to deploy a subset of resources within a given YAML file. See [Sharing a namespace](#sharing-a-namespace) below.
 - `--global-timeout=duration`: Raise a timeout error if it takes longer than _duration_ for any
   resource to deploy.
 - `--no-verify-result`: Skip verification that resources correctly deployed.
 
 # krane restart
 
-`krane restart` is a tool for restarting all of the pods in one or more deployments. It triggers the restart by
+`krane restart` is a tool for restarting all of the pods in one or more deployments, statefulsets, and/or daemonsets. It triggers the restart by patching template metadata with the `kubectl.kubernetes.io/restartedAt` annotation (with the value being an RFC 3339 representation of the current time). Note this is the manner in which `kubectl rollout restart` itself triggers restarts.
 
 ## Usage
 
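The restart mechanism described above can be sketched as a strategic-merge patch body. This is a minimal illustration (`restart_patch` is a hypothetical helper, not part of krane's API); the annotation key and RFC 3339 value are what the README describes.

```ruby
require 'date'

# Build the patch that triggers a rolling restart: merge a
# kubectl.kubernetes.io/restartedAt annotation into the workload's pod
# template metadata, valued with the current time in RFC 3339 format.
def restart_patch(now: Time.now.utc)
  {
    spec: {
      template: {
        metadata: {
          annotations: {
            "kubectl.kubernetes.io/restartedAt" => now.to_datetime.rfc3339,
          },
        },
      },
    },
  }
end
```

Because only pod template metadata changes, the workload controller performs an ordinary rolling update rather than deleting pods directly.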
data/dev.yml
CHANGED
@@ -13,7 +13,7 @@ up:
 - custom:
     name: Minikube Cluster
     met?: test $(minikube status | grep Running | wc -l) -ge 2 && $(minikube status | grep -q 'Configured')
-    meet: minikube start --kubernetes-version=v1.
+    meet: minikube start --kubernetes-version=v1.18.18 --vm-driver=hyperkit
     down: minikube stop
 commands:
   reset-minikube: minikube delete && rm -rf ~/.minikube
data/lib/krane/cli/deploy_command.rb
CHANGED
@@ -25,6 +25,10 @@ module Krane
                      default: true },
         "selector" => { type: :string, banner: "'label=value'",
                         desc: "Select workloads by selector(s)" },
+        "selector-as-filter" => { type: :boolean,
+                                  desc: "Use --selector as a label filter to deploy only a subset "\
+                                        "of the provided resources",
+                                  default: false },
         "verbose-log-prefix" => { type: :boolean, desc: "Add [context][namespace] to the log prefix",
                                   default: false },
         "verify-result" => { type: :boolean, default: true,
@@ -37,6 +41,11 @@ module Krane
         require 'krane/label_selector'
 
         selector = ::Krane::LabelSelector.parse(options[:selector]) if options[:selector]
+        selector_as_filter = options['selector-as-filter']
+
+        if selector_as_filter && !selector
+          raise(Thor::RequiredArgumentMissingError, '--selector must be set when --selector-as-filter is set')
+        end
 
         logger = ::Krane::FormattedLogger.build(namespace, context,
           verbose_prefix: options['verbose-log-prefix'])
@@ -60,6 +69,7 @@ module Krane
           logger: logger,
           global_timeout: ::Krane::DurationParser.new(options["global-timeout"]).parse!.to_i,
           selector: selector,
+          selector_as_filter: selector_as_filter,
           protected_namespaces: protected_namespaces,
         )
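The guard added above can be shown standalone. This is a sketch, not krane's code: `validate_selector_options!` is a hypothetical name, and it raises `ArgumentError` where krane raises `Thor::RequiredArgumentMissingError`.

```ruby
# --selector-as-filter is only meaningful together with --selector, so the
# CLI rejects the former when the latter is absent.
def validate_selector_options!(options)
  selector_as_filter = options['selector-as-filter']
  if selector_as_filter && !options['selector']
    raise ArgumentError, '--selector must be set when --selector-as-filter is set'
  end
  selector_as_filter
end
```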
data/lib/krane/cli/global_deploy_command.rb
CHANGED
@@ -16,6 +16,10 @@ module Krane
                              desc: "Verify workloads correctly deployed" },
         "selector" => { type: :string, banner: "'label=value'", required: true,
                         desc: "Select workloads owned by selector(s)" },
+        "selector-as-filter" => { type: :boolean,
+                                  desc: "Use --selector as a label filter to deploy only a subset "\
+                                        "of the provided resources",
+                                  default: false },
         "prune" => { type: :boolean, desc: "Enable deletion of resources that match"\
                      " the provided selector and do not appear in the provided templates",
                      default: true },
@@ -28,6 +32,11 @@ module Krane
         require 'krane/duration_parser'
 
         selector = ::Krane::LabelSelector.parse(options[:selector])
+        selector_as_filter = options['selector-as-filter']
+
+        if selector_as_filter && !selector
+          raise(Thor::RequiredArgumentMissingError, '--selector must be set when --selector-as-filter is set')
+        end
 
         filenames = options[:filenames].dup
         filenames << "-" if options[:stdin]
@@ -41,6 +50,7 @@ module Krane
           filenames: paths,
           global_timeout: ::Krane::DurationParser.new(options["global-timeout"]).parse!.to_i,
           selector: selector,
+          selector_as_filter: selector_as_filter,
         )
 
         deploy.run!(
data/lib/krane/cli/restart_command.rb
CHANGED
@@ -6,7 +6,11 @@ module Krane
       DEFAULT_RESTART_TIMEOUT = '300s'
       OPTIONS = {
         "deployments" => { type: :array, banner: "list of deployments",
-                           desc: "List of
+                           desc: "List of deployment names to restart", default: [] },
+        "statefulsets" => { type: :array, banner: "list of statefulsets",
+                            desc: "List of statefulset names to restart", default: [] },
+        "daemonsets" => { type: :array, banner: "list of daemonsets",
+                          desc: "List of daemonset names to restart", default: [] },
         "global-timeout" => { type: :string, banner: "duration", default: DEFAULT_RESTART_TIMEOUT,
                               desc: "Max duration to monitor workloads correctly restarted" },
         "selector" => { type: :string, banner: "'label=value'",
@@ -25,6 +29,8 @@ module Krane
         )
         restart.run!(
           deployments: options[:deployments],
+          statefulsets: options[:statefulsets],
+          daemonsets: options[:daemonsets],
           selector: selector,
           verify_result: options["verify-result"]
         )
data/lib/krane/deploy_task.rb
CHANGED
@@ -100,12 +100,13 @@ module Krane
     # @param bindings [Hash] Bindings parsed by Krane::BindingsParser
     # @param global_timeout [Integer] Timeout in seconds
     # @param selector [Hash] Selector(s) parsed by Krane::LabelSelector
+    # @param selector_as_filter [Boolean] Allow selecting a subset of Kubernetes resource templates to deploy
     # @param filenames [Array<String>] An array of filenames and/or directories containing templates (*required*)
     # @param protected_namespaces [Array<String>] Array of protected Kubernetes namespaces (defaults
     #   to Krane::DeployTask::PROTECTED_NAMESPACES)
     # @param render_erb [Boolean] Enable ERB rendering
     def initialize(namespace:, context:, current_sha: nil, logger: nil, kubectl_instance: nil, bindings: {},
-      global_timeout: nil, selector: nil, filenames: [], protected_namespaces: nil,
+      global_timeout: nil, selector: nil, selector_as_filter: false, filenames: [], protected_namespaces: nil,
       render_erb: false, kubeconfig: nil)
       @logger = logger || Krane::FormattedLogger.build(namespace, context)
       @template_sets = TemplateSets.from_dirs_and_files(paths: filenames, logger: @logger, render_erb: render_erb)
@@ -118,6 +119,7 @@ module Krane
       @kubectl = kubectl_instance
       @global_timeout = global_timeout
       @selector = selector
+      @selector_as_filter = selector_as_filter
       @protected_namespaces = protected_namespaces || PROTECTED_NAMESPACES
       @render_erb = render_erb
     end
@@ -273,6 +275,7 @@ module Krane
 
       confirm_ejson_keys_not_prunable if prune
       @logger.info("Using resource selector #{@selector}") if @selector
+      @logger.info("Only deploying resources filtered by labels in selector") if @selector && @selector_as_filter
       @namespace_tags |= tags_from_namespace_labels
       @logger.info("All required parameters and files are present")
     end
@@ -295,6 +298,7 @@ module Krane
       batchable_resources, individuals = partition_dry_run_resources(resources.dup)
       batch_dry_run_success = kubectl.server_dry_run_enabled? && validate_dry_run(batchable_resources)
       individuals += batchable_resources unless batch_dry_run_success
+      resources.select! { |r| r.selected?(@selector) } if @selector_as_filter
       Krane::Concurrency.split_across_threads(resources) do |r|
         r.validate_definition(kubectl: kubectl, selector: @selector, dry_run: individuals.include?(r))
       end
data/lib/krane/global_deploy_task.rb
CHANGED
@@ -33,8 +33,10 @@ module Krane
     # @param context [String] Kubernetes context (*required*)
     # @param global_timeout [Integer] Timeout in seconds
     # @param selector [Hash] Selector(s) parsed by Krane::LabelSelector (*required*)
+    # @param selector_as_filter [Boolean] Allow selecting a subset of Kubernetes resource templates to deploy
     # @param filenames [Array<String>] An array of filenames and/or directories containing templates (*required*)
-    def initialize(context:, global_timeout: nil, selector: nil,
+    def initialize(context:, global_timeout: nil, selector: nil, selector_as_filter: false,
+      filenames: [], logger: nil, kubeconfig: nil)
       template_paths = filenames.map { |path| File.expand_path(path) }
 
       @task_config = TaskConfig.new(context, nil, logger, kubeconfig)
@@ -42,6 +44,7 @@ module Krane
         logger: @task_config.logger, render_erb: false)
       @global_timeout = global_timeout
       @selector = selector
+      @selector_as_filter = selector_as_filter
     end
 
     # Runs the task, returning a boolean representing success or failure
@@ -130,6 +133,7 @@ module Krane
     def validate_resources(resources)
       validate_globals(resources)
 
+      resources.select! { |r| r.selected?(@selector) } if @selector_as_filter
       Concurrency.split_across_threads(resources) do |r|
         r.validate_definition(kubectl: @kubectl, selector: @selector)
       end
data/lib/krane/renderer.rb
CHANGED
@@ -58,7 +58,7 @@ module Krane
       template = File.read(partial_path)
       expanded_template = ERB.new(template, trim_mode: '-').result(erb_binding)
 
-      docs = Psych.parse_stream(expanded_template, partial_path)
+      docs = Psych.parse_stream(expanded_template, filename: partial_path)
       # If the partial contains multiple documents or has an explicit document header,
       # we know it cannot validly be indented in the parent, so return it immediately.
       return expanded_template unless docs.children.one? && docs.children.first.implicit
data/lib/krane/restart_task.rb
CHANGED
@@ -22,6 +22,8 @@ module Krane
     HTTP_OK_RANGE = 200..299
     ANNOTATION = "shipit.shopify.io/restart"
 
+    RESTART_TRIGGER_ANNOTATION = "kubectl.kubernetes.io/restartedAt"
+
     attr_reader :task_config
 
     delegate :kubeclient_builder, to: :task_config
@@ -58,33 +60,41 @@ module Krane
     # @param verify_result [Boolean] Wait for completion and verify success
     #
     # @return [nil]
-    def run!(deployments:
+    def run!(deployments: [], statefulsets: [], daemonsets: [], selector: nil, verify_result: true)
       start = Time.now.utc
       @logger.reset
 
       @logger.phase_heading("Initializing restart")
       verify_config!
-      deployments =
+      deployments, statefulsets, daemonsets = identify_target_workloads(deployments, statefulsets,
+        daemonsets, selector: selector)
 
-      @logger.phase_heading("Triggering restart by
+      @logger.phase_heading("Triggering restart by annotating pod template #{RESTART_TRIGGER_ANNOTATION} annotation")
       patch_kubeclient_deployments(deployments)
+      patch_kubeclient_statefulsets(statefulsets)
+      patch_kubeclient_daemonsets(daemonsets)
 
       if verify_result
         @logger.phase_heading("Waiting for rollout")
-        resources = build_watchables(deployments, start)
+        resources = build_watchables(deployments, start, Deployment)
+        resources += build_watchables(statefulsets, start, StatefulSet)
+        resources += build_watchables(daemonsets, start, DaemonSet)
         verify_restart(resources)
       else
        warning = "Result verification is disabled for this task"
        @logger.summary.add_paragraph(ColorizedString.new(warning).yellow)
       end
-      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+        tags: tags('success', deployments, statefulsets, daemonsets))
      @logger.print_summary(:success)
    rescue DeploymentTimeoutError
-      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+        tags: tags('timeout', deployments, statefulsets, daemonsets))
      @logger.print_summary(:timed_out)
      raise
    rescue FatalDeploymentError => error
-      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+        tags: tags('failure', deployments, statefulsets, daemonsets))
      @logger.summary.add_action(error.message) if error.message != error.class.to_s
      @logger.print_summary(:failure)
      raise
@@ -93,66 +103,140 @@ module Krane
 
    private
 
-    def tags(status, deployments)
-      %W(namespace:#{@namespace} context:#{@context} status:#{status} deployments:#{deployments.to_a.length}
+    def tags(status, deployments, statefulsets, daemonsets)
+      %W(namespace:#{@namespace} context:#{@context} status:#{status} deployments:#{deployments.to_a.length}
+        statefulsets:#{statefulsets.to_a.length} daemonsets:#{daemonsets.to_a.length})
     end
 
-    def
-      if deployment_names.
-        @logger.info("Configured to restart all
-        apps_v1_kubeclient.get_deployments(namespace: @namespace)
+    def identify_target_workloads(deployment_names, statefulset_names, daemonset_names, selector: nil)
+      if deployment_names.blank? && statefulset_names.blank? && daemonset_names.blank?
+        if selector.nil?
+          @logger.info("Configured to restart all workloads with the `#{ANNOTATION}` annotation")
         else
-          selector_string = selector.to_s
           @logger.info(
-            "Configured to restart all
+            "Configured to restart all workloads with the `#{ANNOTATION}` annotation and #{selector} selector"
           )
-          apps_v1_kubeclient.get_deployments(namespace: @namespace, label_selector: selector_string)
         end
-        deployments
+        deployments = identify_target_deployments(selector: selector)
+        statefulsets = identify_target_statefulsets(selector: selector)
+        daemonsets = identify_target_daemonsets(selector: selector)
 
-        if deployments.none?
-          raise FatalRestartError, "no deployments with the `#{ANNOTATION}`
+        if deployments.none? && statefulsets.none? && daemonsets.none?
+          raise FatalRestartError, "no deployments, statefulsets, or daemonsets, with the `#{ANNOTATION}` " \
+            "annotation found in namespace #{@namespace}"
        end
-      elsif deployment_names.empty?
-        raise FatalRestartError, "Configured to restart deployments by name, but list of names was blank"
      elsif !selector.nil?
-        raise FatalRestartError, "Can't specify
+        raise FatalRestartError, "Can't specify workload names and selector at the same time"
      else
+        deployments, statefulsets, daemonsets = identify_target_workloads_by_name(deployment_names,
+          statefulset_names, daemonset_names)
+        if deployments.none? && statefulsets.none? && daemonsets.none?
+          error_msgs = []
+          error_msgs << "no deployments with names #{list} found in namespace #{@namespace}" if deployment_names
+          error_msgs << "no statefulsets with names #{list} found in namespace #{@namespace}" if statefulset_names
+          error_msgs << "no daemonsets with names #{list} found in namespace #{@namespace}" if daemonset_names
+          raise FatalRestartError, error_msgs.join(', ')
        end
      end
-      deployments
+      [deployments, statefulsets, daemonsets]
     end
 
-    def
+    def identify_target_workloads_by_name(deployment_names, statefulset_names, daemonset_names)
+      deployment_names = deployment_names.uniq
+      statefulset_names = statefulset_names.uniq
+      daemonset_names = daemonset_names.uniq
+
+      if deployment_names.present?
+        @logger.info("Configured to restart deployments by name: #{deployment_names.join(', ')}")
+      end
+      if statefulset_names.present?
+        @logger.info("Configured to restart statefulsets by name: #{statefulset_names.join(', ')}")
+      end
+      if daemonset_names.present?
+        @logger.info("Configured to restart daemonsets by name: #{daemonset_names.join(', ')}")
+      end
+
+      [fetch_deployments(deployment_names), fetch_statefulsets(statefulset_names), fetch_daemonsets(daemonset_names)]
+    end
+
+    def identify_target_deployments(selector: nil)
+      deployments = if selector.nil?
+        apps_v1_kubeclient.get_deployments(namespace: @namespace)
+      else
+        selector_string = selector.to_s
+        apps_v1_kubeclient.get_deployments(namespace: @namespace, label_selector: selector_string)
+      end
+      deployments.select { |d| d.dig(:metadata, :annotations, ANNOTATION) }
+    end
+
+    def identify_target_statefulsets(selector: nil)
+      statefulsets = if selector.nil?
+        apps_v1_kubeclient.get_stateful_sets(namespace: @namespace)
+      else
+        selector_string = selector.to_s
+        apps_v1_kubeclient.get_stateful_sets(namespace: @namespace, label_selector: selector_string)
+      end
+      statefulsets.select { |ss| ss.dig(:metadata, :annotations, ANNOTATION) }
+    end
+
+    def identify_target_daemonsets(selector: nil)
+      daemonsets = if selector.nil?
+        apps_v1_kubeclient.get_daemon_sets(namespace: @namespace)
+      else
+        selector_string = selector.to_s
+        apps_v1_kubeclient.get_daemon_sets(namespace: @namespace, label_selector: selector_string)
+      end
+      daemonsets.select { |ds| ds.dig(:metadata, :annotations, ANNOTATION) }
+    end
+
+    def build_watchables(kubeclient_resources, started, klass)
       kubeclient_resources.map do |d|
         definition = d.to_h.deep_stringify_keys
-        r =
+        r = klass.new(namespace: @namespace, context: @context, definition: definition, logger: @logger)
         r.deploy_started_at = started # we don't care what happened to the resource before the restart cmd ran
         r
       end
     end
 
     def patch_deployment_with_restart(record)
-      apps_v1_kubeclient.patch_deployment(
-      )
+      apps_v1_kubeclient.patch_deployment(record.metadata.name, build_patch_payload(record), @namespace)
+    end
+
+    def patch_statefulset_with_restart(record)
+      apps_v1_kubeclient.patch_stateful_set(record.metadata.name, build_patch_payload(record), @namespace)
+    end
+
+    def patch_daemonset_with_restart(record)
+      apps_v1_kubeclient.patch_daemon_set(record.metadata.name, build_patch_payload(record), @namespace)
     end
 
     def patch_kubeclient_deployments(deployments)
       deployments.each do |record|
         begin
           patch_deployment_with_restart(record)
-          @logger.info("Triggered
+          @logger.info("Triggered `Deployment/#{record.metadata.name}` restart")
+        rescue Kubeclient::HttpError => e
+          raise RestartAPIError.new(record.metadata.name, e.message)
+        end
+      end
+    end
+
+    def patch_kubeclient_statefulsets(statefulsets)
+      statefulsets.each do |record|
+        begin
+          patch_statefulset_with_restart(record)
+          @logger.info("Triggered `StatefulSet/#{record.metadata.name}` restart")
+        rescue Kubeclient::HttpError => e
+          raise RestartAPIError.new(record.metadata.name, e.message)
+        end
+      end
+    end
+
+    def patch_kubeclient_daemonsets(daemonsets)
+      daemonsets.each do |record|
+        begin
+          patch_daemonset_with_restart(record)
+          @logger.info("Triggered `DaemonSet/#{record.metadata.name}` restart")
         rescue Kubeclient::HttpError => e
           raise RestartAPIError.new(record.metadata.name, e.message)
         end
@@ -171,18 +255,38 @@ module Krane
       end
     end
 
-    def
+    def fetch_statefulsets(list)
+      list.map do |name|
+        record = nil
+        begin
+          record = apps_v1_kubeclient.get_stateful_set(name, @namespace)
+        rescue Kubeclient::ResourceNotFoundError
+          raise FatalRestartError, "StatefulSet `#{name}` not found in namespace `#{@namespace}`"
+        end
+        record
+      end
+    end
+
+    def fetch_daemonsets(list)
+      list.map do |name|
+        record = nil
+        begin
+          record = apps_v1_kubeclient.get_daemon_set(name, @namespace)
+        rescue Kubeclient::ResourceNotFoundError
+          raise FatalRestartError, "DaemonSet `#{name}` not found in namespace `#{@namespace}`"
+        end
+        record
+      end
+    end
+
+    def build_patch_payload(_deployment)
       {
         spec: {
           template: {
-            env: [{ name: "RESTARTED_AT", value: Time.now.to_i.to_s }],
-            }
-          end,
+            metadata: {
+              annotations: {
+                RESTART_TRIGGER_ANNOTATION => Time.now.utc.to_datetime.rfc3339,
+              },
            },
          },
        },
data/lib/krane/version.rb
CHANGED
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: krane
 version: !ruby/object:Gem::Version
-  version: 2.
+  version: 2.3.2
 platform: ruby
 authors:
 - Katrina Verey
@@ -10,7 +10,7 @@ authors:
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2021-
+date: 2021-11-18 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport
@@ -528,7 +528,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.2.
+rubygems_version: 3.2.20
 signing_key:
 specification_version: 4
 summary: A command line tool that helps you ship changes to a Kubernetes namespace