krane 2.1.8 → 2.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 0f0acd44dff6399afd220b377ae825e964d07da8a0574525ff7b3f966857bf80
-  data.tar.gz: 31c0f004b0f4f5db39e73b5fbde06b54bfce2bfbf393676f44b036397e303515
+  metadata.gz: 676fdc875449178de6e0f4ff1f649271bd691637e4e56d540c5e5ea71dc8ded4
+  data.tar.gz: 6a16a1e9e32947b0bd6aff7f679ca7f4958a4ef4d3c27e8855fc2a7a6e39abfa
 SHA512:
-  metadata.gz: 240a8713adc34a340ff3d127828c3a1a740d3543656b6b0c730a26db295b7b6f9c306d9fa53cf10ad425a00116d99895b2feec207aae94a1f3b3b5b571bef006
-  data.tar.gz: dd13e23ab5b37b152665ce4bd29a10cb77a30115938df6a50ef394b3bccf84ea51fcf12ba7590bd899c071fb083df70eeb94095b1a72086932a9e0122db1cdb3
+  metadata.gz: 20aad308cbb7c96ad518d0bb57026d6b95e6e8ced1f9eb968c2d144906920a45da272fdf20c94fd8d455a6265b266b6d10912573839e48b08c8162c382905774
+  data.tar.gz: 31711f6a966d5364d6e084eb67b8bdcff4e511e40967cc9fb0528a9e8dd1739ed3f59363ab1dd2ba122d40331019eaf2a9ef850830d6c92ddcdfc5a291bd5e89
data/.rubocop.yml CHANGED
@@ -1,5 +1,8 @@
-inherit_from:
-- http://shopify.github.io/ruby-style-guide/rubocop.yml
+inherit_gem:
+  rubocop-shopify: rubocop.yml
+
+Style/DateTime:
+  Enabled: false
 
 AllCops:
   TargetRubyVersion: 2.4
@@ -8,6 +8,14 @@ steps:
   run:
   - bundle: ~
   - bundle exec rubocop
+- label: 'Run Test Suite (:kubernetes: 1.21-latest :ruby: 3.0)'
+  command: bin/ci
+  agents:
+    queue: k8s-ci
+  env:
+    LOGGING_LEVEL: "4"
+    KUBERNETES_VERSION: v1.21-latest
+    RUBY_VERSION: "3.0"
 - label: 'Run Test Suite (:kubernetes: 1.20-latest :ruby: 3.0)'
   command: bin/ci
   agents:
@@ -38,17 +46,3 @@ steps:
   env:
     LOGGING_LEVEL: "4"
     KUBERNETES_VERSION: v1.17-latest
-- label: 'Run Test Suite (:kubernetes: 1.16-latest)'
-  command: bin/ci
-  agents:
-    queue: k8s-ci
-  env:
-    LOGGING_LEVEL: "4"
-    KUBERNETES_VERSION: v1.16-latest
-- label: 'Run Test Suite (:kubernetes: 1.15-latest)'
-  command: bin/ci
-  agents:
-    queue: k8s-ci
-  env:
-    LOGGING_LEVEL: "4"
-    KUBERNETES_VERSION: v1.15-latest
data/CHANGELOG.md CHANGED
@@ -1,5 +1,32 @@
 ## next
 
+## 2.3.0
+
+- Restart tasks now support restarting StatefulSets and DaemonSets, in addition to Deployments [#836](https://github.com/Shopify/krane/pull/836)
+
+## 2.2.0
+
+*Enhancements*
+
+- Add a new option `--selector-as-filter` to command `krane deploy` and `krane global-deploy` [#831](https://github.com/Shopify/krane/pull/831)
+
+## 2.1.10
+
+*Bug Fixes*
+
+- Don't gather prunable resources by calling uniq only on `kind`: use `group` as well. Otherwise certain resources may not be added to the prune whitelist if the same kind exists across multiple groups [#825](https://github.com/Shopify/krane/pull/825)
+- Fix resource discovery failures when API paths are not located at the root of the API server (this occurs, for example, when using Rancher proxy) [#827](https://github.com/Shopify/krane/pull/827)
+
+*Other*
+
+- Fix ERB deprecation of positional arguments [#828](https://github.com/Shopify/krane/pull/828)
+
+## 2.1.9
+
+*Other*
+
+- Don't package screenshots in the built gem to reduce size [#817](https://github.com/Shopify/krane/pull/817)
+
 ## 2.1.8
 
 *Other*
data/README.md CHANGED
@@ -117,6 +117,7 @@ Refer to `krane help` for the authoritative set of options.
 - `--global-timeout=duration`: Raise a timeout error if it takes longer than _duration_ for any
 resource to deploy.
 - `--selector`: Instructs krane to only prune resources which match the specified label selector, such as `environment=staging`. If you use this option, all resource templates must specify matching labels. See [Sharing a namespace](#sharing-a-namespace) below.
+- `--selector-as-filter`: Instructs krane to only deploy resources that are filtered by the specified labels in `--selector`. The deploy will not fail if not all resources match the labels. This is useful if you only want to deploy a subset of resources within a given YAML file. See [Sharing a namespace](#sharing-a-namespace) below.
 - `--no-verify-result`: Skip verification that workloads correctly deployed.
 - `--protected-namespaces=default kube-system kube-public`: Fail validation if a deploy is targeted at a protected namespace.
 - `--verbose-log-prefix`: Add [context][namespace] to the log prefix
@@ -132,6 +133,8 @@ If you need to, you may specify `--no-prune` to disable all pruning behaviour, b
 
 If you need to share a namespace with resources which are managed by other tools or indeed other krane deployments, you can supply the `--selector` option, such that only resources with labels matching the selector are considered for pruning.
 
+If you need to share a namespace between different sets of resources defined in the same YAML file, you can supply the `--selector` and `--selector-as-filter` options, such that only the resources matching the labels are deployed. On each deploy, you can pass different labels to `--selector` to deploy a different set of resources. Only the resources deployed in a given run are considered for pruning.
+
 ### Using templates
 
 All templates must be YAML formatted.
@@ -441,13 +444,14 @@ Refer to `krane global-deploy help` for the authoritative set of options.
 - `--filenames / -f [PATHS]`: Accepts a list of directories and/or filenames to specify the set of directories/files that will be deployed. Use `-` to specify STDIN.
 - `--no-prune`: Skips pruning of resources that are no longer in your Kubernetes template set. Not recommended, as it allows your namespace to accumulate cruft that is not reflected in your deploy directory.
 - `--selector`: Instructs krane to only prune resources which match the specified label selector, such as `environment=staging`. By using this option, all resource templates must specify matching labels. See [Sharing a namespace](#sharing-a-namespace) below.
+- `--selector-as-filter`: Instructs krane to only deploy resources that are filtered by the specified labels in `--selector`. The deploy will not fail if not all resources match the labels. This is useful if you only want to deploy a subset of resources within a given YAML file. See [Sharing a namespace](#sharing-a-namespace) below.
 - `--global-timeout=duration`: Raise a timeout error if it takes longer than _duration_ for any
 resource to deploy.
 - `--no-verify-result`: Skip verification that resources correctly deployed.
 
 # krane restart
 
-`krane restart` is a tool for restarting all of the pods in one or more deployments. It triggers the restart by touching the `RESTARTED_AT` environment variable in the deployment's podSpec. The rollout strategy defined for each deployment will be respected by the restart.
+`krane restart` is a tool for restarting all of the pods in one or more deployments, stateful sets, and/or daemon sets. It triggers the restart by patching template metadata with the `kubectl.kubernetes.io/restartedAt` annotation (with the value being an RFC 3339 representation of the current time). Note this is the manner in which `kubectl rollout restart` itself triggers restarts.
 
 ## Usage
 
data/dev.yml CHANGED
@@ -4,7 +4,7 @@ up:
 - ruby: 2.6.6 # Matches gemspec
 - bundler
 - homebrew:
-  - homebrew/cask/minikube
+  - minikube
   - hyperkit
 - custom:
     name: Install the minikube fork of driver-hyperkit
@@ -13,7 +13,7 @@ up:
 - custom:
     name: Minikube Cluster
     met?: test $(minikube status | grep Running | wc -l) -ge 2 && $(minikube status | grep -q 'Configured')
-    meet: minikube start --kubernetes-version=v1.15.12 --vm-driver=hyperkit
+    meet: minikube start --kubernetes-version=v1.18.18 --vm-driver=hyperkit
     down: minikube stop
 commands:
   reset-minikube: minikube delete && rm -rf ~/.minikube
data/krane.gemspec CHANGED
@@ -17,7 +17,7 @@ Gem::Specification.new do |spec|
   spec.license = "MIT"
 
   spec.files = %x(git ls-files -z).split("\x0").reject do |f|
-    f.match(%r{^(test|spec|features)/})
+    f.match(%r{^(test|spec|features|screenshots)/})
   end
   spec.bindir = "exe"
   spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
@@ -57,5 +57,6 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency("ruby-prof")
   spec.add_development_dependency("ruby-prof-flamegraph")
   spec.add_development_dependency("rubocop", "~> 0.89.1")
+  spec.add_development_dependency("rubocop-shopify", "~> 1.0.5")
   spec.add_development_dependency("simplecov")
 end
@@ -25,6 +25,10 @@ module Krane
                      default: true },
       "selector" => { type: :string, banner: "'label=value'",
                       desc: "Select workloads by selector(s)" },
+      "selector-as-filter" => { type: :boolean,
+                                desc: "Use --selector as a label filter to deploy only a subset "\
+                                      "of the provided resources",
+                                default: false },
       "verbose-log-prefix" => { type: :boolean, desc: "Add [context][namespace] to the log prefix",
                                 default: false },
       "verify-result" => { type: :boolean, default: true,
@@ -37,6 +41,11 @@ module Krane
       require 'krane/label_selector'
 
       selector = ::Krane::LabelSelector.parse(options[:selector]) if options[:selector]
+      selector_as_filter = options['selector-as-filter']
+
+      if selector_as_filter && !selector
+        raise(Thor::RequiredArgumentMissingError, '--selector must be set when --selector-as-filter is set')
+      end
 
       logger = ::Krane::FormattedLogger.build(namespace, context,
         verbose_prefix: options['verbose-log-prefix'])
@@ -60,6 +69,7 @@ module Krane
         logger: logger,
         global_timeout: ::Krane::DurationParser.new(options["global-timeout"]).parse!.to_i,
         selector: selector,
+        selector_as_filter: selector_as_filter,
         protected_namespaces: protected_namespaces,
       )
 
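The guard added above is a simple precondition: `--selector-as-filter` is meaningless without `--selector`, so the command raises before doing any work. A minimal sketch of the same check, with plain `ArgumentError` standing in for `Thor::RequiredArgumentMissingError` (which requires Thor to be loaded):

```ruby
# Raises when the filter flag is passed without a selector; returns true otherwise.
def check_selector_args!(selector, selector_as_filter)
  if selector_as_filter && !selector
    raise ArgumentError, '--selector must be set when --selector-as-filter is set'
  end
  true
end

puts check_selector_args!({ "app" => "web" }, true) # => true
```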
@@ -16,6 +16,10 @@ module Krane
                            desc: "Verify workloads correctly deployed" },
       "selector" => { type: :string, banner: "'label=value'", required: true,
                       desc: "Select workloads owned by selector(s)" },
+      "selector-as-filter" => { type: :boolean,
+                                desc: "Use --selector as a label filter to deploy only a subset "\
+                                      "of the provided resources",
+                                default: false },
       "prune" => { type: :boolean, desc: "Enable deletion of resources that match"\
                    " the provided selector and do not appear in the provided templates",
                    default: true },
@@ -28,6 +32,11 @@ module Krane
       require 'krane/duration_parser'
 
       selector = ::Krane::LabelSelector.parse(options[:selector])
+      selector_as_filter = options['selector-as-filter']
+
+      if selector_as_filter && !selector
+        raise(Thor::RequiredArgumentMissingError, '--selector must be set when --selector-as-filter is set')
+      end
 
       filenames = options[:filenames].dup
       filenames << "-" if options[:stdin]
@@ -41,6 +50,7 @@ module Krane
         filenames: paths,
         global_timeout: ::Krane::DurationParser.new(options["global-timeout"]).parse!.to_i,
         selector: selector,
+        selector_as_filter: selector_as_filter,
       )
 
       deploy.run!(
@@ -6,7 +6,11 @@ module Krane
     DEFAULT_RESTART_TIMEOUT = '300s'
     OPTIONS = {
       "deployments" => { type: :array, banner: "list of deployments",
-                         desc: "List of workload names to restart" },
+                         desc: "List of deployment names to restart", default: [] },
+      "statefulsets" => { type: :array, banner: "list of statefulsets",
+                          desc: "List of statefulset names to restart", default: [] },
+      "daemonsets" => { type: :array, banner: "list of daemonsets",
+                        desc: "List of daemonset names to restart", default: [] },
       "global-timeout" => { type: :string, banner: "duration", default: DEFAULT_RESTART_TIMEOUT,
                             desc: "Max duration to monitor workloads correctly restarted" },
       "selector" => { type: :string, banner: "'label=value'",
@@ -25,6 +29,8 @@ module Krane
       )
       restart.run!(
         deployments: options[:deployments],
+        statefulsets: options[:statefulsets],
+        daemonsets: options[:daemonsets],
         selector: selector,
         verify_result: options["verify-result"]
       )
@@ -34,7 +34,7 @@ module Krane
       end
       responses.flat_map do |path, resources|
         resources.map { |r| resource_hash(path, namespaced, r) }
-      end.compact.uniq { |r| r["kind"] }
+      end.compact.uniq { |r| "#{r['apigroup']}/#{r['kind']}" }
     end
 
     def fetch_mutating_webhook_configurations
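The `uniq` key change above is the pruning fix from 2.1.10 (#825): keying on `kind` alone collapses distinct resources that merely share a kind name across API groups. A minimal sketch with hypothetical discovery hashes shaped like krane's:

```ruby
# Two resource kinds both named "Ingress", living in different API groups.
resources = [
  { "apigroup" => "networking.k8s.io", "kind" => "Ingress" },
  { "apigroup" => "extensions",        "kind" => "Ingress" },
]

old_key = resources.uniq { |r| r["kind"] }                       # collapses both into one entry
new_key = resources.uniq { |r| "#{r['apigroup']}/#{r['kind']}" } # keeps both

puts old_key.length # => 1
puts new_key.length # => 2
```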
@@ -52,9 +52,20 @@
 
     private
 
+    # During discovery, the API paths may not actually be at the root, so we must programmatically find it.
+    def base_api_path
+      @base_api_path ||= begin
+        raw_response, err, st = kubectl.run("config", "view", "--minify", "--output",
+          "jsonpath={.clusters[*].cluster.server}", attempts: 5, use_namespace: false)
+        raise FatalKubeAPIError, "Error retrieving cluster url: #{err}" unless st.success?
+
+        URI(raw_response).path.blank? ? "/" : URI(raw_response).path
+      end
+    end
+
     def api_paths
       @api_path_cache["/"] ||= begin
-        raw_json, err, st = kubectl.run("get", "--raw", "/", attempts: 5, use_namespace: false)
+        raw_json, err, st = kubectl.run("get", "--raw", base_api_path, attempts: 5, use_namespace: false)
         paths = if st.success?
           JSON.parse(raw_json)["paths"]
         else
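The new `base_api_path` helper boils down to extracting the path component of the cluster URL reported by `kubectl config view`. A standalone sketch of that normalization, using plain `String#empty?` in place of ActiveSupport's `blank?`:

```ruby
require 'uri'

# Returns the API base path for a cluster URL: "/" when the API server is at
# the root, or the proxy prefix when it is served behind one (e.g. Rancher).
def base_api_path(cluster_url)
  path = URI(cluster_url).path
  path.empty? ? "/" : path
end

puts base_api_path("https://k8s.example.com:6443")                   # => "/"
puts base_api_path("https://rancher.example.com/k8s/clusters/c-abc") # => "/k8s/clusters/c-abc"
```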
@@ -100,12 +100,13 @@ module Krane
     # @param bindings [Hash] Bindings parsed by Krane::BindingsParser
     # @param global_timeout [Integer] Timeout in seconds
     # @param selector [Hash] Selector(s) parsed by Krane::LabelSelector
+    # @param selector_as_filter [Boolean] Allow selecting a subset of Kubernetes resource templates to deploy
     # @param filenames [Array<String>] An array of filenames and/or directories containing templates (*required*)
     # @param protected_namespaces [Array<String>] Array of protected Kubernetes namespaces (defaults
     #   to Krane::DeployTask::PROTECTED_NAMESPACES)
     # @param render_erb [Boolean] Enable ERB rendering
     def initialize(namespace:, context:, current_sha: nil, logger: nil, kubectl_instance: nil, bindings: {},
-      global_timeout: nil, selector: nil, filenames: [], protected_namespaces: nil,
+      global_timeout: nil, selector: nil, selector_as_filter: false, filenames: [], protected_namespaces: nil,
       render_erb: false, kubeconfig: nil)
       @logger = logger || Krane::FormattedLogger.build(namespace, context)
       @template_sets = TemplateSets.from_dirs_and_files(paths: filenames, logger: @logger, render_erb: render_erb)
@@ -118,6 +119,7 @@ module Krane
       @kubectl = kubectl_instance
       @global_timeout = global_timeout
       @selector = selector
+      @selector_as_filter = selector_as_filter
       @protected_namespaces = protected_namespaces || PROTECTED_NAMESPACES
       @render_erb = render_erb
     end
@@ -273,6 +275,7 @@ module Krane
 
       confirm_ejson_keys_not_prunable if prune
       @logger.info("Using resource selector #{@selector}") if @selector
+      @logger.info("Only deploying resources filtered by labels in selector") if @selector && @selector_as_filter
       @namespace_tags |= tags_from_namespace_labels
       @logger.info("All required parameters and files are present")
     end
@@ -295,6 +298,7 @@ module Krane
       batchable_resources, individuals = partition_dry_run_resources(resources.dup)
       batch_dry_run_success = kubectl.server_dry_run_enabled? && validate_dry_run(batchable_resources)
       individuals += batchable_resources unless batch_dry_run_success
+      resources.select! { |r| r.selected?(@selector) } if @selector_as_filter
       Krane::Concurrency.split_across_threads(resources) do |r|
         r.validate_definition(kubectl: kubectl, selector: @selector, dry_run: individuals.include?(r))
       end
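The one-line filtering step above is the heart of `--selector-as-filter`: before validation, resources whose labels don't satisfy the selector are dropped rather than failing the deploy. A sketch with a hypothetical `Resource` struct standing in for `Krane::KubernetesResource`:

```ruby
# Minimal stand-in resource exposing the same selected? contract.
Resource = Struct.new(:name, :labels) do
  def selected?(selector)
    # Every selector pair must appear among the resource's labels.
    selector.nil? || selector <= labels
  end
end

resources = [
  Resource.new("web", { "app" => "web" }),
  Resource.new("api", { "app" => "api" }),
]
selector = { "app" => "web" }

resources.select! { |r| r.selected?(selector) }
puts resources.map(&:name).inspect # => ["web"]
```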
@@ -33,8 +33,10 @@ module Krane
     # @param context [String] Kubernetes context (*required*)
     # @param global_timeout [Integer] Timeout in seconds
     # @param selector [Hash] Selector(s) parsed by Krane::LabelSelector (*required*)
+    # @param selector_as_filter [Boolean] Allow selecting a subset of Kubernetes resource templates to deploy
     # @param filenames [Array<String>] An array of filenames and/or directories containing templates (*required*)
-    def initialize(context:, global_timeout: nil, selector: nil, filenames: [], logger: nil, kubeconfig: nil)
+    def initialize(context:, global_timeout: nil, selector: nil, selector_as_filter: false,
+      filenames: [], logger: nil, kubeconfig: nil)
       template_paths = filenames.map { |path| File.expand_path(path) }
 
       @task_config = TaskConfig.new(context, nil, logger, kubeconfig)
@@ -42,6 +44,7 @@ module Krane
         logger: @task_config.logger, render_erb: false)
       @global_timeout = global_timeout
       @selector = selector
+      @selector_as_filter = selector_as_filter
     end
 
     # Runs the task, returning a boolean representing success or failure
@@ -130,6 +133,7 @@ module Krane
     def validate_resources(resources)
       validate_globals(resources)
 
+      resources.select! { |r| r.selected?(@selector) } if @selector_as_filter
       Concurrency.split_across_threads(resources) do |r|
         r.validate_definition(kubectl: @kubectl, selector: @selector)
       end
@@ -499,6 +499,10 @@ module Krane
       @global || self.class::GLOBAL
     end
 
+    def selected?(selector)
+      selector.nil? || selector.to_h <= labels
+    end
+
     private
 
     def validate_timeout_annotation
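The new `selected?` predicate leans on Ruby's `Hash#<=` subset operator (available since Ruby 2.3): a resource matches when every selector key/value pair appears among its labels. For example:

```ruby
labels   = { "app" => "web", "team" => "platform" }
selector = { "app" => "web" }
other    = { "app" => "api" }

puts selector <= labels # => true  (every selector pair appears in labels)
puts other <= labels    # => false (the "app" value differs)
```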
@@ -39,7 +39,7 @@ module Krane
       erb_binding = TemplateContext.new(self).template_binding
       bind_template_variables(erb_binding, template_variables)
 
-      ERB.new(raw_template, nil, '-').result(erb_binding)
+      ERB.new(raw_template, trim_mode: '-').result(erb_binding)
     rescue InvalidPartialError => err
       err.parents = err.parents.dup.unshift(filename)
       err.filename = "#{err.filename} (partial included from: #{err.parents.join(' -> ')})"
@@ -56,7 +56,7 @@ module Krane
 
       partial_path = find_partial(partial)
       template = File.read(partial_path)
-      expanded_template = ERB.new(template, nil, '-').result(erb_binding)
+      expanded_template = ERB.new(template, trim_mode: '-').result(erb_binding)
 
       docs = Psych.parse_stream(expanded_template, partial_path)
       # If the partial contains multiple documents or has an explicit document header,
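The two ERB changes above address Ruby 2.6's deprecation of `ERB.new`'s positional `safe_level`/`trim_mode` arguments (#828); the `trim_mode:` keyword is the supported spelling and the behavior is unchanged. For instance, `trim_mode: '-'` still lets `<%-`/`-%>` suppress template newlines:

```ruby
require 'erb'

# Without trim mode '-', the loop tags would each emit a blank line.
template = "<%- [1, 2].each do |i| -%>\nitem <%= i %>\n<%- end -%>\n"
output = ERB.new(template, trim_mode: '-').result(binding)
puts output # => "item 1\nitem 2\n"
```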
@@ -22,6 +22,8 @@ module Krane
     HTTP_OK_RANGE = 200..299
     ANNOTATION = "shipit.shopify.io/restart"
 
+    RESTART_TRIGGER_ANNOTATION = "kubectl.kubernetes.io/restartedAt"
+
     attr_reader :task_config
 
     delegate :kubeclient_builder, to: :task_config
@@ -58,33 +60,41 @@
     # @param verify_result [Boolean] Wait for completion and verify success
     #
     # @return [nil]
-    def run!(deployments: nil, selector: nil, verify_result: true)
+    def run!(deployments: [], statefulsets: [], daemonsets: [], selector: nil, verify_result: true)
       start = Time.now.utc
       @logger.reset
 
       @logger.phase_heading("Initializing restart")
       verify_config!
-      deployments = identify_target_deployments(deployments, selector: selector)
+      deployments, statefulsets, daemonsets = identify_target_workloads(deployments, statefulsets,
+        daemonsets, selector: selector)
 
-      @logger.phase_heading("Triggering restart by touching ENV[RESTARTED_AT]")
+      @logger.phase_heading("Triggering restart by annotating pod template #{RESTART_TRIGGER_ANNOTATION} annotation")
       patch_kubeclient_deployments(deployments)
+      patch_kubeclient_statefulsets(statefulsets)
+      patch_kubeclient_daemonsets(daemonsets)
 
       if verify_result
         @logger.phase_heading("Waiting for rollout")
-        resources = build_watchables(deployments, start)
+        resources = build_watchables(deployments, start, Deployment)
+        resources += build_watchables(statefulsets, start, StatefulSet)
+        resources += build_watchables(daemonsets, start, DaemonSet)
         verify_restart(resources)
       else
         warning = "Result verification is disabled for this task"
         @logger.summary.add_paragraph(ColorizedString.new(warning).yellow)
       end
-      StatsD.client.distribution('restart.duration', StatsD.duration(start), tags: tags('success', deployments))
+      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+        tags: tags('success', deployments, statefulsets, daemonsets))
       @logger.print_summary(:success)
     rescue DeploymentTimeoutError
-      StatsD.client.distribution('restart.duration', StatsD.duration(start), tags: tags('timeout', deployments))
+      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+        tags: tags('timeout', deployments, statefulsets, daemonsets))
       @logger.print_summary(:timed_out)
       raise
     rescue FatalDeploymentError => error
-      StatsD.client.distribution('restart.duration', StatsD.duration(start), tags: tags('failure', deployments))
+      StatsD.client.distribution('restart.duration', StatsD.duration(start),
+        tags: tags('failure', deployments, statefulsets, daemonsets))
       @logger.summary.add_action(error.message) if error.message != error.class.to_s
       @logger.print_summary(:failure)
       raise
@@ -93,66 +103,140 @@
 
     private
 
-    def tags(status, deployments)
-      %W(namespace:#{@namespace} context:#{@context} status:#{status} deployments:#{deployments.to_a.length}})
+    def tags(status, deployments, statefulsets, daemonsets)
+      %W(namespace:#{@namespace} context:#{@context} status:#{status} deployments:#{deployments.to_a.length}
+         statefulsets:#{statefulsets.to_a.length} daemonsets:#{daemonsets.to_a.length}})
     end
 
-    def identify_target_deployments(deployment_names, selector: nil)
-      if deployment_names.nil?
-        deployments = if selector.nil?
-          @logger.info("Configured to restart all deployments with the `#{ANNOTATION}` annotation")
-          apps_v1_kubeclient.get_deployments(namespace: @namespace)
+    def identify_target_workloads(deployment_names, statefulset_names, daemonset_names, selector: nil)
+      if deployment_names.blank? && statefulset_names.blank? && daemonset_names.blank?
+        if selector.nil?
+          @logger.info("Configured to restart all workloads with the `#{ANNOTATION}` annotation")
         else
-          selector_string = selector.to_s
           @logger.info(
-            "Configured to restart all deployments with the `#{ANNOTATION}` annotation and #{selector_string} selector"
+            "Configured to restart all workloads with the `#{ANNOTATION}` annotation and #{selector} selector"
           )
-          apps_v1_kubeclient.get_deployments(namespace: @namespace, label_selector: selector_string)
         end
-        deployments.select! { |d| d.metadata.annotations[ANNOTATION] }
+        deployments = identify_target_deployments(selector: selector)
+        statefulsets = identify_target_statefulsets(selector: selector)
+        daemonsets = identify_target_daemonsets(selector: selector)
 
-        if deployments.none?
-          raise FatalRestartError, "no deployments with the `#{ANNOTATION}` annotation found in namespace #{@namespace}"
+        if deployments.none? && statefulsets.none? && daemonsets.none?
+          raise FatalRestartError, "no deployments, statefulsets, or daemonsets, with the `#{ANNOTATION}` " \
+            "annotation found in namespace #{@namespace}"
         end
-      elsif deployment_names.empty?
-        raise FatalRestartError, "Configured to restart deployments by name, but list of names was blank"
       elsif !selector.nil?
-        raise FatalRestartError, "Can't specify deployment names and selector at the same time"
+        raise FatalRestartError, "Can't specify workload names and selector at the same time"
       else
-        deployment_names = deployment_names.uniq
-        list = deployment_names.join(', ')
-        @logger.info("Configured to restart deployments by name: #{list}")
-
-        deployments = fetch_deployments(deployment_names)
-        if deployments.none?
-          raise FatalRestartError, "no deployments with names #{list} found in namespace #{@namespace}"
+        deployments, statefulsets, daemonsets = identify_target_workloads_by_name(deployment_names,
+          statefulset_names, daemonset_names)
+        if deployments.none? && statefulsets.none? && daemonsets.none?
+          error_msgs = []
+          error_msgs << "no deployments with names #{list} found in namespace #{@namespace}" if deployment_names
+          error_msgs << "no statefulsets with names #{list} found in namespace #{@namespace}" if statefulset_names
+          error_msgs << "no daemonsets with names #{list} found in namespace #{@namespace}" if daemonset_names
+          raise FatalRestartError, error_msgs.join(', ')
         end
       end
-      deployments
+      [deployments, statefulsets, daemonsets]
     end
 
-    def build_watchables(kubeclient_resources, started)
+    def identify_target_workloads_by_name(deployment_names, statefulset_names, daemonset_names)
+      deployment_names = deployment_names.uniq
+      statefulset_names = statefulset_names.uniq
+      daemonset_names = daemonset_names.uniq
+
+      if deployment_names.present?
+        @logger.info("Configured to restart deployments by name: #{deployment_names.join(', ')}")
+      end
+      if statefulset_names.present?
+        @logger.info("Configured to restart statefulsets by name: #{statefulset_names.join(', ')}")
+      end
+      if daemonset_names.present?
+        @logger.info("Configured to restart daemonsets by name: #{daemonset_names.join(', ')}")
+      end
+
+      [fetch_deployments(deployment_names), fetch_statefulsets(statefulset_names), fetch_daemonsets(daemonset_names)]
+    end
+
+    def identify_target_deployments(selector: nil)
+      deployments = if selector.nil?
+        apps_v1_kubeclient.get_deployments(namespace: @namespace)
+      else
+        selector_string = selector.to_s
+        apps_v1_kubeclient.get_deployments(namespace: @namespace, label_selector: selector_string)
+      end
+      deployments.select { |d| d.metadata.annotations[ANNOTATION] }
+    end
+
+    def identify_target_statefulsets(selector: nil)
+      statefulsets = if selector.nil?
+        apps_v1_kubeclient.get_stateful_sets(namespace: @namespace)
+      else
+        selector_string = selector.to_s
+        apps_v1_kubeclient.get_stateful_sets(namespace: @namespace, label_selector: selector_string)
+      end
+      statefulsets.select { |d| d.metadata.annotations[ANNOTATION] }
+    end
+
+    def identify_target_daemonsets(selector: nil)
+      daemonsets = if selector.nil?
+        apps_v1_kubeclient.get_daemon_sets(namespace: @namespace)
+      else
+        selector_string = selector.to_s
+        apps_v1_kubeclient.get_daemon_sets(namespace: @namespace, label_selector: selector_string)
+      end
+      daemonsets.select { |d| d.metadata.annotations[ANNOTATION] }
+    end
+
+    def build_watchables(kubeclient_resources, started, klass)
       kubeclient_resources.map do |d|
         definition = d.to_h.deep_stringify_keys
-        r = Deployment.new(namespace: @namespace, context: @context, definition: definition, logger: @logger)
+        r = klass.new(namespace: @namespace, context: @context, definition: definition, logger: @logger)
         r.deploy_started_at = started # we don't care what happened to the resource before the restart cmd ran
         r
       end
     end
 
     def patch_deployment_with_restart(record)
-      apps_v1_kubeclient.patch_deployment(
-        record.metadata.name,
-        build_patch_payload(record),
-        @namespace
-      )
+      apps_v1_kubeclient.patch_deployment(record.metadata.name, build_patch_payload(record), @namespace)
+    end
+
+    def patch_statefulset_with_restart(record)
+      apps_v1_kubeclient.patch_stateful_set(record.metadata.name, build_patch_payload(record), @namespace)
+    end
+
+    def patch_daemonset_with_restart(record)
+      apps_v1_kubeclient.patch_daemon_set(record.metadata.name, build_patch_payload(record), @namespace)
     end
 
     def patch_kubeclient_deployments(deployments)
       deployments.each do |record|
         begin
           patch_deployment_with_restart(record)
-          @logger.info("Triggered `#{record.metadata.name}` restart")
+          @logger.info("Triggered `Deployment/#{record.metadata.name}` restart")
+        rescue Kubeclient::HttpError => e
+          raise RestartAPIError.new(record.metadata.name, e.message)
+        end
+      end
+    end
+
+    def patch_kubeclient_statefulsets(statefulsets)
+      statefulsets.each do |record|
+        begin
+          patch_statefulset_with_restart(record)
+          @logger.info("Triggered `StatefulSet/#{record.metadata.name}` restart")
+        rescue Kubeclient::HttpError => e
+          raise RestartAPIError.new(record.metadata.name, e.message)
+        end
+      end
+    end
+
+    def patch_kubeclient_daemonsets(daemonsets)
+      daemonsets.each do |record|
+        begin
+          patch_daemonset_with_restart(record)
+          @logger.info("Triggered `DaemonSet/#{record.metadata.name}` restart")
         rescue Kubeclient::HttpError => e
           raise RestartAPIError.new(record.metadata.name, e.message)
         end
@@ -171,18 +255,38 @@
       end
     end
 
-    def build_patch_payload(deployment)
-      containers = deployment.spec.template.spec.containers
+    def fetch_statefulsets(list)
+      list.map do |name|
+        record = nil
+        begin
+          record = apps_v1_kubeclient.get_stateful_set(name, @namespace)
+        rescue Kubeclient::ResourceNotFoundError
+          raise FatalRestartError, "StatefulSet `#{name}` not found in namespace `#{@namespace}`"
+        end
+        record
+      end
+    end
+
+    def fetch_daemonsets(list)
+      list.map do |name|
+        record = nil
+        begin
+          record = apps_v1_kubeclient.get_daemon_set(name, @namespace)
+        rescue Kubeclient::ResourceNotFoundError
+          raise FatalRestartError, "DaemonSet `#{name}` not found in namespace `#{@namespace}`"
+        end
+        record
+      end
+    end
+
+    def build_patch_payload(_deployment)
       {
         spec: {
           template: {
-            spec: {
-              containers: containers.map do |container|
-                {
-                  name: container.name,
-                  env: [{ name: "RESTARTED_AT", value: Time.now.to_i.to_s }],
-                }
-              end,
+            metadata: {
+              annotations: {
+                RESTART_TRIGGER_ANNOTATION => Time.now.utc.to_datetime.rfc3339,
+              },
             },
           },
         },
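The rewritten `build_patch_payload` no longer touches container env vars; it stamps the pod template with a timestamp annotation, the same patch shape `kubectl rollout restart` produces. A standalone sketch of the payload:

```ruby
require 'json'
require 'date' # provides Time#to_datetime and RFC 3339 formatting

RESTART_TRIGGER_ANNOTATION = "kubectl.kubernetes.io/restartedAt"

# Patch body applied to the workload: annotating the pod-template metadata
# changes the template hash, which makes the controller roll the pods.
payload = {
  spec: {
    template: {
      metadata: {
        annotations: {
          RESTART_TRIGGER_ANNOTATION => Time.now.utc.to_datetime.rfc3339,
        },
      },
    },
  },
}

puts JSON.pretty_generate(payload)
```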
data/lib/krane/version.rb CHANGED
@@ -1,4 +1,4 @@
 # frozen_string_literal: true
 module Krane
-  VERSION = "2.1.8"
+  VERSION = "2.3.0"
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: krane
 version: !ruby/object:Gem::Version
-  version: 2.1.8
+  version: 2.3.0
 platform: ruby
 authors:
 - Katrina Verey
@@ -10,7 +10,7 @@ authors:
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2021-04-13 00:00:00.000000000 Z
+date: 2021-10-01 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport
@@ -374,6 +374,20 @@ dependencies:
     - - "~>"
       - !ruby/object:Gem::Version
         version: 0.89.1
+- !ruby/object:Gem::Dependency
+  name: rubocop-shopify
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 1.0.5
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 1.0.5
 - !ruby/object:Gem::Dependency
   name: simplecov
   requirement: !ruby/object:Gem::Requirement
@@ -494,11 +508,6 @@ files:
 - lib/krane/task_config_validator.rb
 - lib/krane/template_sets.rb
 - lib/krane/version.rb
-- screenshots/deploy-demo.gif
-- screenshots/migrate-logs.png
-- screenshots/missing-secret-fail.png
-- screenshots/success.png
-- screenshots/test-output.png
 homepage: https://github.com/Shopify/krane
 licenses:
 - MIT
@@ -519,7 +528,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.0.3
+rubygems_version: 3.2.20
 signing_key:
 specification_version: 4
 summary: A command line tool that helps you ship changes to a Kubernetes namespace