kubernetes-deploy 0.29.0 → 0.30.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a0641a291f2f01c007051cf5001bc97ecb9216b55e7204d954c920083e0d303a
- data.tar.gz: 2c2415cfd94a63f046b94e90c943a7f9e01a46207833c1d5d2a498ceeb1ff333
+ metadata.gz: 4e7827354d151cb81a0955882abede556706c6e82faabad54e4b44d36746927f
+ data.tar.gz: '083dd77e75f92608d6c04c59a8edab5dbc35da3725e3cddc7fee99706c3655d7'
  SHA512:
- metadata.gz: 18a4e86f4f144dfa23ebc191b7d06382605ada3b050e8e1c9e717cbc9db981f16b419fb27a342f4e1267663f8f82fb12bcebfe0697c9ac4ac40c15108ab73072
- data.tar.gz: 314bdadd3d8f864895c13e81193845f4632fa46dc23550f12955a5ffe7a7b15ed3df1e12d769d5aefdb8ee4b7e5c3e3e13d5f8378f5c828a2e4bf3c04db8c56c
+ metadata.gz: 8ee6de674126f680b470b35f33c08056095284fd29ee5a267781172b170e4de39f77d442a4e530d44ecee46ddd35a5323821f2dafbe20618aa7d0e30135eb801
+ data.tar.gz: 1746664b8c13c7c8bc64f2c6903bb62761b13f8d45e6177f82338776c2d67f3979d9e80b8a38b15290b7818d473b2967d18e1a8784cf72c30880c64f40ff5f6d
@@ -3,11 +3,29 @@
  *Important!*
  - The next release will be 1.0.0, which means that master will contain breaking changes.

+ ## 0.30.0
+
+ *Enhancements*
+ - **[Breaking change]** Added PersistentVolumeClaim to the prune whitelist. ([#573](https://github.com/Shopify/kubernetes-deploy/pull/573))
+   * To see what resources may be affected, run `kubectl get pvc -o jsonpath='{ range .items[*] }{.metadata.namespace}{ "\t" }{.metadata.name}{ "\t" }{.metadata.annotations}{ "\n" }{ end }' --all-namespaces | grep "last-applied"`
+   * To exclude a resource from kubernetes-deploy (and kubectl apply) management, remove the last-applied annotation: `kubectl annotate pvc $PVC_NAME kubectl.kubernetes.io/last-applied-configuration-`.
+ - Deploying global resources directly from `KubernetesDeploy::DeployTask` is disabled by default. You can pass `allow_globals: true` to keep the old behavior (see the sketch below). This will be disabled in the Krane version of the task, and a separate purpose-built task will be provided. [#567](https://github.com/Shopify/kubernetes-deploy/pull/567)
+ - Deploys of DaemonSets now better tolerate autoscaling: nodes that appear mid-deploy aren't required for convergence. [#580](https://github.com/Shopify/kubernetes-deploy/pull/580)
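For reference, here is a minimal sketch of opting into the old global-resource behavior from Ruby. It is based on the `DeployTask` constructor and `FormattedLogger.build` calls shown later in this diff; the namespace, context, and template path are placeholder values.

```ruby
require 'kubernetes-deploy'

# Placeholder namespace/context/paths. allow_globals: true opts back into the
# deprecated behavior of shipping cluster-scoped resources from this task.
task = KubernetesDeploy::DeployTask.new(
  namespace: "my-namespace",
  context: "my-k8s-cluster",
  current_sha: ENV["REVISION"],
  template_paths: ["test/fixtures/hello-cloud"],
  logger: KubernetesDeploy::FormattedLogger.build("my-namespace", "my-k8s-cluster"),
  allow_globals: true
)

task.run(verify_result: true, prune: true) # returns true/false
```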
+
+ ## 0.29.0
+
+ *Enhancements*
+ - `KubernetesDeploy::RenderTask` now supports a `template_paths` argument (see the sketch below). ([#555](https://github.com/Shopify/kubernetes-deploy/pull/546))
+ - We no longer hide errors from apply if all sensitive resources have passed server-dry-run validation. ([#570](https://github.com/Shopify/kubernetes-deploy/pull/570))
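A minimal sketch of the new argument, based on the `RenderTask` signature shown further down in this diff (the paths and bindings are placeholder values):

```ruby
require 'kubernetes-deploy'

task = KubernetesDeploy::RenderTask.new(
  current_sha: ENV["REVISION"],
  # template_paths supersedes the deprecated template_dir argument
  template_paths: ["config/deploy/production", "config/deploy/shared"],
  bindings: { "environment" => "production" }
)

# Rendered templates are written to the stream passed to run/run!
task.run($stdout)
```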
20
+
21
+
6
22
  *Bug Fixes*
7
23
  - Handle improper duration values more elegantly with better messaging
8
24
 
25
+
9
26
  *Other*
10
27
  - We now require Ruby 2.4.x since Ruby 2.3 is past EoL.
28
+ - Lock statsd-instrument to 2.3.X due to breaking changes in 2.5.0
11
29
 
12
30
  ## 0.28.0
13
31
 
data/Gemfile CHANGED
@@ -13,3 +13,4 @@ gem 'codecov', require: false
  gem 'ruby-prof', require: false
  gem 'ruby-prof-flamegraph', require: false
  gem 'minitest-reporters'
+ gem 'yard', require: false
data/README.md CHANGED
@@ -47,6 +47,7 @@ This repo also includes related tools for [running tasks](#kubernetes-run) and [
  * [Running tasks at the beginning of a deploy](#running-tasks-at-the-beginning-of-a-deploy)
  * [Deploying Kubernetes secrets (from EJSON)](#deploying-kubernetes-secrets-from-ejson)
  * [Deploying custom resources](#deploying-custom-resources)
+ * [Walk through the steps of a deployment](#deploy-walkthrough)

  **KUBERNETES-RESTART**
  * [Usage](#usage-1)
@@ -420,6 +421,89 @@ status:
  - `$.status.conditions[?(@.type == "Failed")].status == "True"` means that a failure condition has been fulfilled and the resource is considered failed.
  - Since `error_msg_path` is specified, kubernetes-deploy will log the contents of `$.status.conditions[?(@.type == "Failed")].message`, which in this case is: `resource is failed`.

+ ### Deploy walkthrough
+
+ Let's walk through what happens when you run the `deploy` task with [this directory of templates](https://github.com/Shopify/kubernetes-deploy/tree/master/test/fixtures/hello-cloud). You can see this for yourself by running the following command:
+
+ ```bash
+ krane deploy my-namespace my-k8s-cluster -f test/fixtures/hello-cloud --render-erb
+ ```
+
+ As soon as you run this, you'll see output streamed to STDERR.
+
+ #### Phase 1: Initializing deploy
+
+ In this phase, we:
+
+ - Perform basic validation to ensure the deploy can proceed. This includes checking that we can reach the context, that the context is valid, that the namespace exists within the context, and more. We validate as much as we can up front because we want to avoid leaving an incomplete deploy behind after a failure (especially important because there's no rollback support).
+ - List all the resources we want to deploy (as described in the template files we used).
+ - Render ERB templates and apply partials, if enabled (which is the case for this example). If enabled, we also perform basic validation on the parsed templates.
+
+ #### Phase 2: Checking initial resource statuses
+
+ In this phase, for each resource listed in the previous step, we ask Kubernetes for its current status. On a first deploy this will show many items as "Not Found"; when deploying a new version of an existing app, it could look like this:
+
+ ```
+ Certificate/services-foo-tls    Exists
+ Cloudsql/foo-production         Provisioned
+ Deployment/jobs                 3 replicas, 3 updatedReplicas, 3 availableReplicas
+ Deployment/web                  3 replicas, 3 updatedReplicas, 3 availableReplicas
+ Ingress/web                     Created
+ Memcached/foo-production        Healthy
+ Pod/db-migrate-856359           Unknown
+ Pod/upload-assets-856359        Unknown
+ Redis/foo-production            Healthy
+ Service/web                     Selects at least 1 pod
+ ```
+
+ The next phase is either "Predeploying priority resources" (if there are any) or "Deploying all resources". In this example we'll go through the former, since we do have predeployable resources.
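For reference, the gist of this phase in the gem's own code is a concurrent sync of every resource followed by a status printout; this is essentially the `check_initial_status` method from `deploy_task.rb` as it appears further down in this diff:

```ruby
def check_initial_status(resources)
  # Sync every resource against the cluster in parallel, then log its status.
  cache = ResourceCache.new(@task_config)
  KubernetesDeploy::Concurrency.split_across_threads(resources) { |r| r.sync(cache) }
  resources.each { |r| @logger.info(r.pretty_status) }
end
```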
+
+ #### Phase 3: Predeploying priority resources
+
+ This is the first phase that could modify the cluster.
+
+ In this phase we predeploy certain types of resources (e.g. `ConfigMap`, `PersistentVolumeClaim`, `Secret`, ...) to make sure the latest version will be available when resources that might consume them (e.g. `Deployment`) are deployed. This phase is skipped if the templates don't include any resources that need to be predeployed.
+
+ When this runs, we essentially run `kubectl apply` on those templates and periodically check the cluster for the current status of each resource so we can display error or success information. This looks different depending on the type of resource. If you're running the command described above, you should see something like this in the output:
+
+ ```
+ Deploying ConfigMap/hello-cloud-configmap-data (timeout: 30s)
+ Successfully deployed in 0.2s: ConfigMap/hello-cloud-configmap-data
+
+ Deploying PersistentVolumeClaim/hello-cloud-redis (timeout: 300s)
+ Successfully deployed in 3.3s: PersistentVolumeClaim/hello-cloud-redis
+
+ Deploying Role/role (timeout: 300s)
+ Don't know how to monitor resources of type Role. Assuming Role/role deployed successfully.
+ Successfully deployed in 0.2s: Role/role
+ ```
+
+ As you can see, different types of resources can have different timeout values and different success criteria; in some cases (such as Role) we don't know how to confirm success or failure, so we use a higher timeout value and assume the resource deployed successfully.
+
+ #### Phase 4: Deploying all resources
+
+ In this phase, we:
+
+ - Deploy all resources found in the templates, including the ones that were predeployed in the previous step (re-applying them should be a no-op for Kubernetes). We deploy everything so the pruning logic (described below) doesn't remove any predeployed resources.
+ - Prune resources not found in the templates (you can disable this with `--no-prune`; the programmatic equivalent is shown below).
+
+ Just like in the previous phase, we essentially run `kubectl apply` on those templates and periodically check the cluster for the current status of each resource so we can display error or success information.
+
+ If pruning is enabled (which, again, is the default), any [resource whose type is listed in `DeployTask.prune_whitelist`](https://github.com/Shopify/kubernetes-deploy/blob/ac42ad7c8c4f6f6b27e706d6642ebe002ca1f683/lib/kubernetes-deploy/deploy_task.rb#L80-L104) that we can find in the namespace but not in the templates will be removed. A message about pruning will be printed in the next phase if any resource matches this criterion.
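If you drive deploys from Ruby rather than the CLI, the `prune:` keyword on `run!` (whose signature appears later in this diff) is the counterpart of `--no-prune`; a minimal sketch with placeholder values:

```ruby
require 'kubernetes-deploy'

task = KubernetesDeploy::DeployTask.new(
  namespace: "my-namespace",
  context: "my-k8s-cluster",
  current_sha: ENV["REVISION"],
  template_paths: ["test/fixtures/hello-cloud"]
)

# prune: false is the programmatic equivalent of passing --no-prune.
task.run!(verify_result: true, prune: false)
```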
+
+ #### Result
+
+ The result section will show:
+ - A global status: if **all** resources were deployed successfully, this shows up as "SUCCESS"; if at least one resource failed to deploy (due to an error or timeout), it shows up as "FAILURE".
+ - A list of resources and their individual statuses: these show up as something like "Available", "Created", and "1 replica, 1 availableReplica, 1 readyReplica".
+
+ At this point the command also returns a status code:
+ - If it was a success, `0`
+ - If there was a timeout, `70`
+ - If any other failure happened, `1`
+
+ **On timeouts**: It's important to note that a single resource timeout or a global deploy timeout doesn't necessarily mean the operation failed. Since Kubernetes updates are asynchronous, something may simply have been too slow to converge within the configured time; in those cases, running the deploy again usually works (and should be a no-op for most, if not all, resources).
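A small sketch of acting on those exit codes when shelling out to the CLI from Ruby (the command is the example from the top of this walkthrough):

```ruby
# Run the example deploy and branch on the documented exit codes.
system("krane", "deploy", "my-namespace", "my-k8s-cluster",
       "-f", "test/fixtures/hello-cloud", "--render-erb")

case $?.exitstatus
when 0  then puts "deploy succeeded"
when 70 then puts "deploy timed out; re-running is usually safe since re-applies are no-ops"
else         puts "deploy failed"
end
```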
+
  # kubernetes-restart

  `kubernetes-restart` is a tool for restarting all of the pods in one or more deployments. It triggers the restart by touching the `RESTARTED_AT` environment variable in the deployment's podSpec. The rollout strategy defined for each deployment will be respected by the restart.
data/dev.yml CHANGED
@@ -24,3 +24,5 @@ commands:
  syntax:
  optional:
  argument: TEST_REGEX
+ doc:
+ run: bundle exec yard doc
@@ -78,6 +78,7 @@ begin
  logger: logger,
  max_watch_seconds: max_watch_seconds,
  selector: selector,
+ allow_globals: true
  )

  runner.run!(
@@ -33,7 +33,7 @@ module Krane
  }

  def self.from_options(namespace, context, options)
- require 'kubernetes-deploy/deploy_task'
+ require 'krane/deploy_task'
  require 'kubernetes-deploy/options_helper'
  require 'kubernetes-deploy/bindings_parser'
  require 'kubernetes-deploy/label_selector'
@@ -53,7 +53,7 @@ module Krane

  KubernetesDeploy::OptionsHelper.with_processed_template_paths([options[:filenames]],
  require_explicit_path: true) do |paths|
- deploy = KubernetesDeploy::DeployTask.new(
+ deploy = ::Krane::DeployTask.new(
  namespace: namespace,
  context: context,
  current_sha: ENV["REVISION"],
@@ -22,7 +22,7 @@ module Krane
  "template" => {
  type: :string,
  desc: "The template file you'll be rendering",
- default: 'task-runner-template',
+ required: true,
  },
  "env-vars" => {
  type: :string,
@@ -0,0 +1,12 @@
+ # frozen_string_literal: true
+
+ require 'kubernetes-deploy/deploy_task'
+
+ module Krane
+ class DeployTask < KubernetesDeploy::DeployTask
+ def initialize(**args)
+ raise "Use Krane::DeployGlobalTask to deploy global resources" if args[:allow_globals]
+ super(args.merge(allow_globals: false))
+ end
+ end
+ end
@@ -2,22 +2,45 @@
2
2
 
3
3
  module KubernetesDeploy
4
4
  class ClusterResourceDiscovery
5
- def initialize(namespace:, context:, logger:, namespace_tags:)
6
- @namespace = namespace
7
- @context = context
8
- @logger = logger
5
+ delegate :namespace, :context, :logger, to: :@task_config
6
+
7
+ def initialize(task_config:, namespace_tags: [])
8
+ @task_config = task_config
9
9
  @namespace_tags = namespace_tags
10
10
  end
11
11
 
12
12
  def crds
13
13
  @crds ||= fetch_crds.map do |cr_def|
14
- CustomResourceDefinition.new(namespace: @namespace, context: @context, logger: @logger,
14
+ CustomResourceDefinition.new(namespace: namespace, context: context, logger: logger,
15
15
  definition: cr_def, statsd_tags: @namespace_tags)
16
16
  end
17
17
  end
18
18
 
19
+ def global_resource_kinds
20
+ @globals ||= fetch_globals.map { |g| g["kind"] }
21
+ end
22
+
19
23
  private
20
24
 
25
+ def fetch_globals
26
+ raw, _, st = kubectl.run("api-resources", "--namespaced=false", output: "wide", attempts: 5)
27
+ if st.success?
28
+ rows = raw.split("\n")
29
+ header = rows[0]
30
+ resources = rows[1..-1]
31
+ full_width_field_names = header.downcase.scan(/[a-z]+[\W]*/)
32
+ cursor = 0
33
+ fields = full_width_field_names.each_with_object({}) do |name, hash|
34
+ start = cursor
35
+ cursor = start + name.length
36
+ hash[name.strip] = [start, cursor - 1]
37
+ end
38
+ resources.map { |r| fields.map { |k, (s, e)| [k.strip, r[s..e].strip] }.to_h }
39
+ else
40
+ []
41
+ end
42
+ end
43
+
21
44
  def fetch_crds
22
45
  raw_json, _, st = kubectl.run("get", "CustomResourceDefinition", output: "json", attempts: 5)
23
46
  if st.success?
@@ -28,7 +51,7 @@ module KubernetesDeploy
28
51
  end
29
52
 
30
53
  def kubectl
31
- @kubectl ||= Kubectl.new(namespace: @namespace, context: @context, logger: @logger, log_failure_by_default: true)
54
+ @kubectl ||= Kubectl.new(task_config: @task_config, log_failure_by_default: true)
32
55
  end
33
56
  end
34
57
  end
@@ -59,7 +59,8 @@ module KubernetesDeploy
59
59
  end
60
60
 
61
61
  def kubectl
62
- @kubectl ||= Kubectl.new(namespace: @namespace, context: @context, logger: @logger, log_failure_by_default: false)
62
+ task_config = TaskConfig.new(@context, @namespace, @logger)
63
+ @kubectl ||= Kubectl.new(task_config: task_config, log_failure_by_default: false)
63
64
  end
64
65
 
65
66
  def rfc3339_timestamp(time)
@@ -40,8 +40,10 @@ require 'kubernetes-deploy/ejson_secret_provisioner'
40
40
  require 'kubernetes-deploy/renderer'
41
41
  require 'kubernetes-deploy/cluster_resource_discovery'
42
42
  require 'kubernetes-deploy/template_sets'
43
+ require 'kubernetes-deploy/deploy_task_config_validator'
43
44
 
44
45
  module KubernetesDeploy
46
+ # Ship resources to a namespace
45
47
  class DeployTask
46
48
  extend KubernetesDeploy::StatsD::MeasureMethods
47
49
 
@@ -84,13 +86,14 @@ module KubernetesDeploy
84
86
  core/v1/Secret
85
87
  core/v1/ServiceAccount
86
88
  core/v1/PodTemplate
89
+ core/v1/PersistentVolumeClaim
87
90
  batch/v1/Job
88
- extensions/v1beta1/ReplicaSet
89
- extensions/v1beta1/DaemonSet
90
- extensions/v1beta1/Deployment
91
+ apps/v1/ReplicaSet
92
+ apps/v1/DaemonSet
93
+ apps/v1/Deployment
91
94
  extensions/v1beta1/Ingress
92
95
  networking.k8s.io/v1/NetworkPolicy
93
- apps/v1beta1/StatefulSet
96
+ apps/v1/StatefulSet
94
97
  autoscaling/v1/HorizontalPodAutoscaler
95
98
  policy/v1beta1/PodDisruptionBudget
96
99
  batch/v1beta1/CronJob
@@ -104,9 +107,24 @@ module KubernetesDeploy
104
107
  kubectl.server_version
105
108
  end
106
109
 
110
+ # Initializes the deploy task
111
+ #
112
+ # @param namespace [String] Kubernetes namespace
113
+ # @param context [String] Kubernetes context
114
+ # @param current_sha [String] The SHA of the commit
115
+ # @param logger [Object] Logger object (defaults to an instance of KubernetesDeploy::FormattedLogger)
116
+ # @param kubectl_instance [Kubectl] Kubectl instance
117
+ # @param bindings [Hash] Bindings parsed by KubernetesDeploy::BindingsParser
118
+ # @param max_watch_seconds [Integer] Timeout in seconds
119
+ # @param selector [Hash] Selector(s) parsed by KubernetesDeploy::LabelSelector
120
+ # @param template_paths [Array<String>] An array of template paths
121
+ # @param template_dir [String] Path to a directory with templates (deprecated)
122
+ # @param protected_namespaces [Array<String>] Array of protected Kubernetes namespaces (defaults
123
+ # to KubernetesDeploy::DeployTask::PROTECTED_NAMESPACES)
124
+ # @param render_erb [Boolean] Enable ERB rendering
107
125
  def initialize(namespace:, context:, current_sha:, logger: nil, kubectl_instance: nil, bindings: {},
108
126
  max_watch_seconds: nil, selector: nil, template_paths: [], template_dir: nil, protected_namespaces: nil,
109
- render_erb: true)
127
+ render_erb: true, allow_globals: false)
110
128
  template_dir = File.expand_path(template_dir) if template_dir
111
129
  template_paths = (template_paths.map { |path| File.expand_path(path) } << template_dir).compact
112
130
 
@@ -123,8 +141,12 @@ module KubernetesDeploy
123
141
  @selector = selector
124
142
  @protected_namespaces = protected_namespaces || PROTECTED_NAMESPACES
125
143
  @render_erb = render_erb
144
+ @allow_globals = allow_globals
126
145
  end
127
146
 
147
+ # Runs the task, returning a boolean representing success or failure
148
+ #
149
+ # @return [Boolean]
128
150
  def run(*args)
129
151
  run!(*args)
130
152
  true
@@ -132,6 +154,13 @@ module KubernetesDeploy
132
154
  false
133
155
  end
134
156
 
157
+ # Runs the task, raising exceptions in case of issues
158
+ #
159
+ # @param verify_result [Boolean] Wait for completion and verify success
160
+ # @param allow_protected_ns [Boolean] Enable deploying to protected namespaces
161
+ # @param prune [Boolean] Enable deletion of resources that do not appear in the template dir
162
+ #
163
+ # @return [nil]
135
164
  def run!(verify_result: true, allow_protected_ns: false, prune: true)
136
165
  start = Time.now.utc
137
166
  @logger.reset
@@ -195,15 +224,17 @@ module KubernetesDeploy
195
224
 
196
225
  private
197
226
 
227
+ def global_resource_names
228
+ cluster_resource_discoverer.global_resource_kinds
229
+ end
230
+
198
231
  def kubeclient_builder
199
232
  @kubeclient_builder ||= KubeclientBuilder.new
200
233
  end
201
234
 
202
235
  def cluster_resource_discoverer
203
236
  @cluster_resource_discoverer ||= ClusterResourceDiscovery.new(
204
- namespace: @namespace,
205
- context: @context,
206
- logger: @logger,
237
+ task_config: @task_config,
207
238
  namespace_tags: @namespace_tags
208
239
  )
209
240
  end
@@ -211,11 +242,9 @@ module KubernetesDeploy
211
242
  def ejson_provisioners
212
243
  @ejson_provisoners ||= @template_sets.ejson_secrets_files.map do |ejson_secret_file|
213
244
  EjsonSecretProvisioner.new(
214
- namespace: @namespace,
215
- context: @context,
245
+ task_config: @task_config,
216
246
  ejson_keys_secret: ejson_keys_secret,
217
247
  ejson_file: ejson_secret_file,
218
- logger: @logger,
219
248
  statsd_tags: @namespace_tags,
220
249
  selector: @selector,
221
250
  )
@@ -261,18 +290,38 @@ module KubernetesDeploy
261
290
  end
262
291
 
263
292
  failed_resources = resources.select(&:validation_failed?)
264
- return unless failed_resources.present?
293
+ if failed_resources.present?
265
294
 
266
- failed_resources.each do |r|
267
- content = File.read(r.file_path) if File.file?(r.file_path) && !r.sensitive_template_content?
268
- record_invalid_template(err: r.validation_error_msg, filename: File.basename(r.file_path), content: content)
295
+ failed_resources.each do |r|
296
+ content = File.read(r.file_path) if File.file?(r.file_path) && !r.sensitive_template_content?
297
+ record_invalid_template(err: r.validation_error_msg, filename: File.basename(r.file_path), content: content)
298
+ end
299
+ raise FatalDeploymentError, "Template validation failed"
269
300
  end
270
- raise FatalDeploymentError, "Template validation failed"
301
+ validate_globals(resources)
271
302
  end
272
303
  measure_method(:validate_resources)
273
304
 
305
+ def validate_globals(resources)
306
+ return unless (global = resources.select(&:global?).presence)
307
+ global_names = global.map do |resource|
308
+ "#{resource.name} (#{resource.type}) in #{File.basename(resource.file_path)}"
309
+ end
310
+ global_names = FormattedLogger.indent_four(global_names.join("\n"))
311
+
312
+ if @allow_globals
313
+ msg = "The ability for this task to deploy global resources will be removed in the next version,"\
314
+ " which will affect the following resources:"
315
+ msg += "\n#{global_names}"
316
+ @logger.summary.add_paragraph(ColorizedString.new(msg).yellow)
317
+ else
318
+ @logger.summary.add_paragraph(ColorizedString.new("Global resources:\n#{global_names}").yellow)
319
+ raise FatalDeploymentError, "This command is namespaced and cannot be used to deploy global resources."
320
+ end
321
+ end
322
+
274
323
  def check_initial_status(resources)
275
- cache = ResourceCache.new(@namespace, @context, @logger)
324
+ cache = ResourceCache.new(@task_config)
276
325
  KubernetesDeploy::Concurrency.split_across_threads(resources) { |r| r.sync(cache) }
277
326
  resources.each { |r| @logger.info(r.pretty_status) }
278
327
  end
@@ -290,7 +339,7 @@ module KubernetesDeploy
290
339
  current_sha: @current_sha, bindings: @bindings) do |r_def|
291
340
  crd = crds_by_kind[r_def["kind"]]&.first
292
341
  r = KubernetesResource.build(namespace: @namespace, context: @context, logger: @logger, definition: r_def,
293
- statsd_tags: @namespace_tags, crd: crd)
342
+ statsd_tags: @namespace_tags, crd: crd, global_names: global_resource_names)
294
343
  resources << r
295
344
  @logger.info(" - #{r.id}")
296
345
  end
@@ -300,10 +349,6 @@ module KubernetesDeploy
300
349
  @logger.info(" - #{secret.id} (from ejson)")
301
350
  end
302
351
 
303
- if (global = resources.select(&:global?).presence)
304
- @logger.warn("Detected non-namespaced #{'resource'.pluralize(global.count)} which will never be pruned:")
305
- global.each { |r| @logger.warn(" - #{r.id}") }
306
- end
307
352
  resources.sort
308
353
  rescue InvalidTemplateError => e
309
354
  record_invalid_template(err: e.message, filename: e.filename, content: e.content)
@@ -331,36 +376,17 @@ module KubernetesDeploy
331
376
  end
332
377
 
333
378
  def validate_configuration(allow_protected_ns:, prune:)
379
+ task_config_validator = DeployTaskConfigValidator.new(@protected_namespaces, allow_protected_ns, prune,
380
+ @task_config, kubectl, kubeclient_builder)
334
381
  errors = []
335
- errors += kubeclient_builder.validate_config_files
382
+ errors += task_config_validator.errors
336
383
  errors += @template_sets.validate
337
-
338
- if @namespace.blank?
339
- errors << "Namespace must be specified"
340
- elsif @protected_namespaces.include?(@namespace)
341
- if allow_protected_ns && prune
342
- errors << "Refusing to deploy to protected namespace '#{@namespace}' with pruning enabled"
343
- elsif allow_protected_ns
344
- @logger.warn("You're deploying to protected namespace #{@namespace}, which cannot be pruned.")
345
- @logger.warn("Existing resources can only be removed manually with kubectl. " \
346
- "Removing templates from the set deployed will have no effect.")
347
- @logger.warn("***Please do not deploy to #{@namespace} unless you really know what you are doing.***")
348
- else
349
- errors << "Refusing to deploy to protected namespace '#{@namespace}'"
350
- end
351
- end
352
-
353
- if @context.blank?
354
- errors << "Context must be specified"
355
- end
356
-
357
384
  unless errors.empty?
385
+ @logger.summary.add_action("Configuration invalid")
358
386
  @logger.summary.add_paragraph(errors.map { |err| "- #{err}" }.join("\n"))
359
- raise FatalDeploymentError, "Configuration invalid"
387
+ raise KubernetesDeploy::TaskConfigurationError
360
388
  end
361
389
 
362
- confirm_context_exists
363
- confirm_namespace_exists
364
390
  confirm_ejson_keys_not_prunable if prune
365
391
  @logger.info("Using resource selector #{@selector}") if @selector
366
392
  @namespace_tags |= tags_from_namespace_labels
@@ -415,8 +441,8 @@ module KubernetesDeploy
415
441
  apply_all(applyables, prune)
416
442
 
417
443
  if verify
418
- watcher = ResourceWatcher.new(resources: resources, logger: @logger, deploy_started_at: deploy_started_at,
419
- timeout: @max_watch_seconds, namespace: @namespace, context: @context, sha: @current_sha)
444
+ watcher = ResourceWatcher.new(resources: resources, deploy_started_at: deploy_started_at,
445
+ timeout: @max_watch_seconds, task_config: @task_config, sha: @current_sha)
420
446
  watcher.run(record_summary: record_summary)
421
447
  end
422
448
  end
@@ -528,37 +554,6 @@ module KubernetesDeploy
528
554
  end
529
555
  end
530
556
 
531
- def confirm_context_exists
532
- out, err, st = kubectl.run("config", "get-contexts", "-o", "name",
533
- use_namespace: false, use_context: false, log_failure: false)
534
- available_contexts = out.split("\n")
535
- if !st.success?
536
- raise FatalDeploymentError, err
537
- elsif !available_contexts.include?(@context)
538
- raise FatalDeploymentError, "Context #{@context} is not available. Valid contexts: #{available_contexts}"
539
- end
540
- confirm_cluster_reachable
541
- @logger.info("Context #{@context} found")
542
- end
543
-
544
- def confirm_cluster_reachable
545
- success = false
546
- with_retries(2) do
547
- begin
548
- success = kubectl.version_info
549
- rescue KubectlError
550
- success = false
551
- end
552
- end
553
- raise FatalDeploymentError, "Failed to reach server for #{@context}" unless success
554
- TaskConfigValidator.new(@task_config, kubectl, kubeclient_builder, only: [:validate_server_version]).valid?
555
- end
556
-
557
- def confirm_namespace_exists
558
- raise FatalDeploymentError, "Namespace #{@namespace} not found" unless namespace_definition.present?
559
- @logger.info("Namespace #{@namespace} found")
560
- end
561
-
562
557
  def namespace_definition
563
558
  @namespace_definition ||= begin
564
559
  definition, _err, st = kubectl.run("get", "namespace", @namespace, use_namespace: false,
@@ -587,7 +582,7 @@ module KubernetesDeploy
587
582
  end
588
583
 
589
584
  def kubectl
590
- @kubectl ||= Kubectl.new(namespace: @namespace, context: @context, logger: @logger, log_failure_by_default: true)
585
+ @kubectl ||= Kubectl.new(task_config: @task_config, log_failure_by_default: true)
591
586
  end
592
587
 
593
588
  def ejson_keys_secret
@@ -0,0 +1,29 @@
1
+ # frozen_string_literal: true
2
+ module KubernetesDeploy
3
+ class DeployTaskConfigValidator < TaskConfigValidator
4
+ def initialize(protected_namespaces, allow_protected_ns, prune, *arguments)
5
+ super(*arguments)
6
+ @protected_namespaces = protected_namespaces
7
+ @allow_protected_ns = allow_protected_ns
8
+ @prune = prune
9
+ @validations += %i(validate_protected_namespaces)
10
+ end
11
+
12
+ private
13
+
14
+ def validate_protected_namespaces
15
+ if @protected_namespaces.include?(namespace)
16
+ if @allow_protected_ns && @prune
17
+ @errors << "Refusing to deploy to protected namespace '#{namespace}' with pruning enabled"
18
+ elsif @allow_protected_ns
19
+ logger.warn("You're deploying to protected namespace #{namespace}, which cannot be pruned.")
20
+ logger.warn("Existing resources can only be removed manually with kubectl. " \
21
+ "Removing templates from the set deployed will have no effect.")
22
+ logger.warn("***Please do not deploy to #{namespace} unless you really know what you are doing.***")
23
+ else
24
+ @errors << "Refusing to deploy to protected namespace '#{namespace}'"
25
+ end
26
+ end
27
+ end
28
+ end
29
+ end
@@ -3,12 +3,10 @@
3
3
  require 'active_support/duration'
4
4
 
5
5
  module KubernetesDeploy
6
- ##
7
6
  # This class is a less strict extension of ActiveSupport::Duration::ISO8601Parser.
8
7
  # In addition to full ISO8601 durations, it can parse unprefixed ISO8601 time components (e.g. '1H').
9
8
  # It is also case-insensitive.
10
9
  # For example, this class considers the values "1H", "1h" and "PT1H" to be valid and equivalent.
11
-
12
10
  class DurationParser
13
11
  class ParsingError < ArgumentError; end
14
12
 
@@ -16,19 +16,16 @@ module KubernetesDeploy
16
16
  EJSON_SECRET_KEY = "kubernetes_secrets"
17
17
  EJSON_SECRETS_FILE = "secrets.ejson"
18
18
  EJSON_KEYS_SECRET = "ejson-keys"
19
+ delegate :namespace, :context, :logger, to: :@task_config
19
20
 
20
- def initialize(namespace:, context:, ejson_keys_secret:, ejson_file:, logger:, statsd_tags:, selector: nil)
21
- @namespace = namespace
22
- @context = context
21
+ def initialize(task_config:, ejson_keys_secret:, ejson_file:, statsd_tags:, selector: nil)
23
22
  @ejson_keys_secret = ejson_keys_secret
24
23
  @ejson_file = ejson_file
25
- @logger = logger
26
24
  @statsd_tags = statsd_tags
27
25
  @selector = selector
26
+ @task_config = task_config
28
27
  @kubectl = Kubectl.new(
29
- namespace: @namespace,
30
- context: @context,
31
- logger: @logger,
28
+ task_config: @task_config,
32
29
  log_failure_by_default: false,
33
30
  output_is_sensitive_default: true # output may contain ejson secrets
34
31
  )
@@ -48,7 +45,7 @@ module KubernetesDeploy
48
45
  with_decrypted_ejson do |decrypted|
49
46
  secrets = decrypted[EJSON_SECRET_KEY]
50
47
  unless secrets.present?
51
- @logger.warn("#{EJSON_SECRETS_FILE} does not have key #{EJSON_SECRET_KEY}."\
48
+ logger.warn("#{EJSON_SECRETS_FILE} does not have key #{EJSON_SECRET_KEY}."\
52
49
  "No secrets will be created.")
53
50
  return []
54
51
  end
@@ -108,14 +105,14 @@ module KubernetesDeploy
108
105
  'metadata' => {
109
106
  "name" => secret_name,
110
107
  "labels" => labels,
111
- "namespace" => @namespace,
108
+ "namespace" => namespace,
112
109
  "annotations" => { EJSON_SECRET_ANNOTATION => "true" },
113
110
  },
114
111
  "data" => encoded_data,
115
112
  }
116
113
 
117
114
  KubernetesDeploy::Secret.build(
118
- namespace: @namespace, context: @context, logger: @logger, definition: secret, statsd_tags: @statsd_tags,
115
+ namespace: namespace, context: context, logger: logger, definition: secret, statsd_tags: @statsd_tags,
119
116
  )
120
117
  end
121
118
 
@@ -13,30 +13,28 @@ module KubernetesDeploy
13
13
 
14
14
  class ResourceNotFoundError < StandardError; end
15
15
 
16
- def initialize(namespace:, context:, logger:, log_failure_by_default:, default_timeout: DEFAULT_TIMEOUT,
16
+ delegate :namespace, :context, :logger, to: :@task_config
17
+
18
+ def initialize(task_config:, log_failure_by_default:, default_timeout: DEFAULT_TIMEOUT,
17
19
  output_is_sensitive_default: false)
18
- @namespace = namespace
19
- @context = context
20
- @logger = logger
20
+ @task_config = task_config
21
21
  @log_failure_by_default = log_failure_by_default
22
22
  @default_timeout = default_timeout
23
23
  @output_is_sensitive_default = output_is_sensitive_default
24
-
25
- raise ArgumentError, "namespace is required" if namespace.blank?
26
- raise ArgumentError, "context is required" if context.blank?
27
24
  end
28
25
 
29
26
  def run(*args, log_failure: nil, use_context: true, use_namespace: true, output: nil,
30
27
  raise_if_not_found: false, attempts: 1, output_is_sensitive: nil, retry_whitelist: nil)
28
+ raise ArgumentError, "namespace is required" if namespace.blank? && use_namespace
31
29
  log_failure = @log_failure_by_default if log_failure.nil?
32
30
  output_is_sensitive = @output_is_sensitive_default if output_is_sensitive.nil?
33
31
  cmd = build_command_from_options(args, use_namespace, use_context, output)
34
32
  out, err, st = nil
35
33
 
36
34
  (1..attempts).to_a.each do |current_attempt|
37
- @logger.debug("Running command (attempt #{current_attempt}): #{cmd.join(' ')}")
35
+ logger.debug("Running command (attempt #{current_attempt}): #{cmd.join(' ')}")
38
36
  out, err, st = Open3.capture3(*cmd)
39
- @logger.debug("Kubectl out: " + out.gsub(/\s+/, ' ')) unless output_is_sensitive
37
+ logger.debug("Kubectl out: " + out.gsub(/\s+/, ' ')) unless output_is_sensitive
40
38
 
41
39
  break if st.success?
42
40
  raise(ResourceNotFoundError, err) if err.match(ERROR_MATCHERS[:not_found]) && raise_if_not_found
@@ -49,12 +47,12 @@ module KubernetesDeploy
49
47
  else
50
48
  "The following command failed and cannot be retried"
51
49
  end
52
- @logger.warn("#{warning}: #{Shellwords.join(cmd)}")
53
- @logger.warn(err) unless output_is_sensitive
50
+ logger.warn("#{warning}: #{Shellwords.join(cmd)}")
51
+ logger.warn(err) unless output_is_sensitive
54
52
  else
55
- @logger.debug("Kubectl err: #{output_is_sensitive ? '<suppressed sensitive output>' : err}")
53
+ logger.debug("Kubectl err: #{output_is_sensitive ? '<suppressed sensitive output>' : err}")
56
54
  end
57
- StatsD.increment('kubectl.error', 1, tags: { context: @context, namespace: @namespace, cmd: cmd[1] })
55
+ StatsD.increment('kubectl.error', 1, tags: { context: context, namespace: namespace, cmd: cmd[1] })
58
56
 
59
57
  break unless retriable_err?(err, retry_whitelist) && current_attempt < attempts
60
58
  sleep(retry_delay(current_attempt))
@@ -93,8 +91,8 @@ module KubernetesDeploy
93
91
 
94
92
  def build_command_from_options(args, use_namespace, use_context, output)
95
93
  cmd = ["kubectl"] + args
96
- cmd.push("--namespace=#{@namespace}") if use_namespace
97
- cmd.push("--context=#{@context}") if use_context
94
+ cmd.push("--namespace=#{namespace}") if use_namespace
95
+ cmd.push("--context=#{context}") if use_context
98
96
  cmd.push("--output=#{output}") if output
99
97
  cmd.push("--request-timeout=#{@default_timeout}") if @default_timeout
100
98
  cmd
@@ -10,7 +10,7 @@ require 'kubernetes-deploy/rollout_conditions'
10
10
  module KubernetesDeploy
11
11
  class KubernetesResource
12
12
  attr_reader :name, :namespace, :context
13
- attr_writer :type, :deploy_started_at
13
+ attr_writer :type, :deploy_started_at, :global
14
14
 
15
15
  GLOBAL = false
16
16
  TIMEOUT = 5.minutes
@@ -40,7 +40,7 @@ module KubernetesDeploy
40
40
  SERVER_DRY_RUNNABLE = false
41
41
 
42
42
  class << self
43
- def build(namespace:, context:, definition:, logger:, statsd_tags:, crd: nil)
43
+ def build(namespace:, context:, definition:, logger:, statsd_tags:, crd: nil, global_names: [])
44
44
  validate_definition_essentials(definition)
45
45
  opts = { namespace: namespace, context: context, definition: definition, logger: logger,
46
46
  statsd_tags: statsd_tags }
@@ -50,8 +50,10 @@ module KubernetesDeploy
50
50
  if crd
51
51
  CustomResource.new(crd: crd, **opts)
52
52
  else
53
+ type = definition["kind"]
53
54
  inst = new(**opts)
54
- inst.type = definition["kind"]
55
+ inst.type = type
56
+ inst.global = global_names.map(&:downcase).include?(type.downcase)
55
57
  inst
56
58
  end
57
59
  end
@@ -416,7 +418,7 @@ module KubernetesDeploy
416
418
  end
417
419
 
418
420
  def global?
419
- self.class::GLOBAL
421
+ @global || self.class::GLOBAL
420
422
  end
421
423
 
422
424
  private
@@ -8,6 +8,7 @@ module KubernetesDeploy
  def sync(cache)
  super
  @pods = exists? ? find_pods(cache) : []
+ @nodes = find_nodes(cache) if @nodes.blank?
  end

  def status
@@ -17,9 +18,9 @@ module KubernetesDeploy

  def deploy_succeeded?
  return false unless exists?
- rollout_data["desiredNumberScheduled"].to_i == rollout_data["updatedNumberScheduled"].to_i &&
- rollout_data["desiredNumberScheduled"].to_i == rollout_data["numberReady"].to_i &&
- current_generation == observed_generation
+ current_generation == observed_generation &&
+ rollout_data["desiredNumberScheduled"].to_i == rollout_data["updatedNumberScheduled"].to_i &&
+ relevant_pods_ready?
  end

  def deploy_failed?
@@ -38,6 +39,34 @@ module KubernetesDeploy

  private

+ class Node
+ attr_reader :name
+
+ class << self
+ def kind
+ name.demodulize
+ end
+ end
+
+ def initialize(definition:)
+ @name = definition.dig("metadata", "name").to_s
+ @definition = definition
+ end
+ end
+
+ def relevant_pods_ready?
+ return true if rollout_data["desiredNumberScheduled"].to_i == rollout_data["numberReady"].to_i # all pods ready
+ relevant_node_names = @nodes.map(&:name)
+ considered_pods = @pods.select { |p| relevant_node_names.include?(p.node_name) }
+ @logger.debug("Considered #{considered_pods.size} pods out of #{@pods.size} for #{@nodes.size} nodes")
+ considered_pods.present? && considered_pods.all?(&:deploy_succeeded?)
+ end
+
+ def find_nodes(cache)
+ all_nodes = cache.get_all(Node.kind)
+ all_nodes.map { |node_data| Node.new(definition: node_data) }
+ end
+
  def rollout_data
  return { "currentNumberScheduled" => 0 } unless exists?
  @instance_data["status"]
@@ -101,6 +101,10 @@ module KubernetesDeploy
  exists? && !@stream_logs # don't print them a second time
  end

+ def node_name
+ @instance_data.dig('spec', 'nodeName')
+ end
+
  private

  def failed_schedule_reason
@@ -6,7 +6,15 @@ require 'kubernetes-deploy/renderer'
6
6
  require 'kubernetes-deploy/template_sets'
7
7
 
8
8
  module KubernetesDeploy
9
+ # Render templates
9
10
  class RenderTask
11
+ # Initializes the render task
12
+ #
13
+ # @param logger [Object] Logger object (defaults to an instance of KubernetesDeploy::FormattedLogger)
14
+ # @param current_sha [String] The SHA of the commit
15
+ # @param template_dir [String] Path to a directory with templates to render (deprecated)
16
+ # @param template_paths [Array<String>] An array of template paths to render
17
+ # @param bindings [Hash] Bindings parsed by KubernetesDeploy::BindingsParser
10
18
  def initialize(logger: nil, current_sha:, template_dir: nil, template_paths: [], bindings:)
11
19
  @logger = logger || KubernetesDeploy::FormattedLogger.build
12
20
  @template_dir = template_dir
@@ -15,6 +23,9 @@ module KubernetesDeploy
15
23
  @current_sha = current_sha
16
24
  end
17
25
 
26
+ # Runs the task, returning a boolean representing success or failure
27
+ #
28
+ # @return [Boolean]
18
29
  def run(*args)
19
30
  run!(*args)
20
31
  true
@@ -22,6 +33,12 @@ module KubernetesDeploy
22
33
  false
23
34
  end
24
35
 
36
+ # Runs the task, raising exceptions in case of issues
37
+ #
38
+ # @param stream [IO] Place to stream the output to
39
+ # @param only_filenames [Array<String>] List of filenames to render
40
+ #
41
+ # @return [nil]
25
42
  def run!(stream, only_filenames = [])
26
43
  @logger.reset
27
44
  @logger.phase_heading("Initializing render task")
@@ -4,14 +4,14 @@ require 'concurrent/hash'
4
4
 
5
5
  module KubernetesDeploy
6
6
  class ResourceCache
7
- def initialize(namespace, context, logger)
8
- @namespace = namespace
9
- @context = context
10
- @logger = logger
7
+ delegate :namespace, :context, :logger, to: :@task_config
8
+
9
+ def initialize(task_config)
10
+ @task_config = task_config
11
11
 
12
12
  @kind_fetcher_locks = Concurrent::Hash.new { |hash, key| hash[key] = Mutex.new }
13
13
  @data = Concurrent::Hash.new
14
- @kubectl = Kubectl.new(namespace: @namespace, context: @context, logger: @logger, log_failure_by_default: false)
14
+ @kubectl = Kubectl.new(task_config: @task_config, log_failure_by_default: false)
15
15
  end
16
16
 
17
17
  def get_instance(kind, resource_name, raise_if_not_found: false)
@@ -39,7 +39,7 @@ module KubernetesDeploy
39
39
  private
40
40
 
41
41
  def statsd_tags
42
- { namespace: @namespace, context: @context }
42
+ { namespace: namespace, context: context }
43
43
  end
44
44
 
45
45
  def use_or_populate_cache(kind)
@@ -6,18 +6,17 @@ require 'kubernetes-deploy/resource_cache'
6
6
  module KubernetesDeploy
7
7
  class ResourceWatcher
8
8
  extend KubernetesDeploy::StatsD::MeasureMethods
9
+ delegate :namespace, :context, :logger, to: :@task_config
9
10
 
10
- def initialize(resources:, logger:, context:, namespace:,
11
- deploy_started_at: Time.now.utc, operation_name: "deploy", timeout: nil, sha: nil)
11
+ def initialize(resources:, task_config:, deploy_started_at: Time.now.utc,
12
+ operation_name: "deploy", timeout: nil, sha: nil)
12
13
  unless resources.is_a?(Enumerable)
13
14
  raise ArgumentError, <<~MSG
14
15
  ResourceWatcher expects Enumerable collection, got `#{resources.class}` instead
15
16
  MSG
16
17
  end
17
18
  @resources = resources
18
- @logger = logger
19
- @namespace = namespace
20
- @context = context
19
+ @task_config = task_config
21
20
  @deploy_started_at = deploy_started_at
22
21
  @operation_name = operation_name
23
22
  @timeout = timeout
@@ -53,7 +52,7 @@ module KubernetesDeploy
53
52
  private
54
53
 
55
54
  def sync_resources(resources)
56
- cache = ResourceCache.new(@namespace, @context, @logger)
55
+ cache = ResourceCache.new(@task_config)
57
56
  KubernetesDeploy::Concurrency.split_across_threads(resources) { |r| r.sync(cache) }
58
57
  resources.each(&:after_sync)
59
58
  end
@@ -61,8 +60,8 @@ module KubernetesDeploy
61
60
 
62
61
  def statsd_tags
63
62
  {
64
- namespace: @namespace,
65
- context: @context,
63
+ namespace: namespace,
64
+ context: context,
66
65
  sha: @sha,
67
66
  }
68
67
  end
@@ -83,18 +82,18 @@ module KubernetesDeploy
83
82
  watch_time = (Time.now.utc - @deploy_started_at).round(1)
84
83
  new_failures.each do |resource|
85
84
  resource.report_status_to_statsd(watch_time)
86
- @logger.error("#{resource.id} failed to #{@operation_name} after #{watch_time}s")
85
+ logger.error("#{resource.id} failed to #{@operation_name} after #{watch_time}s")
87
86
  end
88
87
 
89
88
  new_timeouts.each do |resource|
90
89
  resource.report_status_to_statsd(watch_time)
91
- @logger.error("#{resource.id} rollout timed out after #{watch_time}s")
90
+ logger.error("#{resource.id} rollout timed out after #{watch_time}s")
92
91
  end
93
92
 
94
93
  if new_successes.present?
95
94
  new_successes.each { |r| r.report_status_to_statsd(watch_time) }
96
95
  success_string = ColorizedString.new("Successfully #{past_tense_operation} in #{watch_time}s:").green
97
- @logger.info("#{success_string} #{new_successes.map(&:id).join(', ')}")
96
+ logger.info("#{success_string} #{new_successes.map(&:id).join(', ')}")
98
97
  end
99
98
  end
100
99
 
@@ -102,7 +101,7 @@ module KubernetesDeploy
102
101
  return unless resources.present?
103
102
  resource_list = resources.map(&:id).join(', ')
104
103
  msg = reminder ? "Still waiting for: #{resource_list}" : "Continuing to wait for: #{resource_list}"
105
- @logger.info(msg)
104
+ logger.info(msg)
106
105
  end
107
106
 
108
107
  def report_and_give_up(remaining_resources)
@@ -130,34 +129,34 @@ module KubernetesDeploy
130
129
  timeouts, failures = failed_resources.partition(&:deploy_timed_out?)
131
130
  timeouts += global_timeouts
132
131
  if timeouts.present?
133
- @logger.summary.add_action(
132
+ logger.summary.add_action(
134
133
  "timed out waiting for #{timeouts.length} #{'resource'.pluralize(timeouts.length)} to #{@operation_name}"
135
134
  )
136
135
  end
137
136
 
138
137
  if failures.present?
139
- @logger.summary.add_action(
138
+ logger.summary.add_action(
140
139
  "failed to #{@operation_name} #{failures.length} #{'resource'.pluralize(failures.length)}"
141
140
  )
142
141
  end
143
142
 
144
- kubectl = Kubectl.new(namespace: @namespace, context: @context, logger: @logger, log_failure_by_default: false)
143
+ kubectl = Kubectl.new(task_config: @task_config, log_failure_by_default: false)
145
144
  KubernetesDeploy::Concurrency.split_across_threads(failed_resources + global_timeouts) do |r|
146
145
  r.sync_debug_info(kubectl)
147
146
  end
148
147
 
149
- failed_resources.each { |r| @logger.summary.add_paragraph(r.debug_message) }
150
- global_timeouts.each { |r| @logger.summary.add_paragraph(r.debug_message(:gave_up, timeout: @timeout)) }
148
+ failed_resources.each { |r| logger.summary.add_paragraph(r.debug_message) }
149
+ global_timeouts.each { |r| logger.summary.add_paragraph(r.debug_message(:gave_up, timeout: @timeout)) }
151
150
  end
152
151
  end
153
152
 
154
153
  def record_success_statuses(successful_resources)
155
154
  success_count = successful_resources.length
156
155
  if success_count > 0
157
- @logger.summary.add_action("successfully #{past_tense_operation} #{success_count} "\
156
+ logger.summary.add_action("successfully #{past_tense_operation} #{success_count} "\
158
157
  "#{'resource'.pluralize(success_count)}")
159
158
  final_statuses = successful_resources.map(&:pretty_status).join("\n")
160
- @logger.summary.add_paragraph("#{ColorizedString.new('Successful resources').green}\n#{final_statuses}")
159
+ logger.summary.add_paragraph("#{ColorizedString.new('Successful resources').green}\n#{final_statuses}")
161
160
  end
162
161
  end
163
162
 
@@ -7,6 +7,7 @@ require 'kubernetes-deploy/resource_watcher'
7
7
  require 'kubernetes-deploy/kubectl'
8
8
 
9
9
  module KubernetesDeploy
10
+ # Restart the pods in one or more deployments
10
11
  class RestartTask
11
12
  class FatalRestartError < FatalDeploymentError; end
12
13
 
@@ -21,6 +22,12 @@ module KubernetesDeploy
21
22
  HTTP_OK_RANGE = 200..299
22
23
  ANNOTATION = "shipit.shopify.io/restart"
23
24
 
25
+ # Initializes the restart task
26
+ #
27
+ # @param context [String] Kubernetes context / cluster
28
+ # @param namespace [String] Kubernetes namespace
29
+ # @param logger [Object] Logger object (defaults to an instance of KubernetesDeploy::FormattedLogger)
30
+ # @param max_watch_seconds [Integer] Timeout in seconds
24
31
  def initialize(context:, namespace:, logger: nil, max_watch_seconds: nil)
25
32
  @logger = logger || KubernetesDeploy::FormattedLogger.build(namespace, context)
26
33
  @task_config = KubernetesDeploy::TaskConfig.new(context, namespace, @logger)
@@ -29,6 +36,9 @@ module KubernetesDeploy
29
36
  @max_watch_seconds = max_watch_seconds
30
37
  end
31
38
 
39
+ # Runs the task, returning a boolean representing success or failure
40
+ #
41
+ # @return [Boolean]
32
42
  def run(*args)
33
43
  perform!(*args)
34
44
  true
@@ -37,6 +47,13 @@ module KubernetesDeploy
37
47
  end
38
48
  alias_method :perform, :run
39
49
 
50
+ # Runs the task, raising exceptions in case of issues
51
+ #
52
+ # @param deployments_names [Array<String>] Array of workload names to restart
53
+ # @param selector [Hash] Selector(s) parsed by KubernetesDeploy::LabelSelector
54
+ # @param verify_result [Boolean] Wait for completion and verify success
55
+ #
56
+ # @return [nil]
40
57
  def run!(deployments_names = nil, selector: nil, verify_result: true)
41
58
  start = Time.now.utc
42
59
  @logger.reset
@@ -169,8 +186,8 @@ module KubernetesDeploy
169
186
  end
170
187
 
171
188
  def verify_restart(resources)
172
- ResourceWatcher.new(resources: resources, logger: @logger, operation_name: "restart",
173
- timeout: @max_watch_seconds, namespace: @namespace, context: @context).run
189
+ ResourceWatcher.new(resources: resources, operation_name: "restart",
190
+ timeout: @max_watch_seconds, task_config: @task_config).run
174
191
  failed_resources = resources.reject(&:deploy_succeeded?)
175
192
  success = failed_resources.empty?
176
193
  if !success && failed_resources.all?(&:deploy_timed_out?)
@@ -193,7 +210,7 @@ module KubernetesDeploy
193
210
  end
194
211
 
195
212
  def kubectl
196
- @kubectl ||= Kubectl.new(namespace: @namespace, context: @context, logger: @logger, log_failure_by_default: true)
213
+ @kubectl ||= Kubectl.new(task_config: @task_config, log_failure_by_default: true)
197
214
  end
198
215
 
199
216
  def v1beta1_kubeclient
@@ -11,11 +11,18 @@ require 'kubernetes-deploy/kubernetes_resource/pod'
11
11
  require 'kubernetes-deploy/runner_task_config_validator'
12
12
 
13
13
  module KubernetesDeploy
14
+ # Run a pod that exits upon completing a task
14
15
  class RunnerTask
15
16
  class TaskTemplateMissingError < TaskConfigurationError; end
16
17
 
17
18
  attr_reader :pod_name
18
19
 
20
+ # Initializes the runner task
21
+ #
22
+ # @param namespace [String] Kubernetes namespace
23
+ # @param context [String] Kubernetes context / cluster
24
+ # @param logger [Object] Logger object (defaults to an instance of KubernetesDeploy::FormattedLogger)
25
+ # @param max_watch_seconds [Integer] Timeout in seconds
19
26
  def initialize(namespace:, context:, logger: nil, max_watch_seconds: nil)
20
27
  @logger = logger || KubernetesDeploy::FormattedLogger.build(namespace, context)
21
28
  @task_config = KubernetesDeploy::TaskConfig.new(context, namespace, @logger)
@@ -24,6 +31,9 @@ module KubernetesDeploy
24
31
  @max_watch_seconds = max_watch_seconds
25
32
  end
26
33
 
34
+ # Runs the task, returning a boolean representing success or failure
35
+ #
36
+ # @return [Boolean]
27
37
  def run(*args)
28
38
  run!(*args)
29
39
  true
@@ -31,6 +41,15 @@ module KubernetesDeploy
31
41
  false
32
42
  end
33
43
 
44
+ # Runs the task, raising exceptions in case of issues
45
+ #
46
+ # @param task_template [String] The template file you'll be rendering
47
+ # @param entrypoint [Array<String>] Override the default command in the container image
48
+ # @param args [Array<String>] Override the default arguments for the command
49
+ # @param env_vars [Array<String>] List of env vars
50
+ # @param verify_result [Boolean] Wait for completion and verify pod success
51
+ #
52
+ # @return [nil]
34
53
  def run!(task_template:, entrypoint:, args:, env_vars: [], verify_result: true)
35
54
  start = Time.now.utc
36
55
  @logger.reset
@@ -94,15 +113,15 @@ module KubernetesDeploy
94
113
  end
95
114
 
96
115
  def watch_pod(pod)
97
- rw = ResourceWatcher.new(resources: [pod], logger: @logger, timeout: @max_watch_seconds,
98
- operation_name: "run", namespace: @namespace, context: @context)
116
+ rw = ResourceWatcher.new(resources: [pod], timeout: @max_watch_seconds,
117
+ operation_name: "run", task_config: @task_config)
99
118
  rw.run(delay_sync: 1, reminder_interval: 30.seconds)
100
119
  raise DeploymentTimeoutError if pod.deploy_timed_out?
101
120
  raise FatalDeploymentError if pod.deploy_failed?
102
121
  end
103
122
 
104
123
  def record_status_once(pod)
105
- cache = ResourceCache.new(@namespace, @context, @logger)
124
+ cache = ResourceCache.new(@task_config)
106
125
  pod.sync(cache)
107
126
  warning = <<~STRING
108
127
  #{ColorizedString.new('Result verification is disabled for this task.').yellow}
@@ -175,7 +194,7 @@ module KubernetesDeploy
175
194
  end
176
195
 
177
196
  def kubectl
178
- @kubectl ||= Kubectl.new(namespace: @namespace, context: @context, logger: @logger, log_failure_by_default: true)
197
+ @kubectl ||= Kubectl.new(task_config: @task_config, log_failure_by_default: true)
179
198
  end
180
199
 
181
200
  def kubeclient
@@ -1,4 +1,4 @@
  # frozen_string_literal: true
  module KubernetesDeploy
- VERSION = "0.29.0"
+ VERSION = "0.30.0"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: kubernetes-deploy
  version: !ruby/object:Gem::Version
- version: 0.29.0
+ version: 0.30.0
  platform: ruby
  authors:
  - Katrina Verey
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2019-09-30 00:00:00.000000000 Z
+ date: 2019-10-21 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activesupport
@@ -279,6 +279,7 @@ files:
  - lib/krane/cli/restart_command.rb
  - lib/krane/cli/run_command.rb
  - lib/krane/cli/version_command.rb
+ - lib/krane/deploy_task.rb
  - lib/kubernetes-deploy.rb
  - lib/kubernetes-deploy/bindings_parser.rb
  - lib/kubernetes-deploy/cluster_resource_discovery.rb
@@ -288,6 +289,7 @@ files:
  - lib/kubernetes-deploy/deferred_summary_logging.rb
  - lib/kubernetes-deploy/delayed_exceptions.rb
  - lib/kubernetes-deploy/deploy_task.rb
+ - lib/kubernetes-deploy/deploy_task_config_validator.rb
  - lib/kubernetes-deploy/duration_parser.rb
  - lib/kubernetes-deploy/ejson_secret_provisioner.rb
  - lib/kubernetes-deploy/errors.rb