eks_cli 0.4.4 → 0.4.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: ba49f9e0042e7f67e85c83c1b1da3baf19b285128a042c630f70dd14dcab66d7
- data.tar.gz: '058b044ee0b320a9989110cb408b2a35a2a5f9317eeab4b45b2f34c4386076ac'
+ metadata.gz: 13604399317ad223e406ab42975a58de062eba61d6f5f2f073c545818c89b625
+ data.tar.gz: 237117a49388e1d2dde8044a8935e5a2dd56ae9c92769930919df1d1750fa01f
  SHA512:
- metadata.gz: 97bf31805bbd9b3c741ef8cc9778462d5300a39cdc001ab760b67a01a273dd48a6c3adeefbb36bead38fb8183706bcfac11da6a78aa8e449c7ceb719195cb28c
- data.tar.gz: fec0ee13e7c70bbf76384cd69468d763cf0551b264927283783a1073333799009e59c301e483ebac88931f3c8ec478a99d6c80cf2a19a9a29d93356eb07198be
+ metadata.gz: 25568fb30b957afd895a46666fc1e7b6f6a83c119b5294031f7505a933d81c4fbf112a805de02ca0d66c490c483b585f7253316da942cba22b37c8d8c7c18253
+ data.tar.gz: d982788e52ef2485652940c5506a6ba654d77f33bbcd331a3149cb37858f0a7c58493a0582590768b60a85cb2e1a0d50760e49371e65c77b2223d882b622db50
data/README.md CHANGED
@@ -10,16 +10,18 @@ EKS cluster bootstrap with batteries included
  * Manage IAM policies that will be attached to your nodes
  * Easily configure docker repository secrets to allow pulling private images
  * Manage Route53 DNS records to point at your Kubernetes services
- * Export nodegroups to SporInst Elastigroups
+ * Export nodegroups to SpotInst Elastigroups
  * Auto resolving AMIs by region & instance types (GPU enabled AMIs)
- * Even more...
+ * Supports both kubernetes 1.12 and 1.13
+ * Configuration is saved on S3 for easy collaboration
 
  ## Usage
 
  ```
  $ gem install eks_cli
- $ eks create --cluster-name My-EKS-Cluster
- $ eks create-nodegroup --cluster-name My-EKS-Cluster --group-name nodes --ssh-key-name <my-ssh-key> --yes
+ $ eks create --kubernetes-version 1.13 --cluster-name my-eks-cluster --s3-bucket my-eks-config-bucket
+ $ eks create-nodegroup --cluster-name my-eks-cluster --group-name nodes --ssh-key-name <my-ssh-key> --s3-bucket my-eks-config-bucket --yes
+ $ eks delete-cluster --cluster-name my-eks-cluster --s3-bucket my-eks-config-bucket
  ```
 
  You can type `eks` in your shell to get the full synopsis of available commands
@@ -28,38 +30,45 @@ You can type `eks` in your shell to get the full synopsis of available commands
  Commands:
  eks add-iam-user IAM_ARN # adds an IAM user as an authorized member on the EKS cluster
  eks create # creates a new EKS cluster
- eks create-cluster-security-group # creates a SG for cluster communication
- eks create-cluster-vpc # creates a vpc according to aws cloudformation template
  eks create-default-storage-class # creates default storage class on a new k8s cluster
  eks create-dns-autoscaler # creates kube dns autoscaler
- eks create-eks-cluster # create EKS cluster on AWS
- eks create-eks-role # creates an IAM role for usage by EKS
  eks create-nodegroup # creates all nodegroups on environment
+ eks delete-cluster # deletes a cluster, including nodegroups
  eks delete-nodegroup # deletes cloudformation stack for nodegroup
- eks detach-iam-policies # detaches added policies to nodegroup IAM Role
  eks enable-gpu # installs nvidia plugin as a daemonset on the cluster
  eks export-nodegroup # exports nodegroup auto scaling group to spotinst
  eks help [COMMAND] # Describe available commands or one specific command
- eks scale-nodegroup --group-name=GROUP_NAME --max=N --min=N # scales a nodegroup
+ eks scale-nodegroup # scales a nodegroup
  eks set-docker-registry-credentials USERNAME PASSWORD EMAIL # sets docker registry credentials
  eks set-iam-policies --policies=one two three # sets IAM policies to be attached to created nodegroups
  eks set-inter-vpc-networking TO_VPC_ID TO_SG_ID # creates a vpc peering connection, sets route tables and allows network access on SG
  eks show-config # print cluster configuration
  eks update-auth # update aws auth configmap to allow all nodegroups to connect to control plane
+ eks update-cluster-cni # updates cni with warm ip target
  eks update-dns HOSTNAME K8S_SERVICE_NAME # alters route53 CNAME records to point to k8s service ELBs
  eks version # prints eks_cli version
  eks wait-for-cluster # waits until cluster responds to HTTP requests
 
  Options:
- c, --cluster-name=CLUSTER_NAME
+ c, [--cluster-name=CLUSTER_NAME] # eks cluster name (env: EKS_CLI_CLUSTER_NAME)
+ s3, [--s3-bucket=S3_BUCKET] # s3 bucket name to save configuration and state (env: EKS_CLI_S3_BUCKET)
  ```
-
  ## Prerequisites
 
  1. Ruby
  2. [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) version >= 10 on your `PATH`
  3. [aws-iam-authenticator](https://github.com/kubernetes-sigs/aws-iam-authenticator) on your `PATH`
  4. [aws-cli](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) version >= 1.16.18 on your `PATH`
+ 5. S3 bucket with write/read permissions to store configuration
+
+ ## Environment variables
+
+ You are encouraged to use both `EKS_CLI_CLUSTER_NAME` and `EKS_CLI_S3_BUCKET` environment variables instead of using the corresponding flags on each command. It makes the command clearer and reduces the chance for typos.
+ The following selected commands assume you have exported both environment variables:
+ ```bash
+ export EKS_CLI_S3_BUCKET=my-eks-config-bucket
+ export EKS_CLI_CLUSTER_NAME=my-eks-cluster
+ ```
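Since the CLI resolves each value as flag first, then environment variable (see `options_or_env` in `cli.rb` below), an explicit flag overrides the exported variable. An illustrative session, not part of the README itself (bucket and cluster names are placeholders):

```bash
export EKS_CLI_S3_BUCKET=my-eks-config-bucket
export EKS_CLI_CLUSTER_NAME=my-eks-cluster

# Both values are picked up from the environment, no flags needed
eks show-config

# An explicit flag wins over the exported variable for this one invocation
eks show-config --cluster-name my-other-cluster
```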
 
  ## Selected Commands
 
@@ -74,30 +83,30 @@ Nodes in different nodegroups may communicate freely thanks to a shared Security
 
  Scale nodegroups up and down using
 
- `$ eks scale-nodegroup --cluster-name My-EKS-Cluster --group-name nodes --min 1 --max 10`
+ `$ eks scale-nodegroup --group-name nodes --min 1 --max 10`
 
  ### Authorize an IAM user to access the cluster
 
- `$ eks add-iam-user arn:aws:iam::XXXXXXXX:user/XXXXXXXX --cluster-name=My-EKS-Cluster --yes`
+ `$ eks add-iam-user arn:aws:iam::XXXXXXXX:user/XXXXXXXX --yes`
 
  Edits the `aws-auth` configmap and updates it on EKS to allow an IAM user to access the cluster via `kubectl`
 
  ### Setting IAM policies to be attached to EKS nodes
 
- `$ eks set-iam-policies --cluster-name=My-EKS-Cluster --policies=AmazonS3FullAccess AmazonDynamoDBFullAccess`
+ `$ eks set-iam-policies --policies=AmazonS3FullAccess AmazonDynamoDBFullAccess`
 
  Sets IAM policies to be attached to nodegroups once created.
  This setting does not work retroactively - it only affects future `eks create-nodegroup` commands.
 
  ### Routing Route53 hostnames to Kubernetes service
 
- `$ eks update-dns my-cool-service.my-company.com cool-service --route53-hosted-zone-id=XXXXX --elb-hosted-zone-id=XXXXXX --cluster-name=My-EKS-Cluster`
+ `$ eks update-dns my-cool-service.my-company.com cool-service --route53-hosted-zone-id=XXXXX --elb-hosted-zone-id=XXXXXX`
 
  Takes the ELB endpoint from `cool-service` and puts it as an alias record of `my-cool-service.my-company.com` on Route53
 
  ### Enabling GPU
 
- `$ eks enable-gpu --cluster-name EKS-Staging`
+ `$ eks enable-gpu`
 
  Installs the nvidia device plugin required to have your GPUs exposed
 
@@ -108,21 +117,21 @@ Installs the nvidia device plugin required to have your GPUs exposed
 
  ### Adding Dockerhub Secrets
 
- `$ eks set-docker-registry-credentials <dockerhub-user> <dockerhub-password> <dockerhub-email> --cluster-name My-EKS-Cluster`
+ `$ eks set-docker-registry-credentials <dockerhub-user> <dockerhub-password> <dockerhub-email>`
 
  Adds your dockerhub credentials as a secret and attaches it to the default ServiceAccount's imagePullSecrets
 
  ### Creating Default Storage Class
 
- `$ eks create-default-storage-class --cluster-name My-EKS-Cluster`
+ `$ eks create-default-storage-class`
 
  Creates a standard gp2 default storage class named gp2
 
  ### Installing DNS autoscaler
 
- `$ eks create-dns-autoscaler --cluster-name My-EKS-Cluster`
+ `$ eks create-dns-autoscaler`
 
- Creates kube-dns autoscaler with sane defaults
+ Creates coredns autoscaler with production defaults
 
  ### Connecting to an existing VPC
 
data/eks_cli.gemspec CHANGED
@@ -20,6 +20,7 @@ Gem::Specification.new do |s|
  s.bindir = "bin"
  s.executables = ["eks"]
  s.require_paths = ["lib"]
+ s.add_dependency 'aws-sdk-s3', '~> 1'
  s.add_dependency 'thor', '0.20.3'
  s.add_dependency 'aws-sdk-ec2', '1.62.0'
  s.add_dependency 'aws-sdk-cloudformation', '1.13.0'
data/lib/eks_cli/cli.rb CHANGED
@@ -27,7 +27,8 @@ module EksCli
 
  class Cli < Thor
 
- class_option :cluster_name, required: true, aliases: :c
+ class_option :cluster_name, required: false, aliases: :c, desc: 'eks cluster name (env: EKS_CLI_CLUSTER_NAME)'
+ class_option :s3_bucket, required: false, aliases: :s3, desc: "s3 bucket name to save configuration and state (env: EKS_CLI_S3_BUCKET)"
 
  desc "create", "creates a new EKS cluster"
  option :region, type: :string, default: "us-west-2", desc: "AWS region for EKS cluster"
@@ -42,53 +43,59 @@ module EksCli
  option :create_dns_autoscaler, type: :boolean, default: true, desc: "creates dns autoscaler on the cluster"
  option :warm_ip_target, type: :numeric, desc: "set a default custom warm ip target for CNI"
  def create
- opts = {region: options[:region],
- kubernetes_version: options[:kubernetes_version],
- open_ports: options[:open_ports],
- cidr: options[:cidr],
- warm_ip_target: options[:warm_ip_target] ? options[:warm_ip_target].to_i : nil,
- subnet1_az: (options[:subnet1_az] || Config::AZS[options[:region]][0]),
- subnet2_az: (options[:subnet2_az] || Config::AZS[options[:region]][1]),
- subnet3_az: (options[:subnet3_az] || Config::AZS[options[:region]][2])}
- config.bootstrap(opts)
- cluster = EKS::Cluster.new(cluster_name).create
- cluster.update_kubeconfig
- wait_for_cluster
- enable_gpu if options[:enable_gpu]
- create_default_storage_class if options[:create_default_storage_class]
- create_dns_autoscaler if options[:create_dns_autoscaler]
- update_cluster_cni if options[:warm_ip_target]
- Log.info "cluster creation completed"
+ with_context do
+
+ opts = {region: options[:region],
+ kubernetes_version: options[:kubernetes_version],
+ open_ports: options[:open_ports],
+ cidr: options[:cidr],
+ warm_ip_target: options[:warm_ip_target] ? options[:warm_ip_target].to_i : nil,
+ subnet1_az: (options[:subnet1_az] || Config::AZS[options[:region]][0]),
+ subnet2_az: (options[:subnet2_az] || Config::AZS[options[:region]][1]),
+ subnet3_az: (options[:subnet3_az] || Config::AZS[options[:region]][2])}
+
+ config.bootstrap(opts)
+ cluster = EKS::Cluster.new(cluster_name).create
+ cluster.update_kubeconfig
+ wait_for_cluster
+ enable_gpu if options[:enable_gpu]
+ create_default_storage_class if options[:create_default_storage_class]
+ create_dns_autoscaler if options[:create_dns_autoscaler]
+ update_cluster_cni if options[:warm_ip_target]
+ Log.info "cluster creation completed"
+ end
  end
 
  desc "show-config", "print cluster configuration"
  option :group_name, desc: "group name to show configuration for"
  def show_config
- if options[:group_name]
- puts JSON.pretty_generate(config.for_group(options[:group_name]))
- else
- puts JSON.pretty_generate(config.read_from_disk)
+ with_context do
+ if options[:group_name]
+ puts JSON.pretty_generate(config.for_group(options[:group_name]))
+ else
+ puts JSON.pretty_generate(config.read_from_disk)
+ end
  end
  end
 
  desc "update-cluster-cni", "updates cni with warm ip target"
  def update_cluster_cni
- K8s::Client.new(cluster_name).update_cni
+ with_context { K8s::Client.new(cluster_name).update_cni }
  end
 
  desc "enable-gpu", "installs nvidia plugin as a daemonset on the cluster"
  def enable_gpu
- K8s::Client.new(cluster_name).enable_gpu
+ with_context { K8s::Client.new(cluster_name).enable_gpu }
  end
 
  desc "set-docker-registry-credentials USERNAME PASSWORD EMAIL", "sets docker registry credentials"
  def set_docker_registry_credentials(username, password, email)
- K8s::Client.new(cluster_name).set_docker_registry_credentials(username, password, email)
+ with_context { K8s::Client.new(cluster_name).set_docker_registry_credentials(username, password, email) }
  end
 
  desc "create-default-storage-class", "creates default storage class on a new k8s cluster"
  def create_default_storage_class
- K8s::Client.new(cluster_name).create_default_storage_class
+ with_context { K8s::Client.new(cluster_name).create_default_storage_class }
  end
 
  desc "create-nodegroup", "creates all nodegroups on environment"
@@ -106,13 +113,15 @@ module EksCli
  option :enable_docker_bridge, type: :boolean, default: false, desc: "pass --enable-docker-bridge true on bootstrap.sh (https://github.com/kubernetes/kubernetes/issues/40182))"
  option :yes, type: :boolean, default: false, desc: "perform nodegroup creation"
  def create_nodegroup
- opts = options.dup
- opts[:subnets] = opts[:subnets].map(&:to_i)
- Config[cluster_name].update_nodegroup(opts) unless opts[:all]
- if opts[:yes]
- cf_stacks = nodegroups.map {|ng| ng.create(wait_for_completion: false)}
- CloudFormation::Stack.await(cf_stacks)
- K8s::Auth.new(cluster_name).update
+ with_context do
+ opts = options.dup
+ opts[:subnets] = opts[:subnets].map(&:to_i)
+ Config[cluster_name].update_nodegroup(opts) unless opts[:all]
+ if opts[:yes]
+ cf_stacks = nodegroups.map {|ng| ng.create(wait_for_completion: false)}
+ CloudFormation::Stack.await(cf_stacks)
+ K8s::Auth.new(cluster_name).update
+ end
  end
  end
 
@@ -125,35 +134,37 @@ module EksCli
  option :asg, type: :boolean, default: true, desc: "scale ec2 auto scaling group"
  option :update, type: :boolean, default: false, desc: "update the nodegroup attributes"
  def scale_nodegroup
- nodegroups.each do |ng|
- min = (options[:min] || config.for_group(ng.name)["min"]).to_i
- max = (options[:max] || config.for_group(ng.name)["max"]).to_i
- ng.scale(min, max, options[:asg], options[:spotinst])
- Config[cluster_name].update_nodegroup(options.slice("min", "max").merge({"group_name" => ng.name})) if options[:update]
+ with_context do
+ nodegroups.each do |ng|
+ min = (options[:min] || config.for_group(ng.name)["min"]).to_i
+ max = (options[:max] || config.for_group(ng.name)["max"]).to_i
+ ng.scale(min, max, options[:asg], options[:spotinst])
+ Config[cluster_name].update_nodegroup(options.slice("min", "max").merge({"group_name" => ng.name})) if options[:update]
+ end
  end
  end
 
- desc "delete-cluster", "deleted cluster"
+ desc "delete-cluster", "deletes a cluster, including nodegroups/elastigroups and cloudformation stacks"
  def delete_cluster
- EKS::Cluster.new(cluster_name).delete
+ with_context { EKS::Cluster.new(cluster_name).delete }
  end
 
  desc "delete-nodegroup", "deletes cloudformation stack for nodegroup"
  option :all, type: :boolean, default: false, desc: "delete all nodegroups. can't be used with --name"
  option :group_name, type: :string, desc: "delete a specific nodegroup. can't be used with --all"
  def delete_nodegroup
- nodegroups.each(&:delete)
+ with_context { nodegroups.each(&:delete) }
  end
 
  desc "update-auth", "update aws auth configmap to allow all nodegroups to connect to control plane"
  def update_auth
- K8s::Auth.new(cluster_name).update
+ with_context { K8s::Auth.new(cluster_name).update }
  end
 
  desc "set-iam-policies", "sets IAM policies to be attached to created nodegroups"
  option :policies, type: :array, required: true, desc: "IAM policies ARNs"
  def set_iam_policies
- Config[cluster_name].set_iam_policies(options[:policies])
+ with_context { Config[cluster_name].set_iam_policies(options[:policies]) }
  end
 
  desc "update-dns HOSTNAME K8S_SERVICE_NAME", "alters route53 CNAME records to point to k8s service ELBs"
@@ -161,22 +172,22 @@
  option :elb_hosted_zone_id, required: true, desc: "hosted zone ID for the ELB on ec2"
  option :namespace, default: "default", desc: "the k8s namespace of the service"
  def update_dns(hostname, k8s_service_name)
- Route53::Client.new(cluster_name).update_dns(hostname, k8s_service_name, options[:namespace], options[:route53_hosted_zone_id], options[:elb_hosted_zone_id])
+ with_context { Route53::Client.new(cluster_name).update_dns(hostname, k8s_service_name, options[:namespace], options[:route53_hosted_zone_id], options[:elb_hosted_zone_id]) }
  end
 
  desc "set-inter-vpc-networking TO_VPC_ID TO_SG_ID", "creates a vpc peering connection, sets route tables and allows network access on SG"
  def set_inter_vpc_networking(to_vpc_id, to_sg_id)
- VPC::Client.new(cluster_name).set_inter_vpc_networking(to_vpc_id, to_sg_id)
+ with_context { VPC::Client.new(cluster_name).set_inter_vpc_networking(to_vpc_id, to_sg_id) }
  end
 
  desc "create-dns-autoscaler", "creates kube dns autoscaler"
  def create_dns_autoscaler
- K8s::Client.new(cluster_name).create_dns_autoscaler
+ with_context { K8s::Client.new(cluster_name).create_dns_autoscaler }
  end
 
  desc "wait-for-cluster", "waits until cluster responds to HTTP requests"
  def wait_for_cluster
- K8s::Client.new(cluster_name).wait_for_cluster
+ with_context { K8s::Client.new(cluster_name).wait_for_cluster }
  end
 
  desc "export-nodegroup", "exports nodegroup auto scaling group to spotinst"
@@ -184,7 +195,7 @@ module EksCli
  option :group_name, type: :string, desc: "create a specific nodegroup. can't be used with --all"
  option :exact_instance_type, type: :boolean, default: false, desc: "enforce spotinst to use existing instance type only"
  def export_nodegroup
- nodegroups.each {|ng| ng.export_to_spotinst(options[:exact_instance_type]) }
+ with_context { nodegroups.each {|ng| ng.export_to_spotinst(options[:exact_instance_type]) } }
  end
 
  desc "add-iam-user IAM_ARN", "adds an IAM user as an authorized member on the EKS cluster"
@@ -192,8 +203,10 @@
  option :groups, type: :array, default: ["system:masters"], desc: "which group should the user be added to"
  option :yes, type: :boolean, default: false, desc: "update aws-auth configmap"
  def add_iam_user(iam_arn)
- Config[cluster_name].add_user(iam_arn, options[:username], options[:groups])
- K8s::Auth.new(cluster_name).update if options[:yes]
+ with_context do
+ Config[cluster_name].add_user(iam_arn, options[:username], options[:groups])
+ K8s::Auth.new(cluster_name).update if options[:yes]
+ end
  end
 
  disable_required_check! :version
@@ -203,16 +216,35 @@
  end
 
  no_commands do
- def cluster_name; options[:cluster_name]; end
+ def cluster_name; options_or_env(:cluster_name); end
+ def s3_bucket; options_or_env(:s3_bucket); end
+
+ def with_context
+ Config.s3_bucket=(s3_bucket)
+ yield
+ end
 
- def config; Config[cluster_name]; end
+ def config; Config.new(cluster_name); end
 
- def all_nodegroups; Config[cluster_name]["groups"].keys ;end
+ def all_nodegroups; config["groups"].keys ;end
 
  def nodegroups
  ng = options[:all] ? all_nodegroups : [options[:group_name]]
  ng.map {|n| NodeGroup.new(cluster_name, n)}
  end
+
+ def options_or_env(k)
+ v = options[k] || ENV[env_param_name(k)]
+ if v == nil || v == ""
+ Log.error "missing #{k} or #{env_param_name(k)}"
+ exit 1
+ end
+ v
+ end
+
+ def env_param_name(k)
+ "EKS_CLI_#{k.to_s.upcase}"
+ end
  end
 
  end
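The new `options_or_env` helper above is the heart of this release's UX change: a flag wins when present, otherwise the matching `EKS_CLI_*` variable is read, and a missing value aborts the command. A minimal standalone sketch of the same lookup, assuming nothing beyond plain Ruby (the function and names here are illustrative, not the gem's API):

```ruby
# Simplified flag-or-env fallback, mirroring options_or_env in cli.rb.
def options_or_env(options, key)
  env_name = "EKS_CLI_#{key.to_s.upcase}"
  value = options[key] || ENV[env_name]
  # Bail out like the CLI does (Log.error + exit 1) when neither source has a value.
  abort("missing #{key} or #{env_name}") if value.nil? || value.empty?
  value
end

ENV["EKS_CLI_CLUSTER_NAME"] = "from-env"
puts options_or_env({}, :cluster_name)                            # => from-env
puts options_or_env({ cluster_name: "from-flag" }, :cluster_name) # => from-flag
```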
data/lib/eks_cli/config.rb CHANGED
@@ -2,6 +2,7 @@ require 'json'
  require_relative 'log'
  require 'active_support/core_ext/hash'
  require 'fileutils'
+ require 'aws-sdk-s3'
  module EksCli
  class Config
 
@@ -16,6 +17,13 @@ module EksCli
  new(cluster_name)
  end
 
+ def s3_bucket=(bucket)
+ @s3_bucket = bucket
+ end
+
+ def s3_bucket
+ @s3_bucket || raise("no s3 bucket set")
+ end
  end
 
  def initialize(cluster_name)
@@ -24,7 +32,10 @@ module EksCli
 
  def delete
  Log.info "deleting configuration for #{@cluster_name} at #{dir}"
- FileUtils.rm_rf(dir)
+ s3.delete_object(bucket: s3_bucket, key: config_path)
+ s3.delete_object(bucket: s3_bucket, key: state_path)
+ s3.delete_object(bucket: s3_bucket, key: groups_path)
+ s3.delete_object(bucket: s3_bucket, key: dir)
  end
 
  def read_from_disk
@@ -91,33 +102,37 @@ module EksCli
  end
 
  def write_to_file(attrs, path)
- File.open(path, 'w') {|file| file.write(attrs.to_json)}
+ s3.put_object(bucket: s3_bucket, key: path, body: attrs.to_json)
  end
 
  def read(path)
- f = File.read(path)
- JSON.parse(f)
+ resp = s3.get_object(bucket: s3_bucket, key: path)
+ body = resp.body.read
+ JSON.parse(body)
  end
 
  def groups_path
- with_config_dir { |dir| "#{dir}/groups.json" }
+ "#{dir}/groups.json"
  end
 
  def state_path
- with_config_dir { |dir| "#{dir}/state.json" }
+ "#{dir}/state.json"
  end
 
  def config_path
- with_config_dir { |dir| "#{dir}/config.json" }
+ "#{dir}/config.json"
  end
 
  def dir
- "#{ENV['HOME']}/.eks/#{@cluster_name}"
+ "eks-cli/#{@cluster_name}"
+ end
+
+ def s3_bucket
+ self.class.s3_bucket
  end
 
- def with_config_dir
- FileUtils.mkdir_p(dir)
- yield dir
+ def s3
+ @s3 ||= Aws::S3::Client.new
  end
 
  end
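Taken together, these changes replace the local `~/.eks/<cluster>` directory with S3 objects under an `eks-cli/<cluster>` prefix. A self-contained sketch of the same round trip, assuming the `aws-sdk-s3` gem, AWS credentials in the environment, and a placeholder bucket name (the class here is illustrative, not the gem's `Config`):

```ruby
require 'aws-sdk-s3'
require 'json'

# Minimal S3-backed JSON store mirroring Config#write_to_file / Config#read.
class S3ConfigStore
  def initialize(bucket, cluster_name)
    @bucket = bucket
    @prefix = "eks-cli/#{cluster_name}" # same key layout as Config#dir
    @s3     = Aws::S3::Client.new
  end

  def write(name, attrs)
    @s3.put_object(bucket: @bucket, key: "#{@prefix}/#{name}.json", body: attrs.to_json)
  end

  def read(name)
    resp = @s3.get_object(bucket: @bucket, key: "#{@prefix}/#{name}.json")
    JSON.parse(resp.body.read)
  end
end

# store = S3ConfigStore.new("my-eks-config-bucket", "my-eks-cluster")
# store.write("config", { "region" => "us-west-2" })
# store.read("config") # => {"region"=>"us-west-2"}
```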
data/lib/eks_cli/version.rb CHANGED
@@ -1,3 +1,3 @@
  module EksCli
- VERSION = "0.4.4"
+ VERSION = "0.4.5"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: eks_cli
  version: !ruby/object:Gem::Version
- version: 0.4.4
+ version: 0.4.5
  platform: ruby
  authors:
  - Erez Rabih
@@ -10,6 +10,20 @@ bindir: bin
  cert_chain: []
  date: 2018-11-18 00:00:00.000000000 Z
  dependencies:
+ - !ruby/object:Gem::Dependency
+ name: aws-sdk-s3
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1'
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1'
  - !ruby/object:Gem::Dependency
  name: thor
  requirement: !ruby/object:Gem::Requirement