aws_helper 0.0.9 → 0.0.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
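The one substantive code change in this release is the owner filter inside `snap_email` (old line 161 vs. new line 161 of the CLI file below); the other files show no textual difference between their removed and added sides. The sketch below illustrates why the old comparison could never match: Thor parses the numeric `--owner` option as an Integer, while the AWS API reports `owner_id` as a String. The sample values are illustrative, not taken from a real account.

```ruby
# Illustrative values: --owner as Thor parses it, owner_id as the API returns it.
owner_option = 999887777                  # Integer (Thor :numeric option)
snapshot     = { owner_id: '999887777' }  # String (API response field)

match_0_0_9  = snapshot[:owner_id] == owner_option           # String == Integer
match_0_0_10 = snapshot[:owner_id].to_s == owner_option.to_s # both coerced

puts match_0_0_9   # false: the type mismatch silently filtered out every snapshot
puts match_0_0_10  # true
```

Coercing both sides with `to_s`, as 0.0.10 does, also keeps the "no owner given" case working, since `nil.to_s` is the empty string.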
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 9bc2ba4458b519bf2ca28bedfd8fb596708dda1e
- data.tar.gz: c6066bedd227ac810582245edbbd17dd157f40d9
+ metadata.gz: 360b8512c1db61130f88ee2fad4812c8e1009bea
+ data.tar.gz: 32d47f3a1a62bc514cc1e08bbca18700862a84ea
  SHA512:
- metadata.gz: 130e46eb00aa48fcb8652a179c4dc76cc02e27ee83d813c3994803fc5faac3c81ff77bb16109ec02999a361e2743d53cf8c259f1b0885628ccdf1bf2736631c2
- data.tar.gz: aacd302d977261929dfcdda2e9aa1666c5fe2c86b6074680db7d751fde738032904f0d1864d52eb0e3cf62438c9506b4691f5430c798679f7c21fc0336ca451a
+ metadata.gz: dda28d1acf9f78761567694632fb472086339cc6d98a57d72fcd740476de56504d3eda50c5f7662142ac2f58347cc0e7fd9bd4829f4852aca6357fbdd7c31e4c
+ data.tar.gz: aba629ecf24255f352820fb8f7a3083d38682a96ada72aa82c0f003d5916ef6be1a6452fd9a229d2bb197824247bde6dd8ca48b2e4a9bc915e927f89e2f25a7b
data/README.md CHANGED
@@ -1,71 +1,71 @@
- # aws_helper
-
- Aws Helper for an instance
-
- Allows functions on EBS volumes, snapshots, IP addresses and more
- * initially snapshots are supported
-
- ## Installation
-
- Add this line to your application's Gemfile:
-
- gem 'aws_helper'
-
- And then execute:
-
- $ bundle
-
- Or install it yourself as:
-
- $ gem install aws_helper
-
- ## Minimal Usage
-
- Assuming server start with an IAM role that have read access to AWS can create and delete snapshots:
-
- Snapshot EBS root device at /dev/sda1
-
- aws_helper snap /dev/sda1 --description zzzzzzzzz
-
- Prune so only keep 7 snapshots:
-
- aws_helper snap_prune /dev/sda1 --snapshots_to_keep=7
-
- Email me a list of the latest 20 snapshots:
-
- aws_helper snap_email me@company.com ebs.backups@company.com mysmtpemailserver.com
-
- Cleanup ebs disks - Delete old server root disks:
-
- aws_helper ebs_cleanup
-
- Disks that are 8GB in size, not attached to a server, not tagged in any way and from a snapshot will be deleted.
-
- ## Complex Usage
-
- If your server does not have a role then you need to code the AWS keys which is not best practice:
-
- Snapshot EBS attached to device /dev/sdf volume vol-123456 access AWS through an http proxy:
-
- export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- export HTTP_PROXY=http://myproxy:port
- aws_helper snap /dev/sdf vol-123456 --description zzzzzzzzz
-
- Prune so only keep 20 snapshots:
-
- export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- export HTTP_PROXY=http://myproxy:port
- aws_helper snap_prune /dev/sdf vol-123456 --snapshots_to_keep=20
-
- Email me a list of the latest 30 snapshots with a subject title on email:
-
- export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- export HTTP_PROXY=http://myproxy:port
- aws_helper snap_email me@company.com ebs.backups@company.com mysmtpemailserver.com 'My EBS Backups' --rows=30
-
- Other functions to follow
-
-
+ # aws_helper
+
+ Aws Helper for an instance
+
+ Allows functions on EBS volumes, snapshots, IP addresses and more
+ * initially snapshots are supported
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ gem 'aws_helper'
+
+ And then execute:
+
+ $ bundle
+
+ Or install it yourself as:
+
+ $ gem install aws_helper
+
+ ## Minimal Usage
+
+ Assuming server start with an IAM role that have read access to AWS can create and delete snapshots:
+
+ Snapshot EBS root device at /dev/sda1
+
+ aws_helper snap /dev/sda1 --description zzzzzzzzz
+
+ Prune so only keep 7 snapshots:
+
+ aws_helper snap_prune /dev/sda1 --snapshots_to_keep=7
+
+ Email me a list of the latest 20 snapshots:
+
+ aws_helper snap_email me@company.com ebs.backups@company.com mysmtpemailserver.com
+
+ Cleanup ebs disks - Delete old server root disks:
+
+ aws_helper ebs_cleanup
+
+ Disks that are 8GB in size, not attached to a server, not tagged in any way and from a snapshot will be deleted.
+
+ ## Complex Usage
+
+ If your server does not have a role then you need to code the AWS keys which is not best practice:
+
+ Snapshot EBS attached to device /dev/sdf volume vol-123456 access AWS through an http proxy:
+
+ export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
+ export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
+ export HTTP_PROXY=http://myproxy:port
+ aws_helper snap /dev/sdf vol-123456 --description zzzzzzzzz
+
+ Prune so only keep 20 snapshots:
+
+ export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
+ export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
+ export HTTP_PROXY=http://myproxy:port
+ aws_helper snap_prune /dev/sdf vol-123456 --snapshots_to_keep=20
+
+ Email me a list of the latest 30 snapshots with a subject title on email:
+
+ export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
+ export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
+ export HTTP_PROXY=http://myproxy:port
+ aws_helper snap_email me@company.com ebs.backups@company.com mysmtpemailserver.com 'My EBS Backups' --rows=30
+
+ Other functions to follow
+
+
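A note on the README's export lines above: `export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'` contains a space before the `=`, which a POSIX shell parses as two separate arguments to `export`, so the variable is never assigned (and `='xxxxxxxxxxxx'` is rejected as an invalid identifier). The working form, using the README's own placeholder values, has no whitespace around `=`:

```shell
# Correct POSIX assignment syntax: no space around `=`.
# Values are the README's placeholders, not real credentials.
export AWS_ACCESS_KEY_ID='xxxxxxxxxxxx'
export AWS_SECRET_ACCESS_KEY='yyyyyyyy'
export HTTP_PROXY=http://myproxy:port
echo "$AWS_ACCESS_KEY_ID"
```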
@@ -1,34 +1,34 @@
- # encoding: utf-8
-
- lib = File.expand_path('../lib', __FILE__)
- $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
- require 'awshelper/version'
-
- Gem::Specification.new do |s|
- s.name = 'aws_helper'
- s.version = Awshelper::VERSION
- s.authors = ['Neill Turner']
- s.email = ['neillwturner@gmail.com']
- s.homepage = 'https://github.com/neillturner/aws_helper'
- s.summary = 'Aws Helper for an instance'
- candidates = Dir.glob('{lib}/**/*') + ['README.md', 'aws_helper.gemspec']
- candidates = candidates + Dir.glob("bin/*")
- s.files = candidates.sort
- s.platform = Gem::Platform::RUBY
- s.executables = s.files.grep(%r{^bin/}) { |f| File.basename(f) }
- s.require_paths = ['lib']
- s.add_dependency('right_aws')
- s.add_dependency('thor')
- s.rubyforge_project = '[none]'
- s.description = <<-EOF
- == DESCRIPTION:
-
- Aws Helper for an instance
-
- == FEATURES:
-
- Allows functions on EBS volumes, snapshots, IP addresses and more
-
- EOF
-
- end
+ # encoding: utf-8
+
+ lib = File.expand_path('../lib', __FILE__)
+ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+ require 'awshelper/version'
+
+ Gem::Specification.new do |s|
+ s.name = 'aws_helper'
+ s.version = Awshelper::VERSION
+ s.authors = ['Neill Turner']
+ s.email = ['neillwturner@gmail.com']
+ s.homepage = 'https://github.com/neillturner/aws_helper'
+ s.summary = 'Aws Helper for an instance'
+ candidates = Dir.glob('{lib}/**/*') + ['README.md', 'aws_helper.gemspec']
+ candidates = candidates + Dir.glob("bin/*")
+ s.files = candidates.sort
+ s.platform = Gem::Platform::RUBY
+ s.executables = s.files.grep(%r{^bin/}) { |f| File.basename(f) }
+ s.require_paths = ['lib']
+ s.add_dependency('right_aws')
+ s.add_dependency('thor')
+ s.rubyforge_project = '[none]'
+ s.description = <<-EOF
+ == DESCRIPTION:
+
+ Aws Helper for an instance
+
+ == FEATURES:
+
+ Allows functions on EBS volumes, snapshots, IP addresses and more
+
+ EOF
+
+ end
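The gemspec above derives its executable names from the packaged file list via `Enumerable#grep` with a block, which returns the block's result for each matching element. A standalone sketch of that one line, with an illustrative file list:

```ruby
# Illustrative file list; the gemspec builds its own from Dir.glob.
files = ['bin/aws_helper', 'lib/awshelper/cli.rb', 'README.md']

# grep with a block: select entries matching the pattern,
# then map each through the block (here, strip the bin/ prefix).
executables = files.grep(%r{^bin/}) { |f| File.basename(f) }

puts executables.inspect # => ["aws_helper"]
```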
@@ -1,5 +1,5 @@
- #!/usr/bin/env ruby
-
- require "awshelper/cli"
-
+ #!/usr/bin/env ruby
+
+ require "awshelper/cli"
+
  Awshelper::CLI.start(ARGV)
@@ -1,2 +1,2 @@
- module Awshelper
+ module Awshelper
  end
@@ -1,410 +1,410 @@
- require 'thor'
- require 'awshelper'
- require 'awshelper/ec2'
- require 'syslog'
- require 'net/smtp'
- require 'json'
-
- module Awshelper
- class CLI < Thor
- include Thor::Actions
-
- include Awshelper::Ec2
-
- #def ebs_create(volume_id, snapshot_id, most_recent_snapshot)
- # #TO DO
- # raise "Cannot create a volume with a specific id (EC2 chooses volume ids)" if volume_id
- # if snapshot_id =~ /vol/
- # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
- # end
- #
- # #nvid = volume_id_in_node_data
- # #if nvid
- # # # volume id is registered in the node data, so check that the volume in fact exists in EC2
- # # vol = volume_by_id(nvid)
- # # exists = vol && vol[:aws_status] != "deleting"
- # # # TODO: determine whether this should be an error or just cause a new volume to be created. Currently erring on the side of failing loudly
- # # raise "Volume with id #{nvid} is registered with the node but does not exist in EC2. To clear this error, remove the ['aws']['ebs_volume']['#{new_resource.name}']['volume_id'] entry from this node's data." unless exists
- # #else
- # # Determine if there is a volume that meets the resource's specifications and is attached to the current
- # # instance in case a previous [:create, :attach] run created and attached a volume but for some reason was
- # # not registered in the node data (e.g. an exception is thrown after the attach_volume request was accepted
- # # by EC2, causing the node data to not be stored on the server)
- # if new_resource.device && (attached_volume = currently_attached_volume(instance_id, new_resource.device))
- # Chef::Log.debug("There is already a volume attached at device #{new_resource.device}")
- # compatible = volume_compatible_with_resource_definition?(attached_volume)
- # raise "Volume #{attached_volume[:aws_id]} attached at #{attached_volume[:aws_device]} but does not conform to this resource's specifications" unless compatible
- # Chef::Log.debug("The volume matches the resource's definition, so the volume is assumed to be already created")
- # converge_by("update the node data with volume id: #{attached_volume[:aws_id]}") do
- # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = attached_volume[:aws_id]
- # node.save unless Chef::Config[:solo]
- # end
- # else
- # # If not, create volume and register its id in the node data
- # converge_by("create a volume with id=#{new_resource.snapshot_id} size=#{new_resource.size} availability_zone=#{new_resource.availability_zone} and update the node data with created volume's id") do
- # nvid = create_volume(new_resource.snapshot_id,
- # new_resource.size,
- # new_resource.availability_zone,
- # new_resource.timeout,
- # new_resource.volume_type,
- # new_resource.piops)
- # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = nvid
- # node.save unless Chef::Config[:solo]
- # end
- # end
- # #end
- #end
-
- #def ebs_attach(device, volume_id, timeout)
- # # determine_volume returns a Hash, not a Mash, and the keys are
- # # symbols, not strings.
- # vol = determine_volume(device, volume_id)
- # if vol[:aws_status] == "in-use"
- # if vol[:aws_instance_id] != instance_id
- # raise "Volume with id #{vol[:aws_id]} exists but is attached to instance #{vol[:aws_instance_id]}"
- # else
- # Chef::Log.debug("Volume is already attached")
- # end
- # else
- # # attach the volume
- # attach_volume(vol[:aws_id], instance_id, device, timeout)
- # end
- #end
-
- #def ebs_detach(device, volume_id, timeout)
- # vol = determine_volume(device, volume_id)
- # detach_volume(vol[:aws_id], timeout)
- #end
-
- desc "snap DEVICE [VOLUME_ID]", "Take a snapshot of a EBS Disk."
- option :description
-
- long_desc <<-LONGDESC
- 'snap DEVICE [VOLUME_ID] --description xxxxxx'
- \x5 Take a snapshot of a EBS Disk by specifying device and/or volume_id.
- \x5 All commands rely on environment variables or the server having an IAM role
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyy'
- \x5 For example
- \x5 aws_helper snap /dev/sdf
- \x5 will snap shot the EBS disk attach to device /dev/xvdj
- LONGDESC
-
- def snap(device, volume_id=nil)
- vol = determine_volume(device, volume_id)
- snap_description = options[:description] if options[:description]
- snap_description = "Created by aws_helper(#{instance_id}/#{local_ipv4}) for #{ami_id} from #{vol[:aws_id]}" if !options[:description]
- snapshot = ec2.create_snapshot(vol[:aws_id],snap_description)
- log("Created snapshot of #{vol[:aws_id]} as #{snapshot[:aws_id]}")
- end
-
- desc "snap_prune DEVICE [VOLUME_ID]", "Prune the number of snapshots."
- option :snapshots_to_keep, :type => :numeric, :required => true
-
- long_desc <<-LONGDESC
- 'snap_prune DEVICE [VOLUME_ID] --snapshots_to_keep=<numeric>'
- \x5 Prune the number of snapshots of a EBS Disk by specifying device and/or volume_id and the no to keep.
- \x5 All commands rely on environment variables or the server having an IAM role
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- \x5 For example
- \x5 aws_helper snap_prune /dev/sdf --snapshots_to_keep=7
- \x5 will keep the last 7 snapshots of the EBS disk attach to device /dev/xvdj
- LONGDESC
-
- def snap_prune(device, volume_id=nil)
- snapshots_to_keep = options[:snapshots_to_keep]
- vol = determine_volume(device, volume_id)
- old_snapshots = Array.new
- log("Checking for old snapshots")
- ec2.describe_snapshots.sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
- if snapshot[:aws_volume_id] == vol[:aws_id]
- log("Found old snapshot #{snapshot[:aws_id]} (#{snapshot[:aws_volume_id]}) #{snapshot[:aws_started_at]}")
- old_snapshots << snapshot
- end
- end
- if old_snapshots.length > snapshots_to_keep
- old_snapshots[snapshots_to_keep, old_snapshots.length].each do |die|
- log("Deleting old snapshot #{die[:aws_id]}")
- ec2.delete_snapshot(die[:aws_id])
- end
- end
- end
-
- desc "snap_email TO FROM EMAIL_SERVER", "Email Snapshot List."
- option :rows, :type => :numeric, :required => false
- option :owner, :type => :numeric, :required => false
-
- long_desc <<-LONGDESC
- 'snap_email TO FROM EMAIL_SERVER ['EBS Backups'] --rows=<numeric> --owner=<numeric>'
- \x5 Emails the last 20 snapshots from specific email address via the email_server.
- \x5 All commands rely on environment variables or the server having an IAM role
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- \x5 For example
- \x5 aws_helper snap_email me@mycompany.com ebs.backups@mycompany.com emailserver.com 'My EBS Backups' --rows=20 -owner=999887777
- \x5 will email the list of the latest 20 snapshots to email address me@mycompany.com via email server emailserver.com
- \x5 that belong to aws owner 999887777
- LONGDESC
-
- def snap_email(to, from, email_server, subject='EBS Backups')
- rows = 20
- rows = options[:rows] if options[:rows]
- #owner = {}
- #owner = {:aws_owner => options[:owner]} if options[:owner]
- message = ""
- log("Report on snapshots")
- # ({ Name="start-time", Values="today in YYYY-MM-DD"})
- i = rows
- ec2.describe_snapshots().sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
- if i >0
- if !options[:owner] or snapshot[:owner_id] == options[:owner]
- message = message+"#{snapshot[:aws_id]} #{snapshot[:aws_volume_id]} #{snapshot[:aws_started_at]} #{snapshot[:aws_description]} #{snapshot[:aws_status]}\n"
- i = i-1
- end
- end
- end
- opts = {}
- opts[:server] = email_server
- opts[:from] = from
- opts[:from_alias] = 'EBS Backups'
- opts[:subject] = subject
- opts[:body] = message
- send_email(to,opts)
- end
-
- desc "ebs_cleanup", "Cleanup ebs disks - Delete old server root disks."
-
- long_desc <<-LONGDESC
- 'ebs_cleanup'
- \x5 Cleanup ebs disks - Delete old server root disks.
- \x5 Disks that are 8GB in size, not attached to a server, not tagged in any way and from a snapshot.
- \x5 All commands rely on environment variables or the server having an IAM role.
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- \x5 For example
- \x5 ebs_cleanup
- LONGDESC
-
- def ebs_cleanup()
- ec2.describe_volumes(:filters => { 'status' => 'available', 'size' => '8' }).each do |r|
- if r[:aws_size] == 8 and r[:aws_status] == 'available' and r[:tags] == {} and r[:snapshot_id] != nil and r[:snapshot_id][0,5] == 'snap-' then
- log("Deleting unused volume #{r[:aws_id]} from snapshot #{r[:snapshot_id]}")
- ec2.delete_volume(r[:aws_id])
- end
- end
- end
-
-
- private
-
- def log(message,type="info")
- # $0 is the current script name
- puts message
- Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.info message } if type == "info"
- Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.info message } if type == "err"
- end
-
- # Pulls the volume id from the volume_id attribute or the node data and verifies that the volume actually exists
- def determine_volume(device, volume_id)
- vol = currently_attached_volume(instance_id, device)
- vol_id = volume_id || ( vol ? vol[:aws_id] : nil )
- log("volume_id attribute not set and no volume is attached at the device #{device}",'err') unless vol_id
- raise "volume_id attribute not set and no volume is attached at the device #{device}" unless vol_id
-
- # check that volume exists
- vol = volume_by_id(vol_id)
- log("No volume with id #{vol_id} exists",'err') unless vol
- raise "No volume with id #{vol_id} exists" unless vol
-
- vol
- end
-
-
- def get_all_instances(filter={})
- data = []
- response = ec2.describe_instances(filter)
- if response.status == 200
- data_s = response.body['reservationSet']
- data_s.each do |rs|
- gs=rs['groupSet']
- rs['instancesSet'].each do |r|
- #r[:aws_instance_id] = r['instanceId']
- #r[:public_ip] = r['ipAddress']
- #r[:aws_state] = r['instanceState']['name']
- #r['groupSet']=rs['groupSet']
- data.push(r)
- end
- end
- end
- data
- end
-
-
- # Retrieves information for a volume
- def volume_by_id(volume_id)
- ec2.describe_volumes.find{|v| v[:aws_id] == volume_id}
- end
-
- # Returns the volume that's attached to the instance at the given device or nil if none matches
- def currently_attached_volume(instance_id, device)
- ec2.describe_volumes.find{|v| v[:aws_instance_id] == instance_id && v[:aws_device] == device}
- end
-
- # Returns true if the given volume meets the resource's attributes
- #def volume_compatible_with_resource_definition?(volume)
- # if new_resource.snapshot_id =~ /vol/
- # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
- # end
- # (new_resource.size.nil? || new_resource.size == volume[:aws_size]) &&
- # (new_resource.availability_zone.nil? || new_resource.availability_zone == volume[:zone]) &&
- # (new_resource.snapshot_id.nil? || new_resource.snapshot_id == volume[:snapshot_id])
- #end
-
- # TODO: support tags in deswcription
- #def tag_value(instance,tag_key)
- # options = ec2.describe_tags({:filters => {:resource_id => instance }} )
- # end
-
- # Creates a volume according to specifications and blocks until done (or times out)
- def create_volume(snapshot_id, size, availability_zone, timeout, volume_type, piops)
- availability_zone ||= instance_availability_zone
-
- # Sanity checks so we don't shoot ourselves.
- raise "Invalid volume type: #{volume_type}" unless ['standard', 'io1', 'gp2'].include?(volume_type)
-
- # PIOPs requested. Must specify an iops param and probably won't be "low".
- if volume_type == 'io1'
- raise 'IOPS value not specified.' unless piops >= 100
- end
-
- # Shouldn't see non-zero piops param without appropriate type.
- if piops > 0
- raise 'IOPS param without piops volume type.' unless volume_type == 'io1'
- end
-
- create_volume_opts = { :volume_type => volume_type }
- # TODO: this may have to be casted to a string. rightaws vs aws doc discrepancy.
- create_volume_opts[:iops] = piops if volume_type == 'io1'
-
- nv = ec2.create_volume(snapshot_id, size, availability_zone, create_volume_opts)
- Chef::Log.debug("Created new volume #{nv[:aws_id]}#{snapshot_id ? " based on #{snapshot_id}" : ""}")
-
- # block until created
- begin
- Timeout::timeout(timeout) do
- while true
- vol = volume_by_id(nv[:aws_id])
- if vol && vol[:aws_status] != "deleting"
- if ["in-use", "available"].include?(vol[:aws_status])
- Chef::Log.info("Volume #{nv[:aws_id]} is available")
- break
- else
- Chef::Log.debug("Volume is #{vol[:aws_status]}")
- end
- sleep 3
- else
- raise "Volume #{nv[:aws_id]} no longer exists"
- end
- end
- end
- rescue Timeout::Error
- raise "Timed out waiting for volume creation after #{timeout} seconds"
- end
-
- nv[:aws_id]
- end
-
- # Attaches the volume and blocks until done (or times out)
- def attach_volume(volume_id, instance_id, device, timeout)
- Chef::Log.debug("Attaching #{volume_id} as #{device}")
- ec2.attach_volume(volume_id, instance_id, device)
-
- # block until attached
- begin
- Timeout::timeout(timeout) do
- while true
- vol = volume_by_id(volume_id)
- if vol && vol[:aws_status] != "deleting"
- if vol[:aws_attachment_status] == "attached"
- if vol[:aws_instance_id] == instance_id
- Chef::Log.info("Volume #{volume_id} is attached to #{instance_id}")
- break
- else
- raise "Volume is attached to instance #{vol[:aws_instance_id]} instead of #{instance_id}"
- end
- else
- Chef::Log.debug("Volume is #{vol[:aws_status]}")
- end
- sleep 3
- else
- raise "Volume #{volume_id} no longer exists"
- end
- end
- end
- rescue Timeout::Error
- raise "Timed out waiting for volume attachment after #{timeout} seconds"
- end
- end
-
- # Detaches the volume and blocks until done (or times out)
- def detach_volume(volume_id, timeout)
- vol = volume_by_id(volume_id)
- if vol[:aws_instance_id] != instance_id
- Chef::Log.debug("EBS Volume #{volume_id} is not attached to this instance (attached to #{vol[:aws_instance_id]}). Skipping...")
- return
- end
- Chef::Log.debug("Detaching #{volume_id}")
- orig_instance_id = vol[:aws_instance_id]
- ec2.detach_volume(volume_id)
-
- # block until detached
- begin
- Timeout::timeout(timeout) do
- while true
- vol = volume_by_id(volume_id)
- if vol && vol[:aws_status] != "deleting"
- if vol[:aws_instance_id] != orig_instance_id
- Chef::Log.info("Volume detached from #{orig_instance_id}")
- break
- else
- Chef::Log.debug("Volume: #{vol.inspect}")
- end
- else
- Chef::Log.debug("Volume #{volume_id} no longer exists")
- break
- end
- sleep 3
- end
- end
- rescue Timeout::Error
- raise "Timed out waiting for volume detachment after #{timeout} seconds"
- end
- end
-
- def send_email(to,opts={})
- opts[:server] ||= 'localhost'
- opts[:from] ||= 'email@example.com'
- opts[:from_alias] ||= 'Example Emailer'
- opts[:subject] ||= "You need to see this"
- opts[:body] ||= "Important stuff!"
-
- msg = <<END_OF_MESSAGE
- From: #{opts[:from_alias]} <#{opts[:from]}>
- To: <#{to}>
- Subject: #{opts[:subject]}
-
- #{opts[:body]}
- END_OF_MESSAGE
- puts "Sending to #{to} from #{opts[:from]} email server #{opts[:server]}"
- Net::SMTP.start(opts[:server]) do |smtp|
- smtp.send_message msg, opts[:from], to
- end
- end
-
-
- end
-
- end
-
-
1
+ require 'thor'
2
+ require 'awshelper'
3
+ require 'awshelper/ec2'
4
+ require 'syslog'
5
+ require 'net/smtp'
6
+ require 'json'
7
+
8
+ module Awshelper
9
+ class CLI < Thor
10
+ include Thor::Actions
11
+
12
+ include Awshelper::Ec2
13
+
14
+ #def ebs_create(volume_id, snapshot_id, most_recent_snapshot)
15
+ # #TO DO
16
+ # raise "Cannot create a volume with a specific id (EC2 chooses volume ids)" if volume_id
17
+ # if snapshot_id =~ /vol/
18
+ # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
19
+ # end
20
+ #
21
+ # #nvid = volume_id_in_node_data
22
+ # #if nvid
23
+ # # # volume id is registered in the node data, so check that the volume in fact exists in EC2
24
+ # # vol = volume_by_id(nvid)
25
+ # # exists = vol && vol[:aws_status] != "deleting"
26
+ # # # TODO: determine whether this should be an error or just cause a new volume to be created. Currently erring on the side of failing loudly
27
+ # # raise "Volume with id #{nvid} is registered with the node but does not exist in EC2. To clear this error, remove the ['aws']['ebs_volume']['#{new_resource.name}']['volume_id'] entry from this node's data." unless exists
28
+ # #else
29
+ # # Determine if there is a volume that meets the resource's specifications and is attached to the current
30
+ # # instance in case a previous [:create, :attach] run created and attached a volume but for some reason was
31
+ # # not registered in the node data (e.g. an exception is thrown after the attach_volume request was accepted
32
+ # # by EC2, causing the node data to not be stored on the server)
33
+ # if new_resource.device && (attached_volume = currently_attached_volume(instance_id, new_resource.device))
34
+ # Chef::Log.debug("There is already a volume attached at device #{new_resource.device}")
35
+ # compatible = volume_compatible_with_resource_definition?(attached_volume)
36
+ # raise "Volume #{attached_volume[:aws_id]} attached at #{attached_volume[:aws_device]} but does not conform to this resource's specifications" unless compatible
37
+ # Chef::Log.debug("The volume matches the resource's definition, so the volume is assumed to be already created")
38
+ # converge_by("update the node data with volume id: #{attached_volume[:aws_id]}") do
39
+ # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = attached_volume[:aws_id]
40
+ # node.save unless Chef::Config[:solo]
41
+ # end
42
+ # else
43
+ # # If not, create volume and register its id in the node data
44
+ # converge_by("create a volume with id=#{new_resource.snapshot_id} size=#{new_resource.size} availability_zone=#{new_resource.availability_zone} and update the node data with created volume's id") do
45
+ # nvid = create_volume(new_resource.snapshot_id,
46
+ # new_resource.size,
47
+ # new_resource.availability_zone,
48
+ # new_resource.timeout,
49
+ # new_resource.volume_type,
50
+ # new_resource.piops)
51
+ # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = nvid
52
+ # node.save unless Chef::Config[:solo]
53
+ # end
54
+ # end
55
+ # #end
56
+ #end
57
+
58
+ #def ebs_attach(device, volume_id, timeout)
59
+ # # determine_volume returns a Hash, not a Mash, and the keys are
60
+ # # symbols, not strings.
61
+ # vol = determine_volume(device, volume_id)
62
+ # if vol[:aws_status] == "in-use"
63
+ # if vol[:aws_instance_id] != instance_id
64
+ # raise "Volume with id #{vol[:aws_id]} exists but is attached to instance #{vol[:aws_instance_id]}"
65
+ # else
66
+ # Chef::Log.debug("Volume is already attached")
67
+ # end
68
+ # else
69
+ # # attach the volume
70
+ # attach_volume(vol[:aws_id], instance_id, device, timeout)
71
+ # end
72
+ #end
73
+
74
+ #def ebs_detach(device, volume_id, timeout)
75
+ # vol = determine_volume(device, volume_id)
76
+ # detach_volume(vol[:aws_id], timeout)
77
+ #end
78
+
79
+ desc "snap DEVICE [VOLUME_ID]", "Take a snapshot of a EBS Disk."
80
+ option :description
81
+
82
+ long_desc <<-LONGDESC
83
+ 'snap DEVICE [VOLUME_ID] --description xxxxxx'
84
+ \x5 Take a snapshot of a EBS Disk by specifying device and/or volume_id.
85
+ \x5 All commands rely on environment variables or the server having an IAM role
86
+ \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxx'
87
+ \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyy'
88
+ \x5 For example
89
+ \x5 aws_helper snap /dev/sdf
90
+ \x5 will snap shot the EBS disk attach to device /dev/xvdj
91
+ LONGDESC
92
+
93
+ def snap(device, volume_id=nil)
94
+ vol = determine_volume(device, volume_id)
95
+ snap_description = options[:description] if options[:description]
96
+ snap_description = "Created by aws_helper(#{instance_id}/#{local_ipv4}) for #{ami_id} from #{vol[:aws_id]}" if !options[:description]
97
+ snapshot = ec2.create_snapshot(vol[:aws_id],snap_description)
98
+ log("Created snapshot of #{vol[:aws_id]} as #{snapshot[:aws_id]}")
99
+ end
100
+
101
+ desc "snap_prune DEVICE [VOLUME_ID]", "Prune the number of snapshots."
102
+ option :snapshots_to_keep, :type => :numeric, :required => true
103
+
104
+ long_desc <<-LONGDESC
105
+ 'snap_prune DEVICE [VOLUME_ID] --snapshots_to_keep=<numeric>'
106
+ \x5 Prune the number of snapshots of a EBS Disk by specifying device and/or volume_id and the no to keep.
107
+ \x5 All commands rely on environment variables or the server having an IAM role
108
+ \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
109
+ \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
110
+ \x5 For example
111
+ \x5 aws_helper snap_prune /dev/sdf --snapshots_to_keep=7
112
+ \x5 will keep the last 7 snapshots of the EBS disk attach to device /dev/xvdj
113
+ LONGDESC
114
+
115
+ def snap_prune(device, volume_id=nil)
116
+ snapshots_to_keep = options[:snapshots_to_keep]
117
+ vol = determine_volume(device, volume_id)
118
+ old_snapshots = Array.new
119
+ log("Checking for old snapshots")
120
+ ec2.describe_snapshots.sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
121
+ if snapshot[:aws_volume_id] == vol[:aws_id]
122
+ log("Found old snapshot #{snapshot[:aws_id]} (#{snapshot[:aws_volume_id]}) #{snapshot[:aws_started_at]}")
123
+ old_snapshots << snapshot
124
+ end
125
+ end
126
+ if old_snapshots.length > snapshots_to_keep
127
+ old_snapshots[snapshots_to_keep, old_snapshots.length].each do |die|
128
+ log("Deleting old snapshot #{die[:aws_id]}")
129
+ ec2.delete_snapshot(die[:aws_id])
130
+ end
131
+ end
132
+ end
133
+
+ desc "snap_email TO FROM EMAIL_SERVER", "Email Snapshot List."
+ option :rows, :type => :numeric, :required => false
+ option :owner, :type => :numeric, :required => false
+
+ long_desc <<-LONGDESC
+ 'snap_email TO FROM EMAIL_SERVER ['EBS Backups'] --rows=<numeric> --owner=<numeric>'
+ \x5 Emails the list of the latest snapshots (20 by default) from the given address via EMAIL_SERVER.
+ \x5 All commands rely on environment variables or the server having an IAM role:
+ \x5 export AWS_ACCESS_KEY_ID='xxxxxxxxxxxx'
+ \x5 export AWS_SECRET_ACCESS_KEY='yyyyyyyy'
+ \x5 For example
+ \x5 aws_helper snap_email me@mycompany.com ebs.backups@mycompany.com emailserver.com 'My EBS Backups' --rows=20 --owner=999887777
+ \x5 will email the list of the latest 20 snapshots that belong to AWS owner 999887777
+ \x5 to email address me@mycompany.com via email server emailserver.com
+ LONGDESC
+
+ def snap_email(to, from, email_server, subject = 'EBS Backups')
+ rows = options[:rows] || 20
+ #owner = {}
+ #owner = {:aws_owner => options[:owner]} if options[:owner]
+ message = ""
+ log("Report on snapshots")
+ # ({ Name="start-time", Values="today in YYYY-MM-DD"})
+ i = rows
+ ec2.describe_snapshots.sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
+ if i > 0
+ if options[:owner].to_s == '' || snapshot[:owner_id].to_s == options[:owner].to_s
+ message += "#{snapshot[:aws_id]} #{snapshot[:aws_volume_id]} #{snapshot[:aws_started_at]} #{snapshot[:aws_description]} #{snapshot[:aws_status]}\n"
+ i -= 1
+ end
+ end
+ end
+ opts = {}
+ opts[:server] = email_server
+ opts[:from] = from
+ opts[:from_alias] = 'EBS Backups'
+ opts[:subject] = subject
+ opts[:body] = message
+ send_email(to, opts)
+ end
+
+ desc "ebs_cleanup", "Cleanup ebs disks - Delete old server root disks."
+
+ long_desc <<-LONGDESC
+ 'ebs_cleanup'
+ \x5 Cleanup ebs disks - Delete old server root disks.
+ \x5 Deletes disks that are 8GB in size, not attached to a server, not tagged in any way, and created from a snapshot.
+ \x5 All commands rely on environment variables or the server having an IAM role:
+ \x5 export AWS_ACCESS_KEY_ID='xxxxxxxxxxxx'
+ \x5 export AWS_SECRET_ACCESS_KEY='yyyyyyyy'
+ \x5 For example
+ \x5 aws_helper ebs_cleanup
+ LONGDESC
+
+ def ebs_cleanup
+ ec2.describe_volumes(:filters => { 'status' => 'available', 'size' => '8' }).each do |r|
+ if r[:aws_size] == 8 && r[:aws_status] == 'available' && r[:tags] == {} && r[:snapshot_id].to_s.start_with?('snap-')
+ log("Deleting unused volume #{r[:aws_id]} from snapshot #{r[:snapshot_id]}")
+ ec2.delete_volume(r[:aws_id])
+ end
+ end
+ end
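ebs_cleanup's eligibility check can be factored out as a predicate on the volume hash. A minimal sketch, assuming the same hash keys the method reads; `stale_root_disk?` and the sample hashes are invented for illustration:

```ruby
# True when a volume matches ebs_cleanup's delete criteria:
# 8 GB, unattached ('available'), untagged, and created from a snapshot.
def stale_root_disk?(vol)
  vol[:aws_size] == 8 &&
    vol[:aws_status] == 'available' &&
    vol[:tags] == {} &&
    vol[:snapshot_id].to_s.start_with?('snap-')  # .to_s guards against nil
end

# Invented sample volumes for illustration only
stale  = { :aws_id => 'vol-1111', :aws_size => 8, :aws_status => 'available',
           :tags => {}, :snapshot_id => 'snap-abcd' }
tagged = stale.merge(:tags => { 'Name' => 'keep-me' })
puts stale_root_disk?(stale)   # true
puts stale_root_disk?(tagged)  # false
```

A standalone predicate keeps the destructive `delete_volume` loop trivial to audit: everything that decides whether a disk dies is in one place.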
+
+
+ private
+
+ def log(message, type = "info")
+ # $0 is the current script name
+ puts message
+ Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.info message } if type == "info"
+ Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.err message } if type == "err"
+ end
+
+ # Pulls the volume id from the volume_id attribute or the node data and verifies that the volume actually exists
+ def determine_volume(device, volume_id)
+ vol = currently_attached_volume(instance_id, device)
+ vol_id = volume_id || ( vol ? vol[:aws_id] : nil )
+ log("volume_id attribute not set and no volume is attached at the device #{device}",'err') unless vol_id
+ raise "volume_id attribute not set and no volume is attached at the device #{device}" unless vol_id
+
+ # check that volume exists
+ vol = volume_by_id(vol_id)
+ log("No volume with id #{vol_id} exists",'err') unless vol
+ raise "No volume with id #{vol_id} exists" unless vol
+
+ vol
+ end
+
+
+ def get_all_instances(filter={})
+ data = []
+ response = ec2.describe_instances(filter)
+ if response.status == 200
+ data_s = response.body['reservationSet']
+ data_s.each do |rs|
+ gs=rs['groupSet']
+ rs['instancesSet'].each do |r|
+ #r[:aws_instance_id] = r['instanceId']
+ #r[:public_ip] = r['ipAddress']
+ #r[:aws_state] = r['instanceState']['name']
+ #r['groupSet']=rs['groupSet']
+ data.push(r)
+ end
+ end
+ end
+ data
+ end
+
+
+ # Retrieves information for a volume
+ def volume_by_id(volume_id)
+ ec2.describe_volumes.find{|v| v[:aws_id] == volume_id}
+ end
+
+ # Returns the volume that's attached to the instance at the given device or nil if none matches
+ def currently_attached_volume(instance_id, device)
+ ec2.describe_volumes.find{|v| v[:aws_instance_id] == instance_id && v[:aws_device] == device}
+ end
+
+ # Returns true if the given volume meets the resource's attributes
+ #def volume_compatible_with_resource_definition?(volume)
+ # if new_resource.snapshot_id =~ /vol/
+ # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
+ # end
+ # (new_resource.size.nil? || new_resource.size == volume[:aws_size]) &&
+ # (new_resource.availability_zone.nil? || new_resource.availability_zone == volume[:zone]) &&
+ # (new_resource.snapshot_id.nil? || new_resource.snapshot_id == volume[:snapshot_id])
+ #end
+
+ # TODO: support tags in description
+ #def tag_value(instance,tag_key)
+ # options = ec2.describe_tags({:filters => {:resource_id => instance }} )
+ # end
+
+ # Creates a volume according to specifications and blocks until done (or times out)
+ def create_volume(snapshot_id, size, availability_zone, timeout, volume_type, piops)
+ availability_zone ||= instance_availability_zone
+
+ # Sanity checks so we don't shoot ourselves.
+ raise "Invalid volume type: #{volume_type}" unless ['standard', 'io1', 'gp2'].include?(volume_type)
+
+ # PIOPs requested. Must specify an iops param and probably won't be "low".
+ if volume_type == 'io1'
+ raise 'IOPS value not specified.' unless piops >= 100
+ end
+
+ # Shouldn't see non-zero piops param without appropriate type.
+ if piops > 0
+ raise 'IOPS param without piops volume type.' unless volume_type == 'io1'
+ end
+
+ create_volume_opts = { :volume_type => volume_type }
+ # TODO: this may have to be cast to a string. rightaws vs aws doc discrepancy.
+ create_volume_opts[:iops] = piops if volume_type == 'io1'
+
+ nv = ec2.create_volume(snapshot_id, size, availability_zone, create_volume_opts)
+ Chef::Log.debug("Created new volume #{nv[:aws_id]}#{snapshot_id ? " based on #{snapshot_id}" : ""}")
+
+ # block until created
+ begin
+ Timeout::timeout(timeout) do
+ while true
+ vol = volume_by_id(nv[:aws_id])
+ if vol && vol[:aws_status] != "deleting"
+ if ["in-use", "available"].include?(vol[:aws_status])
+ Chef::Log.info("Volume #{nv[:aws_id]} is available")
+ break
+ else
+ Chef::Log.debug("Volume is #{vol[:aws_status]}")
+ end
+ sleep 3
+ else
+ raise "Volume #{nv[:aws_id]} no longer exists"
+ end
+ end
+ end
+ rescue Timeout::Error
+ raise "Timed out waiting for volume creation after #{timeout} seconds"
+ end
+
+ nv[:aws_id]
+ end
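create_volume, attach_volume and detach_volume below all share the same poll-until-state-or-timeout shape. A minimal standalone version of that loop, with a fake status source standing in for the ec2/volume_by_id calls (`wait_for_status` and the `states` array are invented for illustration; the short interval is just to keep the sketch fast):

```ruby
require 'timeout'

# Generic form of the wait loops used here: poll a status until it
# reaches a terminal value, or raise after `timeout` seconds.
def wait_for_status(timeout, interval = 0.01)
  Timeout::timeout(timeout) do
    loop do
      status = yield
      break status if ["in-use", "available"].include?(status)
      sleep interval
    end
  end
rescue Timeout::Error
  raise "Timed out after #{timeout} seconds"
end

# Fake status source standing in for volume_by_id
states = ["creating", "creating", "available"]
result = wait_for_status(5) { states.shift }
puts result  # "available"
```

Wrapping the loop in `Timeout::timeout` and re-raising with a descriptive message, as the methods here do, turns an open-ended AWS wait into a bounded one with a readable failure.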
+
+ # Attaches the volume and blocks until done (or times out)
+ def attach_volume(volume_id, instance_id, device, timeout)
+ Chef::Log.debug("Attaching #{volume_id} as #{device}")
+ ec2.attach_volume(volume_id, instance_id, device)
+
+ # block until attached
+ begin
+ Timeout::timeout(timeout) do
+ while true
+ vol = volume_by_id(volume_id)
+ if vol && vol[:aws_status] != "deleting"
+ if vol[:aws_attachment_status] == "attached"
+ if vol[:aws_instance_id] == instance_id
+ Chef::Log.info("Volume #{volume_id} is attached to #{instance_id}")
+ break
+ else
+ raise "Volume is attached to instance #{vol[:aws_instance_id]} instead of #{instance_id}"
+ end
+ else
+ Chef::Log.debug("Volume is #{vol[:aws_status]}")
+ end
+ sleep 3
+ else
+ raise "Volume #{volume_id} no longer exists"
+ end
+ end
+ end
+ rescue Timeout::Error
+ raise "Timed out waiting for volume attachment after #{timeout} seconds"
+ end
+ end
+
+ # Detaches the volume and blocks until done (or times out)
+ def detach_volume(volume_id, timeout)
+ vol = volume_by_id(volume_id)
+ if vol[:aws_instance_id] != instance_id
+ Chef::Log.debug("EBS Volume #{volume_id} is not attached to this instance (attached to #{vol[:aws_instance_id]}). Skipping...")
+ return
+ end
+ Chef::Log.debug("Detaching #{volume_id}")
+ orig_instance_id = vol[:aws_instance_id]
+ ec2.detach_volume(volume_id)
+
+ # block until detached
+ begin
+ Timeout::timeout(timeout) do
+ while true
+ vol = volume_by_id(volume_id)
+ if vol && vol[:aws_status] != "deleting"
+ if vol[:aws_instance_id] != orig_instance_id
+ Chef::Log.info("Volume detached from #{orig_instance_id}")
+ break
+ else
+ Chef::Log.debug("Volume: #{vol.inspect}")
+ end
+ else
+ Chef::Log.debug("Volume #{volume_id} no longer exists")
+ break
+ end
+ sleep 3
+ end
+ end
+ rescue Timeout::Error
+ raise "Timed out waiting for volume detachment after #{timeout} seconds"
+ end
+ end
+
+ def send_email(to, opts = {})
+ opts[:server] ||= 'localhost'
+ opts[:from] ||= 'email@example.com'
+ opts[:from_alias] ||= 'Example Emailer'
+ opts[:subject] ||= "You need to see this"
+ opts[:body] ||= "Important stuff!"
+
+ msg = <<END_OF_MESSAGE
+ From: #{opts[:from_alias]} <#{opts[:from]}>
+ To: <#{to}>
+ Subject: #{opts[:subject]}
+
+ #{opts[:body]}
+ END_OF_MESSAGE
+ puts "Sending to #{to} from #{opts[:from]} email server #{opts[:server]}"
+ Net::SMTP.start(opts[:server]) do |smtp|
+ smtp.send_message msg, opts[:from], to
+ end
+ end
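The payload send_email builds can be previewed without an SMTP connection. A sketch that assembles the same headers-blank-line-body string; `build_message` is a hypothetical helper, and the defaults mirror the ones the method sets above:

```ruby
# Build the same RFC 822-style payload send_email hands to Net::SMTP,
# without opening a connection. Defaults match send_email's own.
def build_message(to, opts = {})
  from       = opts[:from]       || 'email@example.com'
  from_alias = opts[:from_alias] || 'Example Emailer'
  subject    = opts[:subject]    || 'You need to see this'
  body       = opts[:body]       || 'Important stuff!'
  # Headers, then a blank line, then the body
  "From: #{from_alias} <#{from}>\nTo: <#{to}>\nSubject: #{subject}\n\n#{body}"
end

msg = build_message('me@mycompany.com',
                    :from => 'ebs.backups@mycompany.com',
                    :from_alias => 'EBS Backups',
                    :subject => 'My EBS Backups',
                    :body => 'snap-1 vol-1 2014-01-01 ok completed')
puts msg
```

The blank line between the headers and the body is load-bearing: without it, mail servers treat the body's first line as a (broken) header.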
+
+
+ end
+
+ end
+
+