aws_helper 0.0.11 → 0.0.12

This diff shows the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 2d34578116d8ee13fbc57dce29ee95c014ab20cb
- data.tar.gz: f58a1cf60a2a06ba15cb3805b9038b01fbb01690
+ metadata.gz: df95e55f694e4aa18d36716375d570b159c3b650
+ data.tar.gz: 04c549eb82ab387412425d066d34aa6cda243938
  SHA512:
- metadata.gz: 861419fbe9cd55fef1a655155377b97b372151adab8220815fdcc9c68e10540b83bd8cff8ec14f8bf08734724d06650ec70b9bcfbea7339d0b296c2d51f3b043
- data.tar.gz: b1717c2559ab3b4472a3933173fcfadeb61ce1217057fb6e438bf0c7634bdc2eb43c5b50b3e612290bfd94ca97202edc2afa99854637cc39d27f64275cc392c7
+ metadata.gz: 21b2bb8386b7b36c6e8a276b6f183daf0ec321b2767db2bc576cf58befec991447068193e2aacd1641bb661f492037cbfb2d1b0fae8a820b096f3245c23ae5f0
+ data.tar.gz: 523408548c7c84f8f0636a7e684666b608b3e6e1813bd06d72155318af3e4da7ad58b0f5a60bccf748ad6a13b87dd9947c7d5d54039bb69d66a8fb84bed6534b
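The checksums above can be recomputed locally to verify a downloaded gem. A minimal sketch using Ruby's stdlib `Digest` (the file names are assumptions: a `.gem` archive is a tar file containing `metadata.gz` and `data.tar.gz`, which would first be extracted, e.g. with `tar -xf aws_helper-0.0.12.gem`):

```ruby
require 'digest'

# Hypothetical paths: metadata.gz and data.tar.gz extracted from the .gem archive.
# Compare the printed digests against the SHA1:/SHA512: entries in checksums.yaml.
%w[metadata.gz data.tar.gz].each do |name|
  next unless File.exist?(name)
  puts "#{name} SHA1:   #{Digest::SHA1.file(name).hexdigest}"
  puts "#{name} SHA512: #{Digest::SHA512.file(name).hexdigest}"
end
```

If the digests match the values published for 0.0.12 above, the archive members are intact.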
data/README.md CHANGED
@@ -1,75 +1,75 @@
- # aws_helper
- 
- [![Gem Version](https://badge.fury.io/rb/aws_helper.svg)](http://badge.fury.io/rb/aws_helper)
- [![Gem Downloads](http://ruby-gem-downloads-badge.herokuapp.com/aws_helper?type=total&color=brightgreen)](https://rubygems.org/gems/aws_helper)
- [![Build Status](https://travis-ci.org/neillturner/aws_helper.png)](https://travis-ci.org/neillturner/aws_helper)
- 
- Aws Helper for an instance
- 
- Allows functions on EBS volumes, snapshots, IP addresses and more
- * initially snapshots are supported
- 
- ## Installation
- 
- Add this line to your application's Gemfile:
- 
- gem 'aws_helper'
- 
- And then execute:
- 
- $ bundle
- 
- Or install it yourself as:
- 
- $ gem install aws_helper
- 
- ## Minimal Usage
- 
- Assuming server start with an IAM role that have read access to AWS can create and delete snapshots:
- 
- Snapshot EBS root device at /dev/sda1
- 
- aws_helper snap /dev/sda1 --description zzzzzzzzz
- 
- Prune so only keep 7 snapshots:
- 
- aws_helper snap_prune /dev/sda1 --snapshots_to_keep=7
- 
- Email me a list of the latest 20 snapshots:
- 
- aws_helper snap_email me@company.com ebs.backups@company.com mysmtpemailserver.com
- 
- Cleanup ebs disks - Delete old server root disks:
- 
- aws_helper ebs_cleanup
- 
- Disks that are 8GB in size, not attached to a server, not tagged in any way and from a snapshot will be deleted.
- 
- ## Complex Usage
- 
- If your server does not have a role then you need to code the AWS keys which is not best practice:
- 
- Snapshot EBS attached to device /dev/sdf volume vol-123456 access AWS through an http proxy:
- 
- export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- export HTTP_PROXY=http://myproxy:port
- aws_helper snap /dev/sdf vol-123456 --description zzzzzzzzz
- 
- Prune so only keep 20 snapshots:
- 
- export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- export HTTP_PROXY=http://myproxy:port
- aws_helper snap_prune /dev/sdf vol-123456 --snapshots_to_keep=20
- 
- Email me a list of the latest 30 snapshots with a subject title on email:
- 
- export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- export HTTP_PROXY=http://myproxy:port
- aws_helper snap_email me@company.com ebs.backups@company.com mysmtpemailserver.com 'My EBS Backups' --rows=30
- 
- Other functions to follow
- 
- 
+ (the same 75 lines are re-added verbatim; the file content is unchanged between versions)
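The retention behaviour the README describes for `snap_prune` (keep the newest N snapshots, delete the rest) can be sketched in plain Ruby. The hash keys and values below are made-up stand-ins for illustration, not the gem's real API fields:

```ruby
# Hypothetical snapshot records; real ones would come from an EC2 API call.
snapshots = [
  { id: 'snap-a', started_at: '2014-01-01T00:00:00Z' },
  { id: 'snap-b', started_at: '2014-01-03T00:00:00Z' },
  { id: 'snap-c', started_at: '2014-01-02T00:00:00Z' },
]
keep = 2

# Sort newest-first, then everything past the first `keep` entries is pruned.
ordered   = snapshots.sort { |a, b| b[:started_at] <=> a[:started_at] }
to_delete = ordered.drop(keep)

puts to_delete.map { |s| s[:id] }.join(',')  # → "snap-a"
```

With `--snapshots_to_keep=7` the same logic would leave the seven most recent snapshots of the volume in place.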
data/aws_helper.gemspec CHANGED
@@ -1,34 +1,34 @@
- # encoding: utf-8
- 
- lib = File.expand_path('../lib', __FILE__)
- $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
- require 'awshelper/version'
- 
- Gem::Specification.new do |s|
- s.name = 'aws_helper'
- s.version = Awshelper::VERSION
- s.authors = ['Neill Turner']
- s.email = ['neillwturner@gmail.com']
- s.homepage = 'https://github.com/neillturner/aws_helper'
- s.summary = 'Aws Helper for an instance'
- candidates = Dir.glob('{lib}/**/*') + ['README.md', 'aws_helper.gemspec']
- candidates = candidates + Dir.glob("bin/*")
- s.files = candidates.sort
- s.platform = Gem::Platform::RUBY
- s.executables = s.files.grep(%r{^bin/}) { |f| File.basename(f) }
- s.require_paths = ['lib']
- s.add_dependency('right_aws')
- s.add_dependency('thor')
- s.rubyforge_project = '[none]'
- s.description = <<-EOF
- == DESCRIPTION:
- 
- Aws Helper for an instance
- 
- == FEATURES:
- 
- Allows functions on EBS volumes, snapshots, IP addresses and more
- 
- EOF
- 
- end
+ (the same 34 lines are re-added verbatim; the file content is unchanged between versions)
data/bin/aws_helper CHANGED
@@ -1,5 +1,5 @@
- #!/usr/bin/env ruby
- 
- require "awshelper/cli"
- 
+ #!/usr/bin/env ruby
+ 
+ require "awshelper/cli"
+ 
  Awshelper::CLI.start(ARGV)
data/lib/awshelper.rb CHANGED
@@ -1,2 +1,2 @@
- module Awshelper
+ module Awshelper
  end
data/lib/awshelper/cli.rb CHANGED
@@ -1,408 +1,408 @@
- require 'thor'
- require 'awshelper'
- require 'awshelper/ec2'
- require 'syslog'
- require 'net/smtp'
- require 'json'
- 
- module Awshelper
- class CLI < Thor
- include Thor::Actions
- 
- include Awshelper::Ec2
- 
- #def ebs_create(volume_id, snapshot_id, most_recent_snapshot)
- # #TO DO
- # raise "Cannot create a volume with a specific id (EC2 chooses volume ids)" if volume_id
- # if snapshot_id =~ /vol/
- # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
- # end
- #
- # #nvid = volume_id_in_node_data
- # #if nvid
- # # # volume id is registered in the node data, so check that the volume in fact exists in EC2
- # # vol = volume_by_id(nvid)
- # # exists = vol && vol[:aws_status] != "deleting"
- # # # TODO: determine whether this should be an error or just cause a new volume to be created. Currently erring on the side of failing loudly
- # # raise "Volume with id #{nvid} is registered with the node but does not exist in EC2. To clear this error, remove the ['aws']['ebs_volume']['#{new_resource.name}']['volume_id'] entry from this node's data." unless exists
- # #else
- # # Determine if there is a volume that meets the resource's specifications and is attached to the current
- # # instance in case a previous [:create, :attach] run created and attached a volume but for some reason was
- # # not registered in the node data (e.g. an exception is thrown after the attach_volume request was accepted
- # # by EC2, causing the node data to not be stored on the server)
- # if new_resource.device && (attached_volume = currently_attached_volume(instance_id, new_resource.device))
- # Chef::Log.debug("There is already a volume attached at device #{new_resource.device}")
- # compatible = volume_compatible_with_resource_definition?(attached_volume)
- # raise "Volume #{attached_volume[:aws_id]} attached at #{attached_volume[:aws_device]} but does not conform to this resource's specifications" unless compatible
- # Chef::Log.debug("The volume matches the resource's definition, so the volume is assumed to be already created")
- # converge_by("update the node data with volume id: #{attached_volume[:aws_id]}") do
- # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = attached_volume[:aws_id]
- # node.save unless Chef::Config[:solo]
- # end
- # else
- # # If not, create volume and register its id in the node data
- # converge_by("create a volume with id=#{new_resource.snapshot_id} size=#{new_resource.size} availability_zone=#{new_resource.availability_zone} and update the node data with created volume's id") do
- # nvid = create_volume(new_resource.snapshot_id,
- # new_resource.size,
- # new_resource.availability_zone,
- # new_resource.timeout,
- # new_resource.volume_type,
- # new_resource.piops)
- # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = nvid
- # node.save unless Chef::Config[:solo]
- # end
- # end
- # #end
- #end
- 
- #def ebs_attach(device, volume_id, timeout)
- # # determine_volume returns a Hash, not a Mash, and the keys are
- # # symbols, not strings.
- # vol = determine_volume(device, volume_id)
- # if vol[:aws_status] == "in-use"
- # if vol[:aws_instance_id] != instance_id
- # raise "Volume with id #{vol[:aws_id]} exists but is attached to instance #{vol[:aws_instance_id]}"
- # else
- # Chef::Log.debug("Volume is already attached")
- # end
- # else
- # # attach the volume
- # attach_volume(vol[:aws_id], instance_id, device, timeout)
- # end
- #end
- 
- #def ebs_detach(device, volume_id, timeout)
- # vol = determine_volume(device, volume_id)
- # detach_volume(vol[:aws_id], timeout)
- #end
- 
- desc "snap DEVICE [VOLUME_ID]", "Take a snapshot of a EBS Disk."
- option :description
- 
- long_desc <<-LONGDESC
- 'snap DEVICE [VOLUME_ID] --description xxxxxx'
- \x5 Take a snapshot of a EBS Disk by specifying device and/or volume_id.
- \x5 All commands rely on environment variables or the server having an IAM role
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyy'
- \x5 For example
- \x5 aws_helper snap /dev/sdf
- \x5 will snap shot the EBS disk attach to device /dev/xvdj
- LONGDESC
- 
- def snap(device, volume_id=nil)
- vol = determine_volume(device, volume_id)
- snap_description = options[:description] if options[:description]
- snap_description = "Created by aws_helper(#{instance_id}/#{local_ipv4}) for #{ami_id} from #{vol[:aws_id]}" if !options[:description]
- snapshot = ec2.create_snapshot(vol[:aws_id],snap_description)
- log("Created snapshot of #{vol[:aws_id]} as #{snapshot[:aws_id]}")
- end
- 
- desc "snap_prune DEVICE [VOLUME_ID]", "Prune the number of snapshots."
- option :snapshots_to_keep, :type => :numeric, :required => true
- 
- long_desc <<-LONGDESC
- 'snap_prune DEVICE [VOLUME_ID] --snapshots_to_keep=<numeric>'
- \x5 Prune the number of snapshots of a EBS Disk by specifying device and/or volume_id and the no to keep.
- \x5 All commands rely on environment variables or the server having an IAM role
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- \x5 For example
- \x5 aws_helper snap_prune /dev/sdf --snapshots_to_keep=7
- \x5 will keep the last 7 snapshots of the EBS disk attach to device /dev/xvdj
- LONGDESC
- 
- def snap_prune(device, volume_id=nil)
- snapshots_to_keep = options[:snapshots_to_keep]
- vol = determine_volume(device, volume_id)
- old_snapshots = Array.new
- log("Checking for old snapshots")
- ec2.describe_snapshots.sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
- if snapshot[:aws_volume_id] == vol[:aws_id]
- log("Found old snapshot #{snapshot[:aws_id]} (#{snapshot[:aws_volume_id]}) #{snapshot[:aws_started_at]}")
- old_snapshots << snapshot
- end
- end
- if old_snapshots.length > snapshots_to_keep
- old_snapshots[snapshots_to_keep, old_snapshots.length].each do |die|
- log("Deleting old snapshot #{die[:aws_id]}")
- ec2.delete_snapshot(die[:aws_id])
- end
- end
- end
- 
- desc "snap_email TO FROM EMAIL_SERVER", "Email Snapshot List."
- option :rows, :type => :numeric, :required => false
- option :owner, :type => :numeric, :required => false
- 
- long_desc <<-LONGDESC
- 'snap_email TO FROM EMAIL_SERVER ['EBS Backups'] --rows=<numeric> --owner=<numeric>'
- \x5 Emails the last 20 snapshots from specific email address via the email_server.
- \x5 All commands rely on environment variables or the server having an IAM role
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- \x5 For example
- \x5 aws_helper snap_email me@mycompany.com ebs.backups@mycompany.com emailserver.com 'My EBS Backups' --rows=20 -owner=999887777
- \x5 will email the list of the latest 20 snapshots to email address me@mycompany.com via email server emailserver.com
- \x5 that belong to aws owner 999887777
- LONGDESC
- 
- def snap_email(to, from, email_server, subject='EBS Backups')
- rows = 20
- rows = options[:rows] if options[:rows]
- owner = {}
- owner = {:owner => options[:owner]} if options[:owner]
- message = ""
- log("Report on snapshots")
- # ({ Name="start-time", Values="today in YYYY-MM-DD"})
- i = rows
- ec2.describe_snapshots(owner).sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
- if i >0
- message = message+"#{snapshot[:aws_id]} #{snapshot[:aws_volume_id]} #{snapshot[:aws_started_at]} #{snapshot[:aws_description]} #{snapshot[:aws_status]}\n"
- i = i-1
- end
- end
- opts = {}
- opts[:server] = email_server
- opts[:from] = from
- opts[:from_alias] = 'EBS Backups'
- opts[:subject] = subject
- opts[:body] = message
- send_email(to,opts)
- end
- 
- desc "ebs_cleanup", "Cleanup ebs disks - Delete old server root disks."
- 
- long_desc <<-LONGDESC
- 'ebs_cleanup'
- \x5 Cleanup ebs disks - Delete old server root disks.
- \x5 Disks that are 8GB in size, not attached to a server, not tagged in any way and from a snapshot.
- \x5 All commands rely on environment variables or the server having an IAM role.
- \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
- \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
- \x5 For example
- \x5 ebs_cleanup
- LONGDESC
- 
- def ebs_cleanup()
- ec2.describe_volumes(:filters => { 'status' => 'available', 'size' => '8' }).each do |r|
- if r[:aws_size] == 8 and r[:aws_status] == 'available' and r[:tags] == {} and r[:snapshot_id] != nil and r[:snapshot_id][0,5] == 'snap-' then
- log("Deleting unused volume #{r[:aws_id]} from snapshot #{r[:snapshot_id]}")
- ec2.delete_volume(r[:aws_id])
- end
- end
- end
- 
- 
- private
- 
- def log(message,type="info")
- # $0 is the current script name
- puts message
- Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.info message } if type == "info"
- Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.info message } if type == "err"
- end
- 
- # Pulls the volume id from the volume_id attribute or the node data and verifies that the volume actually exists
- def determine_volume(device, volume_id)
- vol = currently_attached_volume(instance_id, device)
- vol_id = volume_id || ( vol ? vol[:aws_id] : nil )
- log("volume_id attribute not set and no volume is attached at the device #{device}",'err') unless vol_id
- raise "volume_id attribute not set and no volume is attached at the device #{device}" unless vol_id
- 
- # check that volume exists
- vol = volume_by_id(vol_id)
- log("No volume with id #{vol_id} exists",'err') unless vol
- raise "No volume with id #{vol_id} exists" unless vol
- 
- vol
- end
- 
- 
- def get_all_instances(filter={})
- data = []
- response = ec2.describe_instances(filter)
- if response.status == 200
- data_s = response.body['reservationSet']
- data_s.each do |rs|
- gs=rs['groupSet']
- rs['instancesSet'].each do |r|
- #r[:aws_instance_id] = r['instanceId']
- #r[:public_ip] = r['ipAddress']
- #r[:aws_state] = r['instanceState']['name']
- #r['groupSet']=rs['groupSet']
- data.push(r)
- end
- end
- end
- data
- end
- 
- 
- # Retrieves information for a volume
- def volume_by_id(volume_id)
- ec2.describe_volumes.find{|v| v[:aws_id] == volume_id}
- end
- 
- # Returns the volume that's attached to the instance at the given device or nil if none matches
- def currently_attached_volume(instance_id, device)
- ec2.describe_volumes.find{|v| v[:aws_instance_id] == instance_id && v[:aws_device] == device}
- end
- 
- # Returns true if the given volume meets the resource's attributes
- #def volume_compatible_with_resource_definition?(volume)
- # if new_resource.snapshot_id =~ /vol/
- # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
- # end
- # (new_resource.size.nil? || new_resource.size == volume[:aws_size]) &&
- # (new_resource.availability_zone.nil? || new_resource.availability_zone == volume[:zone]) &&
- # (new_resource.snapshot_id.nil? || new_resource.snapshot_id == volume[:snapshot_id])
- #end
- 
- # TODO: support tags in deswcription
- #def tag_value(instance,tag_key)
- # options = ec2.describe_tags({:filters => {:resource_id => instance }} )
- # end
- 
- # Creates a volume according to specifications and blocks until done (or times out)
- def create_volume(snapshot_id, size, availability_zone, timeout, volume_type, piops)
- availability_zone ||= instance_availability_zone
- 
- # Sanity checks so we don't shoot ourselves.
- raise "Invalid volume type: #{volume_type}" unless ['standard', 'io1', 'gp2'].include?(volume_type)
- 
- # PIOPs requested. Must specify an iops param and probably won't be "low".
- if volume_type == 'io1'
- raise 'IOPS value not specified.' unless piops >= 100
- end
- 
- # Shouldn't see non-zero piops param without appropriate type.
- if piops > 0
- raise 'IOPS param without piops volume type.' unless volume_type == 'io1'
- end
- 
- create_volume_opts = { :volume_type => volume_type }
- # TODO: this may have to be casted to a string. rightaws vs aws doc discrepancy.
- create_volume_opts[:iops] = piops if volume_type == 'io1'
- 
- nv = ec2.create_volume(snapshot_id, size, availability_zone, create_volume_opts)
- Chef::Log.debug("Created new volume #{nv[:aws_id]}#{snapshot_id ? " based on #{snapshot_id}" : ""}")
- 
- # block until created
- begin
- Timeout::timeout(timeout) do
- while true
- vol = volume_by_id(nv[:aws_id])
- if vol && vol[:aws_status] != "deleting"
- if ["in-use", "available"].include?(vol[:aws_status])
- Chef::Log.info("Volume #{nv[:aws_id]} is available")
- break
- else
- Chef::Log.debug("Volume is #{vol[:aws_status]}")
- end
- sleep 3
- else
- raise "Volume #{nv[:aws_id]} no longer exists"
- end
- end
- end
- rescue Timeout::Error
- raise "Timed out waiting for volume creation after #{timeout} seconds"
- end
- 
- nv[:aws_id]
- end
- 
- # Attaches the volume and blocks until done (or times out)
- def attach_volume(volume_id, instance_id, device, timeout)
- Chef::Log.debug("Attaching #{volume_id} as #{device}")
- ec2.attach_volume(volume_id, instance_id, device)
- 
- # block until attached
- begin
- Timeout::timeout(timeout) do
- while true
- vol = volume_by_id(volume_id)
- if vol && vol[:aws_status] != "deleting"
- if vol[:aws_attachment_status] == "attached"
- if vol[:aws_instance_id] == instance_id
- Chef::Log.info("Volume #{volume_id} is attached to #{instance_id}")
- break
- else
- raise "Volume is attached to instance #{vol[:aws_instance_id]} instead of #{instance_id}"
- end
- else
- Chef::Log.debug("Volume is #{vol[:aws_status]}")
- end
- sleep 3
- else
- raise "Volume #{volume_id} no longer exists"
- end
- end
- end
- rescue Timeout::Error
- raise "Timed out waiting for volume attachment after #{timeout} seconds"
- end
- end
- 
- # Detaches the volume and blocks until done (or times out)
- def detach_volume(volume_id, timeout)
- vol = volume_by_id(volume_id)
- if vol[:aws_instance_id] != instance_id
- Chef::Log.debug("EBS Volume #{volume_id} is not attached to this instance (attached to #{vol[:aws_instance_id]}). Skipping...")
- return
- end
- Chef::Log.debug("Detaching #{volume_id}")
- orig_instance_id = vol[:aws_instance_id]
- ec2.detach_volume(volume_id)
- 
- # block until detached
- begin
- Timeout::timeout(timeout) do
- while true
- vol = volume_by_id(volume_id)
- if vol && vol[:aws_status] != "deleting"
- if vol[:aws_instance_id] != orig_instance_id
- Chef::Log.info("Volume detached from #{orig_instance_id}")
- break
- else
- Chef::Log.debug("Volume: #{vol.inspect}")
- end
- else
- Chef::Log.debug("Volume #{volume_id} no longer exists")
- break
- end
- sleep 3
- end
- end
- rescue Timeout::Error
- raise "Timed out waiting for volume detachment after #{timeout} seconds"
- end
- end
- 
- def send_email(to,opts={})
- opts[:server] ||= 'localhost'
- opts[:from] ||= 'email@example.com'
- opts[:from_alias] ||= 'Example Emailer'
- opts[:subject] ||= "You need to see this"
- opts[:body] ||= "Important stuff!"
- 
- msg = <<END_OF_MESSAGE
- From: #{opts[:from_alias]} <#{opts[:from]}>
- To: <#{to}>
- Subject: #{opts[:subject]}
- 
- #{opts[:body]}
- END_OF_MESSAGE
- puts "Sending to #{to} from #{opts[:from]} email server #{opts[:server]}"
- Net::SMTP.start(opts[:server]) do |smtp|
- smtp.send_message msg, opts[:from], to
- end
- end
- 
- 
- end
- 
- end
- 
- 
1
+ require 'thor'
2
+ require 'awshelper'
3
+ require 'awshelper/ec2'
4
+ require 'syslog'
5
+ require 'net/smtp'
6
+ require 'json'
7
+
8
+ module Awshelper
9
+ class CLI < Thor
10
+ include Thor::Actions
11
+
12
+ include Awshelper::Ec2
13
+
14
+ #def ebs_create(volume_id, snapshot_id, most_recent_snapshot)
15
+ # #TO DO
16
+ # raise "Cannot create a volume with a specific id (EC2 chooses volume ids)" if volume_id
17
+ # if snapshot_id =~ /vol/
18
+ # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
19
+ # end
20
+ #
21
+ # #nvid = volume_id_in_node_data
22
+ # #if nvid
23
+ # # # volume id is registered in the node data, so check that the volume in fact exists in EC2
24
+ # # vol = volume_by_id(nvid)
25
+ # # exists = vol && vol[:aws_status] != "deleting"
26
+ # # # TODO: determine whether this should be an error or just cause a new volume to be created. Currently erring on the side of failing loudly
27
+ # # raise "Volume with id #{nvid} is registered with the node but does not exist in EC2. To clear this error, remove the ['aws']['ebs_volume']['#{new_resource.name}']['volume_id'] entry from this node's data." unless exists
28
+ # #else
29
+ # # Determine if there is a volume that meets the resource's specifications and is attached to the current
30
+ # # instance in case a previous [:create, :attach] run created and attached a volume but for some reason was
31
+ # # not registered in the node data (e.g. an exception is thrown after the attach_volume request was accepted
32
+ # # by EC2, causing the node data to not be stored on the server)
33
+ # if new_resource.device && (attached_volume = currently_attached_volume(instance_id, new_resource.device))
34
+ # Chef::Log.debug("There is already a volume attached at device #{new_resource.device}")
35
+ # compatible = volume_compatible_with_resource_definition?(attached_volume)
36
+ # raise "Volume #{attached_volume[:aws_id]} attached at #{attached_volume[:aws_device]} but does not conform to this resource's specifications" unless compatible
37
+ # Chef::Log.debug("The volume matches the resource's definition, so the volume is assumed to be already created")
38
+ # converge_by("update the node data with volume id: #{attached_volume[:aws_id]}") do
39
+ # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = attached_volume[:aws_id]
40
+ # node.save unless Chef::Config[:solo]
41
+ # end
42
+ # else
43
+ # # If not, create volume and register its id in the node data
44
+ # converge_by("create a volume with id=#{new_resource.snapshot_id} size=#{new_resource.size} availability_zone=#{new_resource.availability_zone} and update the node data with created volume's id") do
45
+ # nvid = create_volume(new_resource.snapshot_id,
46
+ # new_resource.size,
47
+ # new_resource.availability_zone,
48
+ # new_resource.timeout,
49
+ # new_resource.volume_type,
50
+ # new_resource.piops)
51
+ # node.set['aws']['ebs_volume'][new_resource.name]['volume_id'] = nvid
52
+ # node.save unless Chef::Config[:solo]
53
+ # end
54
+ # end
55
+ # #end
56
+ #end
57
+
58
+ #def ebs_attach(device, volume_id, timeout)
59
+ # # determine_volume returns a Hash, not a Mash, and the keys are
60
+ # # symbols, not strings.
61
+ # vol = determine_volume(device, volume_id)
62
+ # if vol[:aws_status] == "in-use"
63
+ # if vol[:aws_instance_id] != instance_id
64
+ # raise "Volume with id #{vol[:aws_id]} exists but is attached to instance #{vol[:aws_instance_id]}"
65
+ # else
66
+ # Chef::Log.debug("Volume is already attached")
67
+ # end
68
+ # else
69
+ # # attach the volume
70
+ # attach_volume(vol[:aws_id], instance_id, device, timeout)
71
+ # end
72
+ #end
73
+
74
+ #def ebs_detach(device, volume_id, timeout)
75
+ # vol = determine_volume(device, volume_id)
76
+ # detach_volume(vol[:aws_id], timeout)
77
+ #end
78
+
79
+ desc "snap DEVICE [VOLUME_ID]", "Take a snapshot of a EBS Disk."
80
+ option :description
81
+
82
+ long_desc <<-LONGDESC
83
+ 'snap DEVICE [VOLUME_ID] --description xxxxxx'
84
+ \x5 Take a snapshot of a EBS Disk by specifying device and/or volume_id.
85
+ \x5 All commands rely on environment variables or the server having an IAM role
86
+ \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxx'
87
+ \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyy'
88
+ \x5 For example
89
+ \x5 aws_helper snap /dev/sdf
90
+ \x5 will snap shot the EBS disk attach to device /dev/xvdj
91
+ LONGDESC
92
+
93
+ def snap(device, volume_id=nil)
94
+ vol = determine_volume(device, volume_id)
95
+ snap_description = options[:description] if options[:description]
96
+ snap_description = "Created by aws_helper(#{instance_id}/#{local_ipv4}) for #{ami_id} from #{vol[:aws_id]}" if !options[:description]
97
+ snapshot = ec2.create_snapshot(vol[:aws_id],snap_description)
98
+ log("Created snapshot of #{vol[:aws_id]} as #{snapshot[:aws_id]}")
99
+ end
100
+
101
+ desc "snap_prune DEVICE [VOLUME_ID]", "Prune the number of snapshots."
102
+ option :snapshots_to_keep, :type => :numeric, :required => true
103
+
104
+ long_desc <<-LONGDESC
105
+ 'snap_prune DEVICE [VOLUME_ID] --snapshots_to_keep=<numeric>'
106
+ \x5 Prune the number of snapshots of a EBS Disk by specifying device and/or volume_id and the no to keep.
107
+ \x5 All commands rely on environment variables or the server having an IAM role
108
+ \x5 export AWS_ACCESS_KEY_ID ='xxxxxxxxxxxx'
109
+ \x5 export AWS_SECRET_ACCESS_KEY ='yyyyyyyy'
110
+ \x5 For example
111
+ \x5 aws_helper snap_prune /dev/sdf --snapshots_to_keep=7
112
+ \x5 will keep the last 7 snapshots of the EBS disk attach to device /dev/xvdj
113
+ LONGDESC
114
+
115
+ def snap_prune(device, volume_id=nil)
116
+ snapshots_to_keep = options[:snapshots_to_keep]
117
+ vol = determine_volume(device, volume_id)
118
+ old_snapshots = Array.new
119
+ log("Checking for old snapshots")
120
+ ec2.describe_snapshots.sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
121
+ if snapshot[:aws_volume_id] == vol[:aws_id]
122
+ log("Found old snapshot #{snapshot[:aws_id]} (#{snapshot[:aws_volume_id]}) #{snapshot[:aws_started_at]}")
123
+ old_snapshots << snapshot
124
+ end
125
+ end
126
+ if old_snapshots.length > snapshots_to_keep
127
+ old_snapshots[snapshots_to_keep, old_snapshots.length].each do |die|
128
+ log("Deleting old snapshot #{die[:aws_id]}")
129
+ ec2.delete_snapshot(die[:aws_id])
130
+ end
131
+ end
132
+ end
133
+
+ desc "snap_email TO FROM EMAIL_SERVER [SUBJECT]", "Email Snapshot List."
+ option :rows, :type => :numeric, :required => false
+ option :owner, :type => :string, :required => false
+
+ long_desc <<-LONGDESC
+ 'snap_email TO FROM EMAIL_SERVER [SUBJECT] --rows=<numeric> --owner=<string>'
+ \x5 Emails the latest snapshots (20 by default) from a specific email address via the email server.
+ \x5 All commands rely on environment variables or the server having an IAM role
+ \x5 export AWS_ACCESS_KEY_ID='xxxxxxxxxxxx'
+ \x5 export AWS_SECRET_ACCESS_KEY='yyyyyyyy'
+ \x5 For example
+ \x5 aws_helper snap_email me@mycompany.com ebs.backups@mycompany.com emailserver.com 'My EBS Backups' --rows=20 --owner=999887777
+ \x5 will email the list of the latest 20 snapshots to email address me@mycompany.com via email server emailserver.com
+ \x5 that belong to aws owner 999887777
+ LONGDESC
+
+ def snap_email(to, from, email_server, subject='EBS Backups')
+ rows = 20
+ rows = options[:rows] if options[:rows]
+ owner = {}
+ owner = {:owner => options[:owner]} if options[:owner]
+ message = ""
+ log("Report on snapshots")
+ # ({ Name="start-time", Values="today in YYYY-MM-DD"})
+ i = rows
+ ec2.describe_snapshots(owner).sort { |a,b| b[:aws_started_at] <=> a[:aws_started_at] }.each do |snapshot|
+ if i > 0
+ message = message+"#{snapshot[:aws_id]} #{snapshot[:aws_volume_id]} #{snapshot[:aws_started_at]} #{snapshot[:aws_description]} #{snapshot[:aws_status]}\n"
+ i = i-1
+ end
+ end
+ opts = {}
+ opts[:server] = email_server
+ opts[:from] = from
+ opts[:from_alias] = 'EBS Backups'
+ opts[:subject] = subject
+ opts[:body] = message
+ send_email(to,opts)
+ end
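The report assembled by `snap_email` is capped at `rows` lines by the `i` counter, even though every snapshot is still visited. A sketch of just that truncation step, on fabricated data:

```ruby
# Sketch of snap_email's report building: the counter i caps the report
# at `rows` lines. Snapshot hashes here are fabricated examples.
snapshots = [
  { :aws_id => 'snap-1', :aws_status => 'completed' },
  { :aws_id => 'snap-2', :aws_status => 'completed' },
  { :aws_id => 'snap-3', :aws_status => 'pending' },
]
rows = 2
message = ""
i = rows
snapshots.each do |snapshot|
  if i > 0
    message = message + "#{snapshot[:aws_id]} #{snapshot[:aws_status]}\n"
    i = i - 1
  end
end
puts message
```

An equivalent, shorter form would be `snapshots.first(rows)` followed by a map/join, which also stops iterating once the cap is reached.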
+
+ desc "ebs_cleanup", "Cleanup ebs disks - Delete old server root disks."
+
+ long_desc <<-LONGDESC
+ 'ebs_cleanup'
+ \x5 Cleanup ebs disks - Delete old server root disks.
+ \x5 Deletes disks that are 8GB in size, not attached to a server, not tagged in any way and created from a snapshot.
+ \x5 All commands rely on environment variables or the server having an IAM role.
+ \x5 export AWS_ACCESS_KEY_ID='xxxxxxxxxxxx'
+ \x5 export AWS_SECRET_ACCESS_KEY='yyyyyyyy'
+ \x5 For example
+ \x5 aws_helper ebs_cleanup
+ LONGDESC
+
+ def ebs_cleanup()
+ ec2.describe_volumes(:filters => { 'status' => 'available', 'size' => '8' }).each do |r|
+ if r[:aws_size] == 8 and r[:aws_status] == 'available' and r[:tags] == {} and r[:snapshot_id] != nil and r[:snapshot_id][0,5] == 'snap-' then
+ log("Deleting unused volume #{r[:aws_id]} from snapshot #{r[:snapshot_id]}")
+ ec2.delete_volume(r[:aws_id])
+ end
+ end
+ end
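The delete-eligibility test inside `ebs_cleanup` can be pulled out as a predicate and checked against sample volume hashes without touching EC2. The helper name `deletable_root_disk?` and the volume hashes below are illustrative, not part of the gem:

```ruby
# The delete-eligibility test from ebs_cleanup, extracted as a predicate
# (hypothetical helper). Volume hashes are fabricated examples.
def deletable_root_disk?(r)
  r[:aws_size] == 8 && r[:aws_status] == 'available' &&
    r[:tags] == {} && r[:snapshot_id] != nil && r[:snapshot_id][0,5] == 'snap-'
end

unused = { :aws_size => 8, :aws_status => 'available', :tags => {}, :snapshot_id => 'snap-123' }
tagged = { :aws_size => 8, :aws_status => 'available', :tags => { 'Name' => 'keep' }, :snapshot_id => 'snap-123' }
blank  = { :aws_size => 8, :aws_status => 'available', :tags => {}, :snapshot_id => nil }

puts deletable_root_disk?(unused)
```

All four conditions must hold at once, so a volume with any tag at all, or one created empty rather than from a snapshot, is never deleted.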
+
+
+ private
+
+ def log(message,type="info")
+ # $0 is the current script name
+ puts message
+ Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.info message } if type == "info"
+ Syslog.open($0, Syslog::LOG_PID | Syslog::LOG_CONS) { |s| s.err message } if type == "err"
+ end
+
+ # Pulls the volume id from the volume_id attribute or the node data and verifies that the volume actually exists
+ def determine_volume(device, volume_id)
+ vol = currently_attached_volume(instance_id, device)
+ vol_id = volume_id || ( vol ? vol[:aws_id] : nil )
+ log("volume_id attribute not set and no volume is attached at the device #{device}",'err') unless vol_id
+ raise "volume_id attribute not set and no volume is attached at the device #{device}" unless vol_id
+
+ # check that volume exists
+ vol = volume_by_id(vol_id)
+ log("No volume with id #{vol_id} exists",'err') unless vol
+ raise "No volume with id #{vol_id} exists" unless vol
+
+ vol
+ end
+
+
+ def get_all_instances(filter={})
+ data = []
+ response = ec2.describe_instances(filter)
+ if response.status == 200
+ data_s = response.body['reservationSet']
+ data_s.each do |rs|
+ gs=rs['groupSet']
+ rs['instancesSet'].each do |r|
+ #r[:aws_instance_id] = r['instanceId']
+ #r[:public_ip] = r['ipAddress']
+ #r[:aws_state] = r['instanceState']['name']
+ #r['groupSet']=rs['groupSet']
+ data.push(r)
+ end
+ end
+ end
+ data
+ end
+
+
+ # Retrieves information for a volume
+ def volume_by_id(volume_id)
+ ec2.describe_volumes.find{|v| v[:aws_id] == volume_id}
+ end
+
+ # Returns the volume that's attached to the instance at the given device or nil if none matches
+ def currently_attached_volume(instance_id, device)
+ ec2.describe_volumes.find{|v| v[:aws_instance_id] == instance_id && v[:aws_device] == device}
+ end
+
+ # Returns true if the given volume meets the resource's attributes
+ #def volume_compatible_with_resource_definition?(volume)
+ # if new_resource.snapshot_id =~ /vol/
+ # new_resource.snapshot_id(find_snapshot_id(new_resource.snapshot_id, new_resource.most_recent_snapshot))
+ # end
+ # (new_resource.size.nil? || new_resource.size == volume[:aws_size]) &&
+ # (new_resource.availability_zone.nil? || new_resource.availability_zone == volume[:zone]) &&
+ # (new_resource.snapshot_id.nil? || new_resource.snapshot_id == volume[:snapshot_id])
+ #end
+
+ # TODO: support tags in description
+ #def tag_value(instance,tag_key)
+ # options = ec2.describe_tags({:filters => {:resource_id => instance }} )
+ # end
+
+ # Creates a volume according to specifications and blocks until done (or times out)
+ def create_volume(snapshot_id, size, availability_zone, timeout, volume_type, piops)
+ availability_zone ||= instance_availability_zone
+
+ # Sanity checks so we don't shoot ourselves.
+ raise "Invalid volume type: #{volume_type}" unless ['standard', 'io1', 'gp2'].include?(volume_type)
+
+ # PIOPs requested. Must specify an iops param and probably won't be "low".
+ if volume_type == 'io1'
+ raise 'IOPS value not specified or below the minimum of 100.' unless piops >= 100
+ end
+
+ # Shouldn't see a non-zero piops param without the appropriate volume type.
+ if piops > 0
+ raise 'IOPS param without piops volume type.' unless volume_type == 'io1'
+ end
+
+ create_volume_opts = { :volume_type => volume_type }
+ # TODO: this may have to be cast to a string. rightaws vs aws doc discrepancy.
+ create_volume_opts[:iops] = piops if volume_type == 'io1'
+
+ nv = ec2.create_volume(snapshot_id, size, availability_zone, create_volume_opts)
+ Chef::Log.debug("Created new volume #{nv[:aws_id]}#{snapshot_id ? " based on #{snapshot_id}" : ""}")
+
+ # block until created
+ begin
+ Timeout::timeout(timeout) do
+ while true
+ vol = volume_by_id(nv[:aws_id])
+ if vol && vol[:aws_status] != "deleting"
+ if ["in-use", "available"].include?(vol[:aws_status])
+ Chef::Log.info("Volume #{nv[:aws_id]} is available")
+ break
+ else
+ Chef::Log.debug("Volume is #{vol[:aws_status]}")
+ end
+ sleep 3
+ else
+ raise "Volume #{nv[:aws_id]} no longer exists"
+ end
+ end
+ end
+ rescue Timeout::Error
+ raise "Timed out waiting for volume creation after #{timeout} seconds"
+ end
+
+ nv[:aws_id]
+ end
+
+ # Attaches the volume and blocks until done (or times out)
+ def attach_volume(volume_id, instance_id, device, timeout)
+ Chef::Log.debug("Attaching #{volume_id} as #{device}")
+ ec2.attach_volume(volume_id, instance_id, device)
+
+ # block until attached
+ begin
+ Timeout::timeout(timeout) do
+ while true
+ vol = volume_by_id(volume_id)
+ if vol && vol[:aws_status] != "deleting"
+ if vol[:aws_attachment_status] == "attached"
+ if vol[:aws_instance_id] == instance_id
+ Chef::Log.info("Volume #{volume_id} is attached to #{instance_id}")
+ break
+ else
+ raise "Volume is attached to instance #{vol[:aws_instance_id]} instead of #{instance_id}"
+ end
+ else
+ Chef::Log.debug("Volume is #{vol[:aws_status]}")
+ end
+ sleep 3
+ else
+ raise "Volume #{volume_id} no longer exists"
+ end
+ end
+ end
+ rescue Timeout::Error
+ raise "Timed out waiting for volume attachment after #{timeout} seconds"
+ end
+ end
+
+ # Detaches the volume and blocks until done (or times out)
+ def detach_volume(volume_id, timeout)
+ vol = volume_by_id(volume_id)
+ if vol[:aws_instance_id] != instance_id
+ Chef::Log.debug("EBS Volume #{volume_id} is not attached to this instance (attached to #{vol[:aws_instance_id]}). Skipping...")
+ return
+ end
+ Chef::Log.debug("Detaching #{volume_id}")
+ orig_instance_id = vol[:aws_instance_id]
+ ec2.detach_volume(volume_id)
+
+ # block until detached
+ begin
+ Timeout::timeout(timeout) do
+ while true
+ vol = volume_by_id(volume_id)
+ if vol && vol[:aws_status] != "deleting"
+ if vol[:aws_instance_id] != orig_instance_id
+ Chef::Log.info("Volume detached from #{orig_instance_id}")
+ break
+ else
+ Chef::Log.debug("Volume: #{vol.inspect}")
+ end
+ else
+ Chef::Log.debug("Volume #{volume_id} no longer exists")
+ break
+ end
+ sleep 3
+ end
+ end
+ rescue Timeout::Error
+ raise "Timed out waiting for volume detachment after #{timeout} seconds"
+ end
+ end
+
+ def send_email(to,opts={})
+ opts[:server] ||= 'localhost'
+ opts[:from] ||= 'email@example.com'
+ opts[:from_alias] ||= 'Example Emailer'
+ opts[:subject] ||= "You need to see this"
+ opts[:body] ||= "Important stuff!"
+
+ msg = <<END_OF_MESSAGE
+ From: #{opts[:from_alias]} <#{opts[:from]}>
+ To: <#{to}>
+ Subject: #{opts[:subject]}
+
+ #{opts[:body]}
+ END_OF_MESSAGE
+ puts "Sending to #{to} from #{opts[:from]} email server #{opts[:server]}"
+ Net::SMTP.start(opts[:server]) do |smtp|
+ smtp.send_message msg, opts[:from], to
+ end
+ end
+
+
+ end
+
+ end
+
+
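The message format `send_email` assembles can be checked without an SMTP server, since the headers and body are just a heredoc string. A sketch under assumed example addresses (no mail is sent):

```ruby
# Sketch of the RFC 822-style message send_email builds; no SMTP
# connection is made. Addresses below are examples only.
to   = 'me@example.com'
opts = { :from_alias => 'EBS Backups', :from => 'ebs@example.com',
         :subject => 'My EBS Backups', :body => 'snap-1 completed' }

msg = <<END_OF_MESSAGE
From: #{opts[:from_alias]} <#{opts[:from]}>
To: <#{to}>
Subject: #{opts[:subject]}

#{opts[:body]}
END_OF_MESSAGE
puts msg
```

The blank line between `Subject:` and the body is what separates headers from content in the message format; `Net::SMTP#send_message` transmits the string as-is, so omitting it would fold the body into the headers.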