anaconda 1.0.11 → 2.0.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: d26bc908c337bafa06a9bb5cea0a047aea6f937e
-  data.tar.gz: ced6b894ed0763d15cd32120df4a5da01fef10d5
+  metadata.gz: 57868170a368b59a870fd5cd967b553e13b65583
+  data.tar.gz: 2c2cc0aae3cb3b388fae8a251d43dcb28e06524d
 SHA512:
-  metadata.gz: c144bca62db52b765c134433d8b184047b7870947987bd6fab77d08c05e774f64f83f51f2969d457c898d5a4e2c589a3a2baf4a9f0a6320ae41302ed79d79634
-  data.tar.gz: 376a296c42df3a99ad1b84e6e76c8b93716e1f496cb058a372a297306d6ec367e03d39e5fe2e44699444e3ccc0528bac295f3977758daeb43a377489ce996735
+  metadata.gz: 852057515f6e9b772a4b39dd6bc73f812f058e4234884945744f0a8016db7c88751f429336aac94df4cc4a922c15a1fc04118dbfa55f827601c3585e970b9c46
+  data.tar.gz: 2cdfad1b498549f0fe7f268e3a0ec44fdcd967361dfd582b7712e77640aac0cc73856f7e0532b3631195099a07e381319bc036b2c4e73cfc9be639f28931afd6
@@ -126,17 +126,20 @@ We highly recommend the `figaro` gem [https://github.com/laserlemon/figaro](http
 
 At this time the available options on anaconda_for are:
 * `base_key` default: _%{plural model}/%{plural column}/%{random string}_
-* `aws_access_key_id` default: _aws_access_key_ specified in Anaconda config
-* `aws_secret_access_key` default: _aws_secret_key_ specified in Anaconda config
-* `bucket` default: _aws_bucket_ specified in Anaconda config
+* `aws_access_key` default: _aws_access_key_ specified in Anaconda config
+* `aws_secret_key` default: _aws_secret_key_ specified in Anaconda config
+* `aws_bucket` default: _aws_bucket_ specified in Anaconda config
+* `aws_endpoint` default: _aws_endpoint_ specified in Anaconda config
 * `acl` default: _public-read_
 * `max_file_size` default: `500.megabytes`
 * `allowed_file_types` default: _all_
 * `host` String. If specified, this will be used to access publicly stored objects instead of the S3 bucket. Useful for CloudFront integration. Note: At this time privately stored objects will still be requested via S3. Default: _false_
 * `protocol` `https`, `http`, or `:auto`. If `:auto`, `//` will be used as the protocol. Note: At this time, all privately stored objects are requested over https. Default: `http`
-* `remove_previous_s3_files_on_change` Boolean. If true, files will be removed from S3 when a new file is uploaded. Default: `true`
-* `remove_previous_s3_files_on_destroy` Boolean. If true, files will be removed from S3 when a record is destroyed. Default: `true`
-* `expiry_length` If supplied, this is the length in seconds that a signed URL is valid for. Default: `1.hour`
+* `remove_previous_s3_files_on_change` Boolean. If true, files will be removed from S3 when a new file is uploaded. Default: `true`
+* `remove_previous_s3_files_on_destroy` Boolean. If true, files will be removed from S3 when a record is destroyed. Default: `true`
+* `expiry_length` If supplied, this is the length in seconds that a signed URL is valid for. Default: `1.hour`
+
+Any `anaconda_for` option may also be a proc that will be evaluated in the context of the current instance (see the sketch below).
 
 * Form setup
 
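Below is a minimal sketch (not taken from the gem's README) of how these options might be passed to `anaconda_for`, including a proc that is evaluated on the model instance. The `Upload` model, the `:asset` column, and all values are hypothetical; only the option names come from the list above.

```ruby
class Upload < ActiveRecord::Base
  anaconda_for :asset,
    aws_bucket: "example-bucket",          # hypothetical bucket name
    acl: "public-read",
    max_file_size: 100.megabytes,
    expiry_length: 30.minutes,
    # Procs are evaluated (instance_exec'd) on the record, so per-instance
    # data such as the id is available when the option is resolved.
    base_key: proc { "uploads/#{id}/#{SecureRandom.hex(8)}" }
end
```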
@@ -219,6 +222,20 @@ If you return false to the following events it will prevent the default behavior
 From version 1.0.0 on we have used [Semantic Versioning](http://semver.org/).
 
 ## Changelog
+* 2.0.2
+  * Fix `asset_url` method. There are tests coming. I swear.
+
+* 2.0.1
+  YANKED
+  * Fix `asset_download_url` method
+
+* 2.0.0
+  YANKED
+  *Breaking Changes!*
+  * The options you can pass to `anaconda_for` have changed.
+  * Add the ability for `anaconda_for` options to be procs, so they can use instance-specific data.
+  * Clean the `filename` that is passed to the `asset_download_url` method
+
 * 1.0.11
   * Add ability to pass `filename` to the `asset_download_url` method.
 
@@ -32,8 +32,15 @@ module Anaconda
   def self.config
     yield self
 
-    @@aws[:aws_endpoint] = "s3.amazonaws.com/#{@@aws[:aws_bucket]}" unless @@aws[:aws_endpoint].present?
-    @@aws[:path_style] = !@@aws[:aws_endpoint].starts_with?(@@aws[:aws_bucket])
+    if @@aws[:aws_bucket].present?
+      @@aws[:aws_endpoint] = "s3.amazonaws.com/#{@@aws[:aws_bucket]}" unless @@aws[:aws_endpoint].present?
+    end
+
+    if @@aws[:aws_endpoint].present? && @@aws[:aws_bucket].present?
+      @@aws[:path_style] = !@@aws[:aws_endpoint].starts_with?(@@aws[:aws_bucket])
+    else
+      @@aws[:path_style] = false
+    end
   end
 
   def self.js_file_types
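For context, here is a sketch of a Rails initializer that would drive the `self.config` block above. It assumes `Anaconda.aws` exposes the same `@@aws` hash that the rest of this diff reads via `Anaconda.aws[:aws_access_key]` and friends; the file name and environment variable names are placeholders.

```ruby
# config/initializers/anaconda.rb (hypothetical location)
Anaconda.config do |config|
  config.aws[:aws_access_key] = ENV["AWS_ACCESS_KEY"]  # placeholder env vars
  config.aws[:aws_secret_key] = ENV["AWS_SECRET_KEY"]
  config.aws[:aws_bucket]     = ENV["AWS_BUCKET"]
  # aws_endpoint is optional: when it is left unset and a bucket is present,
  # self.config above defaults it to "s3.amazonaws.com/<bucket>".
end
```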
@@ -55,8 +62,8 @@ module Anaconda
     return js_file_types
   end
 
-  def self.remove_s3_object_in_bucket_with_file_path(bucket, file_path)
-    aws = Fog::Storage.new({:provider => 'AWS', :aws_access_key_id => Anaconda.aws[:aws_access_key], :aws_secret_access_key => Anaconda.aws[:aws_secret_key], :path_style => @@aws[:path_style]})
-    aws.delete_object(bucket, file_path)
+  def self.remove_s3_object(file_path, options)
+    aws = Fog::Storage.new({:provider => 'AWS', :aws_access_key_id => options[:aws_access_key], :aws_secret_access_key => options[:aws_secret_key], :path_style => options[:aws_use_path_style]})
+    aws.delete_object(options[:aws_bucket], file_path)
   end
 end
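The rename is a breaking change for anyone calling the old helper directly: the bucket, credentials, and path style now come from a per-column options hash rather than the global config. A hedged before/after sketch (`record` and the `:asset` column are hypothetical; the call pattern mirrors the model callbacks later in this diff):

```ruby
# 1.x: bucket pulled from the global Anaconda.aws config
Anaconda.remove_s3_object_in_bucket_with_file_path(
  Anaconda.aws[:aws_bucket], record.asset_file_path
)

# 2.x: everything comes from the resolved per-column options
Anaconda.remove_s3_object(record.asset_file_path, record.anaconda_options_for(:asset))
```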
@@ -30,9 +30,11 @@ module Anaconda
   end
   self.anaconda_options = Hash.new unless self.anaconda_options.kind_of? Hash
   self.anaconda_options[anaconda_column.to_sym] = options.reverse_merge(
-    aws_access_key_id: Anaconda.aws[:aws_access_key],
-    aws_secret_access_key: Anaconda.aws[:aws_secret_key],
-    bucket: Anaconda.aws[:aws_bucket],
+    aws_access_key: Anaconda.aws[:aws_access_key],
+    aws_secret_key: Anaconda.aws[:aws_secret_key],
+    aws_bucket: Anaconda.aws[:aws_bucket],
+    aws_endpoint: Anaconda.aws[:aws_endpoint],
+    aws_use_path_style: Anaconda.aws[:path_style],
     acl: "public-read",
     max_file_size: 500.megabytes,
     allowed_file_types: [],
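Because the user-supplied options are `reverse_merge`d onto these defaults, anything passed to `anaconda_for` wins and anything omitted falls back to the global `Anaconda.aws` config. A small illustration of the merge direction (values are placeholders):

```ruby
options  = { acl: "private" }                                      # passed to anaconda_for
defaults = { acl: "public-read", aws_bucket: "configured-bucket" }

options.reverse_merge(defaults)
# => { acl: "private", aws_bucket: "configured-bucket" }
# the explicit :acl is kept; the missing :aws_bucket is filled from the defaults
```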
@@ -112,52 +114,73 @@ module Anaconda
     end
   end
 
+  def anaconda_options_for( anaconda_column )
+    if self.class.anaconda_columns.include? anaconda_column.to_sym
+      self.anaconda_options[anaconda_column].inject({}) do |hash, (k, v)|
+        if v.kind_of? Proc
+          hash[k] = self.instance_exec(&v)
+        else
+          hash[k] = v
+        end
+        hash
+      end
+    else
+      raise "#{anaconda_column} not configured for anaconda. Misspelling or did you forget to add the anaconda_for call for this field?"
+    end
+  end
+
   private
   def anaconda_url(column_name, *args)
     return nil unless send("#{column_name}_file_path").present?
     options = args.extract_options!
-    logger.debug "Extracted Options:"
+    options = options.reverse_merge(self.anaconda_options_for( column_name ))
+    logger.debug "Options:"
     logger.debug(options)
 
     if send("#{column_name}_stored_privately")
-      aws = Fog::Storage.new({:provider => 'AWS', :aws_access_key_id => Anaconda.aws[:aws_access_key], :aws_secret_access_key => Anaconda.aws[:aws_secret_key], :path_style => Anaconda.aws[:path_style]})
-      aws.get_object_https_url(Anaconda.aws[:aws_bucket], send("#{column_name}_file_path"), anaconda_expiry_length(column_name, options[:expires]))
-    elsif self.anaconda_options[column_name.to_sym][:host]
+      aws = Fog::Storage.new({:provider => 'AWS', :aws_access_key_id => options[:aws_access_key], :aws_secret_access_key => options[:aws_secret_key], :path_style => options[:aws_use_path_style]})
+      aws.get_object_https_url(options[:aws_bucket], send("#{column_name}_file_path"), anaconda_expiry_length(column_name, options[:expires]))
+    elsif options[:host]
       "#{anaconda_protocol(column_name, options[:protocol])}#{self.anaconda_options[column_name.to_sym][:host]}/#{send("#{column_name}_file_path")}"
     else
-      "#{anaconda_protocol(column_name, options[:protocol])}#{Anaconda.aws[:aws_endpoint]}/#{send("#{column_name}_file_path")}"
+      "#{anaconda_protocol(column_name, options[:protocol])}#{options[:aws_endpoint]}/#{send("#{column_name}_file_path")}"
     end
   end
 
   def anaconda_download_url(column_name, *args)
     return nil unless send("#{column_name}_file_path").present?
     options = args.extract_options!
-    logger.debug "Extracted Options:"
+    options = options.reverse_merge(self.anaconda_options_for( column_name ))
+    logger.debug "Options:"
     logger.debug(options)
+
     filename = nil
     if options[:filename].present?
-      filename = "filename=#{options[:filename]}"
+      clean_filename = options[:filename].gsub(/[^0-9A-Za-z.\-\ \(\)]/, '-')
+      logger.debug "Cleaned Filename: #{clean_filename}"
+      filename = "filename=#{clean_filename}"
     end
 
     aws_options = {query: {"response-content-disposition" => "attachment;#{filename}"}}
-    aws = Fog::Storage.new({:provider => 'AWS', :aws_access_key_id => Anaconda.aws[:aws_access_key], :aws_secret_access_key => Anaconda.aws[:aws_secret_key], :path_style => Anaconda.aws[:path_style]})
-    aws.get_object_https_url(Anaconda.aws[:aws_bucket], send("#{column_name}_file_path"), anaconda_expiry_length(column_name, options[:expires]), aws_options)
+    aws = Fog::Storage.new({:provider => 'AWS', :aws_access_key_id => options[:aws_access_key], :aws_secret_access_key => options[:aws_secret_key], :path_style => options[:aws_use_path_style]})
+    aws.get_object_https_url(options[:aws_bucket], send("#{column_name}_file_path"), anaconda_expiry_length(column_name, options[:expires]), aws_options)
 
   end
 
   def anaconda_protocol(column_name, override = nil)
-    return override if override
-    case self.anaconda_options[column_name.to_sym][:protocol]
+    return "#{override}://" if override
+
+    case self.anaconda_options_for( column_name )[:protocol]
     when :auto
       "//"
     else
-      "#{self.anaconda_options[column_name.to_sym][:protocol]}://"
+      "#{self.anaconda_options_for( column_name )[:protocol]}://"
     end
   end
 
   def anaconda_expiry_length(column_name, override = nil)
     return override if override
-    self.anaconda_options[column_name.to_sym][:expiry_length].seconds.from_now
+    self.anaconda_options_for( column_name )[:expiry_length].seconds.from_now
   end
 
   def anaconda_default_base_key_for(column_name)
@@ -168,21 +191,21 @@ module Anaconda
 
   if self.destroyed?
     self.class.anaconda_columns.each do |column_name|
-      next unless self.anaconda_options[column_name.to_sym][:remove_previous_s3_files_on_destroy]
+      next unless self.anaconda_options_for( column_name )[:remove_previous_s3_files_on_destroy]
       if self.send("#{column_name}_file_path").present?
-        Anaconda.remove_s3_object_in_bucket_with_file_path(Anaconda.aws[:aws_bucket], self.send("#{column_name}_file_path"))
+        Anaconda.remove_s3_object(self.send("#{column_name}_file_path"), self.anaconda_options_for( column_name ))
       end
     end
   else
     self.class.anaconda_columns.each do |column_name|
-      next unless self.anaconda_options[column_name.to_sym][:remove_previous_s3_files_on_change]
+      next unless self.anaconda_options_for( column_name )[:remove_previous_s3_files_on_change]
       if self.previous_changes["#{column_name}_file_path"].present?
         # Looks like this field was edited.
         if self.previous_changes["#{column_name}_file_path"][0].present? &&
            self.previous_changes["#{column_name}_file_path"][0] != self.previous_changes["#{column_name}_file_path"][1]
           # It's not a new entry ([0] would be nil), and it really did change, wasn't just committed for no reason
           # So let's delete the previous file from S3
-          Anaconda.remove_s3_object_in_bucket_with_file_path(Anaconda.aws[:aws_bucket], self.previous_changes["#{column_name}_file_path"][0])
+          Anaconda.remove_s3_object(self.previous_changes["#{column_name}_file_path"][0], self.anaconda_options_for( column_name ))
         end
       end
     end
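Taken together, the model-side changes route URL generation and S3 cleanup through `anaconda_options_for`, which resolves proc options per instance. A hedged usage sketch (the `Upload` model and `asset` column are hypothetical; the `asset_url` / `asset_download_url` helper names follow the changelog above):

```ruby
upload = Upload.find(1)                       # hypothetical record

upload.anaconda_options_for(:asset)           # per-column options with procs already resolved

upload.asset_url                              # public or signed URL, depending on asset_stored_privately
upload.asset_url(protocol: "https")           # override now renders as "https://" (see anaconda_protocol)

# The filename is sanitized before going into the Content-Disposition query,
# e.g. "Q3 Report: final.pdf" becomes "Q3 Report- final.pdf".
upload.asset_download_url(filename: "Q3 Report: final.pdf")
```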
@@ -1,5 +1,5 @@
 module Anaconda
   module Rails
-    VERSION = "1.0.11"
+    VERSION = "2.0.2"
   end
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: anaconda
 version: !ruby/object:Gem::Version
-  version: 1.0.11
+  version: 2.0.2
 platform: ruby
 authors:
 - Ben McFadden