shrine 2.4.1 → 2.5.0

@@ -430,8 +430,10 @@ For default URLs you can use the `default_url` plugin:
 
  ```rb
  class ImageUploader < Shrine
-   plugin :default_url do |context|
-     "/attachments/#{context[:name]}/default.jpg"
+   plugin :default_url
+
+   Attacher.default_url do |options|
+     "/attachments/#{name}/default.jpg"
    end
  end
  ```
@@ -23,6 +23,7 @@ You can start by setting both temporary and permanent storage to S3 with
  different prefixes (or even buckets):
 
  ```rb
+ # Gemfile
  gem "aws-sdk", "~> 2.1"
  ```
  ```rb
@@ -86,6 +87,7 @@ server, and use this information to start uploading the file to S3. The
  in our application:
 
  ```rb
+ # Gemfile
  gem "roda"
  ```
  ```rb
@@ -185,13 +187,30 @@ With direct uploads any metadata has to be extracted on the client, since
  caching the file doesn't touch your application. When the cached file is stored,
  Shrine's default behaviour is to simply copy over cached file's metadata.
 
- If you want to extract metadata on the server before storing, you can just
- load the `restore_cached_data` plugin.
+ If you want to re-extract metadata on the server before file validation, you
+ can load the `restore_cached_data` plugin. That will make Shrine open the S3
+ file for reading, pass it on for metadata extraction, and then override the
+ metadata received from the client with the metadata extracted by Shrine.
 
  ```rb
  plugin :restore_cached_data
  ```
 
+ Note that if you don't need this metadata before file validation, and you would
+ like to have it extracted in a background job, you can do the following trick:
+
+ ```rb
+ class MyUploader < Shrine
+   plugin :processing
+
+   process(:store) do |io, context|
+     real_metadata = io.open { |opened_io| extract_metadata(opened_io, context) }
+     io.metadata.update(real_metadata)
+     io # return the same cached IO
+   end
+ end
+ ```
+
  ## Clearing cache
 
  Since directly uploaded files will stay in your temporary storage, you will
@@ -0,0 +1,118 @@
+ # Multiple Files
+
+ There are often times when you want to allow users to attach multiple files to
+ a single resource. Some file attachment libraries provide a special interface
+ for multiple attachments, but Shrine doesn't come with one, because it's much
+ more robust and flexible to implement this using your ORM directly.
+
+ The idea is to create a new table, and attach each uploaded file to a separate
+ record on that table, while having a "many-to-one" relationship with the main
+ table. That way a database record from the main table can implicitly have
+ multiple attachments through the associated records.
+
+ ```
+ album
+   photo1
+     - attachment1
+   photo2
+     - attachment2
+   photo3
+     - attachment3
+ ```
+
+ This design gives you great flexibility, allowing you to support:
+
+ * adding new attachments
+ * updating existing attachments
+ * removing existing attachments
+ * sorting attachments
+ * having additional fields on attachments (captions, votes, number of downloads etc.)
+ * ...
+
+ If you're using Sequel or ActiveRecord, the easiest way to implement this is
+ via nested attributes, which you would in general use for any dynamic
+ "one-to-many" association. The examples will be using Sequel, but it's very
+ similar with ActiveRecord; here are the docs:
+
+ * [`Sequel::Model.nested_attributes`]
+ * [`ActiveRecord::Base.accepts_nested_attributes_for`]
+
+ For simplicity, for the rest of this guide we will assume that we have "albums"
+ that can have multiple "photos".
+
+ ## 1. Attachments table
+
+ Let's create a table for our attachments, and add a foreign key for the main table:
+
+ ```rb
+ Sequel.migration do
+   change do
+     create_table :photos do
+       primary_key :id
+       foreign_key :album_id, :albums
+       column :image_data, :text
+     end
+   end
+ end
+ ```
+
+ In our new model we can create a Shrine attachment attribute:
+
+ ```rb
+ class Photo < Sequel::Model
+   include ImageUploader[:image]
+ end
+ ```
+
+ ## 2. Nested attributes
+
+ In our main model we can now declare the association to the new table, and
+ allow it to directly accept attributes for the associated records:
+
+ ```rb
+ class Album < Sequel::Model
+   one_to_many :photos
+
+   plugin :nested_attributes # load the plugin
+   nested_attributes :photos
+ end
+ ```
+
+ ## 3. View
+
+ In order to allow the user to select multiple files in the form, we just need
+ to add the `multiple` attribute to the file field.
+
+ ```html
+ <input type="file" multiple name="file">
+ ```
+
+ You can then use a generic JavaScript file upload library like
+ [jQuery-File-Upload], [Dropzone] or [FineUploader] to asynchronously upload
+ each of the selected files to your app or an external service. See the
+ `direct_upload` plugin and the [Direct Uploads to S3] guide for more details.
+
+ After each upload finishes, you can generate a nested hash for the new
+ associated record, and write the uploaded file JSON to the attachment field:
+
+ ```rb
+ album[photos_attributes][0][image] = '{"storage":"cache","id":"38k25.jpg","metadata":{...}}'
+ album[photos_attributes][1][image] = '{"storage":"cache","id":"sg0fg.jpg","metadata":{...}}'
+ album[photos_attributes][2][image] = '{"storage":"cache","id":"041jd.jpg","metadata":{...}}'
+ ```
+
+ Once you submit this to the app, the ORM's nested attributes behaviour will
+ create the associated records, and assign the Shrine attachments.
+
+ Now you can manage adding new, or updating and deleting existing attachments,
+ just by using your ORM's nested attributes behaviour, the same way that you
+ would do with any other dynamic one-to-many association. The callbacks that
+ are added by including the Shrine module in the associated model will
+ automatically take care of the attachment management.
+
+ [`Sequel::Model.nested_attributes`]: http://sequel.jeremyevans.net/rdoc-plugins/classes/Sequel/Plugins/NestedAttributes.html
+ [`ActiveRecord::Base.accepts_nested_attributes_for`]: http://api.rubyonrails.org/classes/ActiveRecord/NestedAttributes/ClassMethods.html
+ [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
+ [Dropzone]: https://github.com/enyo/dropzone
+ [FineUploader]: https://github.com/FineUploader/fine-uploader
+ [Direct Uploads to S3]: http://shrinerb.com/rdoc/files/doc/direct_s3_md.html
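As a rough sketch of what a browser form following the guide above would submit: the `photos_attributes` key names follow the nested-attributes convention, and the cached-file JSON matches the shape shown in the hunk. The helper below builds that params hash in plain Ruby (no Shrine or Rails required; `cached_uploads` is hypothetical data standing in for direct-upload responses):

```ruby
require "json"

# Hypothetical results of three direct uploads to temporary ("cache") storage.
cached_uploads = [
  { "storage" => "cache", "id" => "38k25.jpg", "metadata" => {} },
  { "storage" => "cache", "id" => "sg0fg.jpg", "metadata" => {} },
]

# Build the nested params hash that `nested_attributes :photos` accepts:
# each stringified index maps to the attributes of one new Photo record,
# with the uploaded file written as JSON into the `image` attachment field.
photos_attributes = {}
cached_uploads.each_with_index do |data, index|
  photos_attributes[index.to_s] = { "image" => JSON.generate(data) }
end

params = { "album" => { "photos_attributes" => photos_attributes } }
```

Submitting `params` to the app then lets the ORM's nested attributes behaviour create one associated record per entry.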
@@ -425,8 +425,10 @@ For default URLs you can use the `default_url` plugin:
 
  ```rb
  class ImageUploader < Shrine
-   plugin :default_url do |context|
-     "/attachments/#{context[:name]}/default.jpg"
+   plugin :default_url
+
+   Attacher.default_url do |options|
+     "/attachments/#{name}/default.jpg"
    end
  end
  ```
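A side note on the new `Attacher.default_url` API above: the block uses `name` directly, which suggests it is evaluated in the context of the attacher instance. A dependency-free sketch of that pattern (the `FakeAttacher` class is purely illustrative, not part of Shrine):

```ruby
# Minimal stand-in showing how a block registered at the class level can
# be evaluated against an attacher instance, making methods like #name
# available inside the block without an explicit receiver.
class FakeAttacher
  attr_reader :name

  def initialize(name, &default_url_block)
    @name = name
    @default_url_block = default_url_block
  end

  def default_url(options = {})
    # instance_exec runs the block with `self` set to this attacher
    instance_exec(options, &@default_url_block)
  end
end

attacher = FakeAttacher.new(:avatar) do |options|
  "/attachments/#{name}/default.jpg"
end

url = attacher.default_url
```

This is why the new block form no longer needs `context[:name]`: the attacher itself supplies `name`.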
@@ -38,11 +38,12 @@ end
 
  If you're using an external storage in development, it is common in tests to
  switch to a filesystem storage. However, that means that you'll also have to
- clean up the test directory between tests, and writing to filesystem a lot can
- affect the performance of your tests.
+ clean up the test directory between tests, and writing to filesystem can affect
+ the performance of your tests.
 
- Instead of filesystem you can use [memory storage][shrine-memory], which is
- both faster and doesn't require you to clean up anything between tests.
+ If your tests are run in a single process, instead of filesystem you can use
+ [memory storage][shrine-memory], which is both faster and doesn't require you
+ to clean up anything between tests.
 
  ```rb
  gem "shrine-memory"
@@ -57,6 +58,17 @@ Shrine.storages = {
  }
  ```
 
+ Alternatively, if you're using Amazon S3 storage, in tests (and development)
+ you can swap it out for [FakeS3]. You just need to tell aws-sdk that instead of
+ `s3.amazonaws.com` it should use the host of your FakeS3 server when generating
+ URLs.
+
+ ```rb
+ Shrine::Storage::S3.new(endpoint: "http://localhost:10000")
+ ```
+
+ Note that for using FakeS3 you need aws-sdk version 2.2.25 or higher.
+
  ## Test data
 
  If you're creating test data dynamically using libraries like [factory_girl],
@@ -219,7 +231,7 @@ end
  However, it's even better to design your processing code in such a way that
  it's easier to swap out in tests. In your *application* code you could extract
  processing into a single `#call`-able object, and register it inside uploader
- generic `#opts` hash.
+ generic `opts` hash.
 
  ```rb
  class ImageUploader < Shrine
@@ -264,3 +276,4 @@ provided by the `direct_upload` app mounted in your routes.
  [`#attach_file`]: http://www.rubydoc.info/github/jnicklas/capybara/master/Capybara/Node/Actions#attach_file-instance_method
  [Rack::Test]: https://github.com/brynary/rack-test
  [Rack::TestApp]: https://github.com/kwatch/rack-test_app
+ [FakeS3]: https://github.com/jubos/fake-s3
@@ -710,9 +710,11 @@ class Shrine
 
    # Initializes the uploaded file with the given data hash.
    def initialize(data)
+     raise Error, "#{data.inspect} isn't valid uploaded file data" unless data["id"] && data["storage"]
+
      @data = data
      @data["metadata"] ||= {}
-     storage # ensure storage exists
+     storage # ensure storage is registered
    end
 
    # The location where the file was uploaded to the storage.
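A plain-Ruby sketch of the guard added above, runnable without Shrine (the `UploadedFileDataError` class and `coerce_uploaded_file_data` helper are stand-ins for `Shrine::Error` and the real `initialize`):

```ruby
# Stand-in for Shrine::Error, just for this sketch.
class UploadedFileDataError < StandardError; end

# Mirrors the validation added to UploadedFile#initialize: the data hash
# must contain both "id" and "storage" keys, and "metadata" defaults to {}.
def coerce_uploaded_file_data(data)
  unless data["id"] && data["storage"]
    raise UploadedFileDataError, "#{data.inspect} isn't valid uploaded file data"
  end
  data["metadata"] ||= {}
  data
end

valid = coerce_uploaded_file_data("id" => "foo.jpg", "storage" => "cache")

begin
  coerce_uploaded_file_data("id" => "foo.jpg") # missing "storage"
  invalid_raised = false
rescue UploadedFileDataError
  invalid_raised = true
end
```

The benefit of failing fast here is that a malformed attachment column surfaces as a clear error at load time instead of a confusing `nil` later on.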
@@ -5,39 +5,77 @@ class Shrine
    #
    #     plugin :add_metadata
    #
-   #     add_metadata :exif do |io, context|
-   #       MiniMagick::Image.new(io.path).exif
+   #     add_metadata :pages do |io, context|
+   #       PDF::Reader.new(io.path).page_count
    #     end
    #
-   # The above will add "exif" to the metadata hash, and also add the `#exif`
-   # method to the `UploadedFile`:
+   # The above will add "pages" to the metadata hash, and also create the
+   # `#pages` reader method on Shrine::UploadedFile.
    #
-   #     uploaded_file.metadata["exif"]
+   #     document.metadata["pages"]
    #     # or
-   #     uploaded_file.exif
+   #     document.pages
+   #
+   # You can also extract multiple metadata values at once, by using
+   # `add_metadata` without an argument.
+   #
+   #     add_metadata do |io, context|
+   #       movie = FFMPEG::Movie.new(io.path)
+   #
+   #       { "duration" => movie.duration,
+   #         "bitrate" => movie.bitrate,
+   #         "resolution" => movie.resolution,
+   #         "frame_rate" => movie.frame_rate }
+   #     end
+   #
+   # In this case Shrine won't automatically create reader methods for the
+   # extracted metadata on Shrine::UploadedFile, but you can create them via
+   # `metadata_method`.
+   #
+   #     metadata_method :duration, :bitrate, :resolution, :frame_rate
    module AddMetadata
      def self.configure(uploader)
-       uploader.opts[:metadata] = {}
+       uploader.opts[:metadata] = []
      end

      module ClassMethods
-       def add_metadata(name, &block)
-         opts[:metadata][name] = block
+       def add_metadata(name = nil, &block)
+         if name
+           opts[:metadata] << _metadata_proc(name, &block)
+           metadata_method(name)
+         else
+           opts[:metadata] << block
+         end
+       end

+       def metadata_method(*names)
+         names.each { |name| _metadata_method(name) }
+       end
+
+       private
+
+       def _metadata_method(name)
          self::UploadedFile.send(:define_method, name) do
            metadata[name.to_s]
          end
        end
+
+       def _metadata_proc(name, &block)
+         proc do |io, context|
+           value = instance_exec(io, context, &block)
+           {name.to_s => value} unless value.nil?
+         end
+       end
      end

      module InstanceMethods
        def extract_metadata(io, context)
          metadata = super

-         opts[:metadata].each do |name, block|
-           value = instance_exec(io, context, &block)
-           metadata[name.to_s] = value unless value.nil?
+         opts[:metadata].each do |metadata_block|
+           custom_metadata = instance_exec(io, context, &metadata_block)
            io.rewind
+           metadata.merge!(custom_metadata) unless custom_metadata.nil?
          end

          metadata
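The refactor above changes `opts[:metadata]` from a name-to-block hash into an array of blocks whose hash results get merged. A dependency-free sketch of that accumulation logic (the local names are illustrative, not Shrine's API):

```ruby
# Simulate the two registration styles: a named block (wrapped into a
# proc that returns a one-key hash, as _metadata_proc does) and an
# anonymous block that returns a hash of several values directly.
metadata_blocks = []

# named style: add_metadata :pages { ... } wraps the value into {"pages" => value}
pages_block = proc { |io, context| 5 }
metadata_blocks << proc do |io, context|
  value = pages_block.call(io, context)
  { "pages" => value } unless value.nil?
end

# anonymous style: add_metadata { ... } returns a hash directly
metadata_blocks << proc do |io, context|
  { "duration" => 120, "bitrate" => 320 }
end

# extract_metadata then merges every non-nil result into the base metadata
metadata = { "size" => 1024 }
metadata_blocks.each do |block|
  custom = block.call(:io, {})
  metadata.merge!(custom) unless custom.nil?
end
```

Normalizing both styles to "a proc that returns a hash (or nil)" is what lets the extraction loop treat them uniformly.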
@@ -1,18 +1,25 @@
  class Shrine
    module Plugins
-     # The `backgrounding` plugin enables you to move processing, storing and
-     # deleting of files from record's lifecycle into background jobs. This is
-     # generally useful if you're doing processing and/or your store is
-     # something other than Storage::FileSystem.
+     # The `backgrounding` plugin enables you to move promoting and deleting of
+     # files from record's lifecycle into background jobs. This is especially
+     # useful if you're doing processing and/or you're storing files on an
+     # external storage service.
+     #
+     #     plugin :backgrounding
+     #
+     # ## Usage
+     #
+     # The plugin provides `Attacher.promote` and `Attacher.delete` methods,
+     # which allow you to hook up to promoting and deleting and spawn background
+     # jobs, by passing a block.
      #
-     #     Shrine.plugin :backgrounding
      #     Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }
      #     Shrine::Attacher.delete { |data| DeleteJob.perform_async(data) }
      #
-     # The `data` variable is a serializable hash containing all context needed
-     # for promotion/deletion. You then just need to declare `PromoteJob` and
-     # `DeleteJob`, and call `Shrine::Attacher.promote`/`Shrine::Attacher.delete`
-     # with the data hash:
+     # The yielded `data` variable is a serializable hash containing all context
+     # needed for promotion/deletion. Now you just need to declare the job
+     # classes, and inside them call `Attacher.promote` or `Attacher.delete`,
+     # this time passing the received data.
      #
      #     class PromoteJob
      #       include Sidekiq::Worker
@@ -30,17 +37,60 @@ class Shrine
      #       end
      #     end
      #
-     # Internally these methods will resolve all necessary objects, do the
-     # promotion/deletion, and in case of promotion update the record with the
-     # stored attachment.
+     # This example used Sidekiq, but obviously you could just as well use
+     # any other backgrounding library. This setup will be applied globally for
+     # all uploaders.
+     #
+     # If you're generating versions, and you want to process some versions in
+     # the foreground before kicking off a background job, you can use the
+     # `recache` plugin.
+     #
+     # ## `Attacher.promote` and `Attacher.delete`
+     #
+     # In background jobs, `Attacher.promote` and `Attacher.delete` will resolve
+     # all necessary objects, and do the promotion/deletion. If
+     # `Attacher.find_record` is defined (which comes with ORM plugins), model
+     # instances will be treated as database records, with the `#id` attribute
+     # assumed to represent the primary key. Then promotion will have the
+     # following behaviour:
      #
-     # The examples above used Sidekiq, but obviously you can just as well use
-     # any other backgrounding library. This setup will work globally for all
-     # uploaders.
+     # 1. retrieves the database record
+     #     * if record is not found, it finishes
+     #     * if record is found but attachment has changed, it finishes
+     # 2. uploads cached file to permanent storage
+     # 3. reloads the database record
+     #     * if record is not found, it deletes the promoted files and finishes
+     #     * if record is found but attachment has changed, it deletes the promoted files and finishes
+     # 4. updates the record with the promoted files
      #
-     # The `backgrounding` plugin affects the `Shrine::Attacher` in a way that
-     # `#_promote` and `#_delete` spawn background jobs, while `#promote` and
-     # `#delete!` are always synchronous:
+     # Both `Attacher.promote` and `Attacher.delete` return a `Shrine::Attacher`
+     # instance (if the action hasn't aborted), so you can use it to perform
+     # additional tasks:
+     #
+     #     def perform(data)
+     #       attacher = Shrine::Attacher.promote(data)
+     #       attacher.record.update(published: true) if attacher && attacher.record.is_a?(Post)
+     #     end
+     #
+     # ### Plain models
+     #
+     # You can also do backgrounding with plain models which don't represent
+     # database records; the plugin will use that mode if `Attacher.find_record`
+     # is not defined. In that case promotion will have the following behaviour:
+     #
+     # 1. instantiates the model
+     # 2. uploads cached file to permanent storage
+     # 3. writes promoted files to the model instance
+     #
+     # You can then retrieve the promoted files via the attacher object that
+     # `Attacher.promote` returns, and do any additional tasks if you need to.
+     #
+     # ## `Attacher#_promote` and `Attacher#_delete`
+     #
+     # The plugin modifies `Attacher#_promote` and `Attacher#_delete` to call
+     # the registered blocks with serializable attacher data, and these methods
+     # are internally called by the attacher. `Attacher#promote` and
+     # `Attacher#delete!` remain synchronous.
      #
      #     # asynchronous (spawn background jobs)
      #     attacher._promote
@@ -50,38 +100,23 @@ class Shrine
      #     attacher.promote
      #     attacher.delete!(attachment)
      #
-     # Both methods return the `Shrine::Attacher` instance (if the action didn't
-     # abort), so you can use it to do additional actions:
+     # ## `Attacher.dump` and `Attacher.load`
      #
-     #     def perform(data)
-     #       attacher = Shrine::Attacher.promote(data)
-     #       attacher.record.update(published: true) if attacher && attacher.record.is_a?(Post)
-     #     end
+     # The plugin adds `Attacher.dump` and `Attacher.load` methods for
+     # serializing attacher object and loading it back up. You can use them to
+     # spawn background jobs for custom tasks.
      #
-     # You can also write custom background jobs with `Attacher.dump` and
-     # `Attacher.load`:
+     #     data = Shrine::Attacher.dump(attacher)
+     #     SomethingJob.perform_async(data)
      #
-     #     class User < Sequel::Model
-     #       def after_commit
-     #         super
-     #         if some_condition
-     #           data = Shrine::Attacher.dump(avatar_attacher)
-     #           SomethingJob.perform_async(data)
-     #         end
-     #       end
-     #     end
+     #     # ...
      #
      #     class SomethingJob
-     #       include Sidekiq::Worker
      #       def perform(data)
      #         attacher = Shrine::Attacher.load(data)
      #         # ...
      #       end
      #     end
-     #
-     # If you're generating versions, and you want to process some versions in
-     # the foreground before kicking off a background job, you can use the
-     # `recache` plugin.
    module Backgrounding
      module AttacherClassMethods
        # If block is passed in, stores it to be called on promotion. Otherwise
@@ -125,15 +160,7 @@ class Shrine
        # Loads the data created by #dump, resolving the record and returning
        # the attacher.
        def load(data)
-         record_class, record_id = data["record"]
-         record_class = Object.const_get(record_class)
-
-         record = find_record(record_class, record_id)
-         record ||= record_class.new.tap do |instance|
-           # so that the id is always included in file deletion logs
-           instance.singleton_class.send(:define_method, :id) { record_id }
-         end
-
+         record = load_record(data)
          name = data["name"].to_sym
 
          if data["shrine_class"]
@@ -146,6 +173,29 @@ class Shrine
 
          attacher
        end
+
+       # Resolves the record from backgrounding data. If the record was found,
+       # returns it. If the record wasn't found, returns an instance of the
+       # model with ID assigned for logging. If `find_record` isn't defined,
+       # then it is a PORO model and should be instantiated with the cached
+       # attachment.
+       def load_record(data)
+         record_class, record_id = data["record"]
+         record_class = Object.const_get(record_class)
+
+         if respond_to?(:find_record)
+           record = find_record(record_class, record_id)
+           record ||= record_class.new.tap do |instance|
+             # so that the id is always included in file deletion logs
+             instance.singleton_class.send(:define_method, :id) { record_id }
+           end
+         else
+           record = record_class.new
+           record.send(:"#{data["name"]}_data=", data["attachment"])
+         end
+
+         record
+       end
      end
 
      module AttacherMethods
@@ -193,8 +243,10 @@ class Shrine
 
        # Updates with the new file only if the attachment hasn't changed.
        def swap(new_file)
-         reloaded = self.class.find_record(record.class, record.id)
-         return if reloaded.nil? || self.class.new(reloaded, name).read != read
+         if self.class.respond_to?(:find_record)
+           reloaded = self.class.find_record(record.class, record.id)
+           return if reloaded.nil? || self.class.new(reloaded, name).read != read
+         end
          super
        end
      end