shrine 1.3.0 → 1.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: d59731ea1e8a0376a6f8ad86e8d627250dec2f44
- data.tar.gz: 14bd2696930a028e157cc1535995a5342958170b
+ metadata.gz: cd828e546e0d244ed304996054f0bd01b790800d
+ data.tar.gz: 46f44feb2075dd6d8001e990199acd99deae4223
  SHA512:
- metadata.gz: a0ca4170b242be2e01fa6abdeb9ff0e3380e1271c0783099e738182e4d6f995ac7b8042884c52147052a751b6a7143eb16766879165c87f221ed6e84b7deb53a
- data.tar.gz: 6aee5bb9a3a44f162c9d2da791c498c2d592a05aa3efc7bf058d38def80040af8856db96431eb558ed93538cc327f718874f59a376e3b55e11778189c87401be
+ metadata.gz: a2f0a33b73fb8a012b0e8036c6e1c62b2ad661ec38af4f16b72e80f3f7a39d9f42273232f5fd718f20330848c4db09b65f4f5fa28ca754bdaa31a56b32801fde
+ data.tar.gz: c3e3791da4e71e600b0c973edc27a6f40145dffedbc8e0b2c835415d25990caed5d5f651f77091a4f12a7fdef26452a2ad53f20158486f0f486a1743470e5943
data/README.md CHANGED
@@ -7,10 +7,10 @@ explains the motivation behind Shrine.
 
  ## Resources
 
- * Documentation: [shrinerb.com](http://shrinerb.com)
- * Source: [github.com/janko-m/shrine](https://github.com/janko-m/shrine)
- * Bugs: [github.com/janko-m/shrine/issues](https://github.com/janko-m/shrine/issues)
- * Help & Dicussion: [groups.google.com/group/ruby-shrine](https://groups.google.com/forum/#!forum/ruby-shrine)
+ - Documentation: [shrinerb.com](http://shrinerb.com)
+ - Source: [github.com/janko-m/shrine](https://github.com/janko-m/shrine)
+ - Bugs: [github.com/janko-m/shrine/issues](https://github.com/janko-m/shrine/issues)
+ - Help & Discussion: [groups.google.com/group/ruby-shrine](https://groups.google.com/forum/#!forum/ruby-shrine)
 
  ## Installation
 
@@ -64,10 +64,10 @@ that was uploaded, and we can do a lot with it:
 
  ```rb
  uploaded_file.url #=> "/uploads/938kjsdf932.mp4"
+ uploaded_file.metadata #=> {...}
  uploaded_file.read #=> "..."
  uploaded_file.exists? #=> true
  uploaded_file.download #=> #<Tempfile:/var/folders/k7/6zx6dx6x7ys3rv3srh0nyfj00000gn/T/20151004-74201-1t2jacf.mp4>
- uploaded_file.metadata #=> {...}
  uploaded_file.delete
  # ...
  ```
@@ -84,6 +84,7 @@ Shrine we do this by generating and including "attachment" modules.
  Firstly we need to assign the special `:cache` and `:store` storages:
 
  ```rb
+ require "shrine"
  require "shrine/storage/file_system"
 
  Shrine.storages = {
@@ -212,7 +213,7 @@ end
  ```
  ```js
  $('[type="file"]').fileupload({
-   url: '/attachments/images/cache/avatar',
+   url: '/attachments/images/cache/upload',
    paramName: 'file',
    add: function(e, data) { /* Disable the submit button */ },
    progress: function(e, data) { /* Add a nice progress bar */ },
@@ -452,6 +453,9 @@ class ImageUploader < Shrine
  end
  ```
 
+ Note that if you're extracting custom metadata from Ruby, you should always
+ rewind the file afterwards.
+
  ## Locations
 
  By default files will all be put in the same folder. If you want that each
@@ -478,7 +482,7 @@ end
 
  Note that there should always be a random component in the location, otherwise
  dirty tracking won't be detected properly (you can use `Shrine#generate_uid`).
- Also note that you can access the extracted metadata here through
+ Inside this method you can access the extracted metadata through
  `context[:metadata]`.
 
  When using the uploader directly, it's possible to bypass `#generate_location`
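The random-component requirement above can be sketched in plain Ruby. This is an illustrative stand-in for `Shrine#generate_location` (the method name, path layout, and uid length are assumptions, with `SecureRandom` standing in for `Shrine#generate_uid`):

```rb
require "securerandom"

# Illustrative stand-in for Shrine#generate_location: the random uid
# component guarantees every upload gets a fresh location, so dirty
# tracking can detect the change.
def generate_location(filename)
  uid = SecureRandom.hex(16) # random component, like Shrine#generate_uid
  "uploads/#{uid}-#{filename}"
end

generate_location("avatar.jpg") # e.g. "uploads/9f8c…-avatar.jpg"
```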
@@ -542,11 +546,11 @@ makes it really easy to plug in your backgrounding library:
 
  ```rb
  Shrine.plugin :backgrounding
- Shrine::Attacher.promote { |data| UploadJob.perform_async(data) }
+ Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }
  Shrine::Attacher.delete { |data| DeleteJob.perform_async(data) }
  ```
  ```rb
- class UploadJob
+ class PromoteJob
    include Sidekiq::Worker
    def perform(data)
      Shrine::Attacher.promote(data)
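Put together, the renamed jobs from the hunk above can be wired up like this (a sketch assuming Sidekiq, as in the document; the job class names follow the diff):

```rb
require "shrine"
require "sidekiq"

Shrine.plugin :backgrounding
Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }
Shrine::Attacher.delete  { |data| DeleteJob.perform_async(data) }

# Moves the cached file to permanent storage in the background.
class PromoteJob
  include Sidekiq::Worker
  def perform(data)
    Shrine::Attacher.promote(data)
  end
end

# Deletes replaced/destroyed files in the background.
class DeleteJob
  include Sidekiq::Worker
  def perform(data)
    Shrine::Attacher.delete(data)
  end
end
```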
@@ -595,7 +599,7 @@ be applied to which uploaders:
  Shrine.plugin :logging # enables logging for all uploaders
 
  class ImageUploader < Shrine
-   plugin :store_dimensions # stores dimensions only for this uploader
+   plugin :store_dimensions # stores dimensions only for this uploader and its descendants
  end
  ```
 
data/doc/carrierwave.md CHANGED
@@ -451,7 +451,7 @@ you can change this with the `default_storage` plugin.
 
  #### `fog_*`
 
- These options will be set on the soon-to-be-released Fog storage for Shrine.
+ These options are set on the [shrine-fog] storage.
 
  #### `delete_tmp_file_after_storage`, `remove_previously_stored_file_after_update`
 
@@ -522,3 +522,4 @@ multipart or not.
  [image_processing]: https://github.com/janko-m/image_processing
  [example app]: https://github.com/janko-m/shrine-example
  [Regenerating versions]: http://shrinerb.com/rdoc/files/doc/regenerating_versions_md.html
+ [shrine-fog]: https://github.com/janko-m/shrine-fog
@@ -4,33 +4,26 @@ You have a production app with already uploaded attachments. However, you've
  realized that the existing store folder structure for attachments isn't working
  for you.
 
- The first step is to change the location (by overriding `#generate_location` or
- with the pretty_location plugin), and deploy that change. Attachments on old
- locations will still continue to work properly.
+ The first step is to change the location, by overriding `#generate_location` or
+ with the pretty_location plugin, and deploy that change. This will make any new
+ files upload to the desired location; attachments on old locations will still
+ continue to work normally.
 
- The next step is to run a script that will move those to new locations. The
- easiest way to do that is to reupload them, and afterwards delete them:
+ The next step is to run a script that will move old files to new locations. The
+ easiest way to do that is to reupload them and delete them. Shrine has a method
+ exactly for that, `Attacher#promote`, which also handles the situation when
+ someone attaches a new file during "moving" (since we're running this script on
+ live production).
 
  ```rb
- Shrine.plugin :migration_helpers # before the model is loaded
- Shrine.plugin :multi_delete # for deleting multiple files at once
- ```
- ```rb
- old_avatars = []
+ Shrine.plugin :delete_promoted
 
  User.paged_each do |user|
-   user.update_avatar do |avatar|
-     old_avatars << avatar
-     user.avatar_store.upload(avatar)
-   end
- end
-
- if old_avatars.any?
-   # you'll have to change this code slightly if you're using versions
-   uploader = old_avatars.first.uploader
-   uploader.delete(old_avatars)
+   user.promote(user.avatar, phase: :change_location)
  end
  ```
 
- And now all your existing attachments should be happily living on new
- locations.
+ Note that the phase has to be overridden, otherwise it defaults to `:store`
+ which would trigger processing if you have it set up.
+
+ Now all your existing attachments should be happily living on new locations.
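A fuller version of the migration script above might look like this (a sketch; `User`/`avatar` are the document's example names, `paged_each` comes from Sequel, and the `nil` guard is an addition for records without an attachment):

```rb
require "shrine"

Shrine.plugin :delete_promoted # delete each old file once it's reuploaded

User.paged_each do |user|
  next unless user.avatar # skip records with no attachment

  # Reuploads to the new location; :change_location avoids the default
  # :store phase, which would trigger any processing you have set up.
  user.promote(user.avatar, phase: :change_location)
end
```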
@@ -171,8 +171,8 @@ linter = Shrine::Storage::Linter.new(storage)
  linter.call
  ```
 
- The linter will test your methods with simple IO objects, and raise an error
- with an appropriate message if a part of the specification isn't satisfied.
+ The linter will test your methods with fake IO objects, and raise a
+ `Shrine::LintError` if any part of the contract isn't satisfied.
 
  If you want to specify the IO object to use for testing (e.g. you need the IO
  to be an actual image), you can pass in a lambda which returns the IO when
@@ -188,3 +188,7 @@ pass `action: :warn` when initializing
  ```rb
  linter = Shrine::Storage::Linter.new(storage, action: :warn)
  ```
+
+ Note that using the linter doesn't mean that you shouldn't write any manual
+ tests for your storage. There will likely be some edge cases that won't be
+ tested by the linter.
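The custom-IO lambda mentioned above can be combined with the `:warn` action like so (a sketch; `MyStorage` and the fixture path are placeholders):

```rb
require "shrine/storage/linter"

storage = MyStorage.new # the storage implementation under test
linter  = Shrine::Storage::Linter.new(storage, action: :warn)

# Pass a lambda that returns a fresh IO for each check, e.g. an actual
# image, for storages that inspect file content.
linter.call(-> { File.open("test/fixtures/image.jpg") })
```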
data/doc/direct_s3.md CHANGED
@@ -46,9 +46,10 @@ Shrine's JSON representation of an uploaded file looks like this:
  }
  ```
 
- The `id`, `storage` and `metadata.size` fields are required, and the rest of
- the metadata is optional. After uploading the file to S3, you need to construct
- this JSON and assign it to the hidden attachment field in the form.
+ The `id` and `storage` fields are required, while the `metadata` values are
+ optional (`metadata.size` is only required to later upload that file to a
+ non-S3 storage). After uploading the file to S3, you need to construct this
+ JSON and assign it to the hidden attachment field in the form.
 
  ## Strategy A (dynamic)
 
@@ -95,7 +96,7 @@ usually write it to the hidden attachment field in the form:
 
  ```js
  var image = {
-   id: /cache\/(.+)/.exec(key)[1], # we have to remove the prefix part
+   id: key.match(/cache\/(.+)/)[1], // we have to remove the prefix part
    storage: 'cache',
    metadata: {
      size: data.files[0].size,
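The same hidden-field value can be constructed server-side; here is a plain-Ruby sketch of the JSON described above (the S3 key and metadata values are examples, reusing the document's sample data):

```rb
require "json"

key = "cache/43kewit94.jpg" # example S3 object key

image = {
  "id"       => key[%r{cache/(.+)}, 1], # strip the storage prefix
  "storage"  => "cache",
  "metadata" => {
    "size"      => 384_393,
    "filename"  => "nature.jpg",
    "mime_type" => "image/jpeg",
  },
}

JSON.generate(image) # the value for the hidden attachment field
```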
@@ -156,6 +157,19 @@ Notice that we needed to fetch and assign the size of the uploaded file. This
  is because this hash is later transformed into an IO which requires `#size`
  to be non-nil (and it is read from the metadata field).
 
+ ## Metadata
+
+ With direct uploads any metadata has to be extracted on the client, since
+ caching the file doesn't touch your application. When the cached file is stored,
+ Shrine's default behaviour is to simply copy over the cached file's metadata.
+
+ If you want to extract metadata on the server before storing, you can just
+ load the restore_cached_data plugin.
+
+ ```rb
+ plugin :restore_cached_data
+ ```
+
  ## Eventual consistency
 
  When uploading objects to Amazon S3, sometimes they may not be available
@@ -180,7 +194,7 @@ backgrounding library to perform the job with a delay:
  ```rb
  Shrine.plugin :backgrounding
  Shrine::Attacher.promote do |data|
-   UploadJob.perform_in(60, data) # tells a Sidekiq worker to perform in 1 minute
+   PromoteJob.perform_in(60, data) # tells a Sidekiq worker to perform in 1 minute
  end
  ```
 
data/doc/refile.md CHANGED
@@ -158,7 +158,7 @@ Rails.application.routes.draw do
  end
  ```
  ```rb
- # POST /attachments/images/cache/avatar
+ # POST /attachments/images/cache/upload
  {
    "id": "43kewit94.jpg",
    "storage": "cache",
@@ -56,7 +56,7 @@ class Shrine
    #{@name}_attacher.finalize if #{@name}_attacher.attached?
  end
 
- after_commit on: :destroy do
+ after_commit on: [:destroy] do
    #{@name}_attacher.destroy
  end
  RUBY
@@ -75,18 +75,12 @@ class Shrine
 
  # Updates the current attachment with the new one, unless the current
  # attachment has changed.
- def swap(uploaded_file)
-   record.class.transaction do
-     break if record.send("#{name}_data") != record.reload.send("#{name}_data")
-     super
-   end
- rescue ::ActiveRecord::RecordNotFound
- end
-
- # We save the record after updating, raising any validation errors.
  def update(uploaded_file)
-   super
-   record.save!
+   record.class.where(record.class.primary_key => record.id)
+     .where(:"#{name}_data" => record.send(:"#{name}_data"))
+     .update_all(:"#{name}_data" => uploaded_file.to_json)
+   record.reload
+ rescue ::ActiveRecord::RecordNotFound
  end
  end
  end
@@ -6,15 +6,15 @@ class Shrine
  # something other than Storage::FileSystem.
  #
  #     Shrine.plugin :backgrounding
- #     Shrine::Attacher.promote { |data| UploadJob.perform_async(data) }
+ #     Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }
  #     Shrine::Attacher.delete { |data| DeleteJob.perform_async(data) }
  #
  # The `data` variable is a serializable hash containing all context needed
- # for promotion/deletion. You then just need to declare `UploadJob` and
+ # for promotion/deletion. You then just need to declare `PromoteJob` and
  # `DeleteJob`, and call `Shrine::Attacher.promote`/`Shrine::Attacher.delete`
  # with the data hash:
  #
- #     class UploadJob
+ #     class PromoteJob
  #       include Sidekiq::Worker
  #
  #       def perform(data)
@@ -47,6 +47,26 @@ class Shrine
  #       record.update(published: true) if record.is_a?(Post)
  #     end
  #
+ # You can also write custom background jobs with `Attacher.dump` and
+ # `Attacher.load`:
+ #
+ #     class User < Sequel::Model
+ #       def after_commit
+ #         if some_condition
+ #           data = Shrine::Attacher.dump(avatar_attacher)
+ #           SomethingJob.perform_async(data)
+ #         end
+ #       end
+ #     end
+ #
+ #     class SomethingJob
+ #       include Sidekiq::Worker
+ #       def perform(data)
+ #         attacher = Shrine::Attacher.load(data)
+ #         # ...
+ #       end
+ #     end
+ #
  # If you're generating versions, and you want to process some versions in
  # the foreground before kicking off a background job, you can use the
  # `recache` plugin.
@@ -58,18 +78,13 @@ class Shrine
  if block
    shrine_class.opts[:backgrounding_promote] = block
  else
-   record_class, record_id = data["record"]
-   record_class = Object.const_get(record_class)
-   record = find_record(record_class, record_id) or return
-
-   name = data["attachment"]
-   attacher = record.send("#{name}_attacher")
+   attacher = load(data)
    cached_file = attacher.uploaded_file(data["uploaded_file"])
-   return if cached_file != record.send(name)
+   phase = data["phase"].to_sym
 
-   attacher.promote(cached_file) or return
+   attacher.promote(cached_file, phase: phase) or return
 
-   record
+   attacher.record
  end
  end
 
@@ -79,20 +94,39 @@ class Shrine
  if block
    shrine_class.opts[:backgrounding_delete] = block
  else
-   record_class, record_id = data["record"]
-   record = Object.const_get(record_class).new
-   record.id = record_id
-
-   name, phase = data["attachment"], data["phase"]
-   attacher = record.send("#{name}_attacher")
+   attacher = load(data)
    uploaded_file = attacher.uploaded_file(data["uploaded_file"])
-   context = {name: name.to_sym, record: record, phase: phase.to_sym}
+   context = {name: attacher.name, record: attacher.record, phase: data["phase"].to_sym}
 
    attacher.store.delete(uploaded_file, context)
 
-   record
+   attacher.record
  end
  end
+
+ # Dumps all the information about the attacher in a serializable hash
+ # suitable for passing as an argument to background jobs.
+ def dump(attacher)
+   {
+     "uploaded_file" => attacher.get && attacher.get.to_json,
+     "record" => [attacher.record.class.to_s, attacher.record.id],
+     "attachment" => attacher.name.to_s,
+   }
+ end
+
+ # Loads the data created by #dump, resolving the record and returning
+ # the attacher.
+ def load(data)
+   record_class, record_id = data["record"]
+   record_class = Object.const_get(record_class)
+   record = find_record(record_class, record_id) ||
+     record_class.new.tap { |object| object.id = record_id }
+
+   name = data["attachment"]
+   attacher = record.send("#{name}_attacher")
+
+   attacher
+ end
  end
 
  module AttacherMethods
@@ -100,32 +134,32 @@ class Shrine
  # hash.
  def _promote
    if background_promote = shrine_class.opts[:backgrounding_promote]
-     data = {
-       "uploaded_file" => get.to_json,
-       "record" => [record.class.to_s, record.id],
-       "attachment" => name.to_s,
-     }
-
+     data = self.class.dump(self).merge("phase" => "store")
      instance_exec(data, &background_promote) if promote?(get)
    else
      super
    end
  end
 
+ # Returns early if attachments don't match.
+ def promote(cached_file, *)
+   return if cached_file != get
+   super
+ end
+
  private
 
  # Calls the deleting block (if registered) with a serializable data
  # hash.
  def delete!(uploaded_file, phase:)
    if background_delete = shrine_class.opts[:backgrounding_delete]
-     data = {
+     data = self.class.dump(self).merge(
        "uploaded_file" => uploaded_file.to_json,
-       "record" => [record.class.to_s, record.id],
-       "attachment" => name.to_s,
        "phase" => phase.to_s,
-     }
-
+     )
      instance_exec(data, &background_delete)
+
+     uploaded_file
    else
      super(uploaded_file, phase: phase)
    end
@@ -1,13 +1,13 @@
  class Shrine
    module Plugins
-     # The backup plugin allows you to automatically backup up stored files to
+     # The backup plugin allows you to automatically back up stored files to
      # an additional storage.
      #
      #     storages[:backup_store] = Shrine::Storage::S3.new(options)
      #     plugin :backup, storage: :backup_store
      #
-     # After the cached file is promoted to store, it will be reuploaded from
-     # store to the provided "backup" storage.
+     # After a file is stored, it will be reuploaded from store to the provided
+     # backup storage.
      #
      #     user.update(avatar: file) # uploaded both to :store and :backup_store
      #
@@ -29,6 +29,29 @@ class Shrine
  end
 
  module AttacherMethods
+   # Backs up the stored file after promoting.
+   def promote(*)
+     result = super
+     store_backup!(result) if result
+     result
+   end
+
+   # Deletes the backup file in addition to the stored file.
+   def replace
+     result = super
+     delete_backup!(@old) if result && delete_backup?
+     result
+   end
+
+   # Deletes the backup file in addition to the stored file.
+   def destroy
+     result = super
+     delete_backup!(get) if result && delete_backup?
+     result
+   end
+
+   # Returns a copy of the given uploaded file with storage changed to
+   # backup storage.
    def backup_file(uploaded_file)
      uploaded_file(uploaded_file.to_json) do |file|
        file.data["storage"] = backup_storage.to_s
@@ -37,20 +60,6 @@ class Shrine
 
  private
 
- # Back up the stored file and return it.
- def store!(io, phase:)
-   stored_file = super
-   store_backup!(stored_file)
-   stored_file
- end
-
- # Delete the backed up file unless `:delete` was set to false.
- def delete!(uploaded_file, phase:)
-   deleted_file = super
-   delete_backup!(deleted_file) if backup_delete?
-   deleted_file
- end
-
  # Upload the stored file to the backup storage.
  def store_backup!(stored_file)
    backup_store.upload(stored_file, context.merge(phase: :backup))
@@ -58,7 +67,7 @@ class Shrine
 
  # Deletes the stored file from the backup storage.
  def delete_backup!(deleted_file)
-   backup_store.delete(backup_file(deleted_file), context.merge(phase: :backup))
+   delete!(backup_file(deleted_file), phase: :backup)
  end
 
  def backup_store
@@ -69,7 +78,7 @@ class Shrine
  shrine_class.opts[:backup_storage]
  end
 
- def backup_delete?
+ def delete_backup?
    shrine_class.opts[:backup_delete]
  end
  end
@@ -8,7 +8,7 @@ class Shrine
  #
  #     plugin :data_uri
  #
- # If for example your attachment is called "avatar", this plugin will add
+ # If your attachment is called "avatar", this plugin will add
  # `#avatar_data_uri` and `#avatar_data_uri=` methods to your model.
  #
  #     user.avatar #=> nil
@@ -17,7 +17,14 @@ class Shrine
  #
  #     user.avatar.mime_type #=> "image/png"
  #     user.avatar.size #=> 43423
- #     user.avatar.original_filename #=> nil
+ #
+ # If you want the uploaded file to have an extension, you can generate a
+ # filename based on the content type of the data URI:
+ #
+ #     plugin :data_uri, filename: ->(content_type) do
+ #       extension = MIME::Types[content_type].first.preferred_extension
+ #       "data_uri.#{extension}"
+ #     end
  #
  # If the data URI wasn't correctly parsed, an error message will be added to
  # the attachment column. You can change the default error message:
@@ -38,7 +45,8 @@ class Shrine
  DEFAULT_CONTENT_TYPE = "text/plain"
  DATA_URI_REGEXP = /\Adata:([-\w.+]+\/[-\w.+]+)?(;base64)?,(.*)\z/m
 
- def self.configure(uploader, error_message: DEFAULT_ERROR_MESSAGE)
+ def self.configure(uploader, filename: nil, error_message: DEFAULT_ERROR_MESSAGE)
+   uploader.opts[:data_uri_filename] = filename
    uploader.opts[:data_uri_error_message] = error_message
  end
 
@@ -69,8 +77,10 @@ class Shrine
  if match = uri.match(DATA_URI_REGEXP)
    content_type = match[1] || DEFAULT_CONTENT_TYPE
    content = match[2] ? Base64.decode64(match[3]) : match[3]
+   filename = shrine_class.opts[:data_uri_filename]
+   filename = filename.call(content_type) if filename
 
-   assign DataFile.new(content, content_type: content_type)
+   assign DataFile.new(content, content_type: content_type, filename: filename)
  else
    message = shrine_class.opts[:data_uri_error_message]
    message = message.call(uri) if message.respond_to?(:call)
@@ -94,16 +104,16 @@ class Shrine
  # Returns contents of the file base64-encoded.
  def base64
    content = storage.read(id)
-   base64 = Base64.encode64(content)
-   base64.chomp
+   Base64.encode64(content).chomp
  end
  end
 
  class DataFile < StringIO
-   attr_reader :content_type
+   attr_reader :content_type, :original_filename
 
-   def initialize(content, content_type: nil)
+   def initialize(content, content_type: nil, filename: nil)
      @content_type = content_type
+     @original_filename = filename
      super(content)
    end
  end
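The parsing path through `DATA_URI_REGEXP` above can be exercised in plain Ruby (the regexp and the `DEFAULT_CONTENT_TYPE` fallback are copied from the diff; the URI itself is an example):

```rb
require "base64"

# Copied from the plugin source shown above.
DATA_URI_REGEXP = /\Adata:([-\w.+]+\/[-\w.+]+)?(;base64)?,(.*)\z/m

uri   = "data:image/png;base64,#{Base64.strict_encode64("hello")}"
match = uri.match(DATA_URI_REGEXP)

content_type = match[1] || "text/plain"                   # DEFAULT_CONTENT_TYPE fallback
content      = match[2] ? Base64.decode64(match[3]) : match[3]
```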
@@ -0,0 +1,21 @@
+ class Shrine
+   module Plugins
+     # The delete_promoted plugin deletes files that have been promoted, after
+     # the record is saved. This means that cached files handled by the attacher
+     # will automatically get deleted once they're uploaded to store. This also
+     # applies to any other uploaded file passed to `Attacher#promote`.
+     #
+     #     plugin :delete_promoted
+     module DeletePromoted
+       module AttacherMethods
+         def promote(uploaded_file, *)
+           result = super
+           delete!(uploaded_file, phase: :promote)
+           result
+         end
+       end
+     end
+
+     register_plugin(:delete_promoted, DeletePromoted)
+   end
+ end
@@ -0,0 +1,38 @@
+ class Shrine
+   module Plugins
+     # The delete_raw plugin will automatically delete raw files that have been
+     # uploaded. This is especially useful when doing processing, to ensure that
+     # temporary files have been deleted after upload.
+     #
+     #     plugin :delete_raw
+     #
+     # By default any raw file that was uploaded will be deleted, but you can
+     # limit this only to files uploaded to certain storages:
+     #
+     #     plugin :delete_raw, storages: [:store]
+     module DeleteRaw
+       def self.configure(uploader, storages: nil)
+         uploader.opts[:delete_uploaded_storages] = storages
+       end
+
+       module InstanceMethods
+         private
+
+         # Deletes the file that was uploaded, unless it's an UploadedFile.
+         def copy(io, context)
+           super
+           if io.respond_to?(:delete) && !io.is_a?(UploadedFile)
+             io.delete if delete_uploaded?(io)
+           end
+         end
+
+         def delete_uploaded?(io)
+           opts[:delete_uploaded_storages].nil? ||
+             opts[:delete_uploaded_storages].include?(storage_key)
+         end
+       end
+     end
+
+     register_plugin(:delete_raw, DeleteRaw)
+   end
+ end