condo 1.0.4 → 1.0.6

Files changed (32)
  1. checksums.yaml +7 -0
  2. data/README.textile +133 -133
  3. data/app/assets/javascripts/condo.js +9 -6
  4. data/app/assets/javascripts/condo/amazon.js +403 -406
  5. data/app/assets/javascripts/condo/condo.js +184 -0
  6. data/app/assets/javascripts/condo/config.js +69 -80
  7. data/app/assets/javascripts/condo/google.js +338 -255
  8. data/app/assets/javascripts/condo/md5/hash.worker.emulator.js +23 -23
  9. data/app/assets/javascripts/condo/md5/hash.worker.js +11 -11
  10. data/app/assets/javascripts/condo/md5/hasher.js +119 -100
  11. data/app/assets/javascripts/condo/md5/spark-md5.js +276 -161
  12. data/app/assets/javascripts/condo/rackspace.js +326 -329
  13. data/app/assets/javascripts/condo/{abstract-md5.js.erb → services/abstract-md5.js.erb} +86 -93
  14. data/app/assets/javascripts/condo/{base64.js → services/base64.js} +2 -10
  15. data/app/assets/javascripts/condo/services/broadcaster.js +26 -0
  16. data/app/assets/javascripts/condo/services/uploader.js +302 -0
  17. data/app/assets/javascripts/core/core.js +4 -0
  18. data/app/assets/javascripts/core/services/1-safe-apply.js +17 -0
  19. data/app/assets/javascripts/core/services/2-messaging.js +171 -0
  20. data/lib/condo.rb +269 -269
  21. data/lib/condo/configuration.rb +137 -139
  22. data/lib/condo/errors.rb +8 -8
  23. data/lib/condo/strata/amazon_s3.rb +301 -301
  24. data/lib/condo/strata/google_cloud_storage.rb +315 -314
  25. data/lib/condo/strata/rackspace_cloud_files.rb +245 -223
  26. data/lib/condo/version.rb +1 -1
  27. metadata +21 -44
  28. data/app/assets/javascripts/condo/broadcaster.js +0 -60
  29. data/app/assets/javascripts/condo/controller.js +0 -194
  30. data/app/assets/javascripts/condo/uploader.js +0 -310
  31. data/test/dummy/db/test.sqlite3 +0 -0
  32. data/test/dummy/log/test.log +0 -25
@@ -0,0 +1,7 @@
1
+ ---
2
+ SHA1:
3
+ metadata.gz: 335bdd9a275e2321248b001a74b6e123d50358ad
4
+ data.tar.gz: 2f13c901661a82d195d0bd090bb549d00c6de4e5
5
+ SHA512:
6
+ metadata.gz: a842e8e7fd460e1345205e3e73d5e74efdd28b5962b6a990320b919c4f5b68b58300b5249a0df31a48354ac78b8eeb788c23f95d32353ea0409179e13b1542c7
7
+ data.tar.gz: 8e9e92aa03407f99aecfb0ac2cba68f5ba077d2d19834cc75668ad992d46b8c9436c2e40e26e434b5c146ff31dc943992f09d6319f7f6700f2193fab624af14d
@@ -1,133 +1,133 @@
1
- h1. Condominios aka Condo
2
-
3
- A "Rails plugin":http://guides.rubyonrails.org/plugins.html and "AngularJS application":http://angularjs.org/ that makes direct uploads to multiple cloud storage providers easy.
4
- Only supports "XMLHttpRequest Level 2":http://en.wikipedia.org/wiki/XMLHttpRequest capable browsers and cloud providers that have a "RESTful API":http://en.wikipedia.org/wiki/Representational_state_transfer with "CORS":http://en.wikipedia.org/wiki/Cross-origin_resource_sharing support.
5
-
6
- Why compromise?
7
-
8
- Get started now: @gem install condo@ or check out the "example application":https://github.com/cotag/condo_example
9
- Also see our "GitHub Pages site":http://cotag.github.com/Condominios/
10
-
11
-
12
- h2. License
13
-
14
- GNU Lesser General Public License v3 (LGPL version 3)
15
-
16
-
17
- h2. Concept
18
-
19
- Condominios was created to provide direct-to-cloud uploads using standards-based browser technology. However, it is not limited to that use case.
20
- The API is RESTful, providing an abstraction layer and signed URLs that can be utilised in native (mobile) applications.
21
-
22
- The main advantages are:
23
- * Off-loads processing to client machines
24
- * Better guarantees against upload corruption
25
- ** files are hashed on the client side, rather than at an intermediary (where they probably wouldn't be hashed at all)
26
- * Upload results are guaranteed if the cloud provider supports atomic operations
27
- ** the user is always aware of any failures in the process
28
- * Detailed progress and control over the upload
29
-
30
- It also has numerous advantages over traditional form-data POST uploads:
31
- * Progress bars
32
- * Resumability when uploading large files
33
-
34
-
35
- Support for all major browsers
36
- * Tested in Firefox 4, Safari 6, Chrome's latest stable and IE10 (on Windows 7)
37
-
38
-
39
- h2. Usage
40
-
41
- h3. Terms
42
-
43
- * Residence == the current storage provider
44
- * Resident == the current user
45
-
46
-
47
- h3. Quick Start
48
-
49
- See the "example application":https://github.com/cotag/condo_example which implements the steps below on an otherwise blank rails app.
50
-
51
- # Add the following to your Rails application's Gemfile:
52
- #* @gem 'condo'@
53
- #* Add a datastore
54
- #** @gem 'condo_active_record'@ (for traditional databases)
55
- #** "condo_mongoid":https://github.com/axomi/condo_mongoid by "axomi":https://github.com/axomi for "MongoDB":http://mongodb.org/
56
- #* @gem 'condo_interface'@ (optional - an example interface)
57
- # Run migrations if using Active Record
58
- #* @rake railties:install:migrations FROM=condo_active_record@
59
- #* @rake db:migrate@
60
- # Create an initialiser for any default residencies (details further down)
61
- # Create controllers that will be used as Condo endpoints
62
- #* Typically @rails g controller Uploads@
63
- #* Add the resource to your routes
64
- # At the top of the new controller add the following line to the class: @include Condo@
65
- #* This creates the following public methods at run time: new, create, edit, update and destroy, implementing the API
66
- #* The following protected methods are also generated: set_residence, current_residence, current_resident, current_upload
67
- # You are encouraged to use standard filters to authenticate users and set the residence (if it is dynamic), and to implement index/show if desired
68
- # You must implement the following callbacks:
69
- #* resident_id - this should provide a unique identifier for the current user, used for authorisation
70
- #* upload_complete - provides the upload information for storage in the greater application logic. Return true if successful.
71
- #* destroy_upload - provides the upload information so that a scheduled task can be created to clean up the upload. Return true if successfully scheduled.
72
- #** This should be done in the background using something like "Fog":http://fog.io/ - you can't trust the client
73
-
74
-
75
- If you are using "Condo Interface":https://github.com/cotag/condo_interface then you may want to do the following:
76
- # Create an index for your controller @def index; end@
77
- # Create an index.html.erb in your view with:
78
- #* @<div data-ng-app="YourApp"><%= render "condo_interface/upload" %></div>@
79
- # Make sure your AngularJS app includes: @angular.module('YourApp', ['CondoUploader', 'CondoInterface']);@
80
-
81
- Alternatively, you could load an AngularJS template linking to @<%= asset_path('templates/_upload.html') %>@
82
-
83
-
84
- h3. Defining Static Residencies
85
-
86
- If you are creating an application that only communicates with one or two storage providers or accounts then this is the simplest way to get started.
87
- In an initialiser (<-- I'm Australian) do the following:
88
-
89
- <pre><code class="ruby">
90
- Condo::Configuration.add_residence(:AmazonS3, {
91
- :access_id => ENV['S3_KEY'],
92
- :secret_key => ENV['S3_SECRET']
93
- # :location => 'us-west-1' # or 'ap-southeast-1' etc (see http://docs.amazonwebservices.com/general/latest/gr/rande.html#s3_region)
94
- # Defaults to 'us-east-1' or US Standard - not required for Google
95
- # :namespace => :admin_resident # Allows you to assign different defaults to different controllers
96
- # Controller must have the following line 'set_namespace :admin_resident'
97
- })
98
-
99
- </code></pre>
100
-
101
- The first residence to be defined in a namespace will be the default. To change the residence for the current request, use @set_residence(:name, :location)@ (the location is optional).
102
- Currently available residencies:
103
- * :AmazonS3
104
- * :GoogleCloudStorage
105
- * :RackspaceCloudFiles
106
-
107
-
108
- You can also define a dynamic residence on each request (for example, when clients provide you with access information for their own storage provider):
109
-
110
- <pre><code class="ruby">
111
- set_residence(:AmazonS3, {
112
- :access_id => user.s3_key,
113
- :secret_key => user.s3_secret,
114
- :dynamic => true # Otherwise the same as add_residence
115
- });
116
-
117
-
118
- </code></pre>
119
-
120
-
121
- h3. Callbacks
122
-
123
- These are pretty well defined "here":https://github.com/cotag/condo_example/blob/master/app/controllers/uploads_controller.rb
124
-
125
-
126
- h2. TODO::
127
-
128
- # Write tests... So many tests
129
- # Create a wiki describing things in more detail
130
- # Implement API for more residencies
131
- # Sign other useful requests (bucket listings with search etc)
132
- #* For Dropbox or Megaupload style applications
133
-
1
+ h1. Condominios aka Condo
2
+
3
+ A "Rails plugin":http://guides.rubyonrails.org/plugins.html and "AngularJS application":http://angularjs.org/ that makes direct uploads to multiple cloud storage providers easy.
4
+ Only supports "XMLHttpRequest Level 2":http://en.wikipedia.org/wiki/XMLHttpRequest capable browsers and cloud providers that have a "RESTful API":http://en.wikipedia.org/wiki/Representational_state_transfer with "CORS":http://en.wikipedia.org/wiki/Cross-origin_resource_sharing support.
5
+
6
+ Why compromise?
7
+
8
+ Get started now: @gem install condo@ or check out the "example application":https://github.com/cotag/condo_example
9
+ Also see our "GitHub Pages site":http://cotag.github.com/Condominios/
10
+
11
+
12
+ h2. License
13
+
14
+ GNU Lesser General Public License v3 (LGPL version 3)
15
+
16
+
17
+ h2. Concept
18
+
19
+ Condominios was created to provide direct-to-cloud uploads using standards-based browser technology. However, it is not limited to that use case.
20
+ The API is RESTful, providing an abstraction layer and signed URLs that can be utilised in native (mobile) applications.
21
+
22
+ The main advantages are:
23
+ * Off-loads processing to client machines
24
+ * Better guarantees against upload corruption
25
+ ** files are hashed on the client side, rather than at an intermediary (where they probably wouldn't be hashed at all)
26
+ * Upload results are guaranteed if the cloud provider supports atomic operations
27
+ ** the user is always aware of any failures in the process
28
+ * Detailed progress and control over the upload
29
+
30
+ It also has numerous advantages over traditional form-data POST uploads:
31
+ * Progress bars
32
+ * Resumability when uploading large files
33
+
34
+
35
+ Support for all major browsers
36
+ * Tested in the latest stable Firefox and Chrome, Safari 6, Opera 12 and IE10
37
+
38
+
39
+ h2. Usage
40
+
41
+ h3. Terms
42
+
43
+ * Residence == the current storage provider
44
+ * Resident == the current user
45
+
46
+
47
+ h3. Quick Start
48
+
49
+ See the "example application":https://github.com/cotag/condo_example which implements the steps below on an otherwise blank rails app.
50
+
51
+ # Add the following to your Rails application's Gemfile:
52
+ #* @gem 'condo'@
53
+ #* Add a datastore
54
+ #** @gem 'condo_active_record'@ (for traditional databases)
55
+ #** "condo_mongoid":https://github.com/axomi/condo_mongoid by "axomi":https://github.com/axomi for "MongoDB":http://mongodb.org/
56
+ #* @gem 'condo_interface'@ (optional - an example interface)
57
+ # Run migrations if using Active Record
58
+ #* @rake railties:install:migrations FROM=condo_active_record@
59
+ #* @rake db:migrate@
60
+ # Create an initialiser for any default residencies (details further down)
61
+ # Create controllers that will be used as Condo endpoints
62
+ #* Typically @rails g controller Uploads@
63
+ #* Add the resource to your routes
64
+ # At the top of the new controller add the following line to the class: @include Condo@
65
+ #* This creates the following public methods at run time: new, create, edit, update and destroy, implementing the API
66
+ #* The following protected methods are also generated: set_residence, current_residence, current_resident, current_upload
67
+ # You are encouraged to use standard filters to authenticate users and set the residence (if it is dynamic), and to implement index/show if desired
68
+ # You must implement the following callbacks (see the sketch after this list):
69
+ #* resident_id - this should provide a unique identifier for the current user, used for authorisation
70
+ #* upload_complete - provides the upload information for storage in the greater application logic. Return true if successful.
71
+ #* destroy_upload - provides the upload information so that a scheduled task can be created to clean up the upload. Return true if successfully scheduled.
72
+ #** This should be done in the background using something like "Fog":http://fog.io/ - you can't trust the client
73
+
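+ A minimal controller sketch tying these steps together (assuming a @current_user@ helper from your own authentication layer; the callback signatures shown are an assumption based on the descriptions above):
+
+ <pre><code class="ruby">
+ # config/routes.rb would expose the endpoint, e.g. resources :uploads
+ class UploadsController < ApplicationController
+     include Condo  # generates new, create, edit, update and destroy
+
+     before_filter :authenticate_user!  # assumption: your own auth filter
+
+     protected
+
+     # Unique identifier for the current user - used for authorisation
+     def resident_id
+         current_user.id
+     end
+
+     # Persist the upload information in your application logic
+     def upload_complete(upload)  # assumed signature
+         # e.g. create an Asset record from the upload information
+         true  # return true if successful
+     end
+
+     # Schedule a background task (e.g. via Fog) to clean up the upload
+     def destroy_upload(upload)  # assumed signature
+         true  # return true if the clean-up was successfully scheduled
+     end
+ end
+ </code></pre>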
74
+
75
+ If you are using "Condo Interface":https://github.com/cotag/condo_interface then you may want to do the following:
76
+ # Create an index for your controller @def index; end@
77
+ # Create an index.html.erb in your view with:
78
+ #* @<div data-ng-app="YourApp"><%= render "condo_interface/upload" %></div>@
79
+ # Make sure your AngularJS app includes: @angular.module('YourApp', ['Condo', 'CondoInterface']);@
80
+
81
+ Alternatively, you could load an AngularJS template linking to @<%= asset_path('templates/_upload.html') %>@
82
+
83
+
84
+ h3. Defining Static Residencies
85
+
86
+ If you are creating an application that only communicates with one or two storage providers or accounts then this is the simplest way to get started.
87
+ In an initialiser, do the following:
88
+
89
+ <pre><code class="ruby">
90
+ Condo::Configuration.add_residence(:AmazonS3, {
91
+ :access_id => ENV['S3_KEY'],
92
+ :secret_key => ENV['S3_SECRET']
93
+ # :location => 'us-west-1' # or 'ap-southeast-1' etc (see http://docs.amazonwebservices.com/general/latest/gr/rande.html#s3_region)
94
+ # Defaults to 'us-east-1' or US Standard - not required for Google
95
+ # :namespace => :admin_resident # Allows you to assign different defaults to different controllers
96
+ # Controller must have the following line 'set_namespace :admin_resident'
97
+ })
98
+
99
+ </code></pre>
100
+
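+ If you use the @:namespace@ option above, the matching controller opts in with @set_namespace@ (a minimal sketch, assuming the declaration is made at the class level as the comment suggests):
+
+ <pre><code class="ruby">
+ class AdminUploadsController < ApplicationController
+     include Condo
+     set_namespace :admin_resident  # picks up the :admin_resident defaults
+ end
+ </code></pre>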
101
+ The first residence to be defined in a namespace will be the default. To change the residence for the current request, use @set_residence(:name, :location)@ (the location is optional; a usage sketch follows the list below).
102
+ Currently available residencies:
103
+ * :AmazonS3
104
+ * :GoogleCloudStorage
105
+ * :RackspaceCloudFiles
106
+
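+ For example (a sketch following the @set_residence(:name, :location)@ signature described above; the residence must already have been added in the initialiser):
+
+ <pre><code class="ruby">
+ set_residence(:GoogleCloudStorage)       # switch residence, keep the default location
+ set_residence(:AmazonS3, 'us-west-1')    # switch residence and location
+ </code></pre>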
107
+
108
+ You can also define a dynamic residence on each request (for example, when clients provide you with access information for their own storage provider):
109
+
110
+ <pre><code class="ruby">
111
+ set_residence(:AmazonS3, {
112
+ :access_id => user.s3_key,
113
+ :secret_key => user.s3_secret,
114
+ :dynamic => true # Otherwise the same as add_residence
115
+ });
116
+
117
+
118
+ </code></pre>
119
+
120
+
121
+ h3. Callbacks
122
+
123
+ These are pretty well defined "here":https://github.com/cotag/condo_example/blob/master/app/controllers/uploads_controller.rb
124
+
125
+
126
+ h2. TODO::
127
+
128
+ # Write tests... So many tests
129
+ # Create a wiki describing things in more detail
130
+ # Implement API for more residencies
131
+ # Sign other useful requests (bucket listings with search etc)
132
+ #* For Dropbox or Megaupload style applications
133
+
@@ -1,6 +1,9 @@
1
- //= require condo/abstract-md5
2
- //= require condo/base64
3
- //= require condo/broadcaster
4
- //= require condo/uploader
5
- //= require condo/config
6
- //= require condo/controller
1
+ //= require core/core
2
+ //= require core/services/1-safe-apply
3
+ //= require core/services/2-messaging
4
+ //= require condo/condo
5
+ //= require condo/services/base64
6
+ //= require condo/services/broadcaster
7
+ //= require condo/services/abstract-md5
8
+ //= require condo/services/uploader
9
+ //= require condo/config
@@ -1,406 +1,403 @@
1
- /**
2
- * CoTag Condo Amazon S3 Strategy
3
- * Direct to cloud resumable uploads for Amazon S3
4
- *
5
- * Copyright (c) 2012 CoTag Media.
6
- *
7
- * @author Stephen von Takach <steve@cotag.me>
8
- * @copyright 2012 cotag.me
9
- *
10
- *
11
- * References:
12
- * * https://github.com/umdjs/umd
13
- * * https://github.com/addyosmani/jquery-plugin-patterns
14
- * *
15
- *
16
- **/
17
-
18
- (function (factory) {
19
- if (typeof define === 'function' && define.amd) {
20
- // AMD
21
- define(['jquery', 'base64', 'condo-uploader'], factory);
22
- } else {
23
- // Browser globals
24
- factory(jQuery, window.base64);
25
- }
26
- }(function ($, base64) {
27
- 'use strict';
28
-
29
- angular.module('CondoAmazonProvider', ['CondoUploader', 'CondoAbstractMd5']).run(['$q', 'Condo.Registrar', 'Condo.Md5', function($q, registrar, md5) {
30
- var PENDING = 0,
31
- STARTED = 1,
32
- PAUSED = 2,
33
- UPLOADING = 3,
34
- COMPLETED = 4,
35
- ABORTED = 5,
36
-
37
-
38
-
39
- hexToBin = function(input) {
40
- var result = "";
41
-
42
- if ((input.length % 2) > 0) {
43
- input = '0' + input;
44
- }
45
-
46
- for (var i = 0, length = input.length; i < length; i += 2) {
47
- result += String.fromCharCode(parseInt(input.slice(i, i + 2), 16));
48
- }
49
-
50
- return result;
51
- },
52
-
53
-
54
- Amazon = function (api, file) {
55
- var self = this,
56
- strategy = null,
57
- part_size = 5242880, // Multi-part uploads should be bigger than this
58
- pausing = false,
59
- defaultError = function(reason) {
60
- self.error = !pausing;
61
- pausing = false;
62
- self.pause(reason);
63
- },
64
-
65
- restart = function() {
66
- strategy = null;
67
- },
68
-
69
-
70
- completeUpload = function() {
71
- api.update().then(function(data) {
72
- self.state = COMPLETED;
73
- }, defaultError);
74
- },
75
-
76
-
77
- //
78
- // We need to sign our uploads so amazon can confirm they are valid for us
79
- // Part numbers can be any number from 1 to 10,000 - inclusive
80
- //
81
- build_request = function(part_number) {
82
- var current_part;
83
-
84
- if (file.size > part_size) { // If the file is bigger than 5MB we expect a chunked upload
85
- var endbyte = part_number * part_size;
86
- if (endbyte > file.size)
87
- endbyte = file.size;
88
- current_part = file.slice((part_number - 1) * part_size, endbyte);
89
- } else {
90
- current_part = file;
91
- }
92
-
93
- return md5.hash(current_part).then(function(val) {
94
- return {
95
- data: current_part,
96
- data_id: val,
97
- part_number: part_number
98
- }
99
- }, function(reason){
100
- return $q.reject(reason);
101
- });
102
- },
103
-
104
- //
105
- // Direct file upload strategy
106
- //
107
- AmazonDirect = function(data) {
108
- //
109
- // resume
110
- // abort
111
- // pause
112
- //
113
- var $this = this,
114
- finalising = false;
115
-
116
- //
117
- // Update the parent
118
- //
119
- self.state = UPLOADING;
120
-
121
-
122
- //
123
- // This will only be called when the upload has finished and we need to inform the application
124
- //
125
- this.resume = function() {
126
- self.state = UPLOADING;
127
- completeUpload();
128
- }
129
-
130
- this.pause = function() {
131
- api.abort();
132
-
133
- if(!finalising) {
134
- restart(); // Should occur before events triggered
135
- self.progress = 0;
136
- }
137
- };
138
-
139
-
140
- //
141
- // AJAX for upload goes here
142
- //
143
- data['data'] = file;
144
- api.process_request(data, function(progress) {
145
- self.progress = progress;
146
- }).then(function(result) {
147
- finalising = true;
148
- $this.resume(); // Resume informs the application that the upload is complete
149
- }, function(reason) {
150
- self.progress = 0;
151
- defaultError(reason);
152
- });
153
- }, // END DIRECT
154
-
155
-
156
- //
157
- // Chunked upload strategy--------------------------------------------------
158
- //
159
- AmazonChunked = function (data, first_chunk) {
160
- //
161
- // resume
162
- // abort
163
- // pause
164
- //
165
- var part_ids = [],
166
- last_part = 0,
167
-
168
-
169
- generatePartManifest = function() {
170
- var list = '<CompleteMultipartUpload>';
171
-
172
- for (var i = 0, length = part_ids.length; i < length; i += 1) {
173
- list += '<Part><PartNumber>' + (i + 1) + '</PartNumber><ETag>"' + part_ids[i] + '"</ETag></Part>';
174
- }
175
- list += '</CompleteMultipartUpload>';
176
- return list;
177
- },
178
-
179
- //
180
- // Get the next part signature
181
- //
182
- next_part = function(part_number) {
183
- //
184
- // Check if we are past the end of the file
185
- //
186
- if ((part_number - 1) * part_size < file.size) {
187
- build_request(part_number).then(function(result) {
188
- if (self.state != UPLOADING)
189
- return; // upload was paused or aborted as we were reading the file
190
-
191
- api.edit(part_number, base64.encode(hexToBin(result.data_id))).
192
- then(function(data) {
193
- set_part(data, result);
194
- }, defaultError);
195
-
196
- }, defaultError); // END BUILD_REQUEST
197
-
198
- } else {
199
- //
200
- // We're after the final commit
201
- //
202
- api.edit('finish').
203
- then(function(request) {
204
- request['data'] = generatePartManifest();
205
- api.process_request(request).then(completeUpload, defaultError);
206
- }, defaultError);
207
- }
208
- },
209
-
210
-
211
- //
212
- // Send a part to amazon
213
- //
214
- set_part = function(request, part_info) {
215
- request['data'] = part_info.data;
216
- api.process_request(request, function(progress) {
217
- self.progress = (part_info.part_number - 1) * part_size + progress;
218
- }).then(function(result) {
219
- part_ids.push(part_info.data_id); // We need to record the list of part IDs for completion
220
- last_part = part_info.part_number;
221
- next_part(last_part + 1);
222
- }, function(reason) {
223
- self.progress = (part_info.part_number - 1) * part_size;
224
- defaultError(reason);
225
- });
226
- };
227
-
228
-
229
- self.state = UPLOADING;
230
-
231
- this.resume = function() {
232
- self.state = UPLOADING;
233
- next_part(last_part + 1);
234
- };
235
-
236
- this.pause = function() {
237
- api.abort();
238
- };
239
-
240
-
241
- //
242
- // We need to check if we are grabbing a parts list or creating an upload
243
- //
244
- api.process_request(data).then(function(response) {
245
- if(data.type == 'parts') { // was the original request for a list of parts
246
- //
247
- // NextPartNumberMarker == the final part in the current request
248
- // TODO:: if IsTruncated is set then we need to keep getting parts
249
- //
250
- response = $(response);
251
- var next = parseInt(response.find('NextPartNumberMarker').eq(0).text()),
252
- etags = response.find('ETag');
253
-
254
- etags.each(function(index) {
255
- part_ids.push($(this).text().replace(/"{1}/gi,'')); // Removes " from strings
256
- });
257
-
258
- last_part = next; // So we can resume
259
- next_part(next + 1); // As NextPartNumberMarker is just the last part uploaded
260
- } else {
261
- //
262
- // We've created the upload - we need to update the application with the upload id.
263
- // This will also return the request for uploading the first part which we've already prepared
264
- //
265
- api.update({
266
- resumable_id: $(response).find('UploadId').eq(0).text(),
267
- file_id: base64.encode(hexToBin(first_chunk.data_id)),
268
- part: 1
269
- }).then(function(data) {
270
- set_part(data, first_chunk); // Parts start at 1
271
- }, function(reason) {
272
- defaultError(reason);
273
- restart(); // Easier to start from the beginning
274
- });
275
- }
276
- }, function(reason) {
277
- defaultError(reason);
278
- restart(); // We need to get a new request signature
279
- });
280
- }; // END CHUNKED
281
-
282
-
283
- //
284
- // Variables required for all drivers
285
- //
286
- this.state = PENDING;
287
- this.progress = 0;
288
- this.message = 'pending';
289
- this.name = file.name;
290
- this.size = file.size;
291
- this.error = false;
292
-
293
-
294
- //
295
- // File path is optional (amazon supports paths as part of the key name)
296
- // http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/ListingKeysHierarchy.html
297
- //
298
- if(!!file.dir_path)
299
- this.path = file.dir_path;
300
-
301
-
302
- //
303
- // Support file slicing
304
- //
305
- if (typeof(file.slice) != 'function')
306
- file.slice = file.webkitSlice || file.mozSlice;
307
-
308
-
309
- this.start = function(){
310
- if(strategy == null) { // We need to create the upload
311
- self.error = false;
312
- pausing = false;
313
-
314
- //
315
- // Update part size if required
316
- //
317
- if((part_size * 9999) < file.size) {
318
- part_size = file.size / 9999;
319
- if(part_size > (5 * 1024 * 1024 * 1024)) { // 5GB limit on part sizes
320
- this.abort('file too big');
321
- return;
322
- }
323
- }
324
-
325
- this.message = null;
326
- this.state = STARTED;
327
- strategy = {}; // This function shouldn't be called twice so we need a state (TODO:: fix this)
328
-
329
- build_request(1).then(function(result) {
330
- if (self.state != STARTED)
331
- return; // upload was paused or aborted as we were reading the file
332
-
333
- api.create({file_id: base64.encode(hexToBin(result.data_id))}).
334
- then(function(data) {
335
- if(data.type == 'direct_upload') {
336
- strategy = new AmazonDirect(data);
337
- } else {
338
- strategy = new AmazonChunked(data, result);
339
- }
340
- }, defaultError);
341
-
342
- }, defaultError); // END BUILD_REQUEST
343
-
344
-
345
- } else if (this.state == PAUSED) { // We need to resume the upload if it is paused
346
- this.message = null;
347
- self.error = false;
348
- pausing = false;
349
- strategy.resume();
350
- }
351
- };
352
-
353
- this.pause = function(reason) {
354
- if(strategy != null && this.state == UPLOADING) { // Check if the upload is uploading
355
- this.state = PAUSED;
356
- pausing = true;
357
- strategy.pause();
358
- } else if (this.state <= STARTED) {
359
- this.state = PAUSED;
360
- restart();
361
- }
362
- if(this.state == PAUSED)
363
- this.message = reason;
364
- };
365
-
366
- this.abort = function(reason) {
367
- if(strategy != null && this.state < COMPLETED) { // Check the upload has not finished
368
- var old_state = this.state;
369
-
370
- this.state = ABORTED;
371
- api.abort();
372
-
373
-
374
- //
375
- // As we may not have successfully deleted the upload
376
- // or we aborted before we received a response from create
377
- //
378
- restart(); // nullifies strategy
379
-
380
-
381
- //
382
- // if we have an upload_id then we should destroy the upload
383
- // we won't worry if this fails as it should be automatically cleaned up by the back end
384
- //
385
- if(old_state > STARTED) {
386
- api.destroy();
387
- }
388
-
389
- this.message = reason;
390
- }
391
- };
392
- }; // END AMAZON
393
-
394
-
395
- //
396
- // Register the residence with the API
397
- // Dependency injection succeeded
398
- //
399
- registrar.register('AmazonS3', {
400
- new_upload: function(api, file) {
401
- return new Amazon(api, file);
402
- }
403
- });
404
- }]);
405
-
406
- }));
1
+ /**
2
+ * CoTag Condo Amazon S3 Strategy
3
+ * Direct to cloud resumable uploads for Amazon S3
4
+ *
5
+ * Copyright (c) 2012 CoTag Media.
6
+ *
7
+ * @author Stephen von Takach <steve@cotag.me>
8
+ * @copyright 2012 cotag.me
9
+ *
10
+ *
11
+ * References:
12
+ * *
13
+ *
14
+ **/
15
+
16
+
17
+ (function(angular, base64, undefined) {
18
+ 'use strict';
19
+
20
+ angular.module('Condo').
21
+
22
+ factory('Condo.Amazon', ['$q', 'Condo.Md5', function($q, md5) {
23
+ var PENDING = 0,
24
+ STARTED = 1,
25
+ PAUSED = 2,
26
+ UPLOADING = 3,
27
+ COMPLETED = 4,
28
+ ABORTED = 5,
29
+
30
+
31
+
32
+ hexToBin = function(input) {
33
+ var result = "";
34
+
35
+ if ((input.length % 2) > 0) {
36
+ input = '0' + input;
37
+ }
38
+
39
+ for (var i = 0, length = input.length; i < length; i += 2) {
40
+ result += String.fromCharCode(parseInt(input.slice(i, i + 2), 16));
41
+ }
42
+
43
+ return result;
44
+ },
45
+
46
+
47
+ Amazon = function (api, file) {
48
+ var self = this,
49
+ strategy = null,
50
+ part_size = 5242880, // Multi-part uploads should be bigger than this
51
+ pausing = false,
52
+ defaultError = function(reason) {
53
+ self.error = !pausing;
54
+ pausing = false;
55
+ self.pause(reason);
56
+ },
57
+
58
+ restart = function() {
59
+ strategy = null;
60
+ },
61
+
62
+
63
+ completeUpload = function() {
64
+ api.update().then(function(data) {
65
+ self.progress = self.size; // Update to 100%
66
+ self.state = COMPLETED;
67
+ }, defaultError);
68
+ },
69
+
70
+
71
+ //
72
+ // We need to sign our uploads so amazon can confirm they are valid for us
73
+ // Part numbers can be any number from 1 to 10,000 - inclusive
74
+ //
75
+ build_request = function(part_number) {
76
+ var current_part;
77
+
78
+ if (file.size > part_size) { // If the file is bigger than 5MB we expect a chunked upload
79
+ var endbyte = part_number * part_size;
80
+ if (endbyte > file.size)
81
+ endbyte = file.size;
82
+ current_part = file.slice((part_number - 1) * part_size, endbyte);
83
+ } else {
84
+ current_part = file;
85
+ }
86
+
87
+ return md5.hash(current_part).then(function(val) {
88
+ return {
89
+ data: current_part,
90
+ data_id: val,
91
+ part_number: part_number
92
+ }
93
+ }, function(reason){
94
+ return $q.reject(reason);
95
+ });
96
+ },
97
+
98
+ //
99
+ // Direct file upload strategy
100
+ //
101
+ AmazonDirect = function(data) {
102
+ //
103
+ // resume
104
+ // abort
105
+ // pause
106
+ //
107
+ var $this = this,
108
+ finalising = false;
109
+
110
+ //
111
+ // Update the parent
112
+ //
113
+ self.state = UPLOADING;
114
+
115
+
116
+ //
117
+ // This will only be called when the upload has finished and we need to inform the application
118
+ //
119
+ this.resume = function() {
120
+ self.state = UPLOADING;
121
+ completeUpload();
122
+ }
123
+
124
+ this.pause = function() {
125
+ api.abort();
126
+
127
+ if(!finalising) {
128
+ restart(); // Should occur before events triggered
129
+ self.progress = 0;
130
+ }
131
+ };
132
+
133
+
134
+ //
135
+ // AJAX for upload goes here
136
+ //
137
+ data['data'] = file;
138
+ api.process_request(data, function(progress) {
139
+ self.progress = progress;
140
+ }).then(function(result) {
141
+ finalising = true;
142
+ $this.resume(); // Resume informs the application that the upload is complete
143
+ }, function(reason) {
144
+ self.progress = 0;
145
+ defaultError(reason);
146
+ });
147
+ }, // END DIRECT
148
+
149
+
150
+ //
151
+ // Chunked upload strategy--------------------------------------------------
152
+ //
153
+ AmazonChunked = function (data, first_chunk) {
154
+ //
155
+ // resume
156
+ // abort
157
+ // pause
158
+ //
159
+ var part_ids = [],
160
+ last_part = 0,
161
+
162
+
163
+ generatePartManifest = function() {
164
+ var list = '<CompleteMultipartUpload>';
165
+
166
+ for (var i = 0, length = part_ids.length; i < length; i += 1) {
167
+ list += '<Part><PartNumber>' + (i + 1) + '</PartNumber><ETag>"' + part_ids[i] + '"</ETag></Part>';
168
+ }
169
+ list += '</CompleteMultipartUpload>';
170
+ return list;
171
+ },
172
+
173
+ //
174
+ // Get the next part signature
175
+ //
176
+ next_part = function(part_number) {
177
+ //
178
+ // Check if we are past the end of the file
179
+ //
180
+ if ((part_number - 1) * part_size < file.size) {
181
+
182
+ self.progress = (part_number - 1) * part_size; // Update the progress
183
+
184
+ build_request(part_number).then(function(result) {
185
+ if (self.state != UPLOADING)
186
+ return; // upload was paused or aborted as we were reading the file
187
+
188
+ api.edit(part_number, base64.encode(hexToBin(result.data_id))).
189
+ then(function(data) {
190
+ set_part(data, result);
191
+ }, defaultError);
192
+
193
+ }, defaultError); // END BUILD_REQUEST
194
+
195
+ } else {
196
+ //
197
+ // We're after the final commit
198
+ //
199
+ api.edit('finish').
200
+ then(function(request) {
201
+ request['data'] = generatePartManifest();
202
+ api.process_request(request).then(completeUpload, defaultError);
203
+ }, defaultError);
204
+ }
205
+ },
206
+
207
+
208
+ //
209
+ // Send a part to amazon
210
+ //
211
+ set_part = function(request, part_info) {
212
+ request['data'] = part_info.data;
213
+ api.process_request(request, function(progress) {
214
+ self.progress = (part_info.part_number - 1) * part_size + progress;
215
+ }).then(function(result) {
216
+ part_ids.push(part_info.data_id); // We need to record the list of part IDs for completion
217
+ last_part = part_info.part_number;
218
+ next_part(last_part + 1);
219
+ }, function(reason) {
220
+ self.progress = (part_info.part_number - 1) * part_size;
221
+ defaultError(reason);
222
+ });
223
+ };
224
+
225
+
226
+ self.state = UPLOADING;
227
+
228
+ this.resume = function() {
229
+ self.state = UPLOADING;
230
+ next_part(last_part + 1);
231
+ };
232
+
233
+ this.pause = function() {
234
+ api.abort();
235
+ };
236
+
237
+
238
+ //
239
+ // We need to check if we are grabbing a parts list or creating an upload
240
+ //
241
+ api.process_request(data).then(function(response) {
242
+ if(data.type == 'parts') { // was the original request for a list of parts
243
+ //
244
+ // NextPartNumberMarker == the final part in the current request
245
+ // TODO:: if IsTruncated is set then we need to keep getting parts
246
+ //
247
+ response = $(response[0]);
248
+ var next = parseInt(response.find('NextPartNumberMarker').eq(0).text()),
249
+ etags = response.find('ETag');
250
+
251
+ etags.each(function(index) {
252
+ part_ids.push($(this).text().replace(/"{1}/gi,'')); // Removes " from strings
253
+ });
254
+
255
+ last_part = next; // So we can resume
256
+ next_part(next + 1); // As NextPartNumberMarker is just the last part uploaded
257
+ } else {
258
+ //
259
+ // We've created the upload - we need to update the application with the upload id.
260
+ // This will also return the request for uploading the first part which we've already prepared
261
+ //
262
+ api.update({
263
+ resumable_id: $(response[0]).find('UploadId').eq(0).text(),
264
+ file_id: base64.encode(hexToBin(first_chunk.data_id)),
265
+ part: 1
266
+ }).then(function(data) {
267
+ set_part(data, first_chunk); // Parts start at 1
268
+ }, function(reason) {
269
+ defaultError(reason);
270
+ restart(); // Easier to start from the beginning
271
+ });
272
+ }
273
+ }, function(reason) {
274
+ defaultError(reason);
275
+ restart(); // We need to get a new request signature
276
+ });
277
+ }; // END CHUNKED
278
+
279
+
280
+ //
281
+ // Variables required for all drivers
282
+ //
283
+ this.state = PENDING;
284
+ this.progress = 0;
285
+ this.message = 'pending';
286
+ this.name = file.name;
287
+ this.size = file.size;
288
+ this.error = false;
289
+
290
+
291
+ //
292
+ // File path is optional (amazon supports paths as part of the key name)
293
+ // http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/ListingKeysHierarchy.html
294
+ //
295
+ if(!!file.dir_path)
296
+ this.path = file.dir_path;
297
+
298
+
299
+ //
300
+ // Support file slicing
301
+ //
302
+ if (typeof(file.slice) != 'function')
303
+ file.slice = file.webkitSlice || file.mozSlice;
304
+
305
+
306
+ this.start = function(){
307
+ if(strategy == null) { // We need to create the upload
308
+ self.error = false;
309
+ pausing = false;
310
+
311
+ //
312
+ // Update part size if required
313
+ //
314
+ if((part_size * 9999) < file.size) {
315
+ part_size = file.size / 9999;
316
+ if(part_size > (5 * 1024 * 1024 * 1024)) { // 5GB limit on part sizes
317
+ this.abort('file too big');
318
+ return;
319
+ }
320
+ }
321
+
322
+ this.message = null;
323
+ this.state = STARTED;
324
+ strategy = {}; // This function shouldn't be called twice so we need a state (TODO:: fix this)
325
+
326
+ build_request(1).then(function(result) {
327
+ if (self.state != STARTED)
328
+ return; // upload was paused or aborted as we were reading the file
329
+
330
+ api.create({file_id: base64.encode(hexToBin(result.data_id))}).
331
+ then(function(data) {
332
+ if(data.type == 'direct_upload') {
333
+ strategy = new AmazonDirect(data);
334
+ } else {
335
+ strategy = new AmazonChunked(data, result);
336
+ }
337
+ }, defaultError);
338
+
339
+ }, defaultError); // END BUILD_REQUEST
340
+
341
+
342
+ } else if (this.state == PAUSED) { // We need to resume the upload if it is paused
343
+ this.message = null;
344
+ self.error = false;
345
+ pausing = false;
346
+ strategy.resume();
347
+ }
348
+ };
349
+
350
+ this.pause = function(reason) {
351
+ if(strategy != null && this.state == UPLOADING) { // Check if the upload is uploading
352
+ this.state = PAUSED;
353
+ pausing = true;
354
+ strategy.pause();
355
+ } else if (this.state <= STARTED) {
356
+ this.state = PAUSED;
357
+ restart();
358
+ }
359
+ if(this.state == PAUSED)
360
+ this.message = reason;
361
+ };
362
+
363
+ this.abort = function(reason) {
364
+ if(strategy != null && this.state < COMPLETED) { // Check the upload has not finished
365
+ var old_state = this.state;
366
+
367
+ this.state = ABORTED;
368
+ api.abort();
369
+
370
+
371
+ //
372
+ // As we may not have successfully deleted the upload
373
+ // or we aborted before we received a response from create
374
+ //
375
+ restart(); // nullifies strategy
376
+
377
+
378
+ //
379
+ // if we have an upload_id then we should destroy the upload
380
+ // we won't worry if this fails as it should be automatically cleaned up by the back end
381
+ //
382
+ if(old_state > STARTED) {
383
+ api.destroy();
384
+ }
385
+
386
+ this.message = reason;
387
+ }
388
+ };
389
+ }; // END AMAZON
390
+
391
+
392
+ return {
393
+ new_upload: function(api, file) {
394
+ return new Amazon(api, file);
395
+ }
396
+ };
397
+ }]).
398
+
399
+ config(['Condo.ApiProvider', function (ApiProvider) {
400
+ ApiProvider.register('AmazonS3', 'Condo.Amazon');
401
+ }]);
402
+
403
+ })(angular, window.base64);