fog-aliyun 0.3.13 → 0.3.19

Files changed (35)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +109 -0
  3. data/Gemfile +0 -1
  4. data/fog-aliyun.gemspec +2 -0
  5. data/lib/fog/aliyun/models/storage/directories.rb +30 -53
  6. data/lib/fog/aliyun/models/storage/directory.rb +96 -17
  7. data/lib/fog/aliyun/models/storage/file.rb +127 -125
  8. data/lib/fog/aliyun/models/storage/files.rb +62 -156
  9. data/lib/fog/aliyun/requests/storage/abort_multipart_upload.rb +22 -0
  10. data/lib/fog/aliyun/requests/storage/complete_multipart_upload.rb +21 -0
  11. data/lib/fog/aliyun/requests/storage/copy_object.rb +14 -19
  12. data/lib/fog/aliyun/requests/storage/delete_bucket.rb +3 -10
  13. data/lib/fog/aliyun/requests/storage/delete_multiple_objects.rb +20 -0
  14. data/lib/fog/aliyun/requests/storage/delete_object.rb +10 -11
  15. data/lib/fog/aliyun/requests/storage/get_bucket.rb +26 -125
  16. data/lib/fog/aliyun/requests/storage/get_bucket_location.rb +33 -0
  17. data/lib/fog/aliyun/requests/storage/get_object.rb +29 -11
  18. data/lib/fog/aliyun/requests/storage/get_object_acl.rb +30 -0
  19. data/lib/fog/aliyun/requests/storage/get_object_http_url.rb +8 -11
  20. data/lib/fog/aliyun/requests/storage/get_object_https_url.rb +8 -11
  21. data/lib/fog/aliyun/requests/storage/get_service.rb +13 -0
  22. data/lib/fog/aliyun/requests/storage/head_object.rb +25 -14
  23. data/lib/fog/aliyun/requests/storage/initiate_multipart_upload.rb +19 -0
  24. data/lib/fog/aliyun/requests/storage/list_buckets.rb +6 -24
  25. data/lib/fog/aliyun/requests/storage/list_objects.rb +10 -67
  26. data/lib/fog/aliyun/requests/storage/put_bucket.rb +2 -8
  27. data/lib/fog/aliyun/requests/storage/put_object.rb +16 -142
  28. data/lib/fog/aliyun/requests/storage/upload_part.rb +24 -0
  29. data/lib/fog/aliyun/storage.rb +41 -27
  30. data/lib/fog/aliyun/version.rb +1 -1
  31. metadata +39 -6
  32. data/lib/fog/aliyun/requests/storage/delete_container.rb +0 -31
  33. data/lib/fog/aliyun/requests/storage/get_container.rb +0 -56
  34. data/lib/fog/aliyun/requests/storage/get_containers.rb +0 -65
  35. data/lib/fog/aliyun/requests/storage/put_container.rb +0 -30
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: e3be4a1832619663a88cd9ac70eada6d06c24b9e8b7f11da0242516d2e379acd
-  data.tar.gz: 5d9338f0e592f82b9f2565ee8fb5c46077d2b1156e72a808a7c476a17e7da8fe
+  metadata.gz: 5f527fdb37f55f3f14e68b37fa7d038ba1b351e798d25bc761e18af8e2543c82
+  data.tar.gz: c7b4cdb85fe538f7934eafe57e4108d8fd411084fb3bb3cdd7c1cd0a28714923
 SHA512:
-  metadata.gz: 7aebf30618f7e098f37c4a0a0c9a10bba29d33062bc9789547d92caf0dd220a75c7e303b22a64f1c27eec7385be4c4e18b3542aff1430ba5020dea7e763866c0
-  data.tar.gz: dd9e04bac82d1acfd60ef696d43b02f181d810aad04a9ad8bff8e6c9038c9a464fd91ad3dbab2c65b491bb1b929ca6cb4f35b9bd06f4485e65a67fe60baaad03
+  metadata.gz: ef5fcc535228173a33fc33294c132ca1e6153bee3e9ac15c9626c06313eaba4e7e0e6a9f88457198e872042ee4a14697d040d772ec6e43203e891c89ee591fb4
+  data.tar.gz: 19054a55dd975915129f3e7cc0893e81aae38a844bc4d04363de0daf2d0cad484ea1cb62496f0813cb466d7689c65d1a3773a75c597ccbb7f5fcc30073e40d7b
data/CHANGELOG.md ADDED
@@ -0,0 +1,109 @@
+## 0.3.20 (Unreleased)
+## 0.3.19 (August 17, 2020)
+
+IMPROVEMENTS:
+
+- Upgrade oss ruby sdk to support setting log level [GH-152](https://github.com/fog/fog-aliyun/pull/152)
+
+## 0.3.18 (August 03, 2020)
+
+IMPROVEMENTS:
+
+- reconstruct performance test [GH-148](https://github.com/fog/fog-aliyun/pull/148)
+- Reconstruct fog-aliyun by using oss [GH-147](https://github.com/fog/fog-aliyun/pull/147)
+- reconstruct coverage case tests [GH-146](https://github.com/fog/fog-aliyun/pull/146)
+- reconstruct case tests [GH-144](https://github.com/fog/fog-aliyun/pull/144)
+- reconstruct part two of file [GH-143](https://github.com/fog/fog-aliyun/pull/143)
+- implement blobstore for cloud_controller_ng [GH-142](https://github.com/fog/fog-aliyun/pull/142)
+- reconstruct parts of file [GH-141](https://github.com/fog/fog-aliyun/pull/141)
+- reconstruct the files [GH-140](https://github.com/fog/fog-aliyun/pull/140)
+- reconstruct the directory [GH-139](https://github.com/fog/fog-aliyun/pull/139)
+- reconstruct the directories [GH-138](https://github.com/fog/fog-aliyun/pull/138)
+- improve files.get code [GH-137](https://github.com/fog/fog-aliyun/pull/137)
+- add test case for head on a nonexistent file [GH-136](https://github.com/fog/fog-aliyun/pull/136)
+- improve head_object using oss sdk [GH-135](https://github.com/fog/fog-aliyun/pull/135)
+
+BUG FIXES:
+
+- fix files.all options problem [GH-149](https://github.com/fog/fog-aliyun/pull/149)
+
+## 0.3.17 (July 06, 2020)
+
+IMPROVEMENTS:
+- adapt oss_sdk_log_path [GH-125](https://github.com/fog/fog-aliyun/pull/125)
+- update ruby sdk to 0.7.3 [GH-124](https://github.com/fog/fog-aliyun/pull/124)
+- adapt maxkeys conversion problem [GH-123](https://github.com/fog/fog-aliyun/pull/123)
+- [Enhance tests][Auth & Connectivity scenarios] Test that API cannot be accessed using incorrect credentials [GH-117](https://github.com/fog/fog-aliyun/pull/117)
+- [Enhance tests][Auth & Connectivity scenarios] Test that API can be accessed using valid credentials [GH-116](https://github.com/fog/fog-aliyun/pull/116)
+- adapt custom log environment variable [GH-114](https://github.com/fog/fog-aliyun/pull/114)
+- [Enhance tests][Buckets scenarios] (NEGATIVE TEST) Test that error is thrown when trying to access non-existing bucket [GH-110](https://github.com/fog/fog-aliyun/pull/110)
+- [Enhance tests][Buckets scenarios] (NEGATIVE TEST) Test that error is thrown when trying to create already existing bucket [GH-109](https://github.com/fog/fog-aliyun/pull/109)
+- [Enhance tests][Buckets scenarios] Test that it is possible to destroy a bucket [GH-108](https://github.com/fog/fog-aliyun/pull/108)
+- [Enhance tests][Buckets scenarios] Test that it is possible to create a new bucket [GH-107](https://github.com/fog/fog-aliyun/pull/107)
+- [Enhance tests][Buckets scenarios] Test that it is possible to list all buckets [GH-105](https://github.com/fog/fog-aliyun/pull/105)
+- [Enhance tests][Files & Directory scenarios] Test getting bucket when a directory exists with the same name as the bucket [GH-101](https://github.com/fog/fog-aliyun/pull/101)
+- [Enhance tests][Files & Directory scenarios] Test file copy operations [GH-100](https://github.com/fog/fog-aliyun/pull/100)
+- reset the last PR [GH-133](https://github.com/fog/fog-aliyun/pull/133)
+- improve put_object_with_body and head_object using sdk do_request [GH-131](https://github.com/fog/fog-aliyun/pull/131)
+
+BUG FIXES:
+- fix max key again [GH-128](https://github.com/fog/fog-aliyun/pull/128)
+- fix downloading object when pushing app twice [GH-127](https://github.com/fog/fog-aliyun/pull/127)
+- fix max key [GH-126](https://github.com/fog/fog-aliyun/pull/126)
+- fix max-keys conversion problem [GH-121](https://github.com/fog/fog-aliyun/pull/121)
+- fix @aliyun_oss_sdk_log_path being nil [GH-132](https://github.com/fog/fog-aliyun/pull/132)
+
+## 0.3.16 (June 18, 2020)
+
+IMPROVEMENTS:
+- [Enhance tests][Files & Directory scenarios] Test getting nested directories and files in a nested directory [GH-98](https://github.com/fog/fog-aliyun/pull/98)
+- remove get_bucket_location and use ruby sdk to improve performance when uploading objects [GH-97](https://github.com/fog/fog-aliyun/pull/97)
+- use bucket_exist to check the bucket [GH-95](https://github.com/fog/fog-aliyun/pull/95)
+- add change log [GH-94](https://github.com/fog/fog-aliyun/pull/94)
+
+BUG FIXES:
+- fix bug that deleted all files when specifying a prefix [GH-102](https://github.com/fog/fog-aliyun/pull/102)
+
+## 0.3.15 (June 05, 2020)
+
+BUG FIXES:
+- change ruby sdk dependency to gems [GH-92](https://github.com/fog/fog-aliyun/pull/92)
+
+## 0.3.13 (June 02, 2020)
+
+IMPROVEMENTS:
+- use ruby sdk to delete objects [GH-90](https://github.com/fog/fog-aliyun/pull/90)
+
+## 0.3.12 (May 28, 2020)
+
+BUG FIXES:
+- add missing dependency [GH-88](https://github.com/fog/fog-aliyun/pull/88)
+
+## 0.3.11 (May 25, 2020)
+
+IMPROVEMENTS:
+- use oss ruby sdk to improve object download performance [GH-86](https://github.com/fog/fog-aliyun/pull/86)
+- Add performance tests [GH-85](https://github.com/fog/fog-aliyun/pull/85)
+- [Enhance tests][Entity operations] Add tests for each type of entity that validate the CRUD operations [GH-84](https://github.com/fog/fog-aliyun/pull/84)
+- [Enhance tests][Auth & Connectivity scenarios] Test that the region is selected according to provider configuration [GH-83](https://github.com/fog/fog-aliyun/pull/83)
+- [Enhance tests][Files & Directory scenarios] Test file listing using parameters such as prefix, marker, delimiter and maxKeys [GH-82](https://github.com/fog/fog-aliyun/pull/82)
+- [Enhance tests][Files & Directory scenarios] Test directory listing using parameters such as prefix, marker, delimiter and maxKeys [GH-81](https://github.com/fog/fog-aliyun/pull/81)
+- [Enhance tests][Files & Directory scenarios] Test that it is possible to upload (write) a large file (multipart upload) [GH-79](https://github.com/fog/fog-aliyun/pull/79)
+- upgrade deprecated code [GH-78](https://github.com/fog/fog-aliyun/pull/78)
+- improve fog/integration_spec [GH-77](https://github.com/fog/fog-aliyun/pull/77)
+- [Enhance tests][Files & Directory scenarios] Test that it is possible to upload (write) a file [GH-76](https://github.com/fog/fog-aliyun/pull/76)
+- upgrade deprecated code [GH-74](https://github.com/fog/fog-aliyun/pull/74)
+- support https scheme [GH-71](https://github.com/fog/fog-aliyun/pull/71)
+- [Enhance tests][Files & Directory scenarios] Test that it is possible to destroy a file/directory [GH-69](https://github.com/fog/fog-aliyun/pull/69)
+- improve fog/integration_spec [GH-68](https://github.com/fog/fog-aliyun/pull/68)
+- Implement basic integration tests [GH-66](https://github.com/fog/fog-aliyun/pull/66)
+
+## 0.3.10 (May 07, 2020)
+
+IMPROVEMENTS:
+- Set max limit to 1000 when getting objects [GH-64](https://github.com/fog/fog-aliyun/pull/64)
+
+## 0.3.9 (May 07, 2020)
+
+BUG FIXES:
+- directories.get supports options to filter the specified objects [GH-62](https://github.com/fog/fog-aliyun/pull/62)
data/Gemfile CHANGED
@@ -4,4 +4,3 @@ source 'https://rubygems.org'
 
 # Specify your gem's dependencies in fog-aliyun.gemspec
 gemspec
-gem 'aliyun-sdk', git: 'https://github.com/aliyun/aliyun-oss-ruby-sdk.git', branch: 'master'
data/fog-aliyun.gemspec CHANGED
@@ -28,7 +28,9 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency 'rubocop'
   spec.add_development_dependency 'simplecov'
   spec.add_development_dependency 'memory_profiler'
+  spec.add_development_dependency 'aliyun-sdk', '~> 0.8.0'
 
+  spec.add_dependency 'aliyun-sdk', '~> 0.8.0'
   spec.add_dependency 'fog-core'
   spec.add_dependency 'fog-json'
   spec.add_dependency 'ipaddress', '~> 0.8'
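
With aliyun-sdk now declared as a runtime dependency pinned to ~> 0.8.0 and resolved from RubyGems, consumers no longer need the git-sourced entry that was dropped from the Gemfile above. A minimal consumer Gemfile sketch (the version pin is illustrative):

```ruby
# Gemfile — minimal sketch; fog-aliyun pulls in aliyun-sdk (~> 0.8.0)
# transitively from RubyGems, so no git-sourced entry is needed.
source 'https://rubygems.org'

gem 'fog-aliyun', '~> 0.3.19'
```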
data/lib/fog/aliyun/models/storage/directories.rb CHANGED
@@ -10,70 +10,47 @@ module Fog
         model Fog::Aliyun::Storage::Directory
 
         def all
-          containers = service.get_containers
-          return nil if containers.nil?
+          buckets = service.get_service[0]
+          return nil if buckets.size < 1
           data = []
           i = 0
-          containers.each do |entry|
-            key = entry['Prefix'][0]
-            key[-1] = ''
-            data[i] = { key: key }
+          buckets.each do |b|
+            data[i] = { key: b.name }
             i += 1
           end
-
           load(data)
         end
 
-        # get method used to get a specified directory.
-        # If the directory is not exist, this method will create a new with 'key'
-        # In order to support multi-buckets scenario which making bucket as a solo directory, it have been expanded.
-        # If key is a directory(including /), return an existed or a new one;
-        # If key does not contain /, if bucket, return '', else return an existed or a new one directory;
+
        def get(key, options = {})
-          if key.is_a? Array
-            key = key[0]
+          data = service.get_bucket(key, options)
+
+          directory = new(:key => key, :is_persisted => true)
+
+          options = data[1]
+          options[:max_keys] = options[:limit]
+          directory.files.merge_attributes(options)
+
+          objects = []
+          i = 0
+          data[0].each do |o|
+            objects[i] = {
+              'Key' => o.key,
+              'Type' => o.type,
+              'Size' => o.size,
+              'ETag' => o.etag,
+              'LastModified' => o.last_modified
+            }
+            i += 1
           end
-          if !key.nil? && key != '' && key != '.'
-            key = key.chomp('/')
-            if key.include? '/'
-              dir = key + '/'
-              ret = service.head_object(dir, options)
-              new(key: key) if ret.data[:status] == 200
-            else
-              remap_attributes(options, {
-                :delimiter => 'delimiter',
-                :marker => 'marker',
-                :max_keys => 'max-keys',
-                :prefix => 'prefix'
-              })
-              data = service.get_bucket(key, options)
-              directory = new(:key => data['Name'], :is_persisted => true)
-              options = {}
-              for k, v in data
-                if ['CommonPrefixes', 'Delimiter', 'IsTruncated', 'Marker', 'MaxKeys', 'Prefix'].include?(k)
-                  # Sometimes, the v will be a Array, like "Name"=>["blobstore-droplet1"], "Prefix"=>[{}], "Marker"=>[{}], "MaxKeys"=>["100"], "Delimiter"=>[{}], "IsTruncated"=>["false"]
-                  # and there needs to parse them
-                  if !v.nil? && (v.is_a? Array) && (v.size > 0)
-                    if v[0].is_a? Hash
-                      v = nil
-                    else
-                      v = v[0]
-                    end
-                  end
-                  options[k] = v
-                end
-              end
-              directory.files.merge_attributes(options)
-              if data.key?('Contents') && !data['Contents'].nil?
-                directory.files.load(data['Contents'])
-              end
-              directory
-            end
+          directory.files.load(objects)
+          directory
+        rescue AliyunOssSdk::ServerError => error
+          if error.error_code == "NoSuchBucket"
+            nil
          else
-            new(key: '')
+            raise(error)
          end
-        rescue Fog::Aliyun::Storage::NotFound
-          nil
        end
      end
    end
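
The rewritten `get` now fetches the bucket through the OSS SDK, maps each returned object into fog file attributes, and rescues the SDK's `NoSuchBucket` server error so a missing bucket yields `nil` instead of an exception. A usage sketch (connection option names follow the gem's README; credentials and bucket names are placeholders):

```ruby
require 'fog/aliyun'

# Placeholder credentials/region for illustration only.
storage = Fog::Storage.new(
  provider:                'aliyun',
  aliyun_accesskey_id:     ENV['ALIYUN_ACCESS_KEY_ID'],
  aliyun_accesskey_secret: ENV['ALIYUN_ACCESS_KEY_SECRET'],
  aliyun_region_id:        'cn-hangzhou'
)

storage.directories.all                      # one entry per bucket, keyed by bucket name
dir = storage.directories.get('my-bucket')   # Directory with files preloaded via get_bucket
storage.directories.get('no-such-bucket')    # => nil (NoSuchBucket is rescued, not raised)
```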
data/lib/fog/aliyun/models/storage/directory.rb CHANGED
@@ -7,24 +7,62 @@ module Fog
   module Aliyun
     class Storage
       class Directory < Fog::Model
+        VALID_ACLS = ['private', 'public-read', 'public-read-write']
+
+        attr_reader :acl
         identity :key, :aliases => ['Key', 'Name', 'name']
 
+        attribute :creation_date, :aliases => 'CreationDate', :type => 'time'
+
+        def acl=(new_acl)
+          unless VALID_ACLS.include?(new_acl)
+            raise ArgumentError.new("acl must be one of [#{VALID_ACLS.join(', ')}]")
+          else
+            @acl = new_acl
+          end
+        end
+
         def destroy
           requires :key
-          prefix = key + '/'
-          ret = service.list_objects(prefix: prefix)['Contents']
-
-          if ret.nil?
+          service.delete_bucket(key)
+          true
+        rescue AliyunOssSdk::ServerError => error
+          if error.error_code == "NoSuchBucket"
            false
-          elsif ret.size == 1
-            service.delete_container(key)
-            true
          else
-            raise Fog::Aliyun::Storage::Error, ' Forbidden: Direction not empty!'
+            raise(error)
+          end
+        end
+
+        def destroy!(options = {})
+          requires :key
+          options = {
+            timeout: Fog.timeout,
+            interval: Fog.interval,
+          }.merge(options)
+
+          begin
+            clear!
+            Fog.wait_for(options[:timeout], options[:interval]) { objects_keys.size == 0 }
+            service.delete_bucket(key)
+            true
+          rescue AliyunOssSdk::ServerError
            false
          end
        end
 
+        def location
+          region = @aliyun_region_id
+          region ||= Storage::DEFAULT_REGION
+          @location = (bucket_location || 'oss-' + region)
+        end
+
+        # NOTE: you can't change the region once the bucket is created
+        def location=(new_location)
+          new_location = 'oss-' + new_location unless new_location.start_with?('oss-')
+          @location = new_location
+        end
+
         def files
           @files ||= begin
             Fog::Aliyun::Storage::Files.new(
@@ -34,6 +72,12 @@ module Fog
           end
         end
 
+        # TODO
+        def public=(new_public)
+          nil
+        end
+
+        # TODO
         def public_url
           nil
         end
@@ -41,18 +85,53 @@ module Fog
         def save
           requires :key
 
-          # Checking whether the key is a bucket and meet the multi-bucket scenario.
-          # If the key is a existing bucket, return it directly.
-          key = key.chomp('/')
-          if !key.nil? && key != '' && key != '.' && !(key.include? '/')
-            data = service.get_bucket(key)
-            if data.class == Hash && data.key?('Code') && !data['Code'].nil? && !data['Code'].empty?
-              service.put_container(key)
-            end
-          end
+          options = {}
+
+          options['x-oss-acl'] = acl if acl
+
+          # https://help.aliyun.com/document_detail/31959.html
+          # if !persisted?
+          #   # There is a sdk bug that location can not be set
+          #   options[:location] = location
+          # end
+
+          service.put_bucket(key, options)
+          attributes[:is_persisted] = true
 
           true
         end
+
+        def persisted?
+          # is_persisted is true in case of directories.get or after #save
+          # creation_date is set in case of directories.all
+          attributes[:is_persisted] || !!attributes[:creation_date]
+        end
+
+        private
+
+        def bucket_location
+          requires :key
+          return nil unless persisted?
+          service.get_bucket_location(key)
+        end
+
+        def objects_keys
+          requires :key
+          bucket_query = service.get_bucket(key)
+
+          object_keys = []
+          i = 0
+          bucket_query[0].each do |o|
+            object_keys[i] = o.key
+            i += 1
+          end
+          object_keys
+        end
+
+        def clear!
+          requires :key
+          service.delete_multiple_objects(key, objects_keys) if objects_keys.size > 0
+        end
      end
    end
  end
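
Taken together, the new `Directory` methods add bucket-level ACL validation and a two-step force delete: `destroy` only succeeds on an empty bucket (OSS rejects deleting non-empty ones, and that server error is re-raised), while `destroy!` first drains the bucket through `clear!`/`delete_multiple_objects`. A sketch of the intended call pattern, reusing the `storage` connection from the sketch above (bucket name is a placeholder):

```ruby
dir = storage.directories.new(key: 'demo-acl-bucket')
dir.acl = 'public-read'   # must be private, public-read, or public-read-write
dir.save                  # put_bucket with the x-oss-acl header

begin
  dir.acl = 'internal'    # not in VALID_ACLS
rescue ArgumentError => e
  puts e.message          # "acl must be one of [private, public-read, public-read-write]"
end

dir.destroy               # raises unless the bucket is already empty
dir.destroy!(timeout: 30) # deletes remaining objects, waits, then deletes the bucket
```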
data/lib/fog/aliyun/models/storage/file.rb CHANGED
@@ -7,77 +7,123 @@ module Fog
     class Storage
       class File < Fog::Model
         identity :key, aliases: ['Key', 'Name', 'name']
+
+        attr_writer :body
+        attribute :cache_control, aliases: 'Cache-Control'
+        attribute :content_encoding, aliases: 'Content-Encoding'
         attribute :date, aliases: 'Date'
-        attribute :content_length, aliases: 'Content-Length', type: :integer
+        attribute :content_length, aliases: ['Content-Length', 'Size'], type: :integer
+        attribute :content_md5, aliases: 'Content-MD5'
         attribute :content_type, aliases: 'Content-Type'
         attribute :connection, aliases: 'Connection'
         attribute :content_disposition, aliases: 'Content-Disposition'
-        attribute :etag, aliases: 'Etag'
+        attribute :etag, aliases: ['Etag', 'ETag']
+        attribute :expires, aliases: 'Expires'
+        attribute :metadata
+        attribute :owner, aliases: 'Owner'
         attribute :last_modified, aliases: 'Last-Modified', type: :time
         attribute :accept_ranges, aliases: 'Accept-Ranges'
         attribute :server, aliases: 'Server'
-        attribute :object_type, aliases: 'x-oss-object-type'
+        attribute :object_type, aliases: ['x-oss-object-type', 'x_oss_object_type']
+
+        # @note Chunk size to use for multipart uploads.
+        #   Use small chunk sizes to minimize memory. E.g. 5242880 = 5mb
+        attr_reader :multipart_chunk_size
+        def multipart_chunk_size=(mp_chunk_size)
+          raise ArgumentError.new("minimum multipart_chunk_size is 5242880") if mp_chunk_size < 5242880
+          @multipart_chunk_size = mp_chunk_size
+        end
+
+        def acl
+          requires :directory, :key
+          service.get_object_acl(directory.key, key)
+        end
+
+        def acl=(new_acl)
+          valid_acls = ['private', 'public-read', 'public-read-write', 'default']
+          unless valid_acls.include?(new_acl)
+            raise ArgumentError.new("acl must be one of [#{valid_acls.join(', ')}]")
+          end
+          @acl = new_acl
+        end
 
         def body
-          attributes[:body] ||=
-            if last_modified
-              collection.get(identity).body
-            else
-              ''
-            end
+          return attributes[:body] if attributes[:body]
+          return '' unless last_modified
+
+          file = collection.get(identity)
+          if file
+            attributes[:body] = file.body
+          else
+            attributes[:body] = ''
+          end
         end
 
         def body=(new_body)
           attributes[:body] = new_body
         end
 
-        attr_reader :directory
+        def directory
+          @directory
+        end
 
+        # Copy object from one bucket to other bucket.
+        #
+        # required attributes: directory, key
+        #
+        # @param target_directory_key [String]
+        # @param target_file_key [String]
+        # @param options [Hash] options for copy_object method
+        # @return [String] Fog::Aliyun::Files#head status of directory contents
+        #
         def copy(target_directory_key, target_file_key, options = {})
           requires :directory, :key
-          source_bucket, directory_key = collection.check_directory_key(directory.key)
-          source_object = if directory_key == ''
-                            key
-                          else
-                            directory_key + '/' + key
-                          end
-          target_bucket, target_directory_key = collection.check_directory_key(target_directory_key)
-          target_object = if target_directory_key == ''
-                            target_file_key
-                          else
-                            target_directory_key + '/' + target_file_key
-                          end
-          service.copy_object(source_bucket, source_object, target_bucket, target_object, options)
-          target_directory = service.directories.new(key: target_directory_key)
-          target_directory.files.get(target_file_key)
-        end
-
-        def destroy
+          service.copy_object(directory.key, key, target_directory_key, target_file_key, options)
+          target_directory = service.directories.new(:key => target_directory_key)
+          target_directory.files.head(target_file_key)
+        end
+
+        def destroy(options = {})
           requires :directory, :key
-          bucket_name, directory_key = collection.check_directory_key(directory.key)
-          object = if directory_key == ''
-                     key
-                   else
-                     directory_key + '/' + key
-                   end
-          service.delete_object(object, bucket: bucket_name)
+          # TODO support versionId
+          # attributes[:body] = nil if options['versionId'] == version
+          service.delete_object(directory.key, key, options)
           true
         end
 
+        remove_method :metadata
         def metadata
-          attributes[:metadata] ||= headers_to_metadata
+          attributes.reject {|key, value| !(key.to_s =~ /^x-oss-/)}
+        end
+
+        remove_method :metadata=
+        def metadata=(new_metadata)
+          merge_attributes(new_metadata)
         end
 
+        remove_method :owner=
         def owner=(new_owner)
           if new_owner
             attributes[:owner] = {
-              display_name: new_owner['DisplayName'],
-              id: new_owner['ID']
+              :display_name => new_owner['DisplayName'] || new_owner[:display_name],
+              :id => new_owner['ID'] || new_owner[:id]
             }
           end
         end
 
+        # Set Access-Control-List permissions.
+        #
+        # valid new_publics: public_read, private
+        #
+        # @param [String] new_public
+        # @return [String] new_public
+        #
         def public=(new_public)
+          if new_public
+            @acl = 'public-read'
+          else
+            @acl = 'private'
+          end
           new_public
         end
 
@@ -86,49 +132,32 @@ module Fog
         # required attributes: directory, key
         #
         # @param expires [String] number of seconds (since 1970-01-01 00:00) before url expires
-        # @param options [Hash]
+        # @param options[Hash] No need to use
         # @return [String] url
         #
         def url(expires, options = {})
-
-          expires = expires.nil? ? 0 : expires.to_i
-
-          requires :directory, :key
-          bucket_name, directory_key = collection.check_directory_key(directory.key)
-          object = if directory_key == ''
-                     key
-                   else
-                     directory_key + '/' + key
-                   end
-          service.get_object_http_url_public(object, expires, options.merge(bucket: bucket_name))
-        end
-
-        def public_url
          requires :key
-          collection.get_url(key)
+          service.get_object_http_url_public(directory.key, key, expires)
        end
 
         def save(options = {})
           requires :body, :directory, :key
-          options['Content-Type'] = content_type if content_type
+          options['x-oss-object-acl'] ||= @acl if @acl
+          options['Cache-Control'] = cache_control if cache_control
           options['Content-Disposition'] = content_disposition if content_disposition
-          options.merge!(metadata_to_headers)
-          bucket_name, directory_key = collection.check_directory_key(directory.key)
-          object = if directory_key == ''
-                     key
-                   else
-                     directory_key + '/' + key
-                   end
-          if body.is_a?(::File)
-            data = service.put_object(object, body, options.merge(bucket: bucket_name)).data
-          elsif body.is_a?(String)
-            data = service.put_object_with_body(object, body, options.merge(bucket: bucket_name)).data
+          options['Content-Encoding'] = content_encoding if content_encoding
+          options['Content-MD5'] = content_md5 if content_md5
+          options['Content-Type'] = content_type if content_type
+          options['Expires'] = expires if expires
+          options.merge!(metadata)
+
+          self.multipart_chunk_size = 5242880 if !multipart_chunk_size && Fog::Storage.get_body_size(body) > 5368709120
+          if multipart_chunk_size && Fog::Storage.get_body_size(body) >= multipart_chunk_size && body.respond_to?(:read)
+            multipart_save(options)
          else
-            raise Fog::Aliyun::Storage::Error, " Forbidden: Invalid body type: #{body.class}!"
+            service.put_object(directory.key, key, body, options)
          end
-          update_attributes_from(data)
-          refresh_metadata
-
+          self.etag = self.etag.gsub('"','') if self.etag
          self.content_length = Fog::Storage.get_body_size(body)
          self.content_type ||= Fog::Storage.get_content_type(body)
          true
@@ -136,70 +165,43 @@ module Fog
 
         private
 
-        attr_writer :directory
-
-        def refresh_metadata
-          metadata.reject! { |_k, v| v.nil? }
-        end
-
-        def headers_to_metadata
-          key_map = key_mapping
-          Hash[metadata_attributes.map { |k, v| [key_map[k], v] }]
-        end
-
-        def key_mapping
-          key_map = metadata_attributes
-          key_map.each_pair { |k, _v| key_map[k] = header_to_key(k) }
+        def directory=(new_directory)
+          @directory = new_directory
         end
 
-        def header_to_key(opt)
-          opt.gsub(metadata_prefix, '').split('-').map { |k| k[0, 1].downcase + k[1..-1] }.join('_').to_sym
-        end
-
-        def metadata_to_headers
-          header_map = header_mapping
-          Hash[metadata.map { |k, v| [header_map[k], v] }]
-        end
-
-        def header_mapping
-          header_map = metadata.dup
-          header_map.each_pair { |k, _v| header_map[k] = key_to_header(k) }
-        end
-
-        def key_to_header(key)
-          metadata_prefix + key.to_s.split(/[-_]/).map(&:capitalize).join('-')
-        end
+        def multipart_save(options)
+          # Initiate the upload
+          upload_id = service.initiate_multipart_upload(directory.key, key, options)
 
-        def metadata_attributes
-          if last_modified
-            bucket_name, directory_key = collection.check_directory_key(directory.key)
-            object = if directory_key == ''
-                       key
-                     else
-                       directory_key + '/' + key
-                     end
+          # Store ETags of upload parts
+          part_tags = []
 
-            data = service.head_object(object, bucket: bucket_name).data
-            if data[:status] == 200
-              headers = data[:headers]
-              headers.select! { |k, _v| metadata_attribute?(k) }
-            end
-          else
-            {}
+          # Upload each part
+          # TODO: optionally upload chunks in parallel using threads
+          # (may cause network performance problems with many small chunks)
+          # TODO: Support large chunk sizes without reading the chunk into memory
+          if body.respond_to?(:rewind)
+            body.rewind rescue nil
+          end
+          while (chunk = body.read(multipart_chunk_size)) do
+            part_upload = service.upload_part(directory.key, key, upload_id, part_tags.size + 1, chunk)
+            part_tags << part_upload
          end
-        end
 
-        def metadata_attribute?(key)
-          key.to_s =~ /^#{metadata_prefix}/
-        end
+          if part_tags.empty? # it is an error to have a multipart upload with no parts
+            part_upload = service.upload_part(directory.key, key, upload_id, 1, '')
+            part_tags << part_upload
+          end
 
-        def metadata_prefix
-          'X-Object-Meta-'
+        rescue
+          # Abort the upload & reraise
+          service.abort_multipart_upload(directory.key, key, upload_id) if upload_id
+          raise
+        else
+          # Complete the upload
+          service.complete_multipart_upload(directory.key, key, upload_id, part_tags)
        end
 
-        def update_attributes_from(data)
-          merge_attributes(data[:headers].reject { |key, _value| ['Content-Length', 'Content-Type'].include?(key) })
-        end
      end
    end
  end
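
The new save path chooses between a single `put_object` and a multipart upload: the chunk size has a 5 MB floor (5,242,880 bytes), bodies over 5 GB (5,368,709,120 bytes) get a chunk size assigned automatically, and a failed upload is aborted via `abort_multipart_upload` before the error is re-raised. A sketch of both paths, reusing the placeholder `dir` from the earlier examples (file paths and keys are illustrative):

```ruby
# Small string body: saved with a single put_object call.
dir.files.create(key: 'notes.txt', body: 'hello oss')

# Large IO body: streamed part by part through multipart_save.
file = dir.files.new(key: 'backups/archive.tar')
file.multipart_chunk_size = 10 * 1024 * 1024  # anything below 5242880 raises ArgumentError
file.body = ::File.open('/tmp/archive.tar')   # responds to #read, so parts are read lazily
file.save                                     # initiate -> upload_part xN -> complete
```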