data_migrater 0.7.0 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: ad45bfd5bedd2d2b8a8bddd76afc4d10ce030c99d1507e5a57226c2d9be4a7bc
- data.tar.gz: cce9a45c1224c2b9b69e0238ec7f39881b98ea96632f4a98aae11591b90c5158
+ metadata.gz: d23aa11964d870b88a42f582ada1573f58a781d08a359cc25eea5e2dbf7dab62
+ data.tar.gz: cd3bbe075f0cc3ff091284ef1a00c23c27a8f725989f269cb33eebd2f7c07ebb
  SHA512:
- metadata.gz: 450dd08d38c0122a9b50ebc5d620d497e548a6f15e683887dab95a763a04006b4246ab5b697d512c3dbcbd8f694b7bd61d4a80efb54950aec3e2b99489e9e785
- data.tar.gz: ce1181a6c82f55ac2748072d566793e3fb8f0f22df57c6942719fe991bfa346f07bb4f37cf9c27a83d0b96c71a933ac80d973d0a9a803fcbdc5c4d75fc422c05
+ metadata.gz: 8fadf53e7bd9f4eaeb709847d5f73b6a53369eca3a93ee766d0192e56195a940122cc330b687ff93a346f8ca550f7eeb150f2b71d0dd757818f4631330ed0d29
+ data.tar.gz: e6beaa0daecc1926844ee50fe15684a673e5710b7bb3e68ae7eefcc31b508e634079a4fc78649953b1ed1e07257a741bada5640e5327b792d556beb5cc5cccff
data/CHANGELOG.md CHANGED
@@ -1,3 +1,22 @@
+ ## v1.0.0
+
+ ### Breaking Changes
+
+ - Removes the `tmp_dir` option; the file is now downloaded to the `path` location.
+
+ ### Fixes
+
+ - Does not raise an error when the S3 file does not exist;
+
+ ### News
+
+ - Uses `us-east-1` as the default region for the S3 module;
+ - Adds a `csv_delete` method to delete the S3 file.
+
+ ### Updates
+
+ - Uses the new aws-sdk-s3 syntax.
+
  ## v0.7.0

  ### News
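To make the breaking change concrete, here is a minimal before/after sketch based on the README changes further down (the class and bucket names are illustrative):

```ruby
# v0.7.0: S3 was activated via `path` and the file landed in `tmp_dir`.
class MyDataMigration
  include DataMigrater::CSV

  data_csv path: :s3, tmp_dir: '/tmp'
end

# v1.0.0: S3 is activated via `provider`; the file is downloaded to `path`.
class MyDataMigration
  include DataMigrater::CSV

  data_csv provider: :s3, bucket: 'my-bucket'
end
```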
data/README.md CHANGED
@@ -110,7 +110,7 @@ class MyDataMigration
  def execute
  # [
  # { first_name: 'Washington', last_name: 'Botelho' },
- # { first_name: 'Lucas' , last_name: 'Souza' }
+ # { first_name: 'Vanessa' , last_name: 'Queiroz' }
  # ]
  csv.each { |line| Object.create line }
  end
@@ -134,14 +134,14 @@ end
  ```

  #### Options
- dum
- - `dir`: Directory where CSV is located;
- - `file`: File name;
- - `path`: Composition of `dir/file` when you want give a fully qualified path. Default: `db/data_migrate/support/csv/class_name.csv`.
+
+ - `dir`: Directory where the CSV is located. Default: `db/data_migrate/support/csv`;
+ - `file`: File name. Default: the class name underscored, e.g. `my_data_migration.csv`;
+ - `path`: Composition of `dir` + `/` + `file` when you want to give a fully qualified path.

  ---

- ##### CSV Options:
+ ##### CSV Options

  - `chunk_size`: Batch parse size;
  - `key_mapping`: Key name alias.
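A short sketch of these local-CSV options used together (the `Person` model, directory, and file name are hypothetical; `key_mapping` is forwarded to Smarter CSV):

```ruby
class MyDataMigration
  include DataMigrater::CSV

  # Reads db/custom/people.csv and renames the CSV column
  # `name` to `first_name` while parsing (a Smarter CSV option).
  data_csv dir: 'db/custom', file: 'people.csv', key_mapping: { name: :first_name }

  def execute
    csv.each { |line| Person.create line }
  end
end
```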
@@ -150,14 +150,13 @@ For more CSV options, check the project [Smarter CSV](https://github.com/tilo/sm

  ## S3

- You can download your CSV directly from [Amazon S3](https://aws.amazon.com/s3) using the module `DataMigrater::CSV` with some configs.
- You *must* keep the path as `:s3` to activate S3 feature.
+ You can download your CSV directly from [Amazon S3](https://aws.amazon.com/s3) using the module `DataMigrater::CSV` with some configs. You *must* set `provider` to `:s3` to activate the S3 feature.

  ```ruby
  class MyDataMigration
  include DataMigrater::CSV

- data_csv path: :s3
+ data_csv bucket: 'my-bucket', provider: :s3

  def execute
  csv.each { |line| Object.create line }
@@ -165,41 +164,20 @@ class MyDataMigration
  end
  ```

- By default, the class name is used as the file name in `underscore` style: `my_data_migration.csv`. You can change it:
-
- ```ruby
- class MyDataMigration
- include DataMigrater::CSV
-
- data_csv path: :s3, file: 'custom-name.csv'
-
- def execute
- csv.each { |line| Object.create line }
- end
- end
- ```
-
- By default, the bucket name is `data-migrater`, to change it, just declare the `bucket` options:
-
- ```ruby
- class MyDataMigration
- include DataMigrater::CSV
-
- data_csv path: :s3, bucket: 'custom-bucket'
+ ### Credentials

- def execute
- csv.each { |line| Object.create line }
- end
- end
- ```
-
- When file is downloaded, it is keeped in a temporary (`/tmp`) folder waiting to be parsed, using the options `tmp_dir` you change it:
+ By default, when you use the S3 feature, the environment variables `AWS_ACCESS_KEY_ID`, `AWS_REGION` (default `us-east-1`) and `AWS_SECRET_ACCESS_KEY` are used.
+ If you do not want to export them globally and need to pass them inside your class, just declare the `credentials` option:

  ```ruby
  class MyDataMigration
  include DataMigrater::CSV

- data_csv path: :s3, tmp_dir: '/Users/wbotelhos'
+ data_csv provider: :s3, credentials: {
+ access_key_id: 'foo',
+ region: 'us-east-1',
+ secret_access_key: 'bar'
+ }

  def execute
  csv.each { |line| Object.create line }
@@ -207,34 +185,29 @@ class MyDataMigration
  end
  ```

- #### Credentials
+ ### CSV Delete

- By default, when you use the S3 feature, the envs `ACCESS_KEY_ID`, `REGION` and `SECRET_ACCESS_KEY` will be used.
- If you do not want export it globally and need to pass it inside you class, just declare de `credentials` options:
+ You can delete the S3 file from your migration after processing the CSV.

  ```ruby
  class MyDataMigration
  include DataMigrater::CSV

- data_csv path: :s3, credentials: {
- access_key_id: 'foo',
- region: 'us-east-1',
- secret_access_key: 'bar'
- }
+ data_csv provider: :s3

  def execute
  csv.each { |line| Object.create line }
+
+ csv_delete
  end
  end
  ```

- #### Options
+ #### S3 Options

- - `bucket`: Bucket name;
+ - `bucket`: The bucket name. Default: `data-migrater`;
  - `credentials`: AWS credentials: `access_key_id`, `region` and `secret_access_key`;
- - `file`: File name;
- - `path`: `:s3` to indicate the S3 support;
- - `tmp_dir`: Directory where CSV will be keeped after download.
+ - `provider`: `:s3` to indicate the S3 provider;

  #### Skip Run

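Putting the S3 pieces above together, a minimal end-to-end sketch (the bucket name and credentials are placeholders):

```ruby
class MyDataMigration
  include DataMigrater::CSV

  # Downloads my-bucket/my_data_migration.csv to the default `path`,
  # using inline credentials instead of the AWS_* environment variables.
  data_csv provider: :s3, bucket: 'my-bucket', credentials: {
    access_key_id: 'foo',
    region: 'us-east-1',
    secret_access_key: 'bar'
  }

  def execute
    csv.each { |line| Object.create line }

    csv_delete # remove the file from S3 once it has been processed
  end
end
```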
@@ -8,7 +8,7 @@ module DataMigrater

  included do
  def csv(processor: ::SmarterCSV)
- return s3.download(processor: processor) if csv_s3?
+ s3.download if csv_options.delete(:provider) == :s3

  processor.process csv_path, csv_options
  end
@@ -19,6 +19,10 @@ module DataMigrater
  @csv_path ||= [csv_dir, csv_file].join('/')
  end

+ def csv_delete
+ s3.delete
+ end
+
  private

  def csv_bucket
@@ -37,25 +41,12 @@ module DataMigrater
  self.class.csv_options
  end

- def csv_s3?
- csv_path.to_s == 's3'
- end
-
- def csv_tmp_dir
- csv_options.delete(:tmp_dir) || '/tmp'
- end
-
  def s3_credentials
  csv_options.delete(:credentials) || {}
  end

  def s3
- DataMigrater::S3.new(
- credentials: s3_credentials,
- bucket: csv_bucket,
- file: csv_file,
- tmp_dir: csv_tmp_dir
- )
+ @s3 ||= DataMigrater::S3.new(csv_bucket, s3_credentials, csv_path)
  end
  end

@@ -4,43 +4,42 @@ module DataMigrater
  class S3
  require 'aws-sdk-s3'

- def initialize(bucket:, credentials: {}, file:, tmp_dir:)
+ def initialize(bucket, credentials, csv_path)
  @bucket = bucket
- @credentials = credentials.reverse_merge(default_credentials)
- @file = file
- @tmp_dir = tmp_dir
+ @credentials = default_credentials.merge(credentials)
+ @csv_path = csv_path

  ::Aws.config.update @credentials
  end

- def download(processor:)
- File.open(file_path, 'w+') do |file|
- client.get_object options, target: file
+ def delete
+ client.delete_object options
+ end
+
+ def download
+ client.head_object options

- processor.process file
- end
+ client.get_object options.merge(response_target: @csv_path)
+ rescue Aws::S3::Errors::NotFound
+ []
  end

  private

  def client
- Aws::S3::Client.new
+ @client ||= Aws::S3::Client.new
  end

  def default_credentials
  {
  access_key_id: ENV['AWS_ACCESS_KEY_ID'],
- region: ENV['AWS_REGION'],
+ region: ENV.fetch('AWS_REGION', 'us-east-1'),
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  }
  end

- def file_path
- [@tmp_dir, @file].join('/')
- end
-
  def options
- { bucket: @bucket, key: @file }
+ { bucket: @bucket, key: @csv_path.split('/').last }
  end
  end
  end
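For reference, the reworked class can also be exercised directly; a sketch using the new positional constructor (the bucket and path here are placeholders):

```ruby
require 'data_migrater'

# Positional arguments: bucket, credentials hash, local CSV path.
# The S3 key is derived from the last path segment ('people.csv').
s3 = DataMigrater::S3.new('my-bucket', {}, 'db/data_migrate/support/csv/people.csv')

s3.download # writes the object to the local path; returns [] if the key is missing
s3.delete   # removes the object from the bucket
```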
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module DataMigrater
- VERSION = '0.7.0'
+ VERSION = '1.0.0'
  end
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ require 'spec_helper'
+
+ RSpec.describe '#csv_delete' do
+ let!(:s3) { instance_double 'DataMigrater::S3' }
+
+ before do
+ stub_const 'Dummy', Class.new
+
+ Dummy.class_eval { include DataMigrater::CSV }
+ Dummy.class_eval { data_csv provider: :s3 }
+
+ allow(DataMigrater::S3).to receive(:new)
+ .with('data-migrater', {}, 'db/data_migrate/support/csv/dummy.csv').and_return s3
+ end
+
+ it 'delegates delete to s3 object' do
+ expect(s3).to receive(:delete)
+
+ Dummy.new.csv_delete
+ end
+ end
data/spec/csv/csv_spec.rb CHANGED
@@ -30,23 +30,21 @@ RSpec.describe '#data_csv' do
  end
  end

- context 'when path is about s3' do
+ context 'when provider is s3' do
  before do
  stub_const 'Dummy', Class.new

  Dummy.class_eval { include DataMigrater::CSV }
- Dummy.class_eval { data_csv path: :s3 }
-
- allow(DataMigrater::S3).to receive(:new).with(
- credentials: {},
- bucket: 'data-migrater',
- file: 'dummy.csv',
- tmp_dir: '/tmp'
- ) { double download: :result }
+ Dummy.class_eval { data_csv provider: :s3 }
+
+ allow(DataMigrater::S3).to receive(:new)
+ .with('data-migrater', {}, 'db/data_migrate/support/csv/dummy.csv').and_return double(download: true)
  end

  it 'reads csv from s3' do
- expect(Dummy.new.csv).to eq :result
+ expect(::SmarterCSV).to receive(:process).with('db/data_migrate/support/csv/dummy.csv', {})
+
+ Dummy.new.csv
  end
  end
  end
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ require 'spec_helper'
+
+ RSpec.describe DataMigrater::S3, '.delete' do
+ subject(:s3) { described_class.new 'bucket', {}, 'path/dummy.csv' }
+
+ let!(:client) { instance_double('Aws::S3::Client').as_null_object }
+
+ before { allow(Aws::S3::Client).to receive(:new) { client } }
+
+ context 'when file is found' do
+ it 'deletes the file' do
+ expect(client).to receive(:delete_object).with(bucket: 'bucket', key: 'dummy.csv')
+
+ s3.delete
+ end
+ end
+ end
@@ -3,27 +3,34 @@
  require 'spec_helper'

  RSpec.describe DataMigrater::S3, '.download' do
- subject { described_class.new bucket: 'data-migrater', file: 'dummy.csv', tmp_dir: '/tmp' }
+ subject(:s3) { described_class.new options[:bucket], {}, 'csv_path' }

- let!(:client) { double(Aws::S3).as_null_object }
- let!(:file_open) { double(File).as_null_object }
- let!(:processor) { double.as_null_object }
- let!(:temp_file) { double(File).as_null_object }
+ let!(:client) { double('Aws::S3::Client').as_null_object }
+ let!(:options) { { bucket: 'data-migrater', key: 'csv_path' } }

- before do
- allow(Aws::S3::Client).to receive(:new) { client }
- allow(File).to receive(:open).with('/tmp/dummy.csv', 'w+').and_yield temp_file
- end
+ before { allow(Aws::S3::Client).to receive(:new) { client } }
+
+ context 'when file is found' do
+ it 'downloads the file' do
+ expect(client).to receive(:get_object).with(options.merge(response_target: 'csv_path'))

- it 'downloads the csv file' do
- expect(client).to receive(:get_object).with({ bucket: 'data-migrater', key: 'dummy.csv' }, target: temp_file)
+ s3.download
+ end

- subject.download processor: processor
+ it 'returns the value of get object' do
+ expect(client).to receive(:get_object).with(options.merge(response_target: 'csv_path')).and_return :success
+
+ expect(s3.download).to eq :success
+ end
  end

- it 'process the csv content with given processor' do
- expect(processor).to receive(:process).with temp_file
+ context 'when file is not found' do
+ let!(:error) { Aws::S3::Errors::NotFound.new 'error', 'message' }
+ before { allow(client).to receive(:head_object).with(options).and_raise error }

- subject.download processor: processor
+ it 'returns an empty array' do
+ expect(s3.download).to eq []
+ end
  end
  end
@@ -3,59 +3,39 @@
  require 'spec_helper'

  RSpec.describe DataMigrater::S3, 'initialize' do
+ subject(:s3) { described_class.new 'data-migrater', credentials, 'dummy.csv' }
+
  before do
  allow(ENV).to receive(:[]).with('AWS_ACCESS_KEY_ID') { 'AWS_ACCESS_KEY_ID' }
- allow(ENV).to receive(:[]).with('AWS_REGION') { 'AWS_REGION' }
+ allow(ENV).to receive(:fetch).with('AWS_REGION', 'us-east-1') { 'AWS_REGION' }
  allow(ENV).to receive(:[]).with('AWS_SECRET_ACCESS_KEY') { 'AWS_SECRET_ACCESS_KEY' }
  end

- context 'when only mandatory params is given' do
- subject { described_class.new bucket: 'data-migrater', file: 'dummy.csv', tmp_dir: '/tmp' }
+ context 'when credentials is not given' do
+ let!(:credentials) { {} }
+
+ let!(:credentials_env) do
+ { access_key_id: 'AWS_ACCESS_KEY_ID', region: 'AWS_REGION', secret_access_key: 'AWS_SECRET_ACCESS_KEY' }
+ end

  it 'caches default values and uses exported envs' do
- expect(subject.instance_variable_get(:@bucket)).to eq 'data-migrater'
- expect(subject.instance_variable_get(:@file)).to eq 'dummy.csv'
- expect(subject.instance_variable_get(:@tmp_dir)).to eq '/tmp'
-
- expect(subject.instance_variable_get(:@credentials)).to eq(
- access_key_id: 'AWS_ACCESS_KEY_ID',
- region: 'AWS_REGION',
- secret_access_key: 'AWS_SECRET_ACCESS_KEY'
- )
+ expect(s3.instance_variable_get(:@bucket)).to eq 'data-migrater'
+ expect(s3.instance_variable_get(:@csv_path)).to eq 'dummy.csv'
+ expect(s3.instance_variable_get(:@credentials)).to eq credentials_env
  end

  it 'updates the aws config' do
- expect(::Aws.config).to receive(:update).with(
- access_key_id: 'AWS_ACCESS_KEY_ID',
- region: 'AWS_REGION',
- secret_access_key: 'AWS_SECRET_ACCESS_KEY'
- )
+ expect(::Aws.config).to receive(:update).with credentials_env

  subject
  end
  end

- context 'when some credential is given' do
- subject do
- described_class.new(
- bucket: 'data-migrater',
- file: 'dummy.csv',
- tmp_dir: '/tmp',
-
- credentials: {
- access_key_id: 'access_key_id',
- region: 'region',
- secret_access_key: 'secret_access_key'
- }
- )
- end
+ context 'when credentials is given' do
+ let!(:credentials) { { access_key_id: 'access_key_id', region: 'region', secret_access_key: 'secret_access_key' } }

  it 'is used' do
- expect(::Aws.config).to receive(:update).with(
- access_key_id: 'access_key_id',
- region: 'region',
- secret_access_key: 'secret_access_key'
- )
+ expect(::Aws.config).to receive(:update).with credentials

  subject
  end
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: data_migrater
  version: !ruby/object:Gem::Version
- version: 0.7.0
+ version: 1.0.0
  platform: ruby
  authors:
  - Washington Botelho
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2019-05-06 00:00:00.000000000 Z
+ date: 2019-05-23 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activerecord
@@ -162,6 +162,7 @@ files:
  - lib/generators/data_migrater/templates/db/migrate/create_data_migrations.rb
  - spec/collection_spec.rb
  - spec/csv/converters_spec.rb
+ - spec/csv/csv_delete_spec.rb
  - spec/csv/csv_path_spec.rb
  - spec/csv/csv_spec.rb
  - spec/csv/options_spec.rb
@@ -170,6 +171,7 @@ files:
  - spec/logger/logger_path_spec.rb
  - spec/migrater_spec.rb
  - spec/migration_spec.rb
+ - spec/s3/delete_spec.rb
  - spec/s3/download_spec.rb
  - spec/s3/initialize_spec.rb
  - spec/spec_helper.rb
@@ -201,20 +203,22 @@ signing_key:
  specification_version: 4
  summary: A Data Migrator gem
  test_files:
+ - spec/spec_helper.rb
+ - spec/collection_spec.rb
+ - spec/s3/initialize_spec.rb
+ - spec/s3/delete_spec.rb
+ - spec/s3/download_spec.rb
  - spec/logger/data_logger_spec.rb
  - spec/logger/logger_path_spec.rb
- - spec/collection_spec.rb
- - spec/support/database_cleaner.rb
- - spec/support/common.rb
+ - spec/migration_spec.rb
  - spec/support/migrate.rb
+ - spec/support/common.rb
  - spec/support/csv/dummy.csv
- - spec/spec_helper.rb
+ - spec/support/database_cleaner.rb
  - spec/migrater_spec.rb
- - spec/s3/download_spec.rb
- - spec/s3/initialize_spec.rb
- - spec/data_migration_spec.rb
- - spec/migration_spec.rb
  - spec/csv/csv_spec.rb
  - spec/csv/options_spec.rb
  - spec/csv/csv_path_spec.rb
+ - spec/csv/csv_delete_spec.rb
  - spec/csv/converters_spec.rb
+ - spec/data_migration_spec.rb