pg_shrink 0.0.2

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 977a0cdcf3f266d4b4b80737915e4621f2f9c348
+   data.tar.gz: 88ef7101d4cfbbf425ac9d94e0a8319a4d7edd1a
+ SHA512:
+   metadata.gz: 30e52962da0e958fe6130a8ca74cf98ec1a61a2b7225426efeb4720ab4ad511788eee8679e5aba3ed120b0d3673ccb4b462be405ee8ca8c88d37b91481a37163
+   data.tar.gz: 846ccb47ae40bf656312601da0aa25f23610f6752bb1f06f7fe60de6a66beaeb2d1f4cab5716b358277e639bdcfc71ca30d505152c347161f2ebb6bc8e8d3583
data/.gitignore ADDED
@@ -0,0 +1,24 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
+ *.bundle
+ *.so
+ *.o
+ *.a
+ mkmf.log
+ *.swp
+ spec/pg_config.yml
data/.rspec ADDED
@@ -0,0 +1 @@
+ --format Nc --format documentation
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify gem's dependencies in pg_shrink.gemspec
+ gemspec
data/Guardfile ADDED
@@ -0,0 +1,11 @@
+ guard 'rspec' do
+   # watch /lib/ files
+   watch(%r{^lib/(.+).rb$}) do |m|
+     "spec/#{m[1]}_spec.rb"
+   end
+
+   # watch /spec/ files
+   watch(%r{^spec/(.+).rb$}) do |m|
+     "spec/#{m[1]}.rb"
+   end
+ end
data/README.md ADDED
@@ -0,0 +1,92 @@
+ # PgShrink
+
+ The pg_shrink tool makes it easy to shrink and sanitize a Postgres database,
+ allowing you to specify custom filtering and sanitization via a simple
+ DSL in a configuration file (Shrinkfile).
+
+ The pg_shrink tool takes two arguments: a URL to a Postgres database and
+ the path to a configuration file (defaulting to the Shrinkfile in the
+ current directory).
+
+ The simplest way to learn how to use pg_shrink is via an example.
+
+ ## Usage
+
+ ### Example Shrinkfile
+ A Shrinkfile is a simple Ruby DSL that defines which tables are filtered
+ and sanitized, in what way, and the relationships between those tables
+ when filtering or sanitization should be propagated.
+
+ ```ruby
+ filter_table :users do |f|
+   f.filter_by do |u|
+     u[:name].match(/save me/)
+   end
+   f.sanitize do |u|
+     u[:email] = "sanitized_email#{u[:id]}@fake.com"
+     u
+   end
+
+   f.filter_subtable(:user_preferences, :foreign_key => :user_id)
+ end
+ ```
+
+ This particular example will filter the users table to contain only users with
+ a name matching the regular expression /save me/, sanitize the email field on
+ those users, and then filter the user_preferences table to contain only
+ preferences associated with those users.
+
+ ### Full DSL
40
+ See the Shrinkfile.example file in this directory for a complete list of the
41
+ available DSL.
42
+
43
+ ### Options
44
+ ```
45
+ -u, --url URL *REQUIRED* Specify URL to postgres database.
46
+ WARNING: This database should be a backup and not
47
+ be changing at the time pg_shrink is run. It will
48
+ be modified in place.
49
+ -c, --config SHRINKFILE Specify a configuration file for how to shrink
50
+ --force Force run without confirmation.
51
+ -h, --help Show this message and exit
52
+ ```
53
+
54
+ ## How does it work?
55
+
56
+ The pg_shrink command runs through 4 major steps.
57
+ * 1. Options parsing.
58
+ * 2. Shrinkfile parsing and setting up the structure of tables, filters, sanitizers,
59
+ and their subtable relationships
60
+ * 3. Iterating through tables and doing a depth-first filter on them.
61
+ * 4. Iterating through tables and doing a depth-first sanitization on them.
62
+
63
+ **Step 1:** Option parsing is simple. pg_shrink uses `optparse`
64
+
65
+ **Step 2:** Before anything is run, the Shrinkfile is completely parsed, setting up a set of tables, the filters and sanitizers on those tables, and any subtable relationships
66
+
67
+ **Step 3:** For each table, the filters on that table are iterated through. For each filter, the records in the table are pulled out in batches, the filter is applied to that batch, and then any subtable filters are applied for records impacted within that batch.
68
+
69
+ **Step 4:** For each table, the sanitizers on that table are iterated through. For each filter, the records in the table are pulled out in batches, the sanitizers is applied to that batch, and then any subtable sanitizers are applied for records impacted within that batch.
70
+
71
+ ## Installation
72
+
73
+ Add this line to your application's Gemfile:
74
+
75
+ gem 'pg_shrink'
76
+
77
+ And then execute:
78
+
79
+ $ bundle
80
+
81
+ Or install it yourself as:
82
+
83
+ $ gem install pg_shrink
84
+
85
+ ## Contributing
86
+
87
+ 1. Fork it
88
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
89
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
90
+ 4. Push to the branch (`git push origin my-new-feature`)
91
+ 5. Create new Pull Request
92
+
data/Rakefile ADDED
@@ -0,0 +1,10 @@
+ require 'rspec/core/rake_task'
+ require 'bundler/gem_tasks'
+
+ # Default directory to look in is `spec/`
+ # Run with `rake spec`
+ RSpec::Core::RakeTask.new(:spec) do |task|
+   task.rspec_opts = ['--color', '--format', 'nested']
+ end
+
+ task :default => :spec
data/Shrinkfile.example ADDED
@@ -0,0 +1,74 @@
+ filter_table :users do |f|
+
+   # filter_by takes a block and yields the fields of each record (as a hash).
+   # The block should return true to keep the record, false if not. For
+   # ease of use and extensibility, we allow multiple filter_by blocks
+   # rather than forcing all logic into one block.
+   f.filter_by do |u|
+     u[:id] % 1000 == 0
+   end
+
+   # lock takes a block and yields the fields of each record (as a hash of
+   # fieldname => value). If the block returns true this record is immune to
+   # all further filtering.
+   f.lock do |u|
+     u[:email].split('@').last == 'apartmentlist.com'
+   end
+
+   # sanitize takes a block, yields the fields of each record as a hash of
+   # fieldname => value, and should return a new set of fields that has been
+   # sanitized however desired.
+   f.sanitize do |u|
+     u[:email] = "somerandomemail#{u[:id]}@foo.bar"
+     u
+   end
+
+   # filter_subtable indicates a child table to filter based upon the filtering
+   # done on this table.
+   f.filter_subtable(:favorites, :foreign_key => :user_id)
+
+   # If need be, you can filter by a different key besides the id. All
+   # filtering will be done before all sanitization, so you don't need to
+   # worry about whether these keys have been munged.
+   f.filter_subtable(:email_preferences, :foreign_key => :user_email,
+                                         :primary_key => :email)
+
+   # You can also filter by a polymorphic reference by specifying the
+   # type_key and type.
+   f.filter_subtable(:polymorphic_references, :foreign_key => :context_id,
+                                              :type_key => :context_type,
+                                              :type => 'User')
+
+   # If it feels more natural, you can define additional filters
+   # or locks within a filter_subtable definition.
+   f.filter_subtable(:lockable_table, :foreign_key => :user_id) do |sub|
+     sub.lock do |u|
+       u[:locked] == true
+     end
+   end
+
+   # To keep things consistent, if you're sanitizing something that also exists
+   # in other places (i.e. tables aren't fully normalized, and you have email
+   # in 2 places), you probably need to be able to specify this somehow.
+   f.sanitize_subtable(:email_preferences,
+                       :local_field => :email,
+                       :foreign_field => :user_email)
+
+ end
+
+ # If you have a chain of dependencies (i.e. users has favorites, and favorites
+ # has some additional set of tables hanging off it), you can define the 2nd
+ # relationship in its own filter_table block, and the tool will figure out
+ # that going from users => favorites also implies
+ # favorites => favorite_related_table.
+ filter_table :favorites do |f|
+   f.filter_subtable(:favorite_related_table, :foreign_key => :favorite_id)
+ end
+
+ # You can completely remove a table as well, or remove it minus a locked set
+ # of rows.
+ remove_table :removables do |f|
+   f.lock do |u|
+     u[:name] == "Keep Me"
+   end
+ end
data/bin/pg_shrink ADDED
@@ -0,0 +1,44 @@
+ #!/usr/bin/env ruby
+ require 'optparse'
+
+ $:.unshift(File.join(File.dirname(__FILE__), "/../lib"))
+ require 'pg_shrink'
+
+ def parse_options!(options)
+   OptionParser.new do |opts|
+     banner = <<-TXT
+       pg_shrink helps you shrink and sanitize your psql database!
+       Please make sure you have a Shrinkfile or specify one using -c
+     TXT
+     opts.banner = banner
+
+     url_desc = '*REQUIRED* Specify URL to postgres database. WARNING: ' +
+                'This database should be a backup and not be changing at the ' +
+                'time pg_shrink is run. It will be modified in place.'
+     opts.on('-u', '--url URL', url_desc) do |url|
+       options[:url] = url
+     end
+
+     config_desc = '(Optional) Specify configuration file for how to shrink. ' +
+                   'Will default to Shrinkfile in directory command is being ' +
+                   'run from.'
+     opts.on('-c', '--config Shrinkfile', config_desc) do |config|
+       options[:config] = config
+     end
+
+     force_desc = 'Force run without confirmation.'
+     opts.on('--force', force_desc) do
+       options[:force] = true
+     end
+
+     opts.on('-h', '--help', 'Show this message and exit') do
+       puts opts
+       exit
+     end
+   end.parse!
+ end
+
+ options = PgShrink.blank_options
+ parse_options!(options)
+ PgShrink.run(options)
data/lib/pg_shrink/database/postgres.rb ADDED
@@ -0,0 +1,91 @@
+ module PgShrink
+   require 'pg'
+   require 'sequel'
+   class Database::Postgres < Database
+
+     attr_accessor :connection
+     DEFAULT_OPTS = {
+       postgres_url: nil,
+       host: 'localhost',
+       port: nil,
+       username: 'postgres',
+       password: nil,
+       database: 'test',
+       batch_size: 10000
+     }.freeze
+
+     def connection_string
+       if @opts[:postgres_url]
+         @opts[:postgres_url]
+       else
+         str = "postgres://#{@opts[:username]}"
+         str << ":#{@opts[:password]}" if @opts[:password]
+         str << "@#{@opts[:host]}"
+         str << ":#{@opts[:port]}" if @opts[:port]
+         str << "/#{@opts[:database]}"
+       end
+     end
+
+     def batch_size
+       @opts[:batch_size]
+     end
+
+     def initialize(opts)
+       @opts = DEFAULT_OPTS.merge(opts.symbolize_keys)
+       @connection = Sequel.connect(connection_string)
+     end
+
+     # WARNING! This assumes the database is not changing during run. If
+     # requirements change we may need to insert a lock.
+     # Batches are taken by primary key range, so a batch may contain fewer
+     # than batch_size records when keys are sparse.
+     def records_in_batches(table_name)
+       table = self.table(table_name)
+       primary_key = table.primary_key
+       max_id = self.connection["select max(#{primary_key}) from #{table_name}"].
+                first[:max]
+       return if max_id.nil? # empty table, nothing to yield
+       i = 1
+       while i <= max_id do
+         sql = "select * from #{table_name} where " +
+               "#{primary_key} >= #{i} and #{primary_key} < #{i + batch_size}"
+         batch = self.connection[sql].all
+         yield(batch)
+         i = i + batch_size
+       end
+     end
+
+     def update_records(table_name, old_records, new_records)
+       table = self.table(table_name)
+       primary_key = table.primary_key
+
+       old_records_by_key = old_records.index_by {|r| r[primary_key]}
+       new_records_by_key = new_records.index_by {|r| r[primary_key]}
+
+       if (new_records_by_key.keys - old_records_by_key.keys).size > 0
+         raise "Bad voodoo! New records have primary keys not in old records!"
+       end
+
+       deleted_record_ids = old_records_by_key.keys - new_records_by_key.keys
+       if deleted_record_ids.any?
+         raise "Bad voodoo! Some records missing in new records!"
+       end
+
+       # TODO: This can be optimized if performance is too slow. Will impact
+       # the speed of sanitizing the already-filtered dataset.
+       new_records.each do |rec|
+         if old_records_by_key[rec[primary_key]] != rec
+           self.connection.from(table_name).
+                where(primary_key => rec[primary_key]).
+                update(rec)
+         end
+       end
+     end
+
+     def get_records(table_name, opts)
+       self.connection.from(table_name).where(opts).all
+     end
+
+     def delete_records(table_name, condition_to_delete)
+       self.connection.from(table_name).where(condition_to_delete).delete
+     end
+   end
+ end
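
For reference, a hedged sketch of constructing this adapter directly (pg_shrink normally builds it from CLI options; the database name below is hypothetical, and since `initialize` calls `Sequel.connect`, running this requires a reachable local Postgres server):

```ruby
require 'pg_shrink'

# Hypothetical options; DEFAULT_OPTS fills in anything omitted.
db = PgShrink::Database::Postgres.new(
  'username'   => 'postgres',
  'database'   => 'my_app_backup',  # hypothetical backup database
  'batch_size' => 5000
)
db.connection_string  # => "postgres://postgres@localhost/my_app_backup"
db.records_in_batches(:users) { |batch| puts batch.size }
```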
data/lib/pg_shrink/database.rb ADDED
@@ -0,0 +1,61 @@
+ module PgShrink
+   class Database
+     def tables
+       @tables ||= {}
+     end
+
+     # table should return a unique table representation for this database.
+     def table(table_name)
+       tables[table_name] ||= Table.new(self, table_name)
+     end
+
+     def filter_table(table_name, opts = {})
+       table = self.table(table_name)
+       # we want to allow composability of filter specifications, so we always
+       # update existing options rather than overriding
+       table.update_options(opts)
+       yield table if block_given?
+     end
+
+     def remove_table(table_name)
+       table = self.table(table_name)
+       # Yield first so that locks defined in the block (see
+       # Shrinkfile.example) can protect rows from removal.
+       yield table if block_given?
+       table.mark_for_removal!
+     end
+
+     # records_in_batches should yield a series of batches of records.
+     def records_in_batches(table_name)
+       raise "implement in subclass"
+     end
+
+     # get_records should take a table name and options hash and return a
+     # specific set of records
+     def get_records(table_name, opts)
+       raise "implement in subclass"
+     end
+
+     # The update_records method takes a set of original records and a new
+     # set of records. It should throw an error if there are any records
+     # missing, so it should not be used for deletion.
+     def update_records(table_name, old_records, new_records)
+       raise "implement in subclass"
+     end
+
+     # The delete_records method takes a table name and a condition to
+     # delete on.
+     def delete_records(table_name, condition)
+       raise "implement in subclass"
+     end
+
+     def filter!
+       tables.values.each(&:filter!)
+     end
+
+     def sanitize!
+       tables.values.each(&:sanitize!)
+     end
+
+     def shrink!
+       filter!
+       sanitize!
+     end
+   end
+ end
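
The comments above define the contract a concrete adapter must satisfy. As a hedged illustration (not part of the gem), a minimal in-memory subclass might look like:

```ruby
module PgShrink
  # Hypothetical in-memory adapter, e.g. for exercising filters in tests.
  class Database::Memory < Database
    def initialize(data)
      @data = data  # {table_name => [record hashes]}
    end

    # Yield the whole table as a single batch.
    def records_in_batches(table_name)
      yield @data[table_name]
    end

    # opts is a hash of field => value-or-array, mimicking Sequel-style where().
    def get_records(table_name, opts)
      @data[table_name].select { |r| opts.all? { |k, v| Array(v).include?(r[k]) } }
    end

    # Replace changed records in place, matching on :id (Table's default key).
    def update_records(table_name, old_records, new_records)
      new_records.each do |rec|
        idx = @data[table_name].index { |r| r[:id] == rec[:id] }
        @data[table_name][idx] = rec if idx
      end
    end

    def delete_records(table_name, condition)
      @data[table_name].reject! { |r| condition.all? { |k, v| Array(v).include?(r[k]) } }
    end
  end
end
```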
data/lib/pg_shrink/sub_table_filter.rb ADDED
@@ -0,0 +1,21 @@
+ module PgShrink
+   class SubTableFilter < SubTableOperator
+
+     def propagate!(old_parent_data, new_parent_data)
+       old_batch_keys = old_parent_data.map {|record| record[@opts[:primary_key]]}
+       new_batch_keys = new_parent_data.map {|record| record[@opts[:primary_key]]}
+
+       foreign_key = @opts[:foreign_key]
+       finder_options = {foreign_key => old_batch_keys}
+       if @opts[:type_key] && @opts[:type]
+         finder_options[@opts[:type_key]] = @opts[:type]
+       end
+
+       old_records = table.get_records(finder_options)
+       table.filter_batch(old_records) do |record|
+         new_batch_keys.include?(record[foreign_key])
+       end
+     end
+
+   end
+ end
data/lib/pg_shrink/sub_table_operator.rb ADDED
@@ -0,0 +1,44 @@
+ module PgShrink
+   class SubTableOperator
+     attr_accessor :parent, :table_name, :database
+
+     def default_opts
+       {:foreign_key =>
+          "#{ActiveSupport::Inflector.singularize(parent.table_name.to_s)}_id",
+        :primary_key => :id
+       }
+     end
+
+     def name
+       "#{table_name} #{self.class.name.demodulize} from #{parent.table_name}"
+     end
+
+     def table
+       database.table(table_name)
+     end
+
+     def validate_opts!(opts)
+       if opts[:type_key] && !opts[:type]
+         raise "Error: #{name} has type_key set but no type"
+       end
+       if opts[:type] && !opts[:type_key]
+         raise "Error: #{name} has type set but no type_key"
+       end
+     end
+
+     def initialize(parent, table_name, opts = {})
+       self.parent = parent
+       self.table_name = table_name
+       self.database = parent.database
+       @opts = default_opts.merge(opts)
+
+       validate_opts!(@opts)
+     end
+
+     def propagate!(old_parent_data, new_parent_data)
+       raise "Implement in subclass"
+     end
+   end
+ end
data/lib/pg_shrink/sub_table_sanitizer.rb ADDED
@@ -0,0 +1,33 @@
+ module PgShrink
+   class SubTableSanitizer < SubTableOperator
+
+     def validate_opts!(opts)
+       unless opts[:local_field] && opts[:foreign_field]
+         raise "Error: #{name} must define :local_field and :foreign_field"
+       end
+       super(opts)
+     end
+
+     def propagate!(old_parent_data, new_parent_data)
+       old_batch = old_parent_data.index_by {|record| record[@opts[:primary_key]]}
+       new_batch = new_parent_data.index_by {|record| record[@opts[:primary_key]]}
+
+       foreign_key = @opts[:foreign_key]
+       finder_options = {foreign_key => old_batch.keys}
+       if @opts[:type_key] && @opts[:type]
+         finder_options[@opts[:type_key]] = @opts[:type]
+       end
+
+       parent_field = @opts[:local_field].to_sym
+       child_field = @opts[:foreign_field].to_sym
+
+       old_child_records = table.get_records(finder_options)
+       table.sanitize_batch(old_child_records) do |record|
+         parent_record = new_batch[record[foreign_key]]
+         # A child may reference a parent absent from the new batch; leave
+         # such records untouched rather than erroring on a nil parent.
+         record[child_field] = parent_record[parent_field] if parent_record
+         record
+       end
+     end
+
+   end
+ end
data/lib/pg_shrink/table.rb ADDED
@@ -0,0 +1,159 @@
+ module PgShrink
+   class Table
+     attr_accessor :table_name
+     attr_accessor :database
+     attr_accessor :opts
+     attr_reader :filters, :sanitizers, :subtable_filters, :subtable_sanitizers
+
+     # TODO: Figure out, do we need to be able to support tables with no
+     # keys? If so, how should we handle that?
+     def initialize(database, table_name, opts = {})
+       self.table_name = table_name
+       self.database = database
+       @opts = opts
+       @filters = []
+       @sanitizers = []
+       @subtable_filters = []
+       @subtable_sanitizers = []
+     end
+
+     def update_options(opts)
+       @opts = @opts.merge(opts)
+     end
+
+     def filter_by(opts = {}, &block)
+       self.filters << TableFilter.new(self, opts, &block)
+     end
+
+     def filter_subtable(table_name, opts = {})
+       filter = SubTableFilter.new(self, table_name, opts)
+       self.subtable_filters << filter
+       yield filter.table if block_given?
+     end
+
+     def lock(opts = {}, &block)
+       @lock = block
+     end
+
+     def locked?(record)
+       if @lock
+         @lock.call(record)
+       end
+     end
+
+     def sanitize(opts = {}, &block)
+       self.sanitizers << TableSanitizer.new(self, opts, &block)
+     end
+
+     def sanitize_subtable(table_name, opts = {})
+       sanitizer = SubTableSanitizer.new(self, table_name, opts)
+       self.subtable_sanitizers << sanitizer
+       yield sanitizer.table if block_given?
+     end
+
+     def update_records(original_records, new_records)
+       if self.database
+         database.update_records(self.table_name, original_records, new_records)
+       end
+     end
+
+     def delete_records(old_records, new_records)
+       # Guard against a nil database, mirroring update_records above.
+       return unless self.database
+       if primary_key
+         deleted_keys = old_records.map {|r| r[primary_key]} -
+                        new_records.map {|r| r[primary_key]}
+         self.database.delete_records(table_name, primary_key => deleted_keys)
+       else
+         # TODO: Do we need to speed this up? Or is this an unusual enough
+         # case that we can leave it slow?
+         deleted_records = old_records - new_records
+         deleted_records.each do |rec|
+           self.database.delete_records(table_name, rec)
+         end
+       end
+     end
+
+     def records_in_batches(&block)
+       if self.database
+         self.database.records_in_batches(self.table_name, &block)
+       else
+         yield []
+       end
+     end
+
+     def get_records(finder_options)
+       if self.database
+         self.database.get_records(self.table_name, finder_options)
+       else
+         []
+       end
+     end
+
+     def filter_subtables(old_set, new_set)
+       self.subtable_filters.each do |subtable_filter|
+         subtable_filter.propagate!(old_set, new_set)
+       end
+     end
+
+     def sanitize_subtables(old_set, new_set)
+       self.subtable_sanitizers.each do |subtable_sanitizer|
+         subtable_sanitizer.propagate!(old_set, new_set)
+       end
+     end
+
+     def filter_batch(batch, &filter_block)
+       new_set = batch.select do |record|
+         locked?(record) || filter_block.call(record.dup)
+       end
+       delete_records(batch, new_set)
+       filter_subtables(batch, new_set)
+     end
+
+     def sanitize_batch(batch, &sanitize_block)
+       new_set = batch.map do |record|
+         if locked?(record)
+           record.dup
+         else
+           sanitize_block.call(record.dup)
+         end
+       end
+       update_records(batch, new_set)
+       sanitize_subtables(batch, new_set)
+     end
+
+     def filter!
+       self.filters.each do |filter|
+         self.records_in_batches do |batch|
+           self.filter_batch(batch) do |record|
+             filter.apply(record)
+           end
+         end
+       end
+     end
+
+     def sanitize!
+       self.sanitizers.each do |sanitizer|
+         self.records_in_batches do |batch|
+           self.sanitize_batch(batch) do |record|
+             sanitizer.apply(record)
+           end
+         end
+       end
+     end
+
+     # We use a filter for this, so that all other dependencies etc. behave
+     # as would be expected.
+     def mark_for_removal!
+       self.filter_by { false }
+     end
+
+     # Check explicitly for nil because we want to be able to set primary_key
+     # to false for e.g. join tables
+     def primary_key
+       opts[:primary_key].nil? ? :id : opts[:primary_key]
+     end
+
+     def shrink!
+       filter!
+       sanitize!
+     end
+   end
+ end