pgsync 0.5.4 → 0.5.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: aa1085a96bdf17dcf1d5e240276faf9d26ee31a4c454e6bb042d9b6e919c6191
- data.tar.gz: c1e3d4be165bf9365d8faedee4cf7177908c1f4565342467691842d376f96d98
+ metadata.gz: 32d1e60a133659dde98d3564d21fca4f1866e8f7b58518ef71f6dcaf59cf8c82
+ data.tar.gz: 650b50aee614ed737d2e99278cbb8910f7980a041dcf58d18288f6bc53826635
  SHA512:
- metadata.gz: 68c7e7820c9e57e0f08f4f46030b9593b274bb1b731926d7c861d59424767c8dc650cf67a5849db463600392d36059fcfa451fe9c2e408213e35463c71f3cc6a
- data.tar.gz: 66080f837d39782139fd59ca60edf7bc18cfc5f37e5f8b76ff8d5b2bfced3fbd2f542cf08775e78d06872eda0ed887cf64993a2b955260eee29b3ed8892e3686
+ metadata.gz: 8bd3e4373558c8f8519f0c118f51b55ac27a3a82fb1ab9a38e85bfee858e08a80fb2c37cab490878d2dda06121c13f45d81b3df6c87b8ede0e0060a50b94fee6
+ data.tar.gz: 922ecb8f57615025e5b92a93f82010bb667fbc968db12ebddeb49507ce90d6dd4c3c16ea3cd5777817bb321a8fefa54f6483b6ffae872f897e7ada64d68831b0
data/CHANGELOG.md CHANGED
@@ -1,3 +1,11 @@
+ ## 0.5.5 (2020-05-13)
+
+ - Added `--jobs` option
+ - Added experimental `--defer-constraints` option
+ - Added experimental `--disable-user-triggers` option
+ - Added experimental `--disable-integrity` option
+ - Improved error message for older libpq
+
  ## 0.5.4 (2020-05-09)

  - Fixed output for `--in-batches`
data/README.md CHANGED
@@ -65,7 +65,7 @@ Or truncate them
  pgsync products "where store_id = 1" --truncate
  ```

- ### Exclude Tables
+ ## Exclude Tables

  ```sh
  pgsync --exclude users
@@ -79,7 +79,7 @@ exclude:
  - table2
  ```

- For Rails, you probably want to exclude schema migrations and ActiveRecord metadata.
+ For Rails, you probably want to exclude schema migrations and Active Record metadata.

  ```yml
  exclude:
@@ -87,7 +87,7 @@ exclude:
  - ar_internal_metadata
  ```

- ### Groups
+ ## Groups

  Define groups in `.pgsync.yml`:

@@ -123,7 +123,7 @@ And run:
  pgsync product:123
  ```

- ### Schema
+ ## Schema

  **Note:** pgsync is designed to sync data. You should use a schema migration tool to manage schema changes. The methods in this section are provided for convenience but not recommended.

@@ -149,10 +149,6 @@ pgsync --schema-only

  pgsync does not try to sync Postgres extensions.

- ## Data Protection
-
- Always make sure your [connection is secure](https://ankane.org/postgres-sslmode-explained) when connecting to a database over a network you don’t fully trust. Your best option is to connect over SSH or a VPN. Another option is to use `sslmode=verify-full`. If you don’t do this, your database credentials can be compromised.
-
  ## Sensitive Information

  Prevent sensitive information like email addresses from leaving the remote server.
@@ -190,29 +186,49 @@ Options for replacement are:

  Rules starting with `unique_` require the table to have a primary key. `unique_phone` requires a numeric primary key.

- ## Multiple Databases
+ ## Foreign Keys

- To use with multiple databases, run:
+ Foreign keys can make it difficult to sync data. Three options are:
+
+ 1. Manually specify the order of tables
+ 2. Use deferrable constraints
+ 3. Disable triggers, which can silently break referential integrity
+
+ When manually specifying the order, use `--jobs 1` so tables are synced one-at-a-time.

  ```sh
- pgsync --init db2
+ pgsync table1,table2,table3 --jobs 1
  ```

- This creates `.pgsync-db2.yml` for you to edit. Specify a database in commands with:
+ If your tables have [deferrable constraints](https://begriffs.com/posts/2017-08-27-deferrable-sql-constraints.html), use:

  ```sh
- pgsync --db db2
+ pgsync --defer-constraints
  ```

- ## Safety
+ **Note:** This feature is currently experimental.

- To keep you from accidentally overwriting production, the destination is limited to `localhost` or `127.0.0.1` by default.
+ To disable triggers and potentially break referential integrity, use:

- To use another host, add `to_safe: true` to your `.pgsync.yml`.
+ ```sh
+ pgsync --disable-integrity
+ ```
+
+ **Note:** This feature is currently experimental.
+
+ ## Triggers
+
+ Disable user triggers with:
+
+ ```sh
+ pgsync --disable-user-triggers
+ ```
+
+ **Note:** This feature is currently experimental.

- ## Large Tables
+ ## Append-Only Tables

- For extremely large tables, sync in batches.
+ For extremely large, append-only tables, sync in batches.

  ```sh
  pgsync large_table --in-batches
@@ -220,15 +236,31 @@ pgsync large_table --in-batches

  The script will resume where it left off when run again, making it great for backfills.

- ## Foreign Keys
+ ## Connection Security
+
+ Always make sure your [connection is secure](https://ankane.org/postgres-sslmode-explained) when connecting to a database over a network you don’t fully trust. Your best option is to connect over SSH or a VPN. Another option is to use `sslmode=verify-full`. If you don’t do this, your database credentials can be compromised.
+
+ ## Safety
+
+ To keep you from accidentally overwriting production, the destination is limited to `localhost` or `127.0.0.1` by default.
+
+ To use another host, add `to_safe: true` to your `.pgsync.yml`.

- By default, tables are copied in parallel. If you use foreign keys, this can cause violations. You can specify tables to be copied serially with:
+ ## Multiple Databases
+
+ To use with multiple databases, run:

  ```sh
- pgsync group1 --debug
+ pgsync --init db2
  ```

- ## Reference
+ This creates `.pgsync-db2.yml` for you to edit. Specify a database in commands with:
+
+ ```sh
+ pgsync --db db2
+ ```
+
+ ## Other Commands

  Help

@@ -242,6 +274,12 @@ Version
  pgsync --version
  ```

+ List tables
+
+ ```sh
+ pgsync --list
+ ```
+
  ## Scripts

  Use groups when possible to take advantage of parallelism.
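The manual-ordering option in the Foreign Keys section above can be sketched with a topological sort: tables referenced by foreign keys sync before the tables that reference them. This helper and the sample schema are illustrative, not part of pgsync.

```ruby
require "tsort"

# sorts tables so foreign key targets come before the tables referencing them
class TableGraph
  include TSort

  def initialize(deps)
    @deps = deps
  end

  def tsort_each_node(&block)
    @deps.each_key(&block)
  end

  def tsort_each_child(table, &block)
    @deps.fetch(table, []).each(&block)
  end
end

# each table maps to the tables it references via foreign keys (sample schema)
deps = {
  "order_items" => ["orders", "products"],
  "orders"      => ["users"],
  "products"    => [],
  "users"       => []
}

order = TableGraph.new(deps).tsort
puts "pgsync #{order.join(",")} --jobs 1"
```

The printed command syncs the tables one-at-a-time in dependency order.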
data/lib/pgsync/client.rb CHANGED
@@ -43,6 +43,7 @@ Options:}
      o.string "-d", "--db", "database"
      o.string "-t", "--tables", "tables to sync"
      o.string "-g", "--groups", "groups to sync"
+     o.integer "-j", "--jobs", "number of tables to sync at a time"
      o.string "--schemas", "schemas to sync"
      o.string "--from", "source"
      o.string "--to", "destination"
@@ -68,6 +69,9 @@ Options:}
      o.integer "--batch-size", "batch size", default: 10000, help: false
      o.float "--sleep", "sleep", default: 0, help: false
      o.boolean "--fail-fast", "stop on the first failed table", default: false
+     o.boolean "--defer-constraints", "defer constraints", default: false
+     o.boolean "--disable-user-triggers", "disable non-system triggers", default: false
+     o.boolean "--disable-integrity", "disable foreign key triggers", default: false
      # o.array "--var", "pass a variable"
      o.boolean "-v", "--version", "print the version"
      o.boolean "-h", "--help", "prints help"
data/lib/pgsync/data_source.rb CHANGED
@@ -2,9 +2,8 @@ module PgSync
    class DataSource
      attr_reader :url

-     def initialize(source, timeout: 3)
+     def initialize(source)
        @url = resolve_url(source)
-       @timeout = timeout
      end

      def exists?
@@ -85,10 +84,25 @@ module PgSync
        row && row["attname"]
      end

+     def triggers(table)
+       query = <<-SQL
+         SELECT
+           tgname AS name,
+           tgisinternal AS internal,
+           tgenabled != 'D' AS enabled,
+           tgconstraint != 0 AS integrity
+         FROM
+           pg_trigger
+         WHERE
+           pg_trigger.tgrelid = $1::regclass
+       SQL
+       execute(query, [quote_ident_full(table)])
+     end
+
      def conn
        @conn ||= begin
          begin
-           ENV["PGCONNECT_TIMEOUT"] ||= @timeout.to_s
+           ENV["PGCONNECT_TIMEOUT"] ||= "3"
            if @url =~ /\Apostgres(ql)?:\/\//
              config = @url
            else
@@ -108,6 +122,10 @@ module PgSync
        end
      end

+     def reconnect
+       @conn.reset
+     end
+
      def dump_command(tables)
        tables = tables ? tables.keys.map { |t| "-t #{Shellwords.escape(quote_ident_full(t))}" }.join(" ") : ""
        "pg_dump -Fc --verbose --schema-only --no-owner --no-acl #{tables} -d #{@url}"
@@ -132,6 +150,21 @@ module PgSync
        @search_path ||= execute("SELECT current_schemas(true)")[0]["current_schemas"][1..-2].split(",")
      end

+     def execute(query, params = [])
+       conn.exec_params(query, params).to_a
+     end
+
+     def transaction
+       if conn.transaction_status == 0
+         # not currently in transaction
+         conn.transaction do
+           yield
+         end
+       else
+         yield
+       end
+     end
+
      private

      def pg_restore_version
@@ -145,17 +178,18 @@ module PgSync
      end

      def conninfo
-       @conninfo ||= conn.conninfo_hash
+       @conninfo ||= begin
+         unless conn.respond_to?(:conninfo_hash)
+           raise Error, "libpq is too old. Upgrade it and run `gem install pg`"
+         end
+         conn.conninfo_hash
+       end
      end

      def quote_ident_full(ident)
        ident.split(".", 2).map { |v| quote_ident(v) }.join(".")
      end

-     def execute(query, params = [])
-       conn.exec_params(query, params).to_a
-     end
-
      def quote_ident(value)
        PG::Connection.quote_ident(value)
      end
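The new `DataSource#transaction` helper above only opens a transaction when the connection is idle (`transaction_status == 0`), so a nested call reuses the outer transaction instead of issuing a second `BEGIN`. A minimal sketch of that guard, with a stub standing in for `PG::Connection`:

```ruby
# StubConn records transaction begin/commit; it is a stand-in for PG::Connection
class StubConn
  attr_reader :log

  def initialize(status)
    @status = status  # 0 mimics PG::PQTRANS_IDLE
    @log = []
  end

  def transaction_status
    @status
  end

  def transaction
    @log << :begin
    yield
    @log << :commit
  end
end

# same shape as the helper in the diff: open a transaction only when idle
def with_transaction(conn)
  if conn.transaction_status == 0
    conn.transaction { yield }
  else
    yield  # already inside a transaction; run the block directly
  end
end

idle = StubConn.new(0)
with_transaction(idle) { }  # wraps the block in begin/commit

busy = StubConn.new(2)      # non-zero status: a transaction is already open
with_transaction(busy) { }  # runs inline, no nested begin
```

This is what lets `--defer-constraints` wrap per-table `destination.transaction` calls inside its own outer transaction without conflict.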
data/lib/pgsync/sync.rb CHANGED
@@ -12,7 +12,7 @@ module PgSync
        opts[opt] ||= config[opt.to_s]
      end

-     # TODO remove deprecations
+     # TODO remove deprecations in 0.6.0
      map_deprecations(args, opts)

      # start
@@ -22,35 +22,17 @@ module PgSync
        raise Error, "Usage:\n pgsync [options]"
      end

-     source = DataSource.new(opts[:from])
      raise Error, "No source" unless source.exists?
-
-     destination = DataSource.new(opts[:to])
      raise Error, "No destination" unless destination.exists?

-     begin
-       # start connections
-       source.host
-       destination.host
-
-       unless opts[:to_safe] || destination.local?
-         raise Error, "Danger! Add `to_safe: true` to `.pgsync.yml` if the destination is not localhost or 127.0.0.1"
-       end
-
-       print_description("From", source)
-       print_description("To", destination)
-     ensure
-       source.close
-       destination.close
+     unless opts[:to_safe] || destination.local?
+       raise Error, "Danger! Add `to_safe: true` to `.pgsync.yml` if the destination is not localhost or 127.0.0.1"
      end

-     tables = nil
-     begin
-       tables = TableList.new(args, opts, source, config).tables
-     ensure
-       source.close
-     end
+     print_description("From", source)
+     print_description("To", destination)

+     tables = TableList.new(args, opts, source, config).tables
      confirm_tables_exist(source, tables, "source")

      if opts[:list]
@@ -78,8 +60,8 @@ module PgSync
      unless opts[:schema_only]
        confirm_tables_exist(destination, tables, "destination")

-       in_parallel(tables, first_schema: source.search_path.find { |sp| sp != "pg_catalog" }) do |table, table_opts|
-         TableSync.new.sync(config, table, opts.merge(table_opts), source.url, destination.url)
+       in_parallel(tables, first_schema: source.search_path.find { |sp| sp != "pg_catalog" }) do |table, table_opts, source, destination|
+         TableSync.new(source: source, destination: destination).sync(config, table, opts.merge(table_opts))
        end
      end

@@ -93,8 +75,6 @@ module PgSync
          raise Error, "Table does not exist in #{description}: #{table}"
        end
      end
-   ensure
-     data_source.close
    end

    def map_deprecations(args, opts)
@@ -197,20 +177,56 @@ module PgSync
      end

      options = {start: start, finish: finish}
-     if @options[:debug] || @options[:in_batches]
-       options[:in_processes] = 0
+
+     jobs = @options[:jobs]
+     if @options[:debug] || @options[:in_batches] || @options[:defer_constraints]
+       warn "--jobs ignored" if jobs
+       jobs = 0
+     end
+
+     if windows?
+       options[:in_threads] = jobs || 4
      else
-       options[:in_threads] = 4 if windows?
+       options[:in_processes] = jobs if jobs
      end

-     # could try to use `raise Parallel::Kill` to fail faster with --fail-fast
-     # see `fast_faster` branch
-     # however, need to make sure connections are cleaned up properly
-     Parallel.each(tables, **options, &block)
+     maybe_defer_constraints do
+       # could try to use `raise Parallel::Kill` to fail faster with --fail-fast
+       # see `fast_faster` branch
+       # however, need to make sure connections are cleaned up properly
+       Parallel.each(tables, **options) do |table, table_opts|
+         # must reconnect for new thread or process
+         # TODO only reconnect first time
+         unless options[:in_processes] == 0
+           source.reconnect
+           destination.reconnect
+         end
+
+         # TODO warn if there are non-deferrable constraints on the table
+
+         yield table, table_opts, source, destination
+       end
+     end

      fail_sync(failed_tables) if failed_tables.any?
    end

+   def maybe_defer_constraints
+     if @options[:defer_constraints]
+       destination.transaction do
+         destination.execute("SET CONSTRAINTS ALL DEFERRED")
+
+         # create a transaction on the source
+         # to ensure we get a consistent snapshot
+         source.transaction do
+           yield
+         end
+       end
+     else
+       yield
+     end
+   end
+
    def fail_sync(failed_tables)
      raise Error, "Sync failed for #{failed_tables.size} table#{failed_tables.size == 1 ? nil : "s"}: #{failed_tables.join(", ")}"
    end
@@ -241,5 +257,24 @@ module PgSync
    def windows?
      Gem.win_platform?
    end
+
+   def source
+     @source ||= data_source(@options[:from])
+   end
+
+   def destination
+     @destination ||= data_source(@options[:to])
+   end
+
+   def data_source(url)
+     ds = DataSource.new(url)
+     ObjectSpace.define_finalizer(self, self.class.finalize(ds))
+     ds
+   end
+
+   def self.finalize(ds)
+     # must use proc instead of stabby lambda
+     proc { ds.close }
+   end
  end
end
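The `--jobs` handling added to `in_parallel` above can be sketched as a small pure function (`parallel_options` is an illustrative name, not a method in pgsync): `--debug`, `--in-batches`, and `--defer-constraints` force serial execution, and Windows falls back to threads instead of processes.

```ruby
# maps pgsync options to the keyword options passed to Parallel.each
def parallel_options(opts, windows: false)
  jobs = opts[:jobs]
  if opts[:debug] || opts[:in_batches] || opts[:defer_constraints]
    warn "--jobs ignored" if jobs  # these modes must run serially
    jobs = 0
  end

  options = {}
  if windows
    # processes are not practical on Windows; default to 4 threads
    options[:in_threads] = jobs || 4
  else
    # omit the key entirely to use Parallel's default process count
    options[:in_processes] = jobs if jobs
  end
  options
end
```

For example, `--defer-constraints` forces `in_processes: 0` even when `--jobs 4` is given, which is why the deferred-constraints transaction can span all tables on a single connection.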
data/lib/pgsync/table_sync.rb CHANGED
@@ -2,14 +2,21 @@ module PgSync
    class TableSync
      include Utils

-     def sync(config, table, opts, source_url, destination_url)
-       start_time = Time.now
+     attr_reader :source, :destination
+
+     def initialize(source:, destination:)
+       @source = source
+       @destination = destination
+     end

-       source = DataSource.new(source_url, timeout: 0)
-       destination = DataSource.new(destination_url, timeout: 0)
+     def sync(config, table, opts)
+       maybe_disable_triggers(table, opts) do
+         sync_data(config, table, opts)
+       end
+     end

-       from_connection = source.conn
-       to_connection = destination.conn
+     def sync_data(config, table, opts)
+       start_time = Time.now

        from_fields = source.columns(table)
        to_fields = destination.columns(table)
@@ -79,13 +86,7 @@ module PgSync
        batch_sql_clause = " #{sql_clause.length > 0 ? "#{sql_clause} AND" : "WHERE"} #{where}"

        batch_copy_to_command = "COPY (SELECT #{copy_fields} FROM #{quote_ident_full(table)}#{batch_sql_clause}) TO STDOUT"
-       to_connection.copy_data "COPY #{quote_ident_full(table)} (#{fields}) FROM STDIN" do
-         from_connection.copy_data batch_copy_to_command do
-           while (row = from_connection.get_copy_data)
-             to_connection.put_copy_data(row)
-           end
-         end
-       end
+       copy(batch_copy_to_command, dest_table: table, dest_fields: fields)

        starting_id += batch_size
        i += 1
@@ -99,38 +100,31 @@ module PgSync

        # create a temp table
        temp_table = "pgsync_#{rand(1_000_000_000)}"
-       to_connection.exec("CREATE TEMPORARY TABLE #{quote_ident_full(temp_table)} AS SELECT * FROM #{quote_ident_full(table)} WITH NO DATA")
+       destination.execute("CREATE TEMPORARY TABLE #{quote_ident_full(temp_table)} AS TABLE #{quote_ident_full(table)} WITH NO DATA")

        # load data
-       to_connection.copy_data "COPY #{quote_ident_full(temp_table)} (#{fields}) FROM STDIN" do
-         from_connection.copy_data copy_to_command do
-           while (row = from_connection.get_copy_data)
-             to_connection.put_copy_data(row)
-           end
-         end
-       end
+       copy(copy_to_command, dest_table: temp_table, dest_fields: fields)

        if opts[:preserve]
          # insert into
-         to_connection.exec("INSERT INTO #{quote_ident_full(table)} (SELECT * FROM #{quote_ident_full(temp_table)} WHERE NOT EXISTS (SELECT 1 FROM #{quote_ident_full(table)} WHERE #{quote_ident_full(table)}.#{quote_ident(primary_key)} = #{quote_ident_full(temp_table)}.#{quote_ident(primary_key)}))")
+         destination.execute("INSERT INTO #{quote_ident_full(table)} (SELECT * FROM #{quote_ident_full(temp_table)} WHERE NOT EXISTS (SELECT 1 FROM #{quote_ident_full(table)} WHERE #{quote_ident_full(table)}.#{quote_ident(primary_key)} = #{quote_ident_full(temp_table)}.#{quote_ident(primary_key)}))")
        else
-         to_connection.transaction do
-           to_connection.exec("DELETE FROM #{quote_ident_full(table)} WHERE #{quote_ident(primary_key)} IN (SELECT #{quote_ident(primary_key)} FROM #{quote_ident_full(temp_table)})")
-           to_connection.exec("INSERT INTO #{quote_ident_full(table)} (SELECT * FROM #{quote_ident(temp_table)})")
+         destination.transaction do
+           destination.execute("DELETE FROM #{quote_ident_full(table)} WHERE #{quote_ident(primary_key)} IN (SELECT #{quote_ident(primary_key)} FROM #{quote_ident_full(temp_table)})")
+           destination.execute("INSERT INTO #{quote_ident_full(table)} (SELECT * FROM #{quote_ident(temp_table)})")
          end
        end
      else
-       destination.truncate(table)
-       to_connection.copy_data "COPY #{quote_ident_full(table)} (#{fields}) FROM STDIN" do
-         from_connection.copy_data copy_to_command do
-           while (row = from_connection.get_copy_data)
-             to_connection.put_copy_data(row)
-           end
-         end
+       # use delete instead of truncate for foreign keys
+       if opts[:defer_constraints]
+         destination.execute("DELETE FROM #{quote_ident_full(table)}")
+       else
+         destination.truncate(table)
        end
+       copy(copy_to_command, dest_table: table, dest_fields: fields)
      end
      seq_values.each do |seq, value|
-       to_connection.exec("SELECT setval(#{escape(seq)}, #{escape(value)})")
+       destination.execute("SELECT setval(#{escape(seq)}, #{escape(value)})")
      end

      message = nil
@@ -153,13 +147,21 @@ module PgSync
      end

      {status: "error", message: message}
-   ensure
-     source.close if source
-     destination.close if destination
    end

    private

+   def copy(source_command, dest_table:, dest_fields:)
+     destination_command = "COPY #{quote_ident_full(dest_table)} (#{dest_fields}) FROM STDIN"
+     destination.conn.copy_data(destination_command) do
+       source.conn.copy_data(source_command) do
+         while (row = source.conn.get_copy_data)
+           destination.conn.put_copy_data(row)
+         end
+       end
+     end
+   end
+
    # TODO better performance
    def rule_match?(table, column, rule)
      regex = Regexp.new('\A' + Regexp.escape(rule).gsub('\*','[^\.]*') + '\z')
@@ -231,5 +233,43 @@ module PgSync
    def quote_string(s)
      s.gsub(/\\/, '\&\&').gsub(/'/, "''")
    end
+
+   def maybe_disable_triggers(table, opts)
+     if opts[:disable_integrity] || opts[:disable_user_triggers]
+       destination.transaction do
+         triggers = destination.triggers(table)
+         triggers.select! { |t| t["enabled"] == "t" }
+         internal_triggers, user_triggers = triggers.partition { |t| t["internal"] == "t" }
+         integrity_triggers = internal_triggers.select { |t| t["integrity"] == "t" }
+         restore_triggers = []
+
+         if opts[:disable_integrity]
+           integrity_triggers.each do |trigger|
+             destination.execute("ALTER TABLE #{quote_ident_full(table)} DISABLE TRIGGER #{quote_ident(trigger["name"])}")
+           end
+           restore_triggers.concat(integrity_triggers)
+         end
+
+         if opts[:disable_user_triggers]
+           # important!
+           # rely on Postgres to disable user triggers
+           # we don't want to accidentally disable non-user triggers if logic above is off
+           destination.execute("ALTER TABLE #{quote_ident_full(table)} DISABLE TRIGGER USER")
+           restore_triggers.concat(user_triggers)
+         end
+
+         result = yield
+
+         # restore triggers that were previously enabled
+         restore_triggers.each do |trigger|
+           destination.execute("ALTER TABLE #{quote_ident_full(table)} ENABLE TRIGGER #{quote_ident(trigger["name"])}")
+         end
+
+         result
+       end
+     else
+       yield
+     end
+   end
  end
end
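The trigger bookkeeping in `maybe_disable_triggers` above can be sketched in isolation: rows come back from the `pg_trigger` query as `"t"`/`"f"` strings (how libpq renders booleans), only enabled triggers are considered, and internal triggers tied to a constraint are the foreign-key integrity triggers. The sample rows below are illustrative.

```ruby
# classifies pg_trigger rows the way maybe_disable_triggers does:
# enabled integrity triggers (for --disable-integrity) vs. enabled
# user triggers (for --disable-user-triggers)
def classify_triggers(rows)
  enabled = rows.select { |t| t["enabled"] == "t" }
  internal, user = enabled.partition { |t| t["internal"] == "t" }
  integrity = internal.select { |t| t["integrity"] == "t" }
  {integrity: integrity, user: user}
end

rows = [
  # internal foreign-key enforcement trigger (sample name)
  {"name" => "RI_ConstraintTrigger_c_16403", "internal" => "t", "enabled" => "t", "integrity" => "t"},
  # user-defined trigger
  {"name" => "audit_changes", "internal" => "f", "enabled" => "t", "integrity" => "f"},
  # already-disabled trigger: never touched, so it is not re-enabled afterward
  {"name" => "old_trigger", "internal" => "f", "enabled" => "f", "integrity" => "f"}
]

result = classify_triggers(rows)
```

Restricting the restore step to triggers that were enabled beforehand is what keeps a previously disabled trigger (like `old_trigger` here) from being switched on by the sync.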
data/lib/pgsync/version.rb CHANGED
@@ -1,3 +1,3 @@
  module PgSync
-   VERSION = "0.5.4"
+   VERSION = "0.5.5"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: pgsync
  version: !ruby/object:Gem::Version
-   version: 0.5.4
+   version: 0.5.5
  platform: ruby
  authors:
  - Andrew Kane
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2020-05-10 00:00:00.000000000 Z
+ date: 2020-05-14 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: parallel