pg_easy_replicate 0.2.6 → 0.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 81be66e36a67bb505548d91dabf061ad83724c6d849bb611701769ffe3e7abc8
-  data.tar.gz: 2223eb05e483c2481a68b157e75f9aca9fd62f5205de8fe5664ce91364bbaf73
+  metadata.gz: 8b12c7664573414ae5b4421e16c2a4d603a1550abd52cd677fa97c0a3f4187d3
+  data.tar.gz: 03baf5fa2d60d841475024d8e6a4e822546a5f5180ddcdead5ee33ed89bc472b
 SHA512:
-  metadata.gz: e3a93263182fc9c68c5b417d5512b021753ef356efb2e6c11ce2098e4c49b2ea41f21533f9aa590032a8eee0663a8ded1633f2d87399a11bd4c150da56c8f19e
-  data.tar.gz: a24dc626daa748db907b07f8542008f2d5b6de558f6b80aa6f5879435a3eac92a6282e3a6bebc6d637502967adf3b77e9c57d8e67bd3a58136ca2c1f871b85be
+  metadata.gz: c18024373cdf4313723599eaee6f57d84b99ff52afffbbab3a393bae9e453ba43d300f8ec3424396af12fcbf40de46c92846110777a7a6d4ebb6fe9adae47f5f
+  data.tar.gz: 8f4e8dc80d25b072b6e44a4e17fc6d367fe27368a9643e4e0cc4a39dd846cd1abf247aedbc6e887b1285293939f967131e0d5254ae53751f584af2e13a72c783
data/CHANGELOG.md CHANGED
@@ -1,4 +1,10 @@
-## [0.2.4] - 2024-04-14
+## [0.2.6] - 2024-06-04
+
+- Quote table name in the VACUUM SQL - #118
+- Exclude tables created by extensions - #120
+- Use db user when adding tables to replication - #130
+
+## [0.2.5] - 2024-04-14
 
 - List only permanent tables - #113
 
data/Gemfile.lock CHANGED
@@ -1,10 +1,11 @@
 PATH
   remote: .
   specs:
-    pg_easy_replicate (0.2.6)
+    pg_easy_replicate (0.3.0)
       ougai (~> 2.0.0)
       pg (~> 1.5.3)
-      sequel (>= 5.69, < 5.82)
+      pg_query (~> 5.1.0)
+      sequel (>= 5.69, < 5.83)
       thor (>= 1.2.2, < 1.4.0)
 
 GEM
@@ -14,6 +15,12 @@ GEM
     bigdecimal (3.1.8)
     coderay (1.1.3)
     diff-lcs (1.5.1)
+    google-protobuf (4.28.0-arm64-darwin)
+      bigdecimal
+      rake (>= 13)
+    google-protobuf (4.28.0-x86_64-linux)
+      bigdecimal
+      rake (>= 13)
     haml (6.1.1)
       temple (>= 0.8.2)
       thor
@@ -24,11 +31,13 @@ GEM
     oj (3.14.3)
     ougai (2.0.0)
       oj (~> 3.10)
-    parallel (1.24.0)
-    parser (3.3.2.0)
+    parallel (1.25.1)
+    parser (3.3.3.0)
       ast (~> 2.4.1)
       racc
-    pg (1.5.6)
+    pg (1.5.7)
+    pg_query (5.1.0)
+      google-protobuf (>= 3.22.3)
     prettier_print (1.2.1)
     pry (0.14.2)
       coderay (~> 1.1)
@@ -38,8 +47,8 @@ GEM
     rake (13.2.1)
     rbs (3.1.0)
     regexp_parser (2.9.2)
-    rexml (3.2.8)
-      strscan (>= 3.0.9)
+    rexml (3.3.6)
+      strscan
     rspec (3.13.0)
       rspec-core (~> 3.13.0)
       rspec-expectations (~> 3.13.0)
@@ -72,7 +81,7 @@ GEM
       rubocop (~> 1.41)
     rubocop-packaging (0.5.2)
       rubocop (>= 1.33, < 2.0)
-    rubocop-performance (1.21.0)
+    rubocop-performance (1.21.1)
       rubocop (>= 1.48.1, < 2.0)
       rubocop-ast (>= 1.31.1, < 2.0)
     rubocop-rake (0.6.0)
@@ -85,7 +94,7 @@ GEM
     rubocop-rspec_rails (2.28.2)
       rubocop (~> 1.40)
     ruby-progressbar (1.13.0)
-    sequel (5.81.0)
+    sequel (5.82.0)
       bigdecimal
     strscan (3.1.0)
     syntax_tree (6.2.0)
data/README.md CHANGED
@@ -69,7 +69,12 @@ All [Logical Replication Restrictions](https://www.postgresql.org/docs/current/l
 
 ## Usage
 
-Ensure `SOURCE_DB_URL` and `TARGET_DB_URL` are present as environment variables in the runtime environment. The URL are of the postgres connection string format. Example:
+Ensure `SOURCE_DB_URL` and `TARGET_DB_URL` are present as environment variables in the runtime environment.
+
+- `SOURCE_DB_URL` = The database that you want to replicate FROM.
+- `TARGET_DB_URL` = The database that you want to replicate TO.
+
+The URLs should be in postgres connection string format. Example:
 
 ```bash
 $ export SOURCE_DB_URL="postgres://USERNAME:PASSWORD@localhost:5432/DATABASE_NAME"
@@ -84,6 +89,12 @@ You can extend the default timeout by setting the following environment variable
 $ export PG_EASY_REPLICATE_STATEMENT_TIMEOUT="10s" # default 5s
 ```
 
+You can enable additional debug logging by setting the following environment variable:
+
+```bash
+$ export DEBUG="true"
+```
+
 Any `pg_easy_replicate` command can be run the same way with the docker image as well, as long as the container is running in an environment where it has access to both databases. Example:
 
 ```bash
@@ -165,12 +176,62 @@ Once the bootstrap is complete, you can start the sync. Starting the sync sets u
 **NOTE**: `start_sync` by default drops all indices in the target database for performance reasons, and automatically re-adds them during `switchover`. You can opt out of this behavior with `--no-recreate-indices-post-copy`.
 
 ```bash
-$ pg_easy_replicate start_sync --group-name database-cluster-1
+$ pg_easy_replicate start_sync --group-name database-cluster-1 [--track-ddl]
 
 {"name":"pg_easy_replicate","hostname":"PKHXQVK6DW","pid":22113,"level":30,"time":"2023-06-19T15:54:54.874-04:00","v":0,"msg":"Setting up publication","publication_name":"pger_publication_database_cluster_1","version":"0.1.0"}
 ...
 ```
 
+### DDL Changes Management
+
+`pg_easy_replicate` now supports tracking and applying DDL (Data Definition Language) changes between the source and target databases. To track DDL changes, pass `--track-ddl` to `start_sync`.
+
+This feature ensures that most schema changes made during replication to the source database tables being replicated are tracked, so that you can apply them at will before or after switchover.
+
+#### Listing DDL Changes
+
+To view the DDL changes that have been tracked:
+
+```bash
+$ pg_easy_replicate list_ddl_changes -g <group-name> [-l <limit>]
+```
+
+This command displays a list of DDL changes in JSON format:
+
+```
+[
+  {
+    "id": 1,
+    "group_name": "cluster-1",
+    "event_type": "ddl_command_end",
+    "object_type": "table",
+    "object_identity": "public.pgbench_accounts",
+    "ddl_command": "ALTER TABLE public.pgbench_accounts ADD COLUMN test_column VARCHAR(255)",
+    "created_at": "2024-08-31 15:42:33 UTC"
+  }
+]
+```
+
+#### Applying DDL Changes
+
+`pg_easy_replicate` won't automatically apply the changes for you. To apply the tracked DDL changes to the target database:
+
+```bash
+$ pg_easy_replicate apply_ddl_change -g <group-name> [-i <change-id>]
+```
+
+If you specify a change ID with the `-i` option, only that specific change will be applied. If you don't specify an ID, you'll be prompted to apply all pending changes.
+
+```bash
+$ pg_easy_replicate apply_ddl_change -g cluster-1
+The following DDL changes will be applied:
+ID: 1, Type: table, Command: ALTER TABLE public.pgbench_accounts ADD COLUMN test_column VARCHAR(255)...
+
+Do you want to apply all these changes? (y/n): y
+...
+All pending DDL changes applied successfully.
+```
+
 ### Stats
 
 You can inspect or watch stats any time during the sync process. The stats give you an idea of when the sync started, current flush/write lag, how many tables are in `replicating`, `copying` or other stages, and more.
@@ -247,6 +308,30 @@ $ pg_easy_replicate switchover --group-name database-cluster-2
 ...
 ```
 
+## Exclude tables from replication
+
+By default all tables are added for replication, but you can exclude tables if necessary. Example:
+
+```bash
+...
+$ pg_easy_replicate bootstrap --group-name database-cluster-1 --copy-schema
+$ pg_easy_replicate start_sync --group-name database-cluster-1 --schema-name public --exclude-tables "events"
+...
+```
+
+### Cleanup
+
+Use `cleanup` if you want to remove all bootstrapped data for the specified group. Additionally, you can pass `-e` or `--everything` to clean up all schema changes for bootstrapped tables, users and any publication/subscription data.
+
+```bash
+$ pg_easy_replicate cleanup --group-name database-cluster-1 --everything
+
+{"name":"pg_easy_replicate","hostname":"PKHXQVK6DW","pid":24192,"level":30,"time":"2023-06-19T16:05:23.033-04:00","v":0,"msg":"Dropping groups table","version":"0.1.0"}
+
+{"name":"pg_easy_replicate","hostname":"PKHXQVK6DW","pid":24192,"level":30,"time":"2023-06-19T16:05:23.033-04:00","v":0,"msg":"Dropping schema","version":"0.1.0"}
+...
+```
+
 ## Switchover strategies with minimal downtime
 
 For minimal downtime, it'd be best to watch/tail the stats and wait until `switchover_completed_at` is updated with a timestamp. Once that happens you can perform any of the following strategies. Note: These are just suggestions and `pg_easy_replicate` doesn't provide any functionalities for this.
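The `list_ddl_changes` output shown in the README diff above is plain JSON, so it can be consumed programmatically. A minimal sketch using only Ruby's stdlib; the payload is copied from the sample output above, and the variable names here are illustrative, not part of the gem's API:

```ruby
require "json"

# Sample payload matching the README's list_ddl_changes output.
output = <<~JSON
  [
    {
      "id": 1,
      "group_name": "cluster-1",
      "object_type": "table",
      "ddl_command": "ALTER TABLE public.pgbench_accounts ADD COLUMN test_column VARCHAR(255)"
    }
  ]
JSON

changes = JSON.parse(output)
# Print each tracked command, e.g. to review before applying.
changes.each { |c| puts "##{c["id"]}: #{c["ddl_command"]}" }
```

A consumer could filter this list by `object_type` or `group_name` before deciding which change IDs to pass to `apply_ddl_change -i`.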
data/docker-compose.yml CHANGED
@@ -9,6 +9,7 @@ services:
       POSTGRES_PASSWORD: james-bond123@7!'3aaR
       POSTGRES_DB: postgres-db
     command: >
+      -c max_connections=200
       -c wal_level=logical
       -c ssl=on
       -c ssl_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
@@ -25,6 +26,7 @@ services:
       POSTGRES_PASSWORD: james-bond123@7!'3aaR
       POSTGRES_DB: postgres-db
     command: >
+      -c max_connections=200
       -c wal_level=logical
       -c ssl=on
       -c ssl_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
@@ -21,6 +21,11 @@ module PgEasyReplicate
                   default: "",
                   desc:
                     "Comma separated list of table names. Default: All tables"
+    method_option :exclude_tables,
+                  aliases: "-e",
+                  default: "",
+                  desc:
+                    "Comma separated list of table names to exclude. Default: None"
     method_option :schema_name,
                   aliases: "-s",
                   desc:
@@ -30,6 +35,7 @@ module PgEasyReplicate
         special_user_role: options[:special_user_role],
         copy_schema: options[:copy_schema],
         tables: options[:tables],
+        exclude_tables: options[:exclude_tables],
         schema_name: options[:schema_name],
       )
 
@@ -48,6 +54,11 @@ module PgEasyReplicate
                   aliases: "-c",
                   boolean: true,
                   desc: "Copy schema to the new database"
+    method_option :track_ddl,
+                  aliases: "-d",
+                  type: :boolean,
+                  default: false,
+                  desc: "Enable DDL tracking for the group"
     desc "bootstrap",
          "Sets up temporary tables for information required during runtime"
     def bootstrap
@@ -77,11 +88,6 @@ module PgEasyReplicate
                   aliases: "-g",
                   required: true,
                   desc: "Name of the group to provision"
-    method_option :group_name,
-                  aliases: "-g",
-                  required: true,
-                  desc:
-                    "Name of the grouping for this collection of source and target DB"
     method_option :schema_name,
                   aliases: "-s",
                   desc:
@@ -91,12 +97,21 @@ module PgEasyReplicate
                   default: "",
                   desc:
                     "Comma separated list of table names. Default: All tables"
+    method_option :exclude_tables,
+                  aliases: "-e",
+                  default: "",
+                  desc:
+                    "Comma separated list of table names to exclude. Default: None"
     method_option :recreate_indices_post_copy,
                   type: :boolean,
-                  default: true,
+                  default: false,
                   aliases: "-r",
                   desc:
                     "Drop all non-primary indices before copy and recreate them post-copy"
+    method_option :track_ddl,
+                  type: :boolean,
+                  default: false,
+                  desc: "Enable DDL tracking for the group"
     def start_sync
       PgEasyReplicate::Orchestrate.start_sync(options)
     end
@@ -111,7 +126,7 @@ module PgEasyReplicate
       PgEasyReplicate::Orchestrate.stop_sync(group_name: options[:group_name])
     end
 
-    desc "switchover ",
+    desc "switchover",
          "Puts the source database in read only mode after all the data is flushed and written"
     method_option :group_name,
                   aliases: "-g",
@@ -119,21 +134,18 @@ module PgEasyReplicate
                   desc: "Name of the group previously provisioned"
     method_option :lag_delta_size,
                   aliases: "-l",
-                  desc: "The size of the lag to watch for before switchover. Default 200KB."
+                  desc:
+                    "The size of the lag to watch for before switchover. Default 200KB."
     method_option :skip_vacuum_analyze,
                   type: :boolean,
                   default: false,
                   aliases: "-s",
                   desc: "Skip vacuum analyzing tables before switchover."
-    # method_option :bi_directional,
-    #               aliases: "-b",
-    #               desc:
-    #                 "Setup replication from target database to source database"
     def switchover
       PgEasyReplicate::Orchestrate.switchover(
         group_name: options[:group_name],
         lag_delta_size: options[:lag_delta_size],
-        skip_vacuum_analyze: options[:skip_vacuum_analyze]
+        skip_vacuum_analyze: options[:skip_vacuum_analyze],
       )
     end
 
@@ -151,6 +163,71 @@ module PgEasyReplicate
       end
     end
 
+    desc "list_ddl_changes", "Lists recent DDL changes in the source database"
+    method_option :group_name,
+                  aliases: "-g",
+                  required: true,
+                  desc: "Name of the group"
+    method_option :limit,
+                  aliases: "-l",
+                  type: :numeric,
+                  default: 100,
+                  desc: "Limit the number of DDL changes to display"
+    def list_ddl_changes
+      changes =
+        PgEasyReplicate::DDLManager.list_ddl_changes(
+          group_name: options[:group_name],
+          limit: options[:limit],
+        )
+      puts JSON.pretty_generate(changes)
+    end
+
+    desc "apply_ddl_change", "Applies DDL changes to the target database"
+    method_option :group_name,
+                  aliases: "-g",
+                  required: true,
+                  desc: "Name of the group"
+    method_option :id,
+                  aliases: "-i",
+                  type: :numeric,
+                  desc:
+                    "ID of the specific DDL change to apply. If not provided, all changes will be applied."
+    def apply_ddl_change
+      if options[:id]
+        PgEasyReplicate::DDLManager.apply_ddl_change(
+          group_name: options[:group_name],
+          id: options[:id],
+        )
+        puts "DDL change with ID #{options[:id]} applied successfully."
+      else
+        changes =
+          PgEasyReplicate::DDLManager.list_ddl_changes(
+            group_name: options[:group_name],
+          )
+        if changes.empty?
+          puts "No pending DDL changes to apply."
+          return
+        end
+
+        puts "The following DDL changes will be applied:"
+        changes.each do |change|
+          puts "ID: #{change[:id]}, Type: #{change[:object_type]}, Command: #{change[:ddl_command]}"
+        end
+        puts ""
+        print("Do you want to apply all these changes? (y/n): ")
+        confirmation = $stdin.gets.chomp.downcase
+
+        if confirmation == "y"
+          PgEasyReplicate::DDLManager.apply_all_ddl_changes(
+            group_name: options[:group_name],
+          )
+          puts "All pending DDL changes applied successfully."
+        else
+          puts "Operation cancelled."
+        end
+      end
+    end
+
     desc "version", "Prints the version"
     def version
       puts PgEasyReplicate::VERSION
@@ -0,0 +1,256 @@
+# frozen_string_literal: true
+
+require "pg_query"
+
+module PgEasyReplicate
+  class DDLAudit
+    extend Helper
+
+    class << self
+      def setup(group_name)
+        conn = connect_to_internal_schema
+        return if conn.table_exists?(table_name)
+
+        begin
+          conn.create_table(table_name) do
+            primary_key(:id)
+            String(:group_name, null: false)
+            String(:event_type, null: false)
+            String(:object_type)
+            String(:object_identity)
+            String(:ddl_command, text: true)
+            DateTime(:created_at, default: Sequel::CURRENT_TIMESTAMP)
+          end
+
+          create_trigger_function(conn, group_name)
+          create_event_triggers(conn, group_name)
+        rescue => e
+          abort_with("Failed to set up DDL audit: #{e.message}")
+        ensure
+          conn&.disconnect
+        end
+      end
+
+      def create(
+        group_name,
+        event_type,
+        object_type,
+        object_identity,
+        ddl_command
+      )
+        conn = connect_to_internal_schema
+        begin
+          conn[table_name].insert(
+            group_name: group_name,
+            event_type: event_type,
+            object_type: object_type,
+            object_identity: object_identity,
+            ddl_command: ddl_command,
+            created_at: Time.now.utc,
+          )
+        rescue => e
+          abort_with("Adding DDL audit entry failed: #{e.message}")
+        ensure
+          conn&.disconnect
+        end
+      end
+
+      def list_changes(group_name, limit: 100)
+        conn = connect_to_internal_schema
+        begin
+          conn[table_name]
+            .where(group_name: group_name)
+            .order(Sequel.desc(:id))
+            .limit(limit)
+            .all
+        rescue => e
+          abort_with("Listing DDL changes failed: #{e.message}")
+        ensure
+          conn&.disconnect
+        end
+      end
+
+      def apply_change(source_conn_string, target_conn_string, group_name, id)
+        ddl_queries = fetch_ddl_query(source_conn_string, group_name, id: id)
+        apply_ddl_changes(target_conn_string, ddl_queries)
+      end
+
+      def apply_all_changes(source_conn_string, target_conn_string, group_name)
+        ddl_queries = fetch_ddl_query(source_conn_string, group_name)
+        apply_ddl_changes(target_conn_string, ddl_queries)
+      end
+
+      def drop(group_name)
+        conn = connect_to_internal_schema
+        begin
+          drop_event_triggers(conn, group_name)
+          drop_trigger_function(conn, group_name)
+          conn[table_name].where(group_name: group_name).delete
+        rescue => e
+          abort_with("Dropping DDL audit failed: #{e.message}")
+        ensure
+          conn&.disconnect
+        end
+      end
+
+      private
+
+      def table_name
+        :pger_ddl_audits
+      end
+
+      def connect_to_internal_schema(conn_string = nil)
+        Query.connect(
+          connection_url: conn_string || source_db_url,
+          schema: internal_schema_name,
+        )
+      end
+
+      def create_trigger_function(conn, group_name)
+        group = PgEasyReplicate::Group.find(group_name)
+        tables = group[:table_names].split(",").map(&:strip)
+        schema_name = group[:schema_name]
+        sanitized_group_name = sanitize_identifier(group_name)
+
+        full_table_names = tables.map { |table| "#{schema_name}.#{table}" }
+        puts "full_table_names: #{full_table_names}"
+        conn.run(<<~SQL)
+          CREATE OR REPLACE FUNCTION #{internal_schema_name}.pger_ddl_trigger_#{sanitized_group_name}() RETURNS event_trigger AS $$
+          DECLARE
+            obj record;
+            ddl_command text;
+            affected_table text;
+          BEGIN
+            SELECT current_query() INTO ddl_command;
+
+            IF TG_EVENT = 'ddl_command_end' THEN
+              FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
+              LOOP
+                IF obj.object_type = 'table' AND obj.object_identity = ANY(ARRAY['#{full_table_names.join("','")}']) THEN
+                  INSERT INTO #{internal_schema_name}.#{table_name} (group_name, event_type, object_type, object_identity, ddl_command)
+                  VALUES ('#{group_name}', TG_EVENT, obj.object_type, obj.object_identity, ddl_command);
+                ELSIF obj.object_type = 'index' THEN
+                  SELECT (regexp_match(ddl_command, 'ON\\s+(\\S+)'))[1] INTO affected_table;
+                  IF affected_table = ANY(ARRAY['#{full_table_names.join("','")}']) THEN
+                    INSERT INTO #{internal_schema_name}.#{table_name} (group_name, event_type, object_type, object_identity, ddl_command)
+                    VALUES ('#{group_name}', TG_EVENT, obj.object_type, obj.object_identity, ddl_command);
+                  END IF;
+                END IF;
+              END LOOP;
+            ELSIF TG_EVENT = 'sql_drop' THEN
+              FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
+              LOOP
+                IF obj.object_type IN ('table', 'index') AND
+                   (obj.object_identity = ANY(ARRAY['#{full_table_names.join("','")}']) OR
+                    obj.object_identity ~ ('^' || '#{schema_name}' || '\\.(.*?)_.*$'))
+                THEN
+                  INSERT INTO #{internal_schema_name}.#{table_name} (group_name, event_type, object_type, object_identity, ddl_command)
+                  VALUES ('#{group_name}', TG_EVENT, obj.object_type, obj.object_identity, ddl_command);
+                END IF;
+              END LOOP;
+            ELSIF TG_EVENT = 'table_rewrite' THEN
+              FOR obj IN SELECT * FROM pg_event_trigger_table_rewrite_oid()
+              LOOP
+                IF obj.oid::regclass::text = ANY(ARRAY['#{full_table_names.join("','")}']) THEN
+                  INSERT INTO #{internal_schema_name}.#{table_name} (group_name, event_type, object_type, object_identity, ddl_command)
+                  VALUES ('#{group_name}', TG_EVENT, 'table', obj.oid::regclass::text, ddl_command);
+                END IF;
+              END LOOP;
+            END IF;
+          END;
+          $$ LANGUAGE plpgsql;
+        SQL
+      rescue => e
+        abort_with("Creating DDL trigger function failed: #{e.message}")
+      end
+
+      def create_event_triggers(conn, group_name)
+        sanitized_group_name = sanitize_identifier(group_name)
+        conn.run(<<~SQL)
+          DROP EVENT TRIGGER IF EXISTS pger_ddl_trigger_#{sanitized_group_name};
+          CREATE EVENT TRIGGER pger_ddl_trigger_#{sanitized_group_name} ON ddl_command_end
+          EXECUTE FUNCTION #{internal_schema_name}.pger_ddl_trigger_#{sanitized_group_name}();
+
+          DROP EVENT TRIGGER IF EXISTS pger_drop_trigger_#{sanitized_group_name};
+          CREATE EVENT TRIGGER pger_drop_trigger_#{sanitized_group_name} ON sql_drop
+          EXECUTE FUNCTION #{internal_schema_name}.pger_ddl_trigger_#{sanitized_group_name}();
+
+          DROP EVENT TRIGGER IF EXISTS pger_table_rewrite_trigger_#{sanitized_group_name};
+          CREATE EVENT TRIGGER pger_table_rewrite_trigger_#{sanitized_group_name} ON table_rewrite
+          EXECUTE FUNCTION #{internal_schema_name}.pger_ddl_trigger_#{sanitized_group_name}();
+        SQL
+      rescue => e
+        abort_with("Creating event triggers failed: #{e.message}")
+      end
+
+      def drop_event_triggers(conn, group_name)
+        sanitized_group_name = sanitize_identifier(group_name)
+        conn.run(<<~SQL)
+          DROP EVENT TRIGGER IF EXISTS pger_ddl_trigger_#{sanitized_group_name};
+          DROP EVENT TRIGGER IF EXISTS pger_drop_trigger_#{sanitized_group_name};
+          DROP EVENT TRIGGER IF EXISTS pger_table_rewrite_trigger_#{sanitized_group_name};
+        SQL
+      rescue => e
+        abort_with("Dropping event triggers failed: #{e.message}")
+      end
+
+      def drop_trigger_function(conn, group_name)
+        sanitized_group_name = sanitize_identifier(group_name)
+        conn.run(
+          "DROP FUNCTION IF EXISTS #{internal_schema_name}.pger_ddl_trigger_#{sanitized_group_name}();",
+        )
+      rescue => e
+        abort_with("Dropping trigger function failed: #{e.message}")
+      end
+
+      def self.extract_table_info(sql)
+        parsed = PgQuery.parse(sql)
+        stmt = parsed.tree.stmts.first.stmt
+
+        case stmt
+        when PgQuery::CreateStmt, PgQuery::IndexStmt, PgQuery::AlterTableStmt
+          schema_name = stmt.relation.schemaname || "public"
+          table_name = stmt.relation.relname
+          "#{schema_name}.#{table_name}"
+        end
+      rescue PgQuery::ParseError
+        nil
+      end
+
+      def sanitize_identifier(identifier)
+        identifier.gsub(/[^a-zA-Z0-9_]/, "_")
+      end
+
+      def fetch_ddl_query(source_conn_string, group_name, id: nil)
+        source_conn = connect_to_internal_schema(source_conn_string)
+        begin
+          query = source_conn[table_name].where(group_name: group_name)
+          query = query.where(id: id) if id
+          result = query.order(:id).select_map(:ddl_command)
+          result.uniq
+        rescue => e
+          abort_with("Fetching DDL queries failed: #{e.message}")
+        ensure
+          source_conn&.disconnect
+        end
+      end
+
+      def apply_ddl_changes(target_conn_string, ddl_queries)
+        target_conn = Query.connect(connection_url: target_conn_string)
+        begin
+          ddl_queries.each do |query|
+            target_conn.run(query)
+          rescue => e
+            abort_with(
+              "Error executing DDL command: #{query}. Error: #{e.message}",
+            )
+          end
+        rescue => e
+          abort_with("Applying DDL changes failed: #{e.message}")
+        ensure
+          target_conn&.disconnect
+        end
+      end
+    end
+  end
+end
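The per-group trigger and function names in the file above are derived with `sanitize_identifier`, which maps any character outside `[A-Za-z0-9_]` to an underscore so a group name like `database-cluster-1` can be embedded safely in unquoted SQL identifiers. A standalone sketch of that derivation (the method body mirrors the class above; the surrounding script is illustrative):

```ruby
# Mirrors DDLAudit's sanitize_identifier: restrict group names to
# characters that are safe inside unquoted SQL identifiers.
def sanitize_identifier(identifier)
  identifier.gsub(/[^a-zA-Z0-9_]/, "_")
end

group = "database-cluster-1"
# The same sanitized name is reused for the trigger function and all
# three event triggers, so dropping them later only needs the group name.
puts "pger_ddl_trigger_#{sanitize_identifier(group)}"
# → pger_ddl_trigger_database_cluster_1
```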
@@ -0,0 +1,56 @@
+# frozen_string_literal: true
+
+module PgEasyReplicate
+  module DDLManager
+    extend Helper
+
+    class << self
+      def setup_ddl_tracking(
+        group_name:,
+        conn_string: source_db_url,
+        schema: "public"
+      )
+        DDLAudit.setup(group_name)
+      end
+
+      def cleanup_ddl_tracking(
+        group_name:,
+        conn_string: source_db_url,
+        schema: "public"
+      )
+        DDLAudit.drop(group_name)
+      end
+
+      def list_ddl_changes(
+        group_name:,
+        conn_string: source_db_url,
+        schema: "public",
+        limit: 100
+      )
+        DDLAudit.list_changes(group_name, limit: limit)
+      end
+
+      def apply_ddl_change(
+        group_name:,
+        id:,
+        source_conn_string: source_db_url,
+        target_conn_string: target_db_url,
+        schema: "public"
+      )
+        DDLAudit.apply_change(
+          source_conn_string,
+          target_conn_string,
+          group_name,
+          id,
+        )
+      end
+
+      def apply_all_ddl_changes(
+        group_name:,
+        source_conn_string: source_db_url,
+        target_conn_string: target_db_url,
+        schema: "public"
+      )
+        DDLAudit.apply_all_changes(
+          source_conn_string,
+          target_conn_string,
+          group_name,
+        )
+      end
+    end
+  end
+end
@@ -73,14 +73,18 @@ module PgEasyReplicate
       abort(msg)
     end
 
-    def determine_tables(conn_string:, list: "", schema: nil)
+    def determine_tables(conn_string:, list: "", exclude_list: "", schema: nil)
       schema ||= "public"
-
-      tables = list&.split(",") || []
-      if tables.size > 0
-        tables
+
+      tables = convert_to_array(list)
+      exclude_tables = convert_to_array(exclude_list)
+      validate_table_lists(tables, exclude_tables, schema)
+
+      if tables.empty?
+        all_tables = list_all_tables(schema: schema, conn_string: conn_string)
+        all_tables - (exclude_tables + %w[spatial_ref_sys])
       else
-        list_all_tables(schema: schema, conn_string: conn_string) - %w[ spatial_ref_sys ]
+        tables
       end
     end
 
@@ -101,5 +105,24 @@ module PgEasyReplicate
         .map(&:values)
         .flatten
     end
+
+    def convert_to_array(input)
+      input.is_a?(Array) ? input : input&.split(",") || []
+    end
+
+    def validate_table_lists(tables, exclude_tables, schema_name)
+      table_list = convert_to_array(tables)
+      exclude_table_list = convert_to_array(exclude_tables)
+
+      if !table_list.empty? && !exclude_table_list.empty?
+        abort_with("Options --tables(-t) and --exclude-tables(-e) cannot be used together.")
+      elsif !table_list.empty?
+        if table_list.size > 0 && (schema_name.nil? || schema_name == "")
+          abort_with("Schema name is required if tables are passed")
+        end
+      elsif exclude_table_list.size > 0 && (schema_name.nil? || schema_name == "")
+        abort_with("Schema name is required if exclude tables are passed")
+      end
+    end
   end
 end
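The include/exclude resolution that `determine_tables` performs in the hunk above can be exercised in isolation. A minimal sketch under the assumption that `all_tables` stands in for the result of `list_all_tables`; the method name `resolve_tables` is illustrative, not the gem's API:

```ruby
# Sketch of determine_tables' list handling: an explicit --tables list
# wins; otherwise start from every table in the schema and subtract the
# exclusions plus the always-skipped PostGIS table spatial_ref_sys.
def resolve_tables(all_tables, list: "", exclude_list: "")
  to_array = ->(input) { input.is_a?(Array) ? input : input&.split(",") || [] }
  tables = to_array.call(list)
  excluded = to_array.call(exclude_list)
  if !tables.empty? && !excluded.empty?
    raise ArgumentError, "--tables and --exclude-tables cannot be used together"
  end

  tables.empty? ? all_tables - (excluded + %w[spatial_ref_sys]) : tables
end

puts resolve_tables(%w[users events spatial_ref_sys], exclude_list: "events").inspect
# → ["users"]
```

Note that the exclusion path only applies when no explicit table list is given, which is why the gem rejects using both options together.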
@@ -15,6 +15,7 @@ module PgEasyReplicate
15
15
  schema: schema_name,
16
16
  conn_string: source_db_url,
17
17
  list: options[:tables],
18
+ exclude_list: options[:exclude_tables],
18
19
  )
19
20
 
20
21
  if options[:recreate_indices_post_copy]
@@ -51,25 +52,19 @@ module PgEasyReplicate
51
52
  started_at: Time.now.utc,
52
53
  recreate_indices_post_copy: options[:recreate_indices_post_copy],
53
54
  )
54
- rescue => e
55
- stop_sync(
56
- group_name: options[:group_name],
57
- source_conn_string: source_db_url,
58
- target_conn_string: target_db_url,
59
- )
60
55
 
61
- if Group.find(options[:group_name])
62
- Group.update(group_name: options[:group_name], failed_at: Time.now)
63
- else
64
- Group.create(
65
- name: options[:group_name],
66
- table_names: tables.join(","),
67
- schema_name: schema_name,
68
- started_at: Time.now.utc,
69
- failed_at: Time.now.utc,
56
+ if options[:track_ddl]
57
+ DDLManager.setup_ddl_tracking(
58
+ conn_string: source_db_url,
59
+ group_name: options[:group_name],
60
+ schema: schema_name,
70
61
  )
71
62
  end
72
-
63
+ rescue => e
64
+ stop_sync(group_name: options[:group_name])
65
+ if Group.find(options[:group_name])
66
+ Group.update(name: options[:group_name], failed_at: Time.now.utc)
67
+ end
73
68
  abort_with("Starting sync failed: #{e.message}")
74
69
  end
75
70
 
@@ -179,8 +174,8 @@ module PgEasyReplicate
179
174
 
180
175
  def stop_sync(
181
176
  group_name:,
182
- source_conn_string: source_db_url,
183
- target_conn_string: target_db_url
177
+ source_conn_string: nil,
178
+ target_conn_string: nil
184
179
  )
185
180
  logger.info(
186
181
  "Stopping sync",
@@ -191,29 +186,34 @@ module PgEasyReplicate
191
186
  )
192
187
  drop_publication(
193
188
  group_name: group_name,
194
- conn_string: source_conn_string,
189
+ conn_string: source_conn_string || source_db_url,
195
190
  )
196
191
  drop_subscription(
197
192
  group_name: group_name,
198
- target_conn_string: target_conn_string,
193
+ target_conn_string: target_conn_string || target_db_url,
199
194
  )
200
195
  rescue => e
201
- raise "Unable to stop sync user: #{e.message}"
196
+ abort_with("Unable to stop sync: #{e.message}")
202
197
  end
203
198
 
204
199
  def switchover(
205
200
  group_name:,
206
- source_conn_string: source_db_url,
207
- target_conn_string: target_db_url,
208
201
  lag_delta_size: nil,
209
- skip_vacuum_analyze: false
202
+ skip_vacuum_analyze: false,
203
+ source_conn_string: nil,
204
+ target_conn_string: nil
210
205
  )
211
206
  group = Group.find(group_name)
207
+ abort_with("Group not found: #{group_name}") unless group
208
+
212
209
  tables_list = group[:table_names].split(",")
213
210
 
211
+ source_conn = source_conn_string || source_db_url
212
+ target_conn = target_conn_string || target_db_url
213
+
214
214
  unless skip_vacuum_analyze
215
215
  run_vacuum_analyze(
216
- conn_string: target_conn_string,
216
+ conn_string: target_conn,
217
217
  tables: tables_list,
218
218
  schema: group[:schema_name],
219
219
  )
@@ -224,39 +224,35 @@ module PgEasyReplicate
         if group[:recreate_indices_post_copy]
           IndexManager.wait_for_replication_completion(group_name: group_name)
           IndexManager.recreate_indices(
-            source_conn_string: source_db_url,
-            target_conn_string: target_db_url,
+            source_conn_string: source_conn,
+            target_conn_string: target_conn,
             tables: tables_list,
             schema: group[:schema_name],
           )
         end

-        # Watch for lag again, because it could've grown during index recreation
         watch_lag(group_name: group_name, lag: lag_delta_size || DEFAULT_LAG)

         revoke_connections_on_source_db(group_name)
         wait_for_remaining_catchup(group_name)
-        refresh_sequences(
-          conn_string: target_conn_string,
-          schema: group[:schema_name],
-        )
+        refresh_sequences(conn_string: target_conn, schema: group[:schema_name])
         mark_switchover_complete(group_name)
-        # Run vacuum analyze to refresh the planner post switchover
+
         unless skip_vacuum_analyze
           run_vacuum_analyze(
-            conn_string: target_conn_string,
+            conn_string: target_conn,
             tables: tables_list,
             schema: group[:schema_name],
           )
         end
+
         drop_subscription(
           group_name: group_name,
-          target_conn_string: target_conn_string,
+          target_conn_string: target_conn,
         )
       rescue => e
         restore_connections_on_source_db(group_name)
-
-        abort_with("Switchover sync failed: #{e.message}")
+        abort_with("Switchover failed: #{e.message}")
       end

       def watch_lag(group_name:, wait_time: DEFAULT_WAIT, lag: DEFAULT_LAG)
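The switchover hunk above introduces `source_conn = source_conn_string || source_db_url`, so an explicitly passed connection string takes precedence over the environment-derived URL. A minimal standalone sketch of that fallback (the URLs here are placeholders, not the gem's defaults):

```ruby
# Prefer an explicitly supplied connection string; fall back to the
# environment-derived default otherwise. Placeholder URLs for illustration.
default_url = "postgres://localhost:5432/source-db"

source_conn = nil || default_url
override_conn = "postgres://db-proxy:6432/source-db" || default_url

puts source_conn    # the default wins when no override is given
puts override_conn  # the override wins when present

# Caveat: Ruby treats only nil and false as falsy, so an empty string
# would NOT fall through to the default.
empty_conn = "" || default_url
```

This works because only `nil` and `false` are falsy in Ruby, which is also why the empty-string caveat matters for callers that pass `""` instead of `nil`.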
@@ -1,5 +1,5 @@
 # frozen_string_literal: true

 module PgEasyReplicate
-  VERSION = "0.2.6"
+  VERSION = "0.3.0"
 end
@@ -6,6 +6,7 @@ require "pg"
 require "sequel"
 require "open3"
 require "English"
+require "pg_query"

 require "pg_easy_replicate/helper"
 require "pg_easy_replicate/version"
@@ -15,6 +16,8 @@ require "pg_easy_replicate/orchestrate"
 require "pg_easy_replicate/stats"
 require "pg_easy_replicate/group"
 require "pg_easy_replicate/cli"
+require "pg_easy_replicate/ddl_audit"
+require "pg_easy_replicate/ddl_manager"

 Sequel.default_timezone = :utc
 module PgEasyReplicate
@@ -30,11 +33,18 @@ module PgEasyReplicate
       special_user_role: nil,
       copy_schema: false,
       tables: "",
+      exclude_tables: "",
       schema_name: nil
     )
       abort_with("SOURCE_DB_URL is missing") if source_db_url.nil?
       abort_with("TARGET_DB_URL is missing") if target_db_url.nil?

+      if !tables.empty? && !exclude_tables.empty?
+        abort_with(
+          "Options --tables(-t) and --exclude-tables(-e) cannot be used together.",
+        )
+      end
+
       system("which pg_dump")
       pg_dump_exists = $CHILD_STATUS.success?

@@ -65,6 +75,7 @@ module PgEasyReplicate
         tables_have_replica_identity?(
           conn_string: source_db_url,
           tables: tables,
+          exclude_tables: exclude_tables,
           schema_name: schema_name,
         ),
       }
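The hunk above makes `--tables` and `--exclude-tables` mutually exclusive: supplying both non-empty lists aborts bootstrap. A sketch of the check with a hypothetical helper (`table_options_conflict?` is written here only for illustration, it is not part of the gem):

```ruby
# Hypothetical helper mirroring the new guard: a non-empty include list
# and a non-empty exclude list cannot be combined.
def table_options_conflict?(tables, exclude_tables)
  !tables.empty? && !exclude_tables.empty?
end

puts table_options_conflict?("users,orders", "")       # false: only an include list
puts table_options_conflict?("", "schema_migrations")  # false: only an exclude list
puts table_options_conflict?("users", "orders")        # true: bootstrap would abort
```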
@@ -77,6 +88,7 @@ module PgEasyReplicate
       special_user_role: nil,
       copy_schema: false,
       tables: "",
+      exclude_tables: "",
       schema_name: nil
     )
       config_hash =
@@ -84,6 +96,7 @@ module PgEasyReplicate
           special_user_role: special_user_role,
           copy_schema: copy_schema,
           tables: tables,
+          exclude_tables: exclude_tables,
           schema_name: schema_name,
         )
@@ -103,9 +116,7 @@ module PgEasyReplicate
         abort_with("User on source database does not have super user privilege")
       end

-      if tables.split(",").size > 0 && (schema_name.nil? || schema_name == "")
-        abort_with("Schema name is required if tables are passed")
-      end
+      validate_table_lists(tables, exclude_tables, schema_name)

       unless config_hash.dig(:tables_have_replica_identity)
         abort_with(
@@ -152,34 +163,66 @@ module PgEasyReplicate
152
163
  end
153
164
 
154
165
  def cleanup(options)
155
- logger.info("Dropping groups table")
156
- Group.drop
157
-
158
- if options[:everything]
159
- logger.info("Dropping schema")
160
- drop_internal_schema
161
- end
162
-
163
- if options[:everything] || options[:sync]
164
- Orchestrate.drop_publication(
165
- group_name: options[:group_name],
166
- conn_string: source_db_url,
167
- )
168
-
169
- Orchestrate.drop_subscription(
170
- group_name: options[:group_name],
171
- target_conn_string: target_db_url,
166
+ cleanup_steps = [
167
+ -> do
168
+ logger.info("Dropping groups table")
169
+ Group.drop
170
+ end,
171
+ -> do
172
+ if options[:everything]
173
+ logger.info("Dropping schema")
174
+ drop_internal_schema
175
+ end
176
+ end,
177
+ -> do
178
+ if options[:everything] || options[:sync]
179
+ logger.info("Dropping publication on source database")
180
+ Orchestrate.drop_publication(
181
+ group_name: options[:group_name],
182
+ conn_string: source_db_url,
183
+ )
184
+ end
185
+ end,
186
+ -> do
187
+ if options[:everything] || options[:sync]
188
+ logger.info("Dropping subscription on target database")
189
+ Orchestrate.drop_subscription(
190
+ group_name: options[:group_name],
191
+ target_conn_string: target_db_url,
192
+ )
193
+ end
194
+ end,
195
+ -> do
196
+ if options[:everything]
197
+ logger.info("Dropping replication user on source database")
198
+ drop_user(conn_string: source_db_url)
199
+ end
200
+ end,
201
+ -> do
202
+ if options[:everything]
203
+ logger.info("Dropping replication user on target database")
204
+ drop_user(conn_string: target_db_url)
205
+ end
206
+ -> do
207
+ if options[:everything]
208
+ PgEasyReplicate::DDLManager.cleanup_ddl_tracking(
209
+ conn_string: source_db_url,
210
+ group_name: options[:group_name],
211
+ )
212
+ end
213
+ end
214
+ end,
215
+ ]
216
+
217
+ cleanup_steps.each do |step|
218
+ step.call
219
+ rescue => e
220
+ logger.warn(
221
+ "Part of the cleanup step failed with #{e.message}. Continuing...",
172
222
  )
173
223
  end
174
224
 
175
- if options[:everything]
176
- # Drop users at last
177
- logger.info("Dropping replication user on source database")
178
- drop_user(conn_string: source_db_url)
179
-
180
- logger.info("Dropping replication user on target database")
181
- drop_user(conn_string: target_db_url)
182
- end
225
+ logger.info("Cleanup process completed.")
183
226
  rescue => e
184
227
  abort_with("Unable to cleanup: #{e.message}")
185
228
  end
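The rewritten `cleanup` above wraps each teardown action in its own lambda so that one failing step (say, a publication that was already dropped) no longer aborts the rest. A self-contained sketch of that pattern, with strings standing in for the gem's real steps and `puts` for its logger:

```ruby
# Each cleanup action is an independent lambda; a failure in one is
# recorded and the remaining steps still run. Step names are illustrative.
steps = [
  -> { "groups table dropped" },
  -> { raise "publication does not exist" },
  -> { "subscription dropped" },
]

results = []
steps.each do |step|
  results << step.call
rescue => e
  # Mirrors the diff: warn and continue with the next step.
  results << "skipped: #{e.message}"
end

results.each { |r| puts r }
```

Note the `rescue` clause directly inside the `do...end` block, which Ruby supports since 2.6; it scopes the rescue to a single step, unlike the method-level `rescue` that aborts everything.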
@@ -299,9 +342,9 @@ module PgEasyReplicate
       special_user_role: nil,
       grant_permissions_on_schema: false
     )
-      password = connection_info(conn_string)[:password].gsub("'") { "''" }
+      return if user_exists?(conn_string: conn_string, user: internal_user_name)

-      drop_user(conn_string: conn_string)
+      password = connection_info(conn_string)[:password].gsub("'") { "''" }

       sql = <<~SQL
         create role #{quote_ident(internal_user_name)} with password '#{password}' login createdb createrole;
@@ -390,6 +433,7 @@ module PgEasyReplicate
     def tables_have_replica_identity?(
       conn_string:,
       tables: "",
+      exclude_tables: "",
       schema_name: nil
     )
       schema_name ||= "public"
@@ -399,6 +443,7 @@ module PgEasyReplicate
         schema: schema_name,
         conn_string: source_db_url,
         list: tables,
+        exclude_list: exclude_tables,
       )
       return false if table_list.empty?
data/scripts/e2e-start.sh CHANGED
@@ -10,9 +10,17 @@ export SOURCE_DB_URL="postgres://james-bond:james-bond123%407%21%273aaR@localhos
 export TARGET_DB_URL="postgres://james-bond:james-bond123%407%21%273aaR@localhost:5433/postgres-db"
 export PGPASSWORD='james-bond123@7!'"'"''"'"'3aaR'

-# Config check, Bootstrap and cleanup
 echo "===== Performing Bootstrap and cleanup"
 bundle exec bin/pg_easy_replicate bootstrap -g cluster-1 --copy-schema
-bundle exec bin/pg_easy_replicate start_sync -g cluster-1 -s public --recreate-indices-post-copy
+bundle exec bin/pg_easy_replicate start_sync -g cluster-1 -s public --recreate-indices-post-copy --track-ddl
 bundle exec bin/pg_easy_replicate stats -g cluster-1
+
+echo "===== Applying DDL change"
+psql $SOURCE_DB_URL -c "ALTER TABLE public.pgbench_accounts ADD COLUMN test_column VARCHAR(255)"
+
+echo "===== Applying DDL changes"
+echo "Y" | bundle exec bin/pg_easy_replicate apply_ddl_change -g cluster-1
+
+# Switchover
+echo "===== Performing switchover"
 bundle exec bin/pg_easy_replicate switchover -g cluster-1
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: pg_easy_replicate
 version: !ruby/object:Gem::Version
-  version: 0.2.6
+  version: 0.3.0
 platform: ruby
 authors:
 - Shayon Mukherjee
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-06-04 00:00:00.000000000 Z
+date: 2024-08-31 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: ougai
@@ -38,6 +38,20 @@ dependencies:
     - - "~>"
       - !ruby/object:Gem::Version
         version: 1.5.3
+- !ruby/object:Gem::Dependency
+  name: pg_query
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 5.1.0
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 5.1.0
 - !ruby/object:Gem::Dependency
   name: sequel
   requirement: !ruby/object:Gem::Requirement
@@ -47,7 +61,7 @@ dependencies:
       version: '5.69'
     - - "<"
       - !ruby/object:Gem::Version
-        version: '5.82'
+        version: '5.83'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
@@ -57,7 +71,7 @@ dependencies:
       version: '5.69'
     - - "<"
       - !ruby/object:Gem::Version
-        version: '5.82'
+        version: '5.83'
 - !ruby/object:Gem::Dependency
   name: thor
   requirement: !ruby/object:Gem::Requirement
@@ -274,6 +288,8 @@ files:
 - docker-compose.yml
 - lib/pg_easy_replicate.rb
 - lib/pg_easy_replicate/cli.rb
+- lib/pg_easy_replicate/ddl_audit.rb
+- lib/pg_easy_replicate/ddl_manager.rb
 - lib/pg_easy_replicate/group.rb
 - lib/pg_easy_replicate/helper.rb
 - lib/pg_easy_replicate/index_manager.rb