ruby-pg-extras 5.6.17 → 6.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 743910d2bd970c03a9ca1d79d56de46d40c5742aae37d0e24021d58e5c2883c6
-  data.tar.gz: e6a50ce83c92f55f49119feaeddf6796c3035a9b1b1d30a2d2439af4231c744a
+  metadata.gz: 9f5f1e9f65cc072944a879f93e059630dcb225520cfb2573c2f6a44dbbf8d1ad
+  data.tar.gz: db8fbf676357996e0f53d82228811bde3906732b9299a0988961f65b77bc2037
 SHA512:
-  metadata.gz: 7c9f78aed2dfbbadbb41e9957b390b1dc4f0effd997515b7fd8679d677c210d0da9a664f14e16a0bb5f5d26016c98c40331acb9db8327073ec9ee3514a3519e8
-  data.tar.gz: 73f758201aac10e76af41a531b9e0bf0badec4eba00491174225ac46981d1c92e9cfedaaebba694472b036d7b603197d361948999d1a11a2d4af0d990271db7c
+  metadata.gz: 27108e41870ead71fe9e2c47cdbf5b544031b0dd7b1be0f537ca9be0737ed887b43fef9b1a329be57510458cfeffe4df8540af29369ba8b87fe482d4596279b3
+  data.tar.gz: 778f5af8cb8eaa9a47892590dde26a99c3a78a9c940b128e060b121e4881d608ddb190cbeabf01e7d1c4b3ba98890fc0bf2e423de8c6caef2e37b7334558d06a
@@ -15,6 +15,13 @@ jobs:
       ruby-version: ['3.4', '3.3', '3.2', '3.1', '3.0', '2.7']
     steps:
       - uses: actions/checkout@v4
+      - name: Run PostgreSQL 12
+        run: |
+          docker run --env POSTGRES_USER=postgres \
+            --env POSTGRES_DB=ruby-pg-extras-test \
+            --env POSTGRES_PASSWORD=secret \
+            -d -p 5432:5432 postgres:12.20-alpine \
+            postgres -c shared_preload_libraries=pg_stat_statements
       - name: Run PostgreSQL 13
         run: |
           docker run --env POSTGRES_USER=postgres \
@@ -64,6 +71,11 @@ jobs:
           bundle config set --local path 'vendor/bundle'
           bundle install
           sleep 5
+      - name: Run tests for PG 12
+        env:
+          PG_VERSION: 12
+        run: |
+          bundle exec rspec spec/
       - name: Run tests for PG 13
         env:
           PG_VERSION: 13
data/README.md CHANGED
@@ -118,7 +118,7 @@ Keep reading to learn about methods that `diagnose` uses under the hood.
 
 ### `missing_fk_indexes`
 
-This method lists **actual foreign key columns** (based on existing foreign key constraints) which don't have a supporting index. It's recommended to always index foreign key columns because they are commonly used for lookups and join conditions.
+This method lists columns likely to be foreign keys (i.e. column name ending in `_id` and related table exists) which don't have an index. It's recommended to always index foreign key columns because they are used for searching relation objects.
 
 You can add indexes on the columns returned by this query and later check if they are receiving scans using the [unused_indexes method](#unused_indexes). Please remember that each index decreases write performance and autovacuuming overhead, so be careful when adding multiple indexes to often updated tables.
 
@@ -136,34 +136,14 @@ RubyPgExtras.missing_fk_indexes(args: { table_name: "users" })
 
 ```
 
-You can also exclude known/intentional cases using `ignore_list` (array or comma-separated string), with entries like:
-- `"posts.topic_id"` (ignore a specific table+column)
-- `"topic_id"` (ignore this column name for all tables)
-- `"posts.*"` (ignore all columns on a table)
-- `"*"` (ignore everything)
-
-```ruby
-RubyPgExtras.missing_fk_indexes(args: { ignore_list: ["users.company_id", "posts.*"] })
-```
-
 `table_name` argument is optional, if omitted, the method will display missing fk indexes for all the tables.
 
 ## `missing_fk_constraints`
 
-This method shows **columns that look like foreign keys** but don't have a corresponding foreign key constraint yet. Foreign key constraints improve data integrity in the database by preventing relations with nonexisting objects. You can read more about the benefits of using foreign keys [in this blog post](https://pawelurbanek.com/rails-postgresql-data-integrity).
-
-Heuristic notes:
-- A column is considered a candidate if it matches `<table_singular>_id` and the related table exists (underscored prefixes like `account_user_id` are supported).
-- Rails polymorphic associations (`<name>_id` + `<name>_type`) are ignored since they cannot be expressed as real FK constraints.
-
-You can also exclude known/intentional cases using `ignore_list` (array or comma-separated string), with entries like:
-- `"posts.category_id"` (ignore a specific table+column)
-- `"category_id"` (ignore this column name for all tables)
-- `"posts.*"` (ignore all columns on a table)
-- `"*"` (ignore everything)
+Similarly to the previous method, this one shows columns likely to be foreign keys that don't have a corresponding foreign key constraint. Foreign key constraints improve data integrity in the database by preventing relations with nonexisting objects. You can read more about the benefits of using foreign keys [in this blog post](https://pawelurbanek.com/rails-postgresql-data-integrity).
 
 ```ruby
-RubyPgExtras.missing_fk_constraints(args: { table_name: "users", ignore_list: ["users.customer_id", "posts.*"] })
+RubyPgExtras.missing_fk_constraints(args: { table_name: "users" })
 
 +---------------------------------+
 | Missing foreign key constraints |
@@ -384,19 +364,19 @@ This command displays all the current locks, regardless of their type.
 
 RubyPgExtras.outliers(args: { limit: 20 })
 
-query                                   | exec_time        | prop_exec_time | ncalls      | avg_exec_ms | sync_io_time
------------------------------------------+------------------+----------------+-------------+-------------+--------------
-SELECT * FROM archivable_usage_events.. | 154:39:26.431466 | 72.2%          | 34,211,877  | 16          | 00:00:00
-COPY public.archivable_usage_events (.. | 50:38:33.198418  | 23.6%          | 13          | 14014481    | 13:34:21.00108
-COPY public.usage_events (id, reporte.. | 02:32:16.335233  | 1.2%           | 13          | 70332       | 00:34:19.784318
-INSERT INTO usage_events (id, retaine.. | 01:42:59.436532  | 0.8%           | 12,328,187  | 0           | 00:00:00
-SELECT * FROM usage_events WHERE (alp.. | 01:18:10.754354  | 0.6%           | 102,114,301 | 0           | 00:00:00
-UPDATE usage_events SET reporter_id =.. | 00:52:35.683254  | 0.4%           | 23,786,348  | 0           | 00:00:00
-INSERT INTO usage_events (id, retaine.. | 00:49:24.952561  | 0.4%           | 21,988,201  | 0           | 00:00:00
+query                                   | exec_time        | prop_exec_time | ncalls      | sync_io_time
+-----------------------------------------+------------------+----------------+-------------+--------------
+SELECT * FROM archivable_usage_events.. | 154:39:26.431466 | 72.2%          | 34,211,877  | 00:00:00
+COPY public.archivable_usage_events (.. | 50:38:33.198418  | 23.6%          | 13          | 13:34:21.00108
+COPY public.usage_events (id, reporte.. | 02:32:16.335233  | 1.2%           | 13          | 00:34:19.784318
+INSERT INTO usage_events (id, retaine.. | 01:42:59.436532  | 0.8%           | 12,328,187  | 00:00:00
+SELECT * FROM usage_events WHERE (alp.. | 01:18:10.754354  | 0.6%           | 102,114,301 | 00:00:00
+UPDATE usage_events SET reporter_id =.. | 00:52:35.683254  | 0.4%           | 23,786,348  | 00:00:00
+INSERT INTO usage_events (id, retaine.. | 00:49:24.952561  | 0.4%           | 21,988,201  | 00:00:00
 (truncated results for brevity)
 ```
 
-This command displays statements, obtained from `pg_stat_statements`, ordered by the amount of time to execute in aggregate. This includes the statement itself, the total execution time for that statement, the proportion of total execution time for all statements that statement has taken up, the number of times that statement has been called, the average execution time per call in milliseconds, and the amount of time that statement spent on synchronous I/O (reading/writing from the file system).
+This command displays statements, obtained from `pg_stat_statements`, ordered by the amount of time to execute in aggregate. This includes the statement itself, the total execution time for that statement, the proportion of total execution time for all statements that statement has taken up, the number of times that statement has been called, and the amount of time that statement spent on synchronous I/O (reading/writing from the file system).
 
 Typically, an efficient query will have an appropriate ratio of calls to total execution time, with as little time spent on I/O as possible. Queries that have a high total execution time but low call count should be investigated to improve their performance. Queries that have a high proportion of execution time being spent on synchronous I/O should also be investigated.
 
@@ -408,15 +388,15 @@ Typically, an efficient query will have an appropriate ratio of calls to total e
 
 RubyPgExtras.calls(args: { limit: 10 })
 
-qry                                     | exec_time        | prop_exec_time | ncalls      | avg_exec_ms | sync_io_time
------------------------------------------+------------------+----------------+-------------+-------------+--------------
-SELECT * FROM usage_events WHERE (alp.. | 01:18:11.073333  | 0.6%           | 102,120,780 | 0           | 00:00:00
-BEGIN                                   | 00:00:51.285988  | 0.0%           | 47,288,662  | 0           | 00:00:00
-COMMIT                                  | 00:00:52.31724   | 0.0%           | 47,288,615  | 0           | 00:00:00
-SELECT * FROM archivable_usage_event..  | 154:39:26.431466 | 72.2%          | 34,211,877  | 16          | 00:00:00
-UPDATE usage_events SET reporter_id =.. | 00:52:35.986167  | 0.4%           | 23,788,388  | 0           | 00:00:00
-INSERT INTO usage_events (id, retaine.. | 00:49:25.260245  | 0.4%           | 21,990,326  | 0           | 00:00:00
-INSERT INTO usage_events (id, retaine.. | 01:42:59.436532  | 0.8%           | 12,328,187  | 0           | 00:00:00
+qry                                     | exec_time        | prop_exec_time | ncalls      | sync_io_time
+-----------------------------------------+------------------+----------------+-------------+--------------
+SELECT * FROM usage_events WHERE (alp.. | 01:18:11.073333  | 0.6%           | 102,120,780 | 00:00:00
+BEGIN                                   | 00:00:51.285988  | 0.0%           | 47,288,662  | 00:00:00
+COMMIT                                  | 00:00:52.31724   | 0.0%           | 47,288,615  | 00:00:00
+SELECT * FROM archivable_usage_event..  | 154:39:26.431466 | 72.2%          | 34,211,877  | 00:00:00
+UPDATE usage_events SET reporter_id =.. | 00:52:35.986167  | 0.4%           | 23,788,388  | 00:00:00
+INSERT INTO usage_events (id, retaine.. | 00:49:25.260245  | 0.4%           | 21,990,326  | 00:00:00
+INSERT INTO usage_events (id, retaine.. | 01:42:59.436532  | 0.8%           | 12,328,187  | 00:00:00
 (truncated results for brevity)
 ```
 
@@ -667,60 +647,17 @@ This command displays an estimation of table "bloat" – space allocated to a re
 
 RubyPgExtras.vacuum_stats
 
-schema | table | last_manual_vacuum | manual_vacuum_count | last_autovacuum | autovacuum_count | rowcount | dead_rowcount | dead_tup_autovacuum_threshold | n_ins_since_vacuum | insert_autovacuum_threshold | expect_autovacuum
---------+-----------------------+--------------------+---------------------+------------------+------------------+----------------+----------------+-------------------------------+--------------------+-----------------------------+-------------------
-public | log_table | | 0 | 2013-04-26 17:37 | 5 | 18,030 | 0 | 3,656 | 0 | 3,606 |
-public | data_table | | 0 | 2013-04-26 13:09 | 3 | 79 | 28 | 66 | 10 | 16 | yes (dead_tuples)
-public | other_table | | 0 | 2013-04-26 11:41 | 4 | 41 | 47 | 58 | 2,000 | 1,008 | yes (dead_tuples & inserts)
-(truncated results for brevity)
-```
-
-This command displays statistics related to vacuum operations for each table, including last manual vacuum and autovacuum timestamps and counters, an estimation of dead rows, dead-tuple-based autovacuum threshold, number of rows inserted since the last VACUUM (`n_ins_since_vacuum`) and the insert-based autovacuum threshold introduced in PostgreSQL 13 ([PostgreSQL autovacuum configuration](https://www.postgresql.org/docs/current/runtime-config-vacuum.html#RUNTIME-CONFIG-AUTOVACUUM)). It helps determine if current autovacuum thresholds (both dead-tuple and insert-based) are appropriate, and whether an automatic vacuum is expected to be triggered soon.
-
-### `vacuum_progress`
-
-```ruby
-
-RubyPgExtras.vacuum_progress
-
-database | schema | table | pid | phase | heap_blks_total | heap_blks_scanned | heap_blks_vacuumed | index_vacuum_count
-----------+--------+----------+-------+---------------------+-----------------+-------------------+--------------------+--------------------
-app_db | public | users | 12345 | scanning heap | 125000 | 32000 | 0 | 0
-app_db | public | orders | 12346 | vacuuming indexes | 80000 | 80000 | 75000 | 3
-(truncated results for brevity)
-```
-
-This command shows the current progress of `VACUUM` / autovacuum operations by reading `pg_stat_progress_vacuum` ([VACUUM progress reporting docs](https://www.postgresql.org/docs/current/progress-reporting.html#VACUUM-PROGRESS-REPORTING)). It can be used to see which tables are being vacuumed right now, how far each operation has progressed, and how many index vacuum cycles have been performed.
-
-### `analyze_progress`
-
-```ruby
-
-RubyPgExtras.analyze_progress
-
-database | schema | table | pid | phase | sample_blks_total | sample_blks_scanned | ext_stats_total | ext_stats_computed
-----------+--------+----------+-------+----------------------+-------------------+---------------------+-----------------+--------------------
-app_db | public | users | 22345 | acquiring sample rows| 5000 | 1200 | 2 | 0
-app_db | public | orders | 22346 | computing statistics | 8000 | 8000 | 1 | 1
-(truncated results for brevity)
-```
-
-This command displays the current progress of `ANALYZE` and auto-analyze operations using `pg_stat_progress_analyze` ([ANALYZE progress reporting docs](https://www.postgresql.org/docs/current/progress-reporting.html#ANALYZE-PROGRESS-REPORTING)). It helps understand how far statistics collection has progressed for each active analyze and whether extended statistics are being computed.
-
-### `vacuum_io_stats`
-
-```ruby
-
-RubyPgExtras.vacuum_io_stats
-
-backend_type | object | context | reads | writes | writebacks | extends | evictions | reuses | fsyncs | stats_reset
---------------------+----------+----------+---------+---------+-----------+---------+-----------+---------+--------+-------------------------------
-autovacuum worker | relation | vacuum | 5824251 | 3028684 | 0 | 0 | 2588 | 5821460 | 0 | 2025-01-10 11:50:27.583875+00
-autovacuum launcher| relation | autovacuum| 16306 | 2494 | 0 | 2915 | 17785 | 0 | 0 | 2025-01-10 11:50:27.583875+00
+schema | table                 | last_vacuum | last_autovacuum  | rowcount       | dead_rowcount  | autovacuum_threshold | expect_autovacuum
+--------+-----------------------+-------------+------------------+----------------+----------------+----------------------+-------------------
+public | log_table             |             | 2013-04-26 17:37 | 18,030         | 0              | 3,656                |
+public | data_table            |             | 2013-04-26 13:09 | 79             | 28             | 66                   |
+public | other_table           |             | 2013-04-26 11:41 | 41             | 47             | 58                   |
+public | queue_table           |             | 2013-04-26 17:39 | 12             | 8,228          | 52                   | yes
+public | picnic_table          |             |                  | 13             | 0              | 53                   |
 (truncated results for brevity)
 ```
 
-This command surfaces cumulative I/O statistics for autovacuum-related VACUUM activity, based on the `pg_stat_io` view introduced in PostgreSQL 16 ([pg_stat_io documentation](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-IO-VIEW)). It shows how many blocks autovacuum workers have read and written, how many buffer evictions and ring-buffer reuses occurred, and when the statistics were last reset; this is useful for determining whether autovacuum is responsible for I/O spikes, as described in the pganalyze article on `pg_stat_io` ([Tracking cumulative I/O activity by autovacuum and manual VACUUMs](https://pganalyze.com/blog/pg-stat-io#tracking-cumulative-io-activity-by-autovacuum-and-manual-vacuums)). On PostgreSQL versions below 16 this method returns a single informational row indicating that the feature is unavailable.
+This command displays statistics related to vacuum operations for each table, including an estimation of dead rows, last autovacuum and the current autovacuum threshold. This command can be useful when determining if current vacuum thresholds require adjustments, and to determine when the table was last vacuumed.
 
 ### `kill_all`
 
data/Rakefile CHANGED
@@ -5,5 +5,5 @@ RSpec::Core::RakeTask.new(:spec)
 
 desc "Test all PG versions"
 task :test_all do
-  system("PG_VERSION=13 bundle exec rspec spec/ && PG_VERSION=14 bundle exec rspec spec/ && PG_VERSION=15 bundle exec rspec spec/ && PG_VERSION=16 bundle exec rspec spec/ && PG_VERSION=17 bundle exec rspec spec/")
+  system("PG_VERSION=12 bundle exec rspec spec/ && PG_VERSION=13 bundle exec rspec spec/ && PG_VERSION=14 bundle exec rspec spec/ && PG_VERSION=15 bundle exec rspec spec/ && PG_VERSION=16 bundle exec rspec spec/ && PG_VERSION=17 bundle exec rspec spec/")
 end
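As a side note on the Rakefile change above: the long chained command could equivalently be generated from a version list, which makes adding the next PG version a one-token change. A minimal sketch, not the gem's actual Rakefile:

```ruby
# Build the same "PG_VERSION=... && PG_VERSION=..." chain from a list of
# versions instead of one long literal string (illustrative only).
PG_VERSIONS = %w[12 13 14 15 16 17]

command = PG_VERSIONS
  .map { |v| "PG_VERSION=#{v} bundle exec rspec spec/" }
  .join(" && ")
```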
@@ -1,4 +1,13 @@
 services:
+  postgres12:
+    image: postgres:12.20-alpine
+    command: postgres -c shared_preload_libraries=pg_stat_statements
+    environment:
+      POSTGRES_USER: postgres
+      POSTGRES_DB: ruby-pg-extras-test
+      POSTGRES_PASSWORD: secret
+    ports:
+      - '5432:5432'
   postgres13:
     image: postgres:13.16-alpine
     command: postgres -c shared_preload_libraries=pg_stat_statements
@@ -36,7 +45,7 @@ services:
     ports:
       - '5436:5432'
   postgres17:
-    image: postgres:17.7-alpine
+    image: postgres:17.0-alpine
     command: postgres -c shared_preload_libraries=pg_stat_statements
     environment:
       POSTGRES_USER: postgres
@@ -44,13 +53,4 @@ services:
       POSTGRES_PASSWORD: secret
     ports:
       - '5437:5432'
-  postgres18:
-    image: postgres:18.1-alpine
-    command: postgres -c shared_preload_libraries=pg_stat_statements
-    environment:
-      POSTGRES_USER: postgres
-      POSTGRES_DB: ruby-pg-extras-test
-      POSTGRES_PASSWORD: secret
-    ports:
-      - '5438:5432'
 
@@ -7,7 +7,6 @@ require "ruby_pg_extras/size_parser"
 require "ruby_pg_extras/diagnose_data"
 require "ruby_pg_extras/diagnose_print"
 require "ruby_pg_extras/detect_fk_column"
-require "ruby_pg_extras/ignore_list"
 require "ruby_pg_extras/missing_fk_indexes"
 require "ruby_pg_extras/missing_fk_constraints"
 require "ruby_pg_extras/index_info"
@@ -27,13 +26,10 @@ module RubyPgExtras
     long_running_queries mandelbrot outliers
     records_rank seq_scans table_index_scans table_indexes_size
     table_size total_index_size total_table_size
-    unused_indexes duplicate_indexes vacuum_stats vacuum_progress vacuum_io_stats
-    analyze_progress
-    kill_all kill_pid
+    unused_indexes duplicate_indexes vacuum_stats kill_all kill_pid
     pg_stat_statements_reset buffercache_stats
     buffercache_usage ssl_used connections
-    table_schema table_schemas
-    table_foreign_keys foreign_keys
+    table_schema table_foreign_keys
   )
 
   DEFAULT_SCHEMA = ENV["PG_EXTRAS_SCHEMA"] || "public"
@@ -53,11 +49,6 @@ module RubyPgExtras
     outliers: { limit: 10 },
     outliers_legacy: { limit: 10 },
     outliers_17: { limit: 10 },
-    vacuum_progress: {},
-    vacuum_progress_17: {},
-    vacuum_io_stats: {},
-    vacuum_io_stats_legacy: {},
-    analyze_progress: {},
     buffercache_stats: { limit: 10 },
     buffercache_usage: { limit: 20 },
     unused_indexes: { max_scans: 50, schema: DEFAULT_SCHEMA },
@@ -66,14 +57,12 @@ module RubyPgExtras
     index_cache_hit: { schema: DEFAULT_SCHEMA },
     table_cache_hit: { schema: DEFAULT_SCHEMA },
     table_size: { schema: DEFAULT_SCHEMA },
-    table_schemas: { schema: DEFAULT_SCHEMA },
     index_scans: { schema: DEFAULT_SCHEMA },
     cache_hit: { schema: DEFAULT_SCHEMA },
     seq_scans: { schema: DEFAULT_SCHEMA },
     table_index_scans: { schema: DEFAULT_SCHEMA },
     records_rank: { schema: DEFAULT_SCHEMA },
     tables: { schema: DEFAULT_SCHEMA },
-    foreign_keys: { schema: DEFAULT_SCHEMA },
     kill_pid: { pid: 0 },
   })
 
@@ -100,25 +89,6 @@ module RubyPgExtras
     end
   end
 
-  # vacuum_progress uses pg_stat_progress_vacuum only and does not depend on pg_stat_statements,
-  # so we switch it based on the server_version_num instead of the pg_stat_statements version.
-  if query_name == :vacuum_progress
-    server_version_num = conn.send(exec_method, "SHOW server_version_num").to_a[0].values[0].to_i
-    if server_version_num >= 170000
-      query_name = :vacuum_progress_17
-    end
-  end
-
-  # vacuum_io_stats relies on pg_stat_io which is available starting from PostgreSQL 16.
-  # For older versions we fall back to vacuum_io_stats_legacy which just indicates
-  # that this feature is not available on the current server.
-  if query_name == :vacuum_io_stats
-    server_version_num = conn.send(exec_method, "SHOW server_version_num").to_a[0].values[0].to_i
-    if server_version_num < 160000
-      query_name = :vacuum_io_stats_legacy
-    end
-  end
-
   REQUIRED_ARGS.fetch(query_name) { [] }.each do |arg_name|
     if args[arg_name].nil?
       raise ArgumentError, "'#{arg_name}' is required"
@@ -192,11 +162,11 @@ module RubyPgExtras
   end
 
   def self.missing_fk_indexes(args: {}, in_format: :display_table)
-    RubyPgExtras::MissingFkIndexes.call(args[:table_name], ignore_list: args[:ignore_list])
+    RubyPgExtras::MissingFkIndexes.call(args[:table_name])
   end
 
   def self.missing_fk_constraints(args: {}, in_format: :display_table)
-    RubyPgExtras::MissingFkConstraints.call(args[:table_name], ignore_list: args[:ignore_list])
+    RubyPgExtras::MissingFkConstraints.call(args[:table_name])
   end
 
   def self.display_result(result, title:, in_format:)
@@ -247,8 +217,6 @@ module RubyPgExtras
   end
 
   def self.database_url=(value)
-    @_connection&.close
-    @_connection = nil
     @@database_url = value
   end
 
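The branches removed in the hunks above selected a version-specific query by reading `server_version_num`. The same gating logic, extracted into a pure function for illustration (names mirror the removed code, but this is a sketch, not the gem's API):

```ruby
# Map a logical query name to a version-specific variant, mirroring the
# removed conditionals: vacuum_progress gets a PG 17 variant, and
# vacuum_io_stats falls back to a legacy stub below PG 16 (pg_stat_io
# only exists from PostgreSQL 16 on).
def resolve_query_name(query_name, server_version_num)
  case query_name
  when :vacuum_progress
    server_version_num >= 170_000 ? :vacuum_progress_17 : query_name
  when :vacuum_io_stats
    server_version_num < 160_000 ? :vacuum_io_stats_legacy : query_name
  else
    query_name
  end
end
```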
@@ -34,25 +34,16 @@ module RubyPgExtras
   end
 
   def call(column_name, tables)
-    # Heuristic: Rails-style foreign keys are usually named `<table_singular>_id`.
-    # We accept underscores in the prefix (e.g. `account_user_id` -> `account_users`).
-    match = /\A(?<table_singular>.+)_id\z/i.match(column_name.to_s)
-    return false unless match
-
-    table_singular = match[:table_singular]
-    return false if table_singular.empty?
-
-    tables.include?(pluralize(table_singular))
+    return false unless column_name =~ /_id$/
+    table_name = column_name.split("_").first
+    table_name = pluralize(table_name)
+    tables.include?(table_name)
   end
 
   def pluralize(word)
-    # Table names from Postgres are typically lowercase. Normalize before applying rules.
-    word = word.to_s.downcase
-
-    return word if UNCOUNTABLE.include?(word)
-    return IRREGULAR.fetch(word) if IRREGULAR.key?(word)
-    # If the word is already an irregular plural (e.g. "people"), keep it as-is.
-    return word if IRREGULAR.value?(word)
+    return word if UNCOUNTABLE.include?(word.downcase)
+    return IRREGULAR[word] if IRREGULAR.key?(word)
+    return IRREGULAR.invert[word] if IRREGULAR.value?(word)
 
     PLURAL_RULES.reverse.each do |(rule, replacement)|
       return word.gsub(rule, replacement) if word.match?(rule)
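Worth noting when reviewing this hunk: the two sides differ on multi-word prefixes. The removed regex keeps the whole prefix before the trailing `_id`, while `split("_").first` keeps only the first token, so a column like `account_user_id` resolves to a different table name. A standalone sketch of just the two prefix extractions (simplified; real pluralization lives in `pluralize`):

```ruby
# Prefix extraction as in 5.6.17: everything before the trailing "_id".
def fk_prefix_old(column_name)
  match = /\A(?<prefix>.+)_id\z/.match(column_name.to_s)
  match && match[:prefix]
end

# Prefix extraction as in 6.0.1: only the first underscore-separated token.
def fk_prefix_new(column_name)
  return nil unless column_name =~ /_id$/
  column_name.split("_").first
end

fk_prefix_old("account_user_id") # => "account_user"
fk_prefix_new("account_user_id") # => "account"
```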
@@ -2,68 +2,39 @@
 
 module RubyPgExtras
   class MissingFkConstraints
-    # ignore_list: array (or comma-separated string) of entries like:
-    #   - "posts.category_id" (ignore a specific table+column)
-    #   - "category_id" (ignore this column name for all tables)
-    #   - "posts.*" (ignore all columns on a table)
-    #   - "*" (ignore everything)
-    def self.call(table_name, ignore_list: nil)
-      new.call(table_name, ignore_list: ignore_list)
+    def self.call(table_name)
+      new.call(table_name)
     end
 
-    def call(table_name, ignore_list: nil)
-      ignore_list_matcher = IgnoreList.new(ignore_list)
-
-      tables =
-        if table_name
+    def call(table_name)
+      tables = if table_name
         [table_name]
       else
         all_tables
       end
 
-      schemas_by_table = query_module
-        .table_schemas(in_format: :hash)
-        .group_by { |row| row.fetch("table_name") }
-
-      fk_columns_by_table = query_module
-        .foreign_keys(in_format: :hash)
-        .group_by { |row| row.fetch("table_name") }
-        .transform_values { |rows| rows.map { |row| row.fetch("column_name") } }
-
-      tables.each_with_object([]) do |table, agg|
-        schema = schemas_by_table.fetch(table, [])
-        fk_columns_for_table = fk_columns_by_table.fetch(table, [])
-        schema_column_names = schema.map { |row| row.fetch("column_name") }
-
-        candidate_fk_columns = schema.filter_map do |row|
-          column_name = row.fetch("column_name")
-
-          # Skip columns explicitly excluded via ignore list.
-          next if ignore_list_matcher.ignored?(table: table, column_name: column_name)
+      tables.reduce([]) do |agg, table|
+        foreign_keys_info = query_module.table_foreign_keys(args: { table_name: table }, in_format: :hash)
+        schema = query_module.table_schema(args: { table_name: table }, in_format: :hash)
 
-          # Skip columns that already have a foreign key constraint on this table.
-          next if fk_columns_for_table.include?(column_name)
-
-          # Skip columns that don't look like an FK candidate based on naming conventions.
-          next unless DetectFkColumn.call(column_name, all_tables)
-
-          # Rails polymorphic associations use <name>_id + <name>_type and can't have FK constraints.
-          candidate_prefix = column_name.delete_suffix("_id")
-          polymorphic_type_column = "#{candidate_prefix}_type"
-          # Skip polymorphic associations (cannot be expressed as a real FK constraint).
-          next if schema_column_names.include?(polymorphic_type_column)
-
-          column_name
+        fk_columns = schema.filter_map do |row|
+          if DetectFkColumn.call(row.fetch("column_name"), all_tables)
+            row.fetch("column_name")
+          end
         end
 
-        candidate_fk_columns.each do |column_name|
-          agg.push(
-            {
-              table: table,
-              column_name: column_name,
-            }
-          )
+        fk_columns.each do |column_name|
+          if foreign_keys_info.none? { |row| row.fetch("column_name") == column_name }
+            agg.push(
+              {
+                table: table,
+                column_name: column_name,
+              }
+            )
+          end
         end
+
+        agg
       end
     end
 
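Both sides of the hunk above accumulate result rows into an array; the trailing `agg` line added on the right is needed because `reduce` uses the block's return value as the next accumulator, whereas the `each_with_object` used on the left threads the object through automatically. A minimal standalone sketch of the two idioms:

```ruby
rows = [1, 2, 3]

# reduce: the block must return the accumulator, hence the final `agg`.
doubled_via_reduce = rows.reduce([]) do |agg, r|
  agg.push(r * 2)
  agg
end

# each_with_object: the memo object is passed to each iteration and
# returned at the end, so no explicit return value is needed.
doubled_via_each_with_object = rows.each_with_object([]) do |r, agg|
  agg.push(r * 2)
end

# both => [2, 4, 6]
```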
@@ -2,37 +2,30 @@
 
 module RubyPgExtras
   class MissingFkIndexes
-    # ignore_list: array (or comma-separated string) of entries like:
-    #   - "posts.topic_id" (ignore a specific table+column)
-    #   - "topic_id" (ignore this column name for all tables)
-    #   - "posts.*" (ignore all columns on a table)
-    #   - "*" (ignore everything)
-    def self.call(table_name, ignore_list: nil)
-      new.call(table_name, ignore_list: ignore_list)
+    def self.call(table_name)
+      new.call(table_name)
     end
 
-    def call(table_name, ignore_list: nil)
-      ignore_list_matcher = IgnoreList.new(ignore_list)
-
-      indexes_info = query_module.indexes(in_format: :hash)
-      foreign_keys = query_module.foreign_keys(in_format: :hash)
-
+    def call(table_name)
       tables = if table_name
         [table_name]
       else
-        foreign_keys.map { |row| row.fetch("table_name") }.uniq
+        all_tables
       end
 
+      indexes_info = query_module.indexes(in_format: :hash)
+
       tables.reduce([]) do |agg, table|
         index_info = indexes_info.select { |row| row.fetch("tablename") == table }
-        table_fks = foreign_keys.select { |row| row.fetch("table_name") == table }
+        schema = query_module.table_schema(args: { table_name: table }, in_format: :hash)
 
-        table_fks.each do |fk|
-          column_name = fk.fetch("column_name")
-
-          # Skip columns explicitly excluded via ignore list.
-          next if ignore_list_matcher.ignored?(table: table, column_name: column_name)
+        fk_columns = schema.filter_map do |row|
+          if DetectFkColumn.call(row.fetch("column_name"), all_tables)
+            row.fetch("column_name")
+          end
+        end
 
+        fk_columns.each do |column_name|
           if index_info.none? { |row| row.fetch("columns").split(",").first == column_name }
             agg.push(
               {
@@ -49,6 +42,10 @@ module RubyPgExtras
 
   private
 
+  def all_tables
+    @_all_tables ||= query_module.table_size(in_format: :hash).map { |row| row.fetch("name") }
+  end
+
   def query_module
     RubyPgExtras
   end
@@ -4,7 +4,6 @@ SELECT query AS qry,
        interval '1 millisecond' * total_exec_time AS exec_time,
        to_char((total_exec_time/sum(total_exec_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
        to_char(calls, 'FM999G999G990') AS ncalls,
-       ROUND(total_exec_time/calls) AS avg_exec_ms,
        interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time
 FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
 ORDER BY calls DESC LIMIT %{limit};
@@ -4,7 +4,6 @@ SELECT query AS qry,
        interval '1 millisecond' * total_exec_time AS exec_time,
        to_char((total_exec_time/sum(total_exec_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
        to_char(calls, 'FM999G999G990') AS ncalls,
-       ROUND(total_exec_time/calls) AS avg_exec_ms,
        interval '1 millisecond' * (shared_blk_read_time + shared_blk_write_time) AS sync_io_time
 FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
 ORDER BY calls DESC LIMIT %{limit};
@@ -4,7 +4,6 @@ SELECT query AS qry,
        interval '1 millisecond' * total_time AS exec_time,
        to_char((total_time/sum(total_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
        to_char(calls, 'FM999G999G990') AS ncalls,
-       ROUND(total_time/calls) AS avg_exec_ms,
        interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time
 FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
 ORDER BY calls DESC LIMIT %{limit};
@@ -3,7 +3,6 @@
 SELECT interval '1 millisecond' * total_exec_time AS total_exec_time,
        to_char((total_exec_time/sum(total_exec_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
        to_char(calls, 'FM999G999G999G990') AS ncalls,
-       ROUND(total_exec_time/calls) AS avg_exec_ms,
        interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time,
        query AS query
 FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
@@ -3,7 +3,6 @@
 SELECT interval '1 millisecond' * total_exec_time AS total_exec_time,
        to_char((total_exec_time/sum(total_exec_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
        to_char(calls, 'FM999G999G999G990') AS ncalls,
-       ROUND(total_exec_time/calls) AS avg_exec_ms,
        interval '1 millisecond' * (shared_blk_read_time + shared_blk_write_time) AS sync_io_time,
        query AS query
 FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
@@ -3,7 +3,6 @@
 SELECT interval '1 millisecond' * total_time AS total_exec_time,
        to_char((total_time/sum(total_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
        to_char(calls, 'FM999G999G999G990') AS ncalls,
-       ROUND(total_time/calls) AS avg_exec_ms,
        interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time,
        query AS query
 FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)