ruby-pg-extras 1.5.0 → 2.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 741b8ec49ba2e7b4933ddbe085bd3c4f90ab1640e951fdec1dfa1178c4bf8ab0
- data.tar.gz: d1b7d405ef8da4bcd450e461a6ff4411b760128bd22e660c1dc102aaa1cf9453
+ metadata.gz: d71c264c67135e083a2072dfa8248ff46210ae386c813f23417b70c6084d01c2
+ data.tar.gz: 8749efb98e35fb17443f16a1e9a06cd8ce4e70d87f32aef4a20f21b22bfd56b9
  SHA512:
- metadata.gz: ce5011d3e926a7b1a246b6268d7d58a8f05f26bbab22bd665209daec81f4d03867b78c8306f6d3ebd4661acc1ae96640427ea33486d3ab6f91b1716c20b11cc1
- data.tar.gz: 1cee23d5f7e171bdd13eeea76102a59ff1ad6774a9b1c80b614e76e9d4a60d4b76b098403fd69e7e5913c001d98e0b4792fc1f58762234d0b10af7cbd6a1ef6e
+ metadata.gz: f31780c4436d533a80fd1b73c4470a09276330ef6a07b8e453807a4b12c4595c05c0dce82ef85d400d4615e9d384b6df25f4f84442007e40d187212e52b33618
+ data.tar.gz: 4de7b1452cea8ef0df3d9f6d3c4c71de7476c6cf2cedb43bdbdff8e4d5f5732890b3251938ab9a811f74e64d4b24c4d325b32a4d5511b072fe152b1b836e98ca
data/.circleci/config.yml CHANGED
@@ -6,6 +6,7 @@ jobs:
  environment:
  DATABASE_URL: postgresql://postgres:secret@localhost:5432/ruby-pg-extras-test
  - image: circleci/postgres:11.5
+ command: postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c max_connections=200
  environment:
  POSTGRES_USER: postgres
  POSTGRES_DB: ruby-pg-extras-test
data/README.md CHANGED
@@ -109,6 +109,8 @@ RubyPGExtras.cache_hit

  This command provides information on the efficiency of the buffer cache, for both index reads (`index hit rate`) as well as table reads (`table hit rate`). A low buffer cache hit ratio can be a sign that the Postgres instance is too small for the workload.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#cache-hit)
+
  ### `index_cache_hit`

  ```ruby
@@ -125,6 +127,8 @@ RubyPGExtras.index_cache_hit

  The same as `cache_hit` with each table's indexes cache hit info displayed separately.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#cache-hit)
+
  ### `table_cache_hit`

  ```ruby
@@ -141,6 +145,28 @@ RubyPGExtras.table_cache_hit

  The same as `cache_hit` with each table's cache hit info displayed separately.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#cache-hit)
+
+ ### `db_settings`
+
+ ```ruby
+
+ RubyPGExtras.db_settings
+
+ name | setting | unit |
+ ------------------------------+---------+------+
+ checkpoint_completion_target | 0.7 | |
+ default_statistics_target | 100 | |
+ effective_cache_size | 1350000 | 8kB |
+ effective_io_concurrency | 1 | |
+ (truncated results for brevity)
+
+ ```
+
+ This method displays values for selected PostgreSQL settings. You can compare them with settings recommended by [PGTune](https://pgtune.leopard.in.ua/#/) and tweak values to improve performance.
+
+ [More info](https://pawelurbanek.com/postgresql-fix-performance#cache-hit)
+
  ### `index_usage`

  ```ruby
@@ -178,6 +204,8 @@ RubyPGExtras.locks

  This command displays queries that have taken out an exclusive lock on a relation. Exclusive locks typically prevent other operations on that relation from taking place, and can be a cause of "hung" queries that are waiting for a lock to be granted.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#deadlocks)
+
  ### `all_locks`

  ```ruby
@@ -210,6 +238,8 @@ This command displays statements, obtained from `pg_stat_statements`, ordered by

  Typically, an efficient query will have an appropriate ratio of calls to total execution time, with as little time spent on I/O as possible. Queries that have a high total execution time but low call count should be investigated to improve their performance. Queries that have a high proportion of execution time being spent on synchronous I/O should also be investigated.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#missing-indexes)
+
  ### `calls`

  ```ruby
@@ -230,6 +260,8 @@ RubyPGExtras.calls(args: { limit: 10 })

  This command is much like `pg:outliers`, but ordered by the number of times a statement has been called.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#missing-indexes)
+
  ### `blocking`

  ```ruby
@@ -244,6 +276,8 @@ RubyPGExtras.blocking

  This command displays statements that are currently holding locks that other statements are waiting to be released. This can be used in conjunction with `pg:locks` to determine which statements need to be terminated in order to resolve lock contention.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#deadlocks)
+
  ### `total_index_size`

  ```ruby
@@ -351,21 +385,25 @@ RubyPGExtras.unused_indexes(args: { min_scans: 20 })

  This command displays indexes that have < 50 scans recorded against them, and are greater than 5 pages in size, ordered by size relative to the number of index scans. This command is generally useful for eliminating indexes that are unused, which can impact write performance, as well as read performance should they occupy space in memory.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#unused-indexes)
+
  ### `null_indexes`

  ```ruby

- RubyPGExtras.null_indexes
+ RubyPGExtras.null_indexes(args: { min_relation_size_mb: 10 })

  oid | index | index_size | unique | indexed_column | null_frac | expected_saving
  ---------+--------------------+------------+--------+----------------+-----------+-----------------
- 183764 | users_reset_token | 1418 MB | t | reset_token | 96.15% | 1363 MB
- 88732 | plan_cancelled_at | 1651 MB | f | cancelled_at | 6.11% | 101 MB
- 9827345 | users_email | 22 MB | t | email | 11.21% | 2494 kB
+ 183764 | users_reset_token | 1445 MB | t | reset_token | 97.00% | 1401 MB
+ 88732 | plan_cancelled_at | 539 MB | f | cancelled_at | 8.30% | 44 MB
+ 9827345 | users_email | 18 MB | t | email | 28.67% | 5160 kB

  ```

- This commands displays indexes that contain `NULL` values. A high ratio of `NULL` values means that using a partial index excluding them will be beneficial in case they are not used for searching. [Source and more info](https://hakibenita.com/postgresql-unused-index-size).
+ This command displays indexes that contain `NULL` values. A high ratio of `NULL` values means that using a partial index excluding them will be beneficial in case they are not used for searching.
+
+ [More info](https://pawelurbanek.com/postgresql-fix-performance#null-indexes)

  ### `seq_scans`

@@ -389,11 +427,13 @@ RubyPGExtras.seq_scans

  This command displays the number of sequential scans recorded against all tables, descending by count of sequential scans. Tables that have very high numbers of sequential scans may be under-indexed, and it may be worth investigating queries that read from these tables.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#missing-indexes)
+
  ### `long_running_queries`

  ```ruby

- RubyPGExtras.long_running_queries
+ RubyPGExtras.long_running_queries(args: { threshold: "200 milliseconds" })


  pid | duration | query
@@ -444,6 +484,8 @@ RubyPGExtras.bloat

  This command displays an estimation of table "bloat" – space allocated to a relation that is full of dead tuples and has yet to be reclaimed. Tables that have a high bloat ratio, typically 10 or greater, should be investigated to see if vacuuming is aggressive enough, and can be a sign of high table churn.

+ [More info](https://pawelurbanek.com/postgresql-fix-performance#bloat)
+
  ### `vacuum_stats`

  ```ruby
@@ -472,6 +514,22 @@ RubyPGExtras.kill_all

  This command kills all the currently active connections to the database. It can be useful as a last resort when your database is stuck in a deadlock.

+ ### `buffercache_stats`
+
+ ```ruby
+ RubyPGExtras.buffercache_stats(args: { limit: 10 })
+ ```
+
+ This command shows the relations buffered in the database shared buffer, ordered by percentage taken. It also shows how much of the whole relation is buffered.
+
+ ### `buffercache_usage`
+
+ ```ruby
+ RubyPGExtras.buffercache_usage(args: { limit: 20 })
+ ```
+
+ This command calculates how many blocks of each table are currently cached.
+
  ### `extensions`

  ```ruby
@@ -491,3 +549,9 @@ RubyPGExtras.mandelbrot
  ```

  This command outputs the Mandelbrot set, calculated through SQL.
+
+ ## Query sources
+
+ - [https://github.com/heroku/heroku-pg-extras](https://github.com/heroku/heroku-pg-extras)
+ - [https://hakibenita.com/postgresql-unused-index-size](https://hakibenita.com/postgresql-unused-index-size)
+ - [https://sites.google.com/site/itmyshare/database-tips-and-examples/postgres/useful-sqls-to-check-contents-of-postgresql-shared_buffer](https://sites.google.com/site/itmyshare/database-tips-and-examples/postgres/useful-sqls-to-check-contents-of-postgresql-shared_buffer)
data/docker-compose.yml CHANGED
@@ -1,11 +1,30 @@
  version: '3'

  services:
- postgres:
+ postgres11:
  image: postgres:11.5-alpine
+ command: postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c max_connections=200
  environment:
  POSTGRES_USER: postgres
  POSTGRES_DB: ruby-pg-extras-test
  POSTGRES_PASSWORD: secret
  ports:
  - '5432:5432'
+ postgres12:
+ image: postgres:12.7-alpine
+ command: postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c max_connections=200
+ environment:
+ POSTGRES_USER: postgres
+ POSTGRES_DB: ruby-pg-extras-test
+ POSTGRES_PASSWORD: secret
+ ports:
+ - '5433:5432'
+ postgres13:
+ image: postgres:13.3-alpine
+ command: postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c max_connections=200
+ environment:
+ POSTGRES_USER: postgres
+ POSTGRES_DB: ruby-pg-extras-test
+ POSTGRES_PASSWORD: secret
+ ports:
+ - '5434:5432'
data/lib/ruby-pg-extras.rb CHANGED
@@ -6,22 +6,29 @@ require 'pg'

  module RubyPGExtras
  @@database_url = nil
+ NEW_PG_STAT_STATEMENTS = "1.8"

  QUERIES = %i(
- bloat blocking cache_hit
+ bloat blocking cache_hit db_settings
  calls extensions table_cache_hit index_cache_hit
  index_size index_usage null_indexes locks all_locks
  long_running_queries mandelbrot outliers
  records_rank seq_scans table_indexes_size
  table_size total_index_size total_table_size
  unused_indexes vacuum_stats kill_all
+ buffercache_stats buffercache_usage
  )

  DEFAULT_ARGS = Hash.new({}).merge({
  calls: { limit: 10 },
+ calls_legacy: { limit: 10 },
  long_running_queries: { threshold: "500 milliseconds" },
  outliers: { limit: 10 },
- unused_indexes: { min_scans: 50 }
+ outliers_legacy: { limit: 10 },
+ buffercache_stats: { limit: 10 },
+ buffercache_usage: { limit: 20 },
+ unused_indexes: { min_scans: 50 },
+ null_indexes: { min_relation_size_mb: 10 }
  })

  QUERIES.each do |query_name|
@@ -35,6 +42,16 @@ module RubyPGExtras
  end

  def self.run_query(query_name:, in_format:, args: {})
+ if %i(calls outliers).include?(query_name)
+ pg_stat_statements_ver = RubyPGExtras.connection.exec("select installed_version from pg_available_extensions where name='pg_stat_statements'")
+ .to_a[0].fetch("installed_version", nil)
+ if pg_stat_statements_ver != nil
+ if Gem::Version.new(pg_stat_statements_ver) < Gem::Version.new(NEW_PG_STAT_STATEMENTS)
+ query_name = "#{query_name}_legacy".to_sym
+ end
+ end
+ end
+
  sql = if (custom_args = DEFAULT_ARGS[query_name].merge(args)) != {}
  sql_for(query_name: query_name) % custom_args
  else
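The `run_query` gate above exists because pg_stat_statements 1.8 (the version bundled with PostgreSQL 13) replaced the `total_time` column with `total_plan_time` and `total_exec_time`; when an older extension version is installed, `calls` and `outliers` are transparently swapped for the `calls_legacy` / `outliers_legacy` templates that still reference `total_time` (see the SQL diffs further down). The chosen template is then filled in with Ruby's `String#%`, which is also why literal percent signs in the templates are escaped as `%%`. A minimal sketch of that formatting step (illustration only, not the gem's own code):

```ruby
# How a %{...} placeholder in a query template gets filled by String#%.
template = "SELECT query, calls FROM pg_stat_statements ORDER BY calls DESC LIMIT %{limit};"
puts template % { limit: 10 }
# => SELECT query, calls FROM pg_stat_statements ORDER BY calls DESC LIMIT 10;

# Because of this formatting pass, a literal '%' inside a template must be
# written as '%%' -- hence the '999.00%%' change in null_indexes.sql below.
puts "to_char(s.null_frac * 100, '999.00%%')" % {}
# => to_char(s.null_frac * 100, '999.00%')
```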
data/lib/ruby-pg-extras/queries/buffercache_stats.sql ADDED
@@ -0,0 +1,13 @@
+ /* Calculates percentages of relations buffered in database share buffer */
+
+ SELECT
+ c.relname,
+ pg_size_pretty(count(*) * 8192) AS buffered,
+ round(100.0 * count(*) / (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')::integer, 1) AS buffer_percent,
+ round(100.0 * count(*) * 8192 / pg_table_size(c.oid), 1) AS percent_of_relation
+ FROM pg_class c
+ INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
+ INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())
+ GROUP BY c.oid,c.relname
+ ORDER BY 3 DESC
+ LIMIT %{limit};
data/lib/ruby-pg-extras/queries/buffercache_usage.sql ADDED
@@ -0,0 +1,9 @@
+ /* Calculate how many blocks from which table are currently cached */
+
+ SELECT c.relname, count(*) AS buffers
+ FROM pg_class c
+ INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
+ INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())
+ GROUP BY c.relname
+ ORDER BY 2 DESC
+ LIMIT %{limit};
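Both buffercache queries join against the `pg_buffercache` view, which is provided by the `pg_buffercache` extension; the spec suite further down enables it explicitly, and the same one-time setup is needed on any database where `buffercache_stats` or `buffercache_usage` is going to run. A minimal setup sketch, assuming a role that is allowed to create extensions:

```ruby
require "ruby-pg-extras"

# One-time setup: pg_buffercache must be installed in the target database
# before either of the buffercache queries can run.
RubyPGExtras.connection.exec("CREATE EXTENSION IF NOT EXISTS pg_buffercache;")

RubyPGExtras.buffercache_stats(args: { limit: 10 })
RubyPGExtras.buffercache_usage(args: { limit: 20 })
```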
data/lib/ruby-pg-extras/queries/calls.sql CHANGED
@@ -1,8 +1,8 @@
  /* Queries that have highest frequency of execution */

  SELECT query AS qry,
- interval '1 millisecond' * total_time AS exec_time,
- to_char((total_time/sum(total_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
+ interval '1 millisecond' * total_exec_time AS exec_time,
+ to_char((total_exec_time/sum(total_exec_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
  to_char(calls, 'FM999G999G990') AS ncalls,
  interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time
  FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
data/lib/ruby-pg-extras/queries/calls_legacy.sql ADDED
@@ -0,0 +1,9 @@
+ /* Queries that have highest frequency of execution */
+
+ SELECT query AS qry,
+ interval '1 millisecond' * total_time AS exec_time,
+ to_char((total_time/sum(total_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
+ to_char(calls, 'FM999G999G990') AS ncalls,
+ interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time
+ FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
+ ORDER BY calls DESC LIMIT %{limit};
data/lib/ruby-pg-extras/queries/db_settings.sql ADDED
@@ -0,0 +1,9 @@
+ /* Values of selected PostgreSQL settings */
+
+ SELECT name, setting, unit, short_desc FROM pg_settings
+ WHERE name IN (
+ 'max_connections', 'shared_buffers', 'effective_cache_size',
+ 'maintenance_work_mem', 'checkpoint_completion_target', 'wal_buffers',
+ 'default_statistics_target', 'random_page_cost', 'effective_io_concurrency',
+ 'work_mem', 'min_wal_size', 'max_wal_size'
+ );
data/lib/ruby-pg-extras/queries/null_indexes.sql CHANGED
@@ -1,4 +1,5 @@
- /* Find indexed columns with high null_frac */
+ /* Find indexes with a high ratio of NULL values */
+
  SELECT
  c.oid,
  c.relname AS index,
@@ -7,7 +8,7 @@ SELECT
  a.attname AS indexed_column,
  CASE s.null_frac
  WHEN 0 THEN ''
- ELSE to_char(s.null_frac * 100, '999.00%')
+ ELSE to_char(s.null_frac * 100, '999.00%%')
  END AS null_frac,
  pg_size_pretty((pg_relation_size(c.oid) * s.null_frac)::bigint) AS expected_saving
  FROM
@@ -26,7 +27,7 @@ WHERE
  AND array_length(i.indkey, 1) = 1
  -- Exclude indexes without null_frac ratio
  AND coalesce(s.null_frac, 0) != 0
- -- Larger than 10MB
- AND pg_relation_size(c.oid) > 10 * 1024 ^ 2
+ -- Larger than threshold
+ AND pg_relation_size(c.oid) > %{min_relation_size_mb} * 1024 ^ 2
  ORDER BY
  pg_relation_size(c.oid) * s.null_frac DESC;
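The new `%{min_relation_size_mb}` placeholder is what the README's `args` hash feeds into. A hedged usage sketch: passing a larger threshold restricts the report to bigger relations.

```ruby
# Only consider indexes on relations larger than 100 MB
# (the value is multiplied by 1024 ^ 2 in the WHERE clause above).
RubyPGExtras.null_indexes(args: { min_relation_size_mb: 100 })
```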
data/lib/ruby-pg-extras/queries/outliers.sql CHANGED
@@ -1,10 +1,10 @@
  /* Queries that have longest execution time in aggregate */

- SELECT interval '1 millisecond' * total_time AS total_exec_time,
- to_char((total_time/sum(total_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
+ SELECT interval '1 millisecond' * total_exec_time AS total_exec_time,
+ to_char((total_exec_time/sum(total_exec_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
  to_char(calls, 'FM999G999G999G990') AS ncalls,
  interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time,
  query AS query
  FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
- ORDER BY total_time DESC
+ ORDER BY total_exec_time DESC
  LIMIT %{limit};
data/lib/ruby-pg-extras/queries/outliers_legacy.sql ADDED
@@ -0,0 +1,10 @@
+ /* Queries that have longest execution time in aggregate */
+
+ SELECT interval '1 millisecond' * total_time AS total_exec_time,
+ to_char((total_time/sum(total_time) OVER()) * 100, 'FM90D0') || '%%' AS prop_exec_time,
+ to_char(calls, 'FM999G999G999G990') AS ncalls,
+ interval '1 millisecond' * (blk_read_time + blk_write_time) AS sync_io_time,
+ query AS query
+ FROM pg_stat_statements WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user LIMIT 1)
+ ORDER BY total_time DESC
+ LIMIT %{limit};
data/lib/ruby-pg-extras/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module RubyPGExtras
- VERSION = "1.5.0"
+ VERSION = "2.0.0"
  end
data/spec/smoke_spec.rb CHANGED
@@ -3,6 +3,11 @@
  require 'spec_helper'

  describe RubyPGExtras do
+ before(:all) do
+ RubyPGExtras.connection.exec("CREATE EXTENSION IF NOT EXISTS pg_buffercache;")
+ RubyPGExtras.connection.exec("CREATE EXTENSION IF NOT EXISTS pg_stat_statements;")
+ end
+
  RubyPGExtras::QUERIES.each do |query_name|
  it "#{query_name} description can be read" do
  expect do
@@ -13,9 +18,7 @@ describe RubyPGExtras do
  end
  end

- PG_STATS_DEPENDENT_QUERIES = %i(calls outliers)
-
- (RubyPGExtras::QUERIES - PG_STATS_DEPENDENT_QUERIES).each do |query_name|
+ RubyPGExtras::QUERIES.each do |query_name|
  it "#{query_name} query can be executed" do
  expect do
  RubyPGExtras.run_query(
data/spec/spec_helper.rb CHANGED
@@ -4,4 +4,16 @@ require 'rubygems'
  require 'bundler/setup'
  require_relative '../lib/ruby-pg-extras'

- ENV["DATABASE_URL"] ||= "postgresql://postgres:secret@localhost:5432/ruby-pg-extras-test"
+ pg_version = ENV["PG_VERSION"]
+
+ port = if pg_version == "11"
+ "5432"
+ elsif pg_version == "12"
+ "5433"
+ elsif pg_version == "13"
+ "5434"
+ else
+ "5432"
+ end
+
+ ENV["DATABASE_URL"] ||= "postgresql://postgres:secret@localhost:#{port}/ruby-pg-extras-test"
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-pg-extras
  version: !ruby/object:Gem::Version
- version: 1.5.0
+ version: 2.0.0
  platform: ruby
  authors:
  - pawurb
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-02-13 00:00:00.000000000 Z
+ date: 2021-07-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: pg
@@ -86,8 +86,12 @@ files:
  - lib/ruby-pg-extras/queries/all_locks.sql
  - lib/ruby-pg-extras/queries/bloat.sql
  - lib/ruby-pg-extras/queries/blocking.sql
+ - lib/ruby-pg-extras/queries/buffercache_stats.sql
+ - lib/ruby-pg-extras/queries/buffercache_usage.sql
  - lib/ruby-pg-extras/queries/cache_hit.sql
  - lib/ruby-pg-extras/queries/calls.sql
+ - lib/ruby-pg-extras/queries/calls_legacy.sql
+ - lib/ruby-pg-extras/queries/db_settings.sql
  - lib/ruby-pg-extras/queries/extensions.sql
  - lib/ruby-pg-extras/queries/index_cache_hit.sql
  - lib/ruby-pg-extras/queries/index_size.sql
@@ -98,6 +102,7 @@ files:
  - lib/ruby-pg-extras/queries/mandelbrot.sql
  - lib/ruby-pg-extras/queries/null_indexes.sql
  - lib/ruby-pg-extras/queries/outliers.sql
+ - lib/ruby-pg-extras/queries/outliers_legacy.sql
  - lib/ruby-pg-extras/queries/records_rank.sql
  - lib/ruby-pg-extras/queries/seq_scans.sql
  - lib/ruby-pg-extras/queries/table_cache_hit.sql
@@ -130,7 +135,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.1.4
+ rubygems_version: 3.1.6
  signing_key:
  specification_version: 4
  summary: Ruby PostgreSQL performance database insights