pghero 2.4.2 → 2.5.0


checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 33927c2c6b6d2c286f872adb99d7095f836108872c4f9af3ca180d6a70cb0b48
- data.tar.gz: 8f2f7565269047b8a351bd712a931e26febc1b6bd96f21b3c1f81ece50807f8d
+ metadata.gz: 252b8c1bb7580d67b149ef865f97b99a89fbd9aecb640a92f5ebf8a532d7c10b
+ data.tar.gz: 87a9df921265875867cb9d9b0dcd92fa6d6855075818129a8b5bf47337277354
  SHA512:
- metadata.gz: 76dd17666d9842cad4a540c519df850cf784d97eeccd9792ff853d90a633628b5615459521bc2f5b382e75ffe35b52023e4cf8cfdbabc2924154a6feb7d8c94e
- data.tar.gz: c0c5ba7f1823182241ca7c8365969a5f661f2abca55551832ee3e64d61eabb5432ad7dcb3e2b9f81b972c125d80bf22f5a55e2ef13a471889fe393b1976d1e10
+ metadata.gz: 9b1f75f5ef19ee10da7ee815aca9ee7933fa89ae78046e495417f7c863f383172fba28bfb00605ac8810a2bb5a9895ae478a3f182ba36204080151f84ad9e2a6
+ data.tar.gz: 6fc630bd5f46fbecb5b6194d58726397c8f28a812e7ceabe12784ed6892bd7f7609646cba22e57c34ab817be40b37e251384f5ba23a32e82199db7620f4dc009
@@ -1,3 +1,12 @@
+ ## 2.5.0 (2020-05-24)
+
+ - Added system stats for Google Cloud SQL and Azure Database
+ - Added experimental `filter_data` option
+ - Localized times on maintenance page
+ - Improved connection pooling
+ - Improved error message for non-Postgres connections
+ - Fixed more deprecation warnings in Ruby 2.7
+
  ## 2.4.2 (2020-04-16)

  - Added `connections` method
data/README.md CHANGED
@@ -12,15 +12,19 @@ A performance dashboard for Postgres

  [![Screenshot](https://pghero.dokkuapp.com/assets/pghero-f8abe426e6bf54bb7dba87b425bb809740ebd386208bcd280a7e802b053a1023.png)](https://pghero.dokkuapp.com/)

+ :tangerine: Battle-tested at [Instacart](https://www.instacart.com/opensource)
+
+ [![Build Status](https://travis-ci.org/ankane/pghero.svg?branch=master)](https://travis-ci.org/ankane/pghero) [![Docker Pulls](https://img.shields.io/docker/pulls/ankane/pghero)](https://hub.docker.com/repository/docker/ankane/pghero)
+
  ## Installation

- PgHero is available as a Rails engine, Linux package, and Docker image.
+ PgHero is available as a Docker image, Linux package, and Rails engine.

  Select your preferred method of installation to get started.

- - [Rails](guides/Rails.md)
- - [Linux](guides/Linux.md)
  - [Docker](guides/Docker.md)
+ - [Linux](guides/Linux.md)
+ - [Rails](guides/Rails.md)

  ## Related Projects

@@ -31,10 +35,17 @@ Select your preferred method of installation to get started.

  ## Credits

- A big thanks to [Craig Kerstiens](http://www.craigkerstiens.com/2013/01/10/more-on-postgres-performance/) and [Heroku](https://blog.heroku.com/archives/2013/5/10/more_insight_into_your_database_with_pgextras) for the initial queries and [Bootswatch](https://github.com/thomaspark/bootswatch) for the theme :clap:
+ A big thanks to [Craig Kerstiens](http://www.craigkerstiens.com/2013/01/10/more-on-postgres-performance/) and [Heroku](https://blog.heroku.com/archives/2013/5/10/more_insight_into_your_database_with_pgextras) for the initial queries and [Bootswatch](https://github.com/thomaspark/bootswatch) for the theme.

- Know a bit about PostgreSQL? [Suggestions](https://github.com/ankane/pghero/issues) are greatly appreciated.
+ ## History

- :tangerine: Battle-tested at [Instacart](https://www.instacart.com/opensource)
+ View the [changelog](https://github.com/ankane/pghero/blob/master/CHANGELOG.md)
+
+ ## Contributing
+
+ Everyone is encouraged to help improve this project. Here are a few ways you can help:

- [![Build Status](https://travis-ci.org/ankane/pghero.svg?branch=master)](https://travis-ci.org/ankane/pghero)
+ - [Report bugs](https://github.com/ankane/pghero/issues)
+ - Fix bugs and [submit pull requests](https://github.com/ankane/pghero/pulls)
+ - Write, clarify, or fix documentation
+ - Suggest or add new features
@@ -59,7 +59,7 @@ function initSlider() {
  html = "Now";
  }
  } else {
- html = months[time.getMonth()] + " " + time.getDate() + ", " + pad(time.getHours()) + ":" + pad(time.getMinutes());
+ html = time.getDate() + " " + months[time.getMonth()] + " " + pad(time.getHours()) + ":" + pad(time.getMinutes());
  }
  $(selector).html(html);
  }
@@ -13,6 +13,11 @@ module PgHero
  before_action :ensure_query_stats, only: [:queries]

  if PgHero.config["override_csp"]
+ # note: this does not take into account asset hosts
+ # which can be a string with %d or a proc
+ # https://api.rubyonrails.org/classes/ActionView/Helpers/AssetUrlHelper.html
+ # users should set CSP manually if needed
+ # see https://github.com/ankane/pghero/issues/297
  after_action do
  response.headers["Content-Security-Policy"] = "default-src 'self' 'unsafe-inline'"
  end
@@ -198,6 +203,11 @@ module PgHero
  "1 week" => {duration: 1.week, period: 30.minutes},
  "2 weeks" => {duration: 2.weeks, period: 1.hours}
  }
+ if @database.system_stats_provider == :azure
+ # doesn't support 10, just 5 and 15
+ @periods["1 day"][:period] = 15.minutes
+ end
+
  @duration = (params[:duration] || 1.hour).to_i
  @period = (params[:period] || 60.seconds).to_i

@@ -209,22 +219,36 @@ module PgHero
  end

  def cpu_usage
- render json: [{name: "CPU", data: @database.cpu_usage(system_params).map { |k, v| [k, v.round] }, library: chart_library_options}]
+ render json: [{name: "CPU", data: @database.cpu_usage(**system_params).map { |k, v| [k, v ? v.round : v] }, library: chart_library_options}]
  end

  def connection_stats
- render json: [{name: "Connections", data: @database.connection_stats(system_params), library: chart_library_options}]
+ render json: [{name: "Connections", data: @database.connection_stats(**system_params), library: chart_library_options}]
  end

  def replication_lag_stats
- render json: [{name: "Lag", data: @database.replication_lag_stats(system_params), library: chart_library_options}]
+ render json: [{name: "Lag", data: @database.replication_lag_stats(**system_params), library: chart_library_options}]
  end

  def load_stats
- render json: [
- {name: "Read IOPS", data: @database.read_iops_stats(system_params).map { |k, v| [k, v.round] }, library: chart_library_options},
- {name: "Write IOPS", data: @database.write_iops_stats(system_params).map { |k, v| [k, v.round] }, library: chart_library_options}
- ]
+ stats =
+ case @database.system_stats_provider
+ when :azure
+ [
+ {name: "IO Consumption", data: @database.azure_stats("io_consumption_percent", **system_params), library: chart_library_options}
+ ]
+ when :gcp
+ [
+ {name: "Read Ops", data: @database.read_iops_stats(**system_params).map { |k, v| [k, v ? v.round : v] }, library: chart_library_options},
+ {name: "Write Ops", data: @database.write_iops_stats(**system_params).map { |k, v| [k, v ? v.round : v] }, library: chart_library_options}
+ ]
+ else
+ [
+ {name: "Read IOPS", data: @database.read_iops_stats(**system_params).map { |k, v| [k, v ? v.round : v] }, library: chart_library_options},
+ {name: "Write IOPS", data: @database.write_iops_stats(**system_params).map { |k, v| [k, v ? v.round : v] }, library: chart_library_options}
+ ]
+ end
+ render json: stats
  end

  def free_space_stats
@@ -311,6 +335,7 @@ module PgHero
  @title = "Maintenance"
  @maintenance_info = @database.maintenance_info
  @time_zone = PgHero.time_zone
+ @show_dead_rows = params[:dead_rows]
  end

  def kill
@@ -393,7 +418,8 @@ module PgHero
  def system_params
  {
  duration: params[:duration],
- period: params[:period]
+ period: params[:period],
+ series: true
  }.delete_if { |_, v| v.nil? }
  end

@@ -421,6 +447,7 @@ module PgHero
  render_text "No support for Rails API. See https://github.com/pghero/pghero for a standalone app." if Rails.application.config.try(:api_only)
  end

+ # TODO return error status code
  def render_text(message)
  render plain: message
  end
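The controller changes above replace calls like `cpu_usage(system_params)` with `cpu_usage(**system_params)`, which is what silences the Ruby 2.7 keyword-argument deprecation warnings mentioned in the changelog. A minimal sketch of why the splat matters, using a hypothetical stats method (not PgHero's real one):

```ruby
# Hypothetical method with keyword arguments, standing in for the
# gem's stats methods (duration:, period:, etc.).
def cpu_usage(duration: 3600, period: 60)
  {duration: duration, period: period}
end

params = {duration: 86_400, period: 300}

# Splatting with ** binds the hash to the keyword parameters;
# in Ruby 3, a bare cpu_usage(params) raises ArgumentError because
# positional and keyword arguments are fully separated.
result = cpu_usage(**params)
# → {duration: 86400, period: 300}
```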
@@ -30,7 +30,9 @@
  <% end %>
  </td>
  <td class="text-right">
- <%= button_to "Explain", explain_path, params: {query: query[:query]}, form: {target: "_blank"}, class: "btn btn-info" %>
+ <% unless @database.filter_data %>
+ <%= button_to "Explain", explain_path, params: {query: query[:query]}, form: {target: "_blank"}, class: "btn btn-info" %>
+ <% end %>
  <%= button_to "Kill", kill_path(pid: query[:pid]), class: "btn btn-danger" %>
  </td>
  </tr>
@@ -7,6 +7,9 @@
  <th>Table</th>
  <th style="width: 20%;">Last Vacuum</th>
  <th style="width: 20%;">Last Analyze</th>
+ <% if @show_dead_rows %>
+ <th style="width: 20%;">Dead Rows</th>
+ <% end %>
  </tr>
  </thead>
  <tbody>
@@ -21,7 +24,7 @@
  <td>
  <% time = [table[:last_autovacuum], table[:last_vacuum]].compact.max %>
  <% if time %>
- <%= time.in_time_zone(@time_zone).strftime("%-m/%-e %l:%M %P") %>
+ <%= l time.in_time_zone(@time_zone), format: :short %>
  <% else %>
  <span class="text-muted">Unknown</span>
  <% end %>
@@ -29,11 +32,22 @@
  <td>
  <% time = [table[:last_autoanalyze], table[:last_analyze]].compact.max %>
  <% if time %>
- <%= time.in_time_zone(@time_zone).strftime("%-m/%-e %l:%M %P") %>
+ <%= l time.in_time_zone(@time_zone), format: :short %>
  <% else %>
  <span class="text-muted">Unknown</span>
  <% end %>
  </td>
+ <% if @show_dead_rows %>
+ <td>
+ <% if table[:live_rows] != 0 %>
+ <%# use live rows only for denominator to make it easier to compare with autovacuum_vacuum_scale_factor %>
+ <%# it's not a true percentage, since it can go above 100% %>
+ <%= (100.0 * table[:dead_rows] / table[:live_rows]).round %>%
+ <% else %>
+ <span class="text-muted">Unknown</span>
+ <% end %>
+ </td>
+ <% end %>
  </tr>
  <% end %>
  </tbody>
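The new Dead Rows column divides by live rows only, so the figure can exceed 100% but compares directly against `autovacuum_vacuum_scale_factor` (e.g. the default 0.2 means autovacuum fires near 20%). A small sketch of that calculation, with a hypothetical helper name:

```ruby
# Dead-row "percentage" as computed in the view above: dead rows over
# live rows only, so it can go above 100%. Returns nil when there are
# no live rows (the view shows "Unknown" in that case).
def dead_rows_percent(dead_rows, live_rows)
  return nil if live_rows == 0
  (100.0 * dead_rows / live_rows).round
end
```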
@@ -18,7 +18,8 @@
  </tbody>
  </table>

- <p>Check out <%= link_to "PgTune", "https://pgtune.leopard.in.ua/", target: "_blank" %> for recommendations. DB version is <%= @database.server_version.split(" ").first.split(".").first(2).join(".") %>.</p>
+ <% version_parts = @database.server_version.split(" ").first.split(".") %>
+ <p>Check out <%= link_to "PgTune", "https://pgtune.leopard.in.ua/", target: "_blank" %> for recommendations. DB version is <%= version_parts[0].to_i >= 10 ? version_parts[0] : version_parts.first(2).join(".") %>.</p>
  </div>

  <% if @autovacuum_settings %>
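The template change above accounts for Postgres's versioning switch: from Postgres 10 on, the major version is a single number, so "10.3" should display as "10" while "9.6.2" keeps two parts. As a sketch, extracted into a hypothetical helper:

```ruby
# Shorten server_version for display, as the ERB above does:
# one part for Postgres >= 10, two parts for older releases.
def display_version(server_version)
  parts = server_version.split(" ").first.split(".")
  parts[0].to_i >= 10 ? parts[0] : parts.first(2).join(".")
end
```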
@@ -3,6 +3,11 @@ databases:
  # Database URL (defaults to app database)
  # url: <%%= ENV["DATABASE_URL"] %>

+ # System stats
+ # aws_db_instance_identifier: my-instance
+ # gcp_database_id: my-project:my-instance
+ # azure_resource_id: my-resource-id
+
  # Add more databases
  # other:
  # url: <%%= ENV["OTHER_DATABASE_URL"] %>
@@ -33,7 +38,9 @@ databases:
  # stats_database_url: <%%= ENV["PGHERO_STATS_DATABASE_URL"] %>

  # AWS configuration (defaults to app AWS config)
- # also need aws_db_instance_identifier with each database
  # aws_access_key_id: ...
  # aws_secret_access_key: ...
  # aws_region: us-east-1
+
+ # Filter data from queries (experimental)
+ # filter_data: true
@@ -34,9 +34,11 @@ module PgHero
  class Error < StandardError; end
  class NotEnabled < Error; end

+ MUTEX = Mutex.new
+
  # settings
  class << self
- attr_accessor :long_running_query_sec, :slow_query_ms, :slow_query_calls, :explain_timeout_sec, :total_connections_threshold, :cache_hit_rate_threshold, :env, :show_migrations, :config_path
+ attr_accessor :long_running_query_sec, :slow_query_ms, :slow_query_calls, :explain_timeout_sec, :total_connections_threshold, :cache_hit_rate_threshold, :env, :show_migrations, :config_path, :filter_data
  end
  self.long_running_query_sec = (ENV["PGHERO_LONG_RUNNING_QUERY_SEC"] || 60).to_i
  self.slow_query_ms = (ENV["PGHERO_SLOW_QUERY_MS"] || 20).to_i
@@ -47,6 +49,7 @@ module PgHero
  self.env = ENV["RAILS_ENV"] || ENV["RACK_ENV"] || "development"
  self.show_migrations = true
  self.config_path = ENV["PGHERO_CONFIG_PATH"] || "config/pghero.yml"
+ self.filter_data = ENV["PGHERO_FILTER_DATA"].to_s.size > 0

  class << self
  extend Forwardable
@@ -117,7 +120,9 @@ module PgHero
  if databases.empty?
  databases["primary"] = {
  "url" => ENV["PGHERO_DATABASE_URL"] || ActiveRecord::Base.connection_config,
- "db_instance_identifier" => ENV["PGHERO_DB_INSTANCE_IDENTIFIER"]
+ "db_instance_identifier" => ENV["PGHERO_DB_INSTANCE_IDENTIFIER"],
+ "gcp_database_id" => ENV["PGHERO_GCP_DATABASE_ID"],
+ "azure_resource_id" => ENV["PGHERO_AZURE_RESOURCE_ID"]
  }
  end

@@ -128,14 +133,20 @@ module PgHero
  end
  end

+ # ensure we only have one copy of databases
+ # so there's only one connection pool per database
  def databases
- @databases ||= begin
- Hash[
- config["databases"].map do |id, c|
- [id.to_sym, PgHero::Database.new(id, c)]
- end
- ]
+ unless defined?(@databases)
+ # only use mutex on initialization
+ MUTEX.synchronize do
+ # return if another process initialized while we were waiting
+ return @databases if defined?(@databases)
+
+ @databases = config["databases"].map { |id, c| [id.to_sym, Database.new(id, c)] }.to_h
+ end
  end
+
+ @databases
  end

  def primary_database
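The rewritten `databases` method is the "improved connection pooling" item from the changelog: a double-checked memoization that takes the mutex only on first access, so each configured database ends up with exactly one `Database` instance (and one connection pool). A reduced sketch of the pattern, with a hypothetical `Registry` class standing in:

```ruby
# Double-checked memoization: check without the lock, then re-check
# inside it, so a thread that lost the race reuses the hash the
# winner built instead of creating a second one.
class Registry
  MUTEX = Mutex.new

  def self.databases
    unless defined?(@databases)
      MUTEX.synchronize do
        # another thread may have initialized while we waited
        return @databases if defined?(@databases)

        @databases = {primary: Object.new}  # stand-in for Database.new
      end
    end

    @databases
  end
end
```

Every caller after the first gets the same object, so `Registry.databases.equal?(Registry.databases)` holds even under concurrent access.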
@@ -23,6 +23,11 @@ module PgHero
  def initialize(id, config)
  @id = id
  @config = config || {}
+
+ # preload model to ensure only one connection pool
+ # this doesn't actually start any connections
+ @adapter_checked = false
+ @connection_model = build_connection_model
  end

  def name
@@ -73,8 +78,43 @@ module PgHero
  config["aws_region"] || PgHero.config["aws_region"] || ENV["PGHERO_REGION"] || ENV["AWS_REGION"] || (defined?(Aws) && Aws.config[:region]) || "us-east-1"
  end

+ # environment variable is only used if no config file
  def aws_db_instance_identifier
- @db_instance_identifier ||= config["aws_db_instance_identifier"] || config["db_instance_identifier"]
+ @aws_db_instance_identifier ||= config["aws_db_instance_identifier"] || config["db_instance_identifier"]
+ end
+
+ # environment variable is only used if no config file
+ def gcp_database_id
+ @gcp_database_id ||= config["gcp_database_id"]
+ end
+
+ # environment variable is only used if no config file
+ def azure_resource_id
+ @azure_resource_id ||= config["azure_resource_id"]
+ end
+
+ # must check keys for booleans
+ def filter_data
+ unless defined?(@filter_data)
+ @filter_data =
+ if config.key?("filter_data")
+ config["filter_data"]
+ elsif PgHero.config.key?("filter_data")
+ PgHero.config.key?("filter_data")
+ else
+ PgHero.filter_data
+ end
+
+ if @filter_data
+ begin
+ require "pg_query"
+ rescue LoadError
+ raise Error, "pg_query required for filter_data"
+ end
+ end
+ end
+
+ @filter_data
  end

  # TODO remove in next major version
@@ -85,27 +125,46 @@ module PgHero

  private

+ # check adapter lazily
  def connection_model
- @connection_model ||= begin
- url = config["url"]
- if !url && config["spec"]
- raise Error, "Spec requires Rails 6+" unless PgHero.spec_supported?
- resolved = ActiveRecord::Base.configurations.configs_for(env_name: PgHero.env, spec_name: config["spec"], include_replicas: true)
- raise Error, "Spec not found: #{config["spec"]}" unless resolved
- url = resolved.config
+ unless @adapter_checked
+ # rough check for Postgres adapter
+ # keep this message generic so it's useful
+ # when empty url set in Docker image pghero.yml
+ unless @connection_model.connection.adapter_name =~ /postg/i
+ raise Error, "Invalid connection URL"
  end
- Class.new(PgHero::Connection) do
- def self.name
- "PgHero::Connection::Database#{object_id}"
- end
- case url
- when String
- url = "#{url}#{url.include?("?") ? "&" : "?"}connect_timeout=5" unless url.include?("connect_timeout=")
- when Hash
- url[:connect_timeout] ||= 5
- end
- establish_connection url if url
+ @adapter_checked = true
+ end
+
+ @connection_model
+ end
+
+ # just return the model
+ # do not start a connection
+ def build_connection_model
+ url = config["url"]
+
+ # resolve spec
+ if !url && config["spec"]
+ raise Error, "Spec requires Rails 6+" unless PgHero.spec_supported?
+ resolved = ActiveRecord::Base.configurations.configs_for(env_name: PgHero.env, spec_name: config["spec"], include_replicas: true)
+ raise Error, "Spec not found: #{config["spec"]}" unless resolved
+ url = resolved.config
+ end
+
+ Class.new(PgHero::Connection) do
+ def self.name
+ "PgHero::Connection::Database#{object_id}"
+ end
+
+ case url
+ when String
+ url = "#{url}#{url.include?("?") ? "&" : "?"}connect_timeout=5" unless url.include?("connect_timeout=")
+ when Hash
+ url[:connect_timeout] ||= 5
  end
+ establish_connection url if url
  end
  end
  end
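`build_connection_model` appends a default `connect_timeout` to URL-style configs, choosing `&` or `?` depending on whether the URL already carries a query string, and leaving explicit timeouts alone. A sketch of just that string handling, as a hypothetical standalone helper:

```ruby
# Append connect_timeout=5 unless the URL already sets one,
# using "&" when a query string exists and "?" otherwise.
def with_connect_timeout(url)
  return url if url.include?("connect_timeout=")
  "#{url}#{url.include?("?") ? "&" : "?"}connect_timeout=5"
end
```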
@@ -32,13 +32,34 @@ module PgHero

  private

- def select_all(sql, conn = nil)
+ def select_all(sql, conn: nil, query_columns: [])
  conn ||= connection
  # squish for logs
  retries = 0
  begin
  result = conn.select_all(add_source(squish(sql)))
- result.map { |row| Hash[row.map { |col, val| [col.to_sym, result.column_types[col].send(:cast_value, val)] }] }
+ result = result.map { |row| Hash[row.map { |col, val| [col.to_sym, result.column_types[col].send(:cast_value, val)] }] }
+ if filter_data
+ query_columns.each do |column|
+ result.each do |row|
+ begin
+ row[column] = PgQuery.normalize(row[column])
+ rescue PgQuery::ParseError
+ # try replacing "interval $1" with "$1::interval"
+ # see https://github.com/lfittl/pg_query/issues/169 for more info
+ # this is not ideal since it changes the query slightly
+ # we could skip normalization
+ # but this has a very small chance of data leakage
+ begin
+ row[column] = PgQuery.normalize(row[column].gsub(/\binterval\s+(\$\d+)\b/i, "\\1::interval"))
+ rescue PgQuery::ParseError
+ row[column] = "<unable to filter data>"
+ end
+ end
+ end
+ end
+ end
+ result
  rescue ActiveRecord::StatementInvalid => e
  # fix for random internal errors
  if e.message.include?("PG::InternalError") && retries < 2
@@ -51,8 +72,8 @@ module PgHero
  end
  end

- def select_all_stats(sql)
- select_all(sql, stats_connection)
+ def select_all_stats(sql, **options)
+ select_all(sql, **options, conn: stats_connection)
  end

  def select_all_size(sql)
@@ -63,12 +84,12 @@ module PgHero
  result
  end

- def select_one(sql, conn = nil)
- select_all(sql, conn).first.values.first
+ def select_one(sql, conn: nil)
+ select_all(sql, conn: conn).first.values.first
  end

  def select_one_stats(sql)
- select_one(sql, stats_connection)
+ select_one(sql, conn: stats_connection)
  end

  def execute(sql)
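When `filter_data` is on, `select_all` above runs each query column through `PgQuery.normalize`, retrying with the `interval $1` → `$1::interval` rewrite before giving up with a placeholder. A standalone sketch of that fallback chain, with a stub in place of the real pg_query gem (the stub only raises on the `interval $n` pattern to show the control flow; it does not parse SQL):

```ruby
# Hypothetical stand-ins for PgQuery.normalize / PgQuery::ParseError.
class ParseError < StandardError; end

STUB_NORMALIZE = lambda do |sql|
  raise ParseError if sql =~ /\binterval\s+\$\d+/i
  sql
end

def filter_query(sql, normalize = STUB_NORMALIZE)
  normalize.call(sql)
rescue ParseError
  begin
    # retry with "interval $1" rewritten as "$1::interval"
    normalize.call(sql.gsub(/\binterval\s+(\$\d+)\b/i, "\\1::interval"))
  rescue ParseError
    # give up rather than risk leaking literal values
    "<unable to filter data>"
  end
end
```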
@@ -57,7 +57,9 @@ module PgHero
  last_vacuum,
  last_autovacuum,
  last_analyze,
- last_autoanalyze
+ last_autoanalyze,
+ n_dead_tup AS dead_rows,
+ n_live_tup AS live_rows
  FROM
  pg_stat_user_tables
  ORDER BY
@@ -2,7 +2,7 @@ module PgHero
  module Methods
  module Queries
  def running_queries(min_duration: nil, all: false)
- select_all <<-SQL
+ query = <<-SQL
  SELECT
  pid,
  state,
@@ -24,6 +24,8 @@ module PgHero
  ORDER BY
  COALESCE(query_start, xact_start) DESC
  SQL
+
+ select_all(query, query_columns: [:query])
  end

  def long_running_queries
@@ -33,7 +35,7 @@ module PgHero
  # from https://wiki.postgresql.org/wiki/Lock_Monitoring
  # and https://big-elephants.com/2013-09/exploring-query-locks-in-postgres/
  def blocked_queries
- select_all <<-SQL
+ query = <<-SQL
  SELECT
  COALESCE(blockingl.relation::regclass::text,blockingl.locktype) as locked_item,
  blockeda.pid AS blocked_pid,
@@ -65,6 +67,8 @@ module PgHero
  ORDER BY
  blocked_duration DESC
  SQL
+
+ select_all(query, query_columns: [:blocked_query, :current_or_recent_query_in_blocking_process])
  end
  end
  end
@@ -166,7 +166,7 @@ module PgHero
  if query_stats_enabled?
  limit ||= 100
  sort ||= "total_minutes"
- select_all <<-SQL
+ query = <<-SQL
  WITH query_stats AS (
  SELECT
  LEFT(query, 10000) AS query,
@@ -200,6 +200,11 @@ module PgHero
  #{quote_table_name(sort)} DESC
  LIMIT #{limit.to_i}
  SQL
+
+ # we may be able to skip query_columns
+ # in more recent versions of Postgres
+ # as pg_stat_statements should be already normalized
+ select_all(query, query_columns: [:query])
  else
  raise NotEnabled, "Query stats not enabled"
  end
@@ -208,7 +213,7 @@ module PgHero
  def historical_query_stats(sort: nil, start_at: nil, end_at: nil, query_hash: nil)
  if historical_query_stats_enabled?
  sort ||= "total_minutes"
- select_all_stats <<-SQL
+ query = <<-SQL
  WITH query_stats AS (
  SELECT
  #{supports_query_hash? ? "query_hash" : "md5(query)"} AS query_hash,
@@ -244,6 +249,10 @@ module PgHero
  #{quote_table_name(sort)} DESC
  LIMIT 100
  SQL
+
+ # we can skip query_columns if all stored data is normalized
+ # for now, assume it's not
+ select_all_stats(query, query_columns: [:query, :explainable_query])
  else
  raise NotEnabled, "Historical query stats not enabled"
  end
@@ -1,31 +1,46 @@ module PgHero
  module Methods
  module System
+ def system_stats_enabled?
+ !system_stats_provider.nil?
+ end
+
+ # TODO require AWS 2+ automatically
+ def system_stats_provider
+ if aws_db_instance_identifier && (defined?(Aws) || defined?(AWS))
+ :aws
+ elsif gcp_database_id
+ :gcp
+ elsif azure_resource_id
+ :azure
+ end
+ end
+
  def cpu_usage(**options)
- rds_stats("CPUUtilization", options)
+ system_stats(:cpu, **options)
  end

  def connection_stats(**options)
- rds_stats("DatabaseConnections", options)
+ system_stats(:connections, **options)
  end

  def replication_lag_stats(**options)
- rds_stats("ReplicaLag", options)
+ system_stats(:replication_lag, **options)
  end

  def read_iops_stats(**options)
- rds_stats("ReadIOPS", options)
+ system_stats(:read_iops, **options)
  end

  def write_iops_stats(**options)
- rds_stats("WriteIOPS", options)
+ system_stats(:write_iops, **options)
  end

  def free_space_stats(**options)
- rds_stats("FreeStorageSpace", options)
+ system_stats(:free_space, **options)
  end

- def rds_stats(metric_name, duration: nil, period: nil, offset: nil)
+ def rds_stats(metric_name, duration: nil, period: nil, offset: nil, series: false)
  if system_stats_enabled?
  aws_options = {region: region}
  if access_key_id
@@ -43,16 +58,14 @@ module PgHero
  duration = (duration || 1.hour).to_i
  period = (period || 1.minute).to_i
  offset = (offset || 0).to_i
-
- end_time = (Time.now - offset)
- # ceil period
- end_time = Time.at((end_time.to_f / period).ceil * period)
+ end_time = Time.at(((Time.now - offset).to_f / period).ceil * period)
+ start_time = end_time - duration

  resp = client.get_metric_statistics(
  namespace: "AWS/RDS",
  metric_name: metric_name,
- dimensions: [{name: "DBInstanceIdentifier", value: db_instance_identifier}],
- start_time: (end_time - duration).iso8601,
+ dimensions: [{name: "DBInstanceIdentifier", value: aws_db_instance_identifier}],
+ start_time: start_time.iso8601,
  end_time: end_time.iso8601,
  period: period,
  statistics: ["Average"]
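The condensed `end_time` line above ceils the current time up to the next period boundary, so successive dashboard requests align to stable bucket edges instead of drifting with `Time.now`. As a small sketch:

```ruby
# Ceil a time to the next period boundary (period in seconds),
# mirroring the end_time calculation in the stats methods above.
def ceil_time(time, period)
  Time.at((time.to_f / period).ceil * period)
end
```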
@@ -61,14 +74,180 @@ module PgHero
  resp[:datapoints].sort_by { |d| d[:timestamp] }.each do |d|
  data[d[:timestamp]] = d[:average]
  end
+
+ add_missing_data(data, start_time, end_time, period) if series
+
  data
  else
  raise NotEnabled, "System stats not enabled"
  end
  end

- def system_stats_enabled?
- !!((defined?(Aws) || defined?(AWS)) && db_instance_identifier)
+ def azure_stats(metric_name, duration: nil, period: nil, offset: nil, series: false)
+ # TODO DRY with RDS stats
+ duration = (duration || 1.hour).to_i
+ period = (period || 1.minute).to_i
+ offset = (offset || 0).to_i
+ end_time = Time.at(((Time.now - offset).to_f / period).ceil * period)
+ start_time = end_time - duration
+
+ interval =
+ case period
+ when 60
+ "PT1M"
+ when 300
+ "PT5M"
+ when 900
+ "PT15M"
+ when 1800
+ "PT30M"
+ when 3600
+ "PT1H"
+ else
+ raise Error, "Unsupported period"
+ end
+
+ client = Azure::Monitor::Profiles::Latest::Mgmt::Client.new
+ timespan = "#{start_time.iso8601}/#{end_time.iso8601}"
+ results = client.metrics.list(
+ azure_resource_id,
+ metricnames: metric_name,
+ aggregation: "Average",
+ timespan: timespan,
+ interval: interval
+ )
+
+ data = {}
+ result = results.value.first
+ if result
+ result.timeseries.first.data.each do |point|
+ data[point.time_stamp.to_time] = point.average
+ end
+ end
+
+ add_missing_data(data, start_time, end_time, period) if series
+
+ data
+ end
+
+ private
+
+ def gcp_stats(metric_name, duration: nil, period: nil, offset: nil, series: false)
+ require "google/cloud/monitoring"
+
+ # TODO DRY with RDS stats
+ duration = (duration || 1.hour).to_i
+ period = (period || 1.minute).to_i
+ offset = (offset || 0).to_i
+ end_time = Time.at(((Time.now - offset).to_f / period).ceil * period)
+ start_time = end_time - duration
+
+ client = Google::Cloud::Monitoring::Metric.new
+
+ interval = Google::Monitoring::V3::TimeInterval.new
+ interval.end_time = Google::Protobuf::Timestamp.new(seconds: end_time.to_i)
+ # subtract period to make sure we get first data point
+ interval.start_time = Google::Protobuf::Timestamp.new(seconds: (start_time - period).to_i)
+
+ aggregation = Google::Monitoring::V3::Aggregation.new
+ # may be better to use ALIGN_NEXT_OLDER for space stats to show most recent data point
+ # stick with average for now to match AWS
+ aggregation.per_series_aligner = Google::Monitoring::V3::Aggregation::Aligner::ALIGN_MEAN
+ aggregation.alignment_period = period
+
+ # validate input since we need to interpolate below
+ raise Error, "Invalid metric name" unless metric_name =~ /\A[a-z\/_]+\z/i
+ raise Error, "Invalid database id" unless gcp_database_id =~ /\A[a-z\-:]+\z/i
+
+ results = client.list_time_series(
+ "projects/#{gcp_database_id.split(":").first}",
+ "metric.type = \"cloudsql.googleapis.com/database/#{metric_name}\" AND resource.label.database_id = \"#{gcp_database_id}\"",
+ interval,
+ Google::Monitoring::V3::ListTimeSeriesRequest::TimeSeriesView::FULL,
+ aggregation: aggregation
+ )
+
+ data = {}
+ result = results.first
+ if result
+ result.points.each do |point|
+ time = Time.at(point.interval.start_time.seconds)
+ value = point.value.double_value
+ value *= 100 if metric_name == "cpu/utilization"
+ data[time] = value
+ end
+ end
+
+ add_missing_data(data, start_time, end_time, period) if series
+
+ data
+ end
+
+ def system_stats(metric_key, **options)
+ case system_stats_provider
+ when :aws
+ metrics = {
+ cpu: "CPUUtilization",
+ connections: "DatabaseConnections",
+ replication_lag: "ReplicaLag",
+ read_iops: "ReadIOPS",
+ write_iops: "WriteIOPS",
+ free_space: "FreeStorageSpace"
+ }
+ rds_stats(metrics[metric_key], **options)
+ when :gcp
+ if metric_key == :free_space
+ quota = gcp_stats("disk/quota", **options)
+ used = gcp_stats("disk/bytes_used", **options)
+ free_space(quota, used)
+ else
+ metrics = {
+ cpu: "cpu/utilization",
+ connections: "postgresql/num_backends",
+ replication_lag: "replication/replica_lag",
+ read_iops: "disk/read_ops_count",
+ write_iops: "disk/write_ops_count"
+ }
+ gcp_stats(metrics[metric_key], **options)
+ end
+ when :azure
+ if metric_key == :free_space
+ quota = azure_stats("storage_limit", **options)
+ used = azure_stats("storage_used", **options)
+ free_space(quota, used)
+ else
+ # no read_iops, write_iops
+ # could add io_consumption_percent
+ metrics = {
+ cpu: "cpu_percent",
+ connections: "active_connections",
+ replication_lag: "pg_replica_log_delay_in_seconds"
+ }
+ raise Error, "Metric not supported" unless metrics[metric_key]
+ azure_stats(metrics[metric_key], **options)
+ end
+ else
+ raise NotEnabled, "System stats not enabled"
+ end
+ end
+
+ # only use data points included in both series
+ # this also eliminates need to align Time.now
+ def free_space(quota, used)
+ data = {}
+ quota.each do |k, v|
+ data[k] = v - used[k] if v && used[k]
+ end
+ data
+ end
+
+ def add_missing_data(data, start_time, end_time, period)
+ time = start_time
+ end_time = end_time
+ while time < end_time
+ data[time] ||= nil
+ time += period
+ end
  end
  end
  end
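The two private helpers that close the module pair naturally: `free_space` subtracts the used series from the quota series, keeping only timestamps present in both, and `add_missing_data` pads gaps with nil so charts draw a continuous time axis. A condensed sketch of both (using integer timestamps for brevity; the sketch returns the hash, which the gem's version does not need to do):

```ruby
# Keep only timestamps present in both series, as free_space above does.
def free_space(quota, used)
  data = {}
  quota.each { |k, v| data[k] = v - used[k] if v && used[k] }
  data
end

# Pad gaps with nil so every period in [start_time, end_time) has a key.
def add_missing_data(data, start_time, end_time, period)
  time = start_time
  while time < end_time
    data[time] ||= nil
    time += period
  end
  data
end
```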
@@ -1,3 +1,3 @@
  module PgHero
- VERSION = "2.4.2"
+ VERSION = "2.5.0"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: pghero
  version: !ruby/object:Gem::Version
- version: 2.4.2
+ version: 2.5.0
  platform: ruby
  authors:
  - Andrew Kane
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-04-16 00:00:00.000000000 Z
+ date: 2020-05-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activerecord