pg_metrics 0.1.1

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 1f02f742e23e3d02686a15c2cf6a9f8b48b527e3
+   data.tar.gz: cb0e25e46b852b7cc3f3551b3c87d63bbf187acb
+ SHA512:
+   metadata.gz: 615b4a325678b94b2f3596f924c8a5ac978314d2a4b0a18d09eb6785d45882545291948298b2ba849f20efd4b4e075cda241d7b091584a5ac1f60c656296f7df
+   data.tar.gz: 455211aeb1ef3fa4883edbee01bc70d9111433dfe9e04950134765097a8f7664d131b31e1f6b26e3e24a46e3422e6146a85ffc482bdf01c2481de26a8b9dc236
data/CHANGELOG.markdown ADDED
@@ -0,0 +1,90 @@
+ # pg_metrics Changelog
+
+ ## Changes between 0.0.7 and 0.1.1
+
+ ### Remove sensu
+
+ pg_metrics generates too many stats for sensu to cope with, which
+ is why we made the statsd version in the first place. As we're not
+ using or testing the sensu version, remove it rather than let it bitrot.
+
+ ### Update project homepage
+
+ Make the project public (yay!), with github.com as the official homepage.
+
+ ## Changes between 0.0.6 and 0.0.7
+
+ ### Use client connection user in backend stats
+
+ In 0.0.5, backend stats for automatic connections or non-force-user database
+ definitions used `SAMEUSER` as the username for the backend connection. The
+ actual client connection user is now used instead.
+
+ ### Drop backend stats for pools with no corresponding backend
+
+ `pgbouncer` sometimes shows databases in `SHOW pools` that have no
+ corresponding entry in `SHOW databases`. These pool entries are
+ now ignored for backend stats.
+
+ ## Changes between 0.0.5 and 0.0.6
+
+ ### Add pgbouncer stats
+
+ `pg_metrics` can now be used to collect pgbouncer stats using the `--pgbouncer`
+ flag. The results of `SHOW stats` and `SHOW pools` are collected, as well
+ as per-backend stats, which use the results of `SHOW databases` to aggregate
+ pool stats per backend connection, not just per defined database connection.
+
+ ## Changes between 0.0.4 and 0.0.5
+
+ ### Collect waiting session stats
+
+ Count sessions as *waiting* if `pg_stat_activity.waiting` is `TRUE`,
+ and as the session state otherwise (using `pg_stat_activity.state` for
+ PostgreSQL versions >= 9.2 and calculated from `pg_stat_activity.current_query`
+ in earlier versions).
+
+ `pg_stat_activity` does track waiting independently of state, but for practical
+ purposes it's not all that useful to track them separately for metrics collection.
+
+ ### Track xlog.location for PostgreSQL versions <= 9.0
+
+ Earlier versions of pg_metrics did not collect `current_xlog_location`
+ for PostgreSQL versions earlier than 9.1. The `pg_current_xlog_location()` function
+ is available in both PostgreSQL 8.3 and 8.4. The `pg_last_xlog_(receive|replay)_location()`
+ functions are available for PostgreSQL versions >= 9.0, so collect those
+ when available.
+
+
+ ## Changes between 0.0.3 and 0.0.4
+
+ ### Allow specification of which database stats are collected
+
+ Prior to 0.0.4, per-database stats were collected only from `pg_locks`,
+ `pg_stat_user_tables` and `pg_statio_user_tables`. `pg_metrics_statsd`
+ collects all stats by default, and allows specification of which stats
+ to omit with a variety of `--no-*` command line flags.
+
+ ## Changes between 0.0.2 and 0.0.3
+
+ ### Improve formatting of verbose output
+
+ 0.0.3 prints each metric on its own line.
+
+ ### Permit using short -s flag to specify scheme
+
+ 0.0.3 allows you to specify the scheme using `-s SCHEME` as well
+ as the legacy `--scheme SCHEME`.
+
+ ## Changes between 0.0.1 and 0.0.2
+
+ ### Fix use of regexp filter
+
+ An incomplete refactor left behind a second instantiation of the filter regex,
+ along with a reference to a variable that was no longer in scope.
+
+ ### Set application_name only for PostgreSQL versions >= 9.0
+
+ The application_name parameter was introduced in PostgreSQL version 9.0. Earlier
+ versions (such as 8.3 and 8.4) will throw an error if you try to set it, so we
+ no longer try to set it for versions that don't support it.
data/DEV.markdown ADDED
@@ -0,0 +1,22 @@
+ # Development notes
+
+ Processes I often forget between release cycles
+
+ ## To build and install locally
+
+     gem build pg_metrics.gemspec
+     gem install ./pg_metrics-0.0.X.gem
+
+
+ ## To release
+
+ * Update `CHANGELOG.markdown`
+ * Update `lib/pg_metrics/version.rb`
+ * Update `spec.date` in `pg_metrics.gemspec`
+ * Tag
+
+         git tag -a "v0.0.X" -m "version 0.0.X"
+
+ * Push to github repo
+
+         git push origin --tags
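+
+ ## To run the tests
+
+ The test suite can be run with the default rake task (defined in `rakefile`):
+
+     rake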
data/Gemfile ADDED
@@ -0,0 +1,5 @@
+ source "http://rubygems.org"
+
+ gem "pg"
+ gem "statsd-ruby"
+ gem "simplecov", :require => false, :group => :test
data/Gemfile.lock ADDED
@@ -0,0 +1,20 @@
+ GEM
+   remote: http://rubygems.org/
+   specs:
+     docile (1.1.5)
+     multi_json (1.10.1)
+     pg (0.17.1)
+     simplecov (0.9.1)
+       docile (~> 1.1.0)
+       multi_json (~> 1.0)
+       simplecov-html (~> 0.8.0)
+     simplecov-html (0.8.0)
+     statsd-ruby (1.2.1)
+
+ PLATFORMS
+   ruby
+
+ DEPENDENCIES
+   pg
+   simplecov
+   statsd-ruby
data/LICENSE ADDED
@@ -0,0 +1,22 @@
+ Copyright (c) 2015, MeetMe, Inc.
+ All rights reserved.
+
+ The MIT License (MIT)
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.markdown ADDED
@@ -0,0 +1,43 @@
+ # pg_metrics PostgreSQL Metrics
+
+ `pg_metrics` is a PostgreSQL metrics collector for use with statsd.
+
+ ## Installation
+
+     gem install pg_metrics
+
+ The `pg_metrics_statsd` command is now available.
+
+ ## Usage
+
+ To collect PostgreSQL instance metrics on localhost port 5432 and pass them to a
+ statsd instance running on localhost port 8125:
+
+     pg_metrics_statsd --host localhost --port 8125 --connection "host=localhost port=5432"
+
+ To collect PostgreSQL database metrics for the `prod` database, include the
+ `--dbname` parameter:
+
+     pg_metrics_statsd --host localhost --port 8125 --connection "host=localhost port=5432" --dbname=prod
+
+ By default, pg_metrics_statsd collects stats from `pg_locks`,
+ `pg_stat_user_functions` (where available), `pg_stat_user_tables`,
+ `pg_statio_user_tables`, `pg_stat_user_indexes`, `pg_statio_user_indexes`,
+ as well as per-table and per-index sizes. You can omit stats by supplying
+ command line flags:
+
+ - `--no-functions`
+ - `--no-locks`
+ - `--no-table-stats`
+ - `--no-table-statio`
+ - `--no-index-stats`
+ - `--no-index-statio`
+ - `--no-table-sizes`
+ - `--no-index-sizes`
+
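+ For example, to collect metrics for the `prod` database while skipping the
+ per-table and per-index statio stats:
+
+     pg_metrics_statsd --host localhost --port 8125 --connection "host=localhost port=5432" --dbname=prod --no-table-statio --no-index-statio
+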
+ ### pgbouncer metrics
+
+ `pg_metrics` can also collect `pgbouncer` metrics by passing the `--pgbouncer`
+ flag.
+
+     pg_metrics_statsd --host localhost --port 8125 --connection "host=localhost port=6432" --pgbouncer
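+
+ ### Using the collectors from Ruby
+
+ The collectors behind `pg_metrics_statsd` can also be called directly from Ruby;
+ a minimal sketch (the application name and connection string below are just
+ placeholders, and the methods are defined in `lib/pg_metrics/metrics.rb`):
+
+     require "pg_metrics"
+
+     # Returns an array of [key_segments, value, timestamp] tuples
+     metrics = PgMetrics::Metrics.fetch_instance_metrics("my_collector", "host=localhost port=5432")
+     metrics.each { |keys, value, _ts| puts "#{keys.join('.')} #{value}" }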
data/bin/pg_metrics_statsd ADDED
@@ -0,0 +1,14 @@
+ #!/usr/bin/env ruby
+ begin
+   require 'pg_metrics'
+ rescue LoadError
+   require 'rubygems'
+   require 'pg_metrics'
+ end
+
+ begin
+   exit PgMetrics::Statsd::main(ARGV)
+ rescue => e
+   STDERR.puts e.message
+   STDERR.puts e.backtrace
+ end
data/lib/pg_metrics/metrics.rb ADDED
@@ -0,0 +1,347 @@
1
+ require "pg"
2
+ require "set"
3
+
4
+ module PgMetrics
5
+ module Metrics
6
+
7
+ Functions = :functions
8
+ Locks = :locks
9
+ TableSizes = :table_size
10
+ IndexSizes = :index_size
11
+ TableStatio = :table_statio
12
+ TableStats = :table_stats
13
+ IndexStatio = :index_statio
14
+ IndexStats = :index_stats
15
+
16
+ def self.fetch_instance_metrics(app_name, conn_info, regexp = nil)
17
+ metrics = []
18
+ conn = make_conn(conn_str(conn_info), app_name)
19
+ server_version = conn.parameter_status("server_version")
20
+ instance_metrics(server_version).values.each do |m|
21
+ metrics += fetch_metrics(conn, m[:prefix], m[:query])
22
+ end
23
+ conn.finish
24
+ filter_metrics(metrics, regexp)
25
+ end
26
+
27
+ def self.fetch_database_metrics(app_name, conn_info, dbname, select_names, regexp = nil)
28
+ metrics = []
29
+ conn = make_conn(conn_str(conn_info, dbname), app_name)
30
+ server_version = conn.parameter_status("server_version")
31
+ select_metrics = database_metrics(server_version).select { |k, _v| select_names.include? k }
32
+ select_metrics.values.each do |m|
33
+ metrics += fetch_metrics(conn, ["database", dbname] + m[:prefix], m[:query])
34
+ end
35
+ conn.finish
36
+ filter_metrics(metrics, regexp)
37
+ end
38
+
39
+ def self.make_conn(conn_str, app_name)
40
+ conn = PG::Connection.new(conn_str)
41
+ server_version = conn.parameter_status("server_version")
42
+ conn.exec(%(SET application_name = "#{app_name}")) if Gem::Version.new(server_version) >= Gem::Version.new("9.0")
43
+ conn
44
+ end
45
+
46
+ def self.filter_metrics(metrics, regexp = nil)
47
+ metrics.reject! { |m| m[1].nil? }
48
+ metrics.reject! { |m| m[0].any? { |k| k =~ regexp } } if regexp
49
+ metrics
50
+ end
51
+
52
+ def self.conn_str(conn_info, dbname = "postgres")
53
+ [conn_info, %(dbname=#{dbname})].join(" ")
54
+ end
55
+
56
+ def self.fetch_metrics(conn, keys, query)
57
+ metrics = []
58
+
59
+ return metrics if query.nil?
60
+
61
+ timestamp = Time.now.to_i
62
+
63
+ conn.exec(query) do |result|
64
+ if result.nfields == 1 && result.ntuples == 1
65
+ # Typically result of SHOW command
66
+ metrics << format_metric(keys, result.getvalue(0, 0), timestamp)
67
+ elsif result.nfields >= 2 && result.fields.first == "key"
68
+ if result.fields.last == "value"
69
+ # Omit "value" column from metric name
70
+ nkeys = result.nfields - 1
71
+ result.each_row do |row|
72
+ mkeys = row.first(nkeys)
73
+ value = row.last
74
+ metrics << format_metric(keys + mkeys, value, timestamp)
75
+ end
76
+ else
77
+ # Use any column named key* as part of the metric name.
78
+ # Any other columns are named values.
79
+ nkeys = result.fields.take_while { |f| f =~ /^key/ }.count
80
+ keycols = result.fields.first(nkeys)
81
+ nvals = result.nfields - nkeys
82
+ valcols = result.fields.last(nvals)
83
+ result.each do |tup|
84
+ mkeys = keycols.map { |col| tup[col] }
85
+ valcols.each do |key|
86
+ value = tup[key]
87
+ metrics << format_metric(keys + mkeys + [key], value, timestamp)
88
+ end
89
+ end
90
+ end
91
+ else # We've got a single-row result where columns are named values
92
+ result[0].each do |key, value|
93
+ metrics << format_metric(keys + [key], value, timestamp)
94
+ end
95
+ end
96
+ end
97
+
98
+ metrics
99
+ end
100
+
101
+ def self.format_metric(keys, value, timestamp)
102
+ segs = keys.reject { |k| k.nil? }.map { |x| x.gsub(/[\s.]/, "_") }
103
+ value = decode_xlog_location(value)
104
+ [segs, value, timestamp]
105
+ end
106
+
107
+ def self.decode_xlog_location(val)
108
+ return val if val.nil?
109
+ if (m = val.match(%r{([A-Fa-f0-9]+)/([A-Fa-f0-9]+)}))
110
+ return (m[1].hex << 32) + m[2].hex
111
+ end
112
+ val
113
+ end
114
+
115
+ def self.instance_metrics(server_version)
116
+ {
117
+ max_connections: {
118
+ prefix: %w(config instance max_connections),
119
+ query: %q{SHOW max_connections}
120
+ },
121
+
122
+ superuser_connections: {
123
+ prefix: %w(config instance superuser_reserved_connections),
124
+ query: %q{SHOW superuser_reserved_connections}
125
+ },
126
+
127
+ archive_files: {
128
+ prefix: %w(archive_files),
129
+ query: %q{SELECT CAST(COALESCE(SUM(CAST(archive_file ~ E'\\.ready$' AS int)), 0) AS INT) AS ready,
130
+ CAST(COALESCE(SUM(CAST(archive_file ~ E'\\.done$' AS int)), 0) AS INT) AS done
131
+ FROM pg_catalog.pg_ls_dir('pg_xlog/archive_status') AS archive_files (archive_file)}
132
+ },
133
+
134
+ bgwriter: {
135
+ prefix: %w(bgwriter),
136
+ query: %q{SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint,
137
+ buffers_clean, maxwritten_clean, buffers_backend, buffers_alloc
138
+ FROM pg_stat_bgwriter}
139
+ },
140
+
141
+ sessions: {
142
+ prefix: %w(sessions),
143
+ query: Gem::Version.new(server_version) >= Gem::Version.new('9.2') \
144
+ ? %{SELECT datname AS key, usename AS key2,
145
+ CASE WHEN waiting THEN 'waiting' ELSE state END AS key3,
146
+ count(*) AS value
147
+ FROM pg_stat_activity
148
+ WHERE pid <> pg_backend_pid() GROUP BY datname, usename, 3}
149
+ : %{SELECT datname AS key, usename AS key2,
150
+ CASE WHEN waiting THEN 'waiting'
151
+ ELSE CASE current_query
152
+ WHEN NULL THEN 'disabled'
153
+ WHEN '<IDLE>' THEN 'idle'
154
+ WHEN '<IDLE> in transaction' THEN 'idle in transaction'
155
+ ELSE 'active' END END AS key3,
156
+ count(*) AS value
157
+ FROM pg_stat_activity
158
+ WHERE procpid <> pg_backend_pid() GROUP BY datname, usename, 3}
159
+ },
160
+
161
+ database_connection_limits: {
162
+ prefix: %w(config database),
163
+ query: %q{SELECT datname AS key,
164
+ CASE WHEN datconnlimit <> -1 THEN datconnlimit ELSE current_setting('max_connections')::int END AS connection_limit
165
+ FROM pg_database
166
+ WHERE datallowconn AND NOT datistemplate}
167
+ },
168
+
169
+ user_connection_limits: {
170
+ prefix: %w(config user),
171
+ query: %q{SELECT rolname AS key,
172
+ CASE WHEN rolconnlimit <> -1 THEN rolconnlimit ELSE current_setting('max_connections')::INT - CASE WHEN rolsuper THEN 0 ELSE current_setting('superuser_reserved_connections')::INT END END AS connection_limit
173
+ FROM pg_roles
174
+ WHERE rolcanlogin}
175
+ },
176
+
177
+ database_size: {
178
+ prefix: %w(database),
179
+ query: %q{SELECT datname AS key, pg_database_size(oid) AS size FROM pg_database WHERE NOT datistemplate}
180
+ },
181
+
182
+ streaming_state: {
183
+ prefix: %w(streaming_state),
184
+ query: Gem::Version.new(server_version) >= Gem::Version.new('9.1') \
185
+ ? %q{SELECT CASE WHEN client_hostname IS NULL THEN 'socket' ELSE host(client_addr) END AS key,
186
+ CASE state WHEN 'catchup' THEN 1 WHEN 'streaming' THEN 2 ELSE 0 END as value
187
+ FROM pg_stat_replication}
188
+ : nil
189
+ },
190
+
191
+ transactions: {
192
+ prefix: %w(database),
193
+ query: %q{SELECT dat.datname AS key, 'transactions' AS key2, xact_commit AS commit, xact_rollback AS rollback FROM pg_stat_database JOIN pg_database dat ON dat.oid = datid WHERE datallowconn AND NOT datistemplate}
194
+ },
195
+
196
+ xlog: {
197
+ prefix: %w(xlog),
198
+ query: Gem::Version.new(server_version) >= Gem::Version.new('9.0') \
199
+ ? %q{SELECT CASE WHEN pg_is_in_recovery() THEN NULL ELSE pg_current_xlog_location() END AS location,
200
+ pg_last_xlog_receive_location() AS receive_location,
201
+ pg_last_xlog_replay_location() AS replay_location}
202
+ : %q{SELECT pg_current_xlog_location() AS location}
203
+ }
204
+ }
205
+ end
206
+
207
+ def self.database_metrics(server_version)
208
+ {
209
+ Functions => {
210
+ prefix: %w(function),
211
+ query: Gem::Version.new(server_version) >= Gem::Version.new('8.4') \
212
+ ? %q{SELECT schemaname AS key,
213
+ array_to_string(ARRAY[funcname, '-', pronargs::TEXT,
214
+ CASE WHEN pronargs = 0 THEN ''
215
+ ELSE '-' || array_to_string(CASE WHEN pronargs > 16
216
+ THEN ARRAY(SELECT args[i]
217
+ FROM generate_series(1, 8) AS _(i))
218
+ || '-'::TEXT
219
+ || ARRAY(SELECT args[i]
220
+ FROM generate_series(pronargs - 7, pronargs) AS _ (i))
221
+ || funcid::TEXT
222
+ ELSE args END, '-') END], '') AS key2,
223
+ calls, total_time, self_time
224
+ FROM (SELECT funcid, schemaname, funcname::TEXT, pronargs,
225
+ ARRAY(SELECT typname::TEXT
226
+ FROM pg_type
227
+ JOIN (SELECT args.i, proargtypes[args.i] AS typid
228
+ FROM pg_catalog.generate_series(0, array_upper(proargtypes, 1)) AS args (i))
229
+ AS args (i, typid) ON typid = pg_type.oid
230
+ ORDER BY i) AS args,
231
+ calls, total_time, self_time
232
+ FROM pg_stat_user_functions
233
+ JOIN pg_proc ON pg_proc.oid = funcid
234
+ WHERE schemaname NOT IN ('information_schema', 'pg_catalog')) AS funcs}
235
+ : nil
236
+ },
237
+
238
+ Locks => {
239
+ prefix: %w(table),
240
+ query: %q{SELECT nspname AS key,
241
+ CASE rel.relkind WHEN 'r' THEN rel.relname ELSE crel.relname END AS key2,
242
+ CASE rel.relkind WHEN 'r' THEN 'locks' ELSE 'index' END AS key3,
243
+ CASE rel.relkind WHEN 'r' THEN mode ELSE rel.relname END AS key4,
244
+ CASE rel.relkind WHEN 'r' THEN NULL ELSE 'locks' END AS key5,
245
+ CASE rel.relkind WHEN 'r' THEN NULL ELSE mode END AS key6,
246
+ count(*) AS value
247
+ FROM pg_locks
248
+ JOIN pg_database dat ON dat.oid = database
249
+ JOIN pg_class rel ON rel.oid = relation
250
+ LEFT JOIN pg_index ON indexrelid = rel.oid
251
+ LEFT JOIN pg_class crel ON indrelid = crel.oid
252
+ JOIN pg_namespace nsp ON nsp.oid = rel.relnamespace
253
+ WHERE locktype = 'relation' AND nspname <> 'pg_catalog' AND rel.relkind in ('r', 'i')
254
+ GROUP BY 1, 2, 3, 4, 5, 6}
255
+ },
256
+
257
+ TableSizes => {
258
+ prefix: %w(table),
259
+ query: %q{SELECT n.nspname AS key, r.relname AS key2,
260
+ pg_relation_size(r.oid) AS size,
261
+ pg_total_relation_size(r.oid) AS total_size
262
+ FROM pg_class r
263
+ JOIN pg_namespace n ON r.relnamespace = n.oid
264
+ WHERE r.relkind = 'r'
265
+ AND n.nspname NOT IN ('pg_catalog', 'information_schema')}
266
+ },
267
+
268
+ IndexSizes => {
269
+ prefix: %w(table),
270
+ query: %q{SELECT n.nspname AS key, cr.relname AS key2, 'index' AS key3,
271
+ ci.relname AS key4, pg_relation_size(ci.oid) AS size
272
+ FROM pg_class ci JOIN pg_index i ON ci.oid = i.indexrelid
273
+ JOIN pg_class cr ON cr.oid = i.indrelid
274
+ JOIN pg_namespace n on ci.relnamespace = n.oid
275
+ WHERE ci.relkind = 'i' AND cr.relkind = 'r'
276
+ AND n.nspname NOT IN ('pg_catalog', 'information_schema')}
277
+ },
278
+
279
+ TableStatio => {
280
+ prefix: %w(table),
281
+ query: %q{SELECT schemaname AS key, relname AS key2, 'statio' AS key3,
282
+ nullif(heap_blks_read, 0) AS heap_blks_read,
283
+ nullif(heap_blks_hit, 0) AS heap_blks_hit,
284
+ nullif(idx_blks_read, 0) AS idx_blks_read,
285
+ nullif(idx_blks_hit, 0) AS idx_blks_hit,
286
+ nullif(toast_blks_read, 0) AS toast_blks_read,
287
+ nullif(toast_blks_hit, 0) AS toast_blks_hit,
288
+ nullif(tidx_blks_read, 0) AS tidx_blks_read,
289
+ nullif(tidx_blks_hit, 0) AS tidx_blks_hit
290
+ FROM pg_statio_user_tables}
291
+ },
292
+
293
+ TableStats => {
294
+ prefix: %w(table),
295
+ query: Gem::Version.new(server_version) >= Gem::Version.new('9.1') \
296
+ ? %q{SELECT schemaname AS key, relname AS key2, 'stat' AS key3,
297
+ nullif(seq_scan, 0) AS seq_scan,
298
+ nullif(seq_tup_read, 0) AS seq_tup_read,
299
+ nullif(idx_scan, 0) AS idx_scan,
300
+ nullif(idx_tup_fetch, 0) AS idx_tup_fetch,
301
+ nullif(n_tup_ins, 0) AS n_tup_ins,
302
+ nullif(n_tup_upd, 0) AS n_tup_upd,
303
+ nullif(n_tup_del, 0) AS n_tup_del,
304
+ nullif(n_tup_hot_upd, 0) AS n_tup_hot_upd,
305
+ nullif(n_live_tup, 0) AS n_live_tup,
306
+ nullif(n_dead_tup, 0) AS n_dead_tup,
307
+ nullif(vacuum_count, 0) AS vacuum_count,
308
+ nullif(autovacuum_count, 0) AS autovacuum_count,
309
+ nullif(analyze_count, 0) AS analyze_count,
310
+ nullif(autoanalyze_count, 0) AS autoanalyze_count
311
+ FROM pg_stat_user_tables} \
312
+ : %q{SELECT schemaname AS key, relname AS key2, 'stat' AS key3,
313
+ nullif(seq_scan, 0) AS seq_scan,
314
+ nullif(seq_tup_read, 0) AS seq_tup_read,
315
+ nullif(idx_scan, 0) AS idx_scan,
316
+ nullif(idx_tup_fetch, 0) AS idx_tup_fetch,
317
+ nullif(n_tup_ins, 0) AS n_tup_ins,
318
+ nullif(n_tup_upd, 0) AS n_tup_upd,
319
+ nullif(n_tup_del, 0) AS n_tup_del,
320
+ nullif(n_tup_hot_upd, 0) AS n_tup_hot_upd,
321
+ nullif(n_live_tup, 0) AS n_live_tup,
322
+ nullif(n_dead_tup, 0) AS n_dead_tup
323
+ FROM pg_stat_user_tables},
324
+ },
325
+
326
+ IndexStatio => {
327
+ prefix: %w(table),
328
+ query: %q{SELECT schemaname AS key, relname AS key2, 'index' AS key3,
329
+ indexrelname AS key4, 'statio' AS key5,
330
+ nullif(idx_blks_read, 0) AS idx_blks_read,
331
+ nullif(idx_blks_hit, 0) AS idx_blks_hit
332
+ FROM pg_statio_user_indexes},
333
+ },
334
+
335
+ IndexStats => {
336
+ prefix: %w(table),
337
+ query: %q{SELECT schemaname AS key, relname AS key2, 'index' AS key3,
338
+ indexrelname AS key4, 'stat' AS key5,
339
+ nullif(idx_scan, 0) AS idx_scan,
340
+ nullif(idx_tup_read, 0) AS idx_tup_read,
341
+ nullif(idx_tup_fetch, 0) AS idx_tup_fetch
342
+ FROM pg_stat_user_indexes}
343
+ }
344
+ }
345
+ end
346
+ end
347
+ end
data/lib/pg_metrics/pgbouncer_metrics.rb ADDED
@@ -0,0 +1,119 @@
1
+ require "pg"
2
+
3
+ module PgMetrics
4
+ module PgbouncerMetrics
5
+
6
+ def self.fetch_pgbouncer_metrics(app_name, conn_info)
7
+ metrics = []
8
+ conn = make_conn(conn_str(conn_info), app_name)
9
+ metrics = metrics.concat(fetch_stats_metrics(conn))
10
+ pool_results = fetch_pools(conn)
11
+ metrics = metrics.concat(extract_pool_metrics(pool_results))
12
+ database_results = fetch_databases(conn)
13
+ metrics = metrics.concat(extract_database_metrics(database_results))
14
+ metrics = metrics.concat(extract_backend_metrics(database_results, pool_results))
15
+ conn.finish
16
+ filter_metrics(metrics)
17
+ end
18
+
19
+ def self.make_conn(conn_str, app_name)
20
+ PG::Connection.new(conn_str)
21
+ end
22
+
23
+ def self.conn_str(conn_info, dbname = "pgbouncer", user = "admin")
24
+ [conn_info, %(dbname=#{dbname}), %(user=#{user})].join(" ")
25
+ end
26
+
27
+ def self.filter_metrics(metrics, regexp = nil)
28
+ metrics.reject! { |m| m[1].nil? }
29
+ metrics.reject! { |m| m[0].any? { |k| k =~ regexp } } if regexp
30
+ metrics.inject([]) { |memo, m| memo << [sanitize_key(m[0]), m[1]] }
31
+ end
32
+
33
+ def self.fetch_stats_metrics(conn)
34
+ cols = %w(total_requests total_received total_sent total_query_time avg_req avg_recv avg_sent avg_query)
35
+ metrics = []
36
+ conn.exec("SHOW stats") do |results|
37
+ results.each do |tup|
38
+ cols.each do |col|
39
+ metrics << [["stats", tup["database"], col], tup[col]]
40
+ end
41
+ end
42
+ end
43
+ metrics
44
+ end
45
+
46
+ def self.fetch_pools(conn)
47
+ conn.exec("SHOW pools")
48
+ end
49
+
50
+ def self.extract_pool_metrics(results)
51
+ cols = %w(cl_active cl_waiting sv_active sv_idle sv_used sv_tested sv_login maxwait)
52
+ results.inject([]) do |memo, tup|
53
+ cols.each do |col|
54
+ memo << [["pools", tup["database"], tup["user"], col], tup[col]]
55
+ end
56
+ memo
57
+ end
58
+ end
59
+
60
+ def self.fetch_databases(conn)
61
+ conn.exec("SHOW databases")
62
+ end
63
+
64
+ def self.extract_database_metrics(results)
65
+ cols = %w(pool_size reserve_pool)
66
+ results.inject([]) do |memo, tup|
67
+ cols.each do |col|
68
+ memo << [["databases", tup["name"], col], tup[col]]
69
+ end
70
+ memo
71
+ end
72
+ end
73
+
74
+ def self.extract_backend_metrics(database_results, pool_results)
75
+ databases = database_results.inject({}) do |memo, tup|
76
+ user = tup["force_user"].nil? ? :sameuser : tup["force_user"]
77
+ host = tup["host"].nil? ? "localhost" : tup["host"]
78
+ memo[tup["name"]] = {:host => host, :port => tup["port"], :database => tup["database"], :user => user}
79
+ memo
80
+ end
81
+
82
+ sum_cols = %w(cl_active cl_waiting sv_active sv_idle sv_used sv_tested sv_login)
83
+ max_cols = %w(max_wait)
84
+ cols = sum_cols.concat(max_cols)
85
+ sums = pool_results.inject({}) do |memo, tup|
86
+ database = databases[tup["database"]]
87
+ next memo if database.nil?
88
+ user = database[:user] === :sameuser ? tup["user"] : database[:user]
89
+ key = [database[:host], database[:port], database[:database], user]
90
+ vals = memo[key] || {
91
+ "cl_active" => 0,
92
+ "cl_waiting" => 0,
93
+ "sv_active" => 0,
94
+ "sv_idle" => 0,
95
+ "sv_used" => 0,
96
+ "sv_tested" => 0,
97
+ "sv_login" => 0,
98
+ "max_wait" => 0
99
+ }
100
+ sum_cols.each { |col| vals[col] += tup[col].to_i }
101
+ max_cols.each { |col| vals[col] = [tup[col].to_i, vals[col]].max }
102
+ memo[key] = vals
103
+ memo
104
+ end
105
+ sums.inject([]) do |memo, (key, val)|
106
+ cols.each do |col|
107
+ k = ["backends"] + key + [col]
108
+ memo << [k, val[col]]
109
+ end
110
+ memo
111
+ end
112
+ end
113
+
114
+ def self.sanitize_key(key)
115
+ key.inject([]) { |memo, el| memo << el.gsub(/[^-a-zA-Z_0-9]/, "_") }
116
+ end
117
+
118
+ end
119
+ end
data/lib/pg_metrics/statsd.rb ADDED
@@ -0,0 +1,85 @@
1
+ require 'optparse'
2
+ require 'socket'
3
+ require 'statsd-ruby'
4
+ require 'set'
5
+
6
+ module PgMetrics
7
+ module Statsd
8
+ APPNAME = "pg_metrics_statsd"
9
+
10
+ def self.main(args)
11
+ options = self.parse(args)
12
+
13
+ if options[:version]
14
+ STDOUT.puts %(pg_metrics #{PgMetrics::VERSION})
15
+ return 0
16
+ end
17
+
18
+ if options[:pgbouncer]
19
+ metrics = PgMetrics::PgbouncerMetrics::fetch_pgbouncer_metrics(APPNAME, options[:conn])
20
+ else
21
+ regexp = options[:exclude] ? options[:exclude] : nil
22
+
23
+ metrics = if options[:dbname]
24
+ PgMetrics::Metrics::fetch_database_metrics(APPNAME, options[:conn], options[:dbname],
25
+ options[:dbstats], regexp)
26
+ else
27
+ PgMetrics::Metrics::fetch_instance_metrics(APPNAME, options[:conn], regexp)
28
+ end
29
+ end
30
+
31
+ statsd = ::Statsd.new(options[:host], options[:port]).tap do |sd|
32
+ sd.namespace = options[:scheme]
33
+ end
34
+
35
+ metrics.map! { |m| [m[0].join("."), m[1]] }
36
+
37
+ metrics.each { |m| STDOUT.puts m.join(" ") } if options[:verbose]
38
+
39
+ metrics.each do |m|
40
+ statsd.gauge(m[0], m[1])
41
+ end
42
+
43
+ exit 0
44
+ end
45
+
46
+ def self.parse(args)
47
+ options = {
48
+ host: "localhost",
49
+ port: 8125,
50
+ conn: "",
51
+ scheme: %(#{Socket.gethostname}.postgresql),
52
+ dbstats: [PgMetrics::Metrics::Functions,
53
+ PgMetrics::Metrics::Locks,
54
+ PgMetrics::Metrics::TableSizes,
55
+ PgMetrics::Metrics::IndexSizes,
56
+ PgMetrics::Metrics::TableStatio,
57
+ PgMetrics::Metrics::TableStats,
58
+ PgMetrics::Metrics::IndexStatio,
59
+ PgMetrics::Metrics::IndexStats].to_set
60
+ }
61
+
62
+ OptionParser.new do |opts|
63
+ opts.on("-h", "--host STATSD_HOST", "StatsD host") { |v| options[:host] = v }
64
+ opts.on("-p", "--port STATSD_PORT", "StatsD port") { |v| options[:port] = v.to_i }
65
+ opts.on("-c", "--connection CONN", "PostgreSQL connection string") { |v| options[:conn] = v }
66
+ opts.on("-d", "--dbname DBNAME", "PostgreSQL database name for database metrics") { |v| options[:dbname] = v }
67
+ opts.on("-e", "--exclude REGEXP", "Exclude objects matching given regexp") { |v| options[:exclude] = ::Regexp.new(v) }
68
+ opts.on("-s", "--scheme SCHEME", "Metric namespace") { |v| options[:scheme] = v }
69
+ opts.on("--[no-]functions", "Collect database function stats") { |v| options[:dbstats].delete(PgMetrics::Metrics::Functions) unless v }
70
+ opts.on("--[no-]locks", "Collect database lock stats") { |v| options[:dbstats].delete(PgMetrics::Metrics::Locks) unless v }
71
+ opts.on("--[no-]table-sizes", "Collect database table size stats ") { |v| options[:dbstats].delete(PgMetrics::Metrics::TableSizes) unless v }
72
+ opts.on("--[no-]index-sizes", "Collect database index size stats ") { |v| options[:dbstats].delete(PgMetrics::Metrics::IndexSizes) unless v }
73
+ opts.on("--[no-]table-statio", "Collect database table statio stats ") { |v| options[:dbstats].delete(PgMetrics::Metrics::TableStatio) unless v }
74
+ opts.on("--[no-]table-stats", "Collect database table stats ") { |v| options[:dbstats].delete(PgMetrics::Metrics::TableStats) unless v }
75
+ opts.on("--[no-]index-statio", "Collect database index statio stats ") { |v| options[:dbstats].delete(PgMetrics::Metrics::IndexStatio) unless v }
76
+ opts.on("--[no-]index-stats", "Collect database index stats ") { |v| options[:dbstats].delete(PgMetrics::Metrics::IndexStats) unless v }
77
+ opts.on("--pgbouncer", "Collect pgbouncer stats") { |v| options[:pgbouncer] = true }
78
+ opts.on("--verbose") { |v| options[:verbose] = true }
79
+ opts.on("--version") { |v| options[:version] = v }
80
+ end.order!(args)
81
+
82
+ options
83
+ end
84
+ end
85
+ end
data/lib/pg_metrics/version.rb ADDED
@@ -0,0 +1,3 @@
+ module PgMetrics
+   VERSION = "0.1.1"
+ end
data/lib/pg_metrics.rb ADDED
@@ -0,0 +1,7 @@
+ module PgMetrics
+ end
+
+ require 'pg_metrics/version'
+ require 'pg_metrics/metrics'
+ require 'pg_metrics/statsd'
+ require 'pg_metrics/pgbouncer_metrics'
data/pg_metrics.gemspec ADDED
@@ -0,0 +1,38 @@
+ lib = File.expand_path("../lib/", __FILE__)
+ $LOAD_PATH.unshift lib unless $LOAD_PATH.include?(lib)
+ require "rake"
+ require "pg_metrics/version"
+
+ Gem::Specification.new do |spec|
+   spec.name = "pg_metrics"
+   spec.version = PgMetrics::VERSION
+   spec.licenses = %w(MIT)
+   spec.date = "2015-03-17"
+   spec.summary = "pg_metrics"
+   spec.description = "PostgreSQL Metrics"
+   spec.authors = ["Michael Glaesemann"]
+   spec.email = ["michael.glaesemann@meetme.com"]
+   spec.files = FileList["{bin,lib,test}/**/*.*",
+                         "CHANGELOG.markdown",
+                         "DEV.markdown",
+                         "Gemfile",
+                         "Gemfile.lock",
+                         "LICENSE",
+                         "README.markdown",
+                         "pg_metrics.gemspec",
+                         "rakefile"].to_a
+   spec.executables = %w(pg_metrics_statsd)
+   spec.require_path = %(lib)
+   spec.test_files = FileList["test/**/*.*"].to_a
+   spec.extra_rdoc_files = %w(LICENSE README.markdown)
+   spec.homepage = "https://github.com/MeetMe/pg_metrics"
+   [["pg", ["~> 0.10"]],
+    ["statsd-ruby", ["~> 1.2", ">= 1.2.1"]]].each do |dep|
+     spec.add_runtime_dependency(*dep)
+   end
+
+   [["test-unit", ["~> 2.1", ">= 2.1.2.0"]],
+    ["simplecov", ["~> 0.7", ">= 0.7.1"]]].each do |dep|
+     spec.add_development_dependency(*dep)
+   end
+ end
data/rakefile ADDED
@@ -0,0 +1,8 @@
+ require "rake/testtask"
+
+ Rake::TestTask.new do |t|
+   t.libs << "test"
+   t.verbose = true
+ end
+
+ task :default => :test
data/test/helper.rb ADDED
@@ -0,0 +1,11 @@
+ require "simplecov"
+ SimpleCov.start { add_filter "/test/" }
+ require "test/unit"
+
+ [File.dirname(__FILE__),
+  File.join(File.dirname(__FILE__), "..", "lib")].each do |f|
+   $LOAD_PATH.unshift(f)
+ end
+
+ require "pg_metrics"
+
data/test/test_statsd.rb ADDED
@@ -0,0 +1,95 @@
1
+ require "helper"
2
+ require "set"
3
+
4
+ module PgMetrics
5
+ module Test
6
+ class Statsd < ::Test::Unit::TestCase
7
+ def test_ok
8
+ assert(true)
9
+ end
10
+
11
+ def test_should_have_sensible_defaults
12
+ args = %w()
13
+ config = PgMetrics::Statsd::parse(args)
14
+ assert_equal("localhost", config[:host])
15
+ assert_equal(8125, config[:port])
16
+ assert_equal("", config[:conn])
17
+ assert_match(/\.postgresql$/, config[:scheme])
18
+ end
19
+
20
+ def test_should_set_host_and_port
21
+ args = %w(--host 127.0.0.1 --port 9000)
22
+ config = PgMetrics::Statsd::parse(args)
23
+ assert_equal("127.0.0.1", config[:host])
24
+ assert_equal(9000, config[:port])
25
+ end
26
+
27
+ def test_should_set_regexp_filter
28
+ args = %w(--exclude xdrop)
29
+ config = PgMetrics::Statsd::parse(args)
30
+ assert_equal(config[:exclude], ::Regexp.new(/xdrop/))
31
+ end
32
+
33
+ def test_should_set_connection
34
+ args = ["--connection", "host=localhost port=5493"]
35
+ config = PgMetrics::Statsd::parse(args)
36
+ assert_equal(config[:conn], "host=localhost port=5493")
37
+ end
38
+
39
+ def test_should_set_dbname
40
+ args = %w(--dbname prod)
41
+ config = PgMetrics::Statsd::parse(args)
42
+ assert_equal(config[:dbname], "prod")
43
+ end
44
+
45
+ def test_should_set_all_metrics
46
+ args = []
47
+ config = PgMetrics::Statsd::parse(args)
48
+ expected = [PgMetrics::Metrics::Functions,
49
+ PgMetrics::Metrics::Locks,
50
+ PgMetrics::Metrics::TableSizes,
51
+ PgMetrics::Metrics::IndexSizes,
52
+ PgMetrics::Metrics::TableStatio,
53
+ PgMetrics::Metrics::TableStats,
54
+ PgMetrics::Metrics::IndexStatio,
55
+ PgMetrics::Metrics::IndexStats].to_set
56
+ assert_equal(config[:dbstats], expected)
57
+ end
58
+
59
+ def test_should_set_all_metrics_with_positive_locks
60
+ args = %w(--locks)
61
+ config = PgMetrics::Statsd::parse(args)
62
+ expected = [PgMetrics::Metrics::Functions,
63
+ PgMetrics::Metrics::Locks,
64
+ PgMetrics::Metrics::TableSizes,
65
+ PgMetrics::Metrics::IndexSizes,
66
+ PgMetrics::Metrics::TableStatio,
67
+ PgMetrics::Metrics::TableStats,
68
+ PgMetrics::Metrics::IndexStatio,
69
+ PgMetrics::Metrics::IndexStats].to_set
70
+ assert_equal(config[:dbstats], expected)
71
+ end
72
+
73
+ def test_should_not_collect_locks
74
+ args = %w(--no-locks)
75
+ config = PgMetrics::Statsd::parse(args)
76
+ expected = [PgMetrics::Metrics::Functions,
77
+ PgMetrics::Metrics::TableSizes,
78
+ PgMetrics::Metrics::IndexSizes,
79
+ PgMetrics::Metrics::TableStatio,
80
+ PgMetrics::Metrics::TableStats,
81
+ PgMetrics::Metrics::IndexStatio,
82
+ PgMetrics::Metrics::IndexStats].to_set
83
+ assert_equal(config[:dbstats], expected)
84
+ end
85
+
86
+ def test_should_remove_all_but_locks
87
+ args = %w(--no-functions --no-table-sizes --no-index-sizes --no-table-statio --no-table-stats --no-index-stats --no-index-statio)
88
+ config = PgMetrics::Statsd::parse(args)
89
+ expected = [PgMetrics::Metrics::Locks].to_set
90
+ assert_equal(config[:dbstats], expected)
91
+ end
92
+
93
+ end
94
+ end
95
+ end
metadata ADDED
@@ -0,0 +1,139 @@
1
+ --- !ruby/object:Gem::Specification
2
+ name: pg_metrics
3
+ version: !ruby/object:Gem::Version
4
+ version: 0.1.1
5
+ platform: ruby
6
+ authors:
7
+ - Michael Glaesemann
8
+ autorequire:
9
+ bindir: bin
10
+ cert_chain: []
11
+ date: 2015-03-17 00:00:00.000000000 Z
12
+ dependencies:
13
+ - !ruby/object:Gem::Dependency
14
+ name: pg
15
+ requirement: !ruby/object:Gem::Requirement
16
+ requirements:
17
+ - - ~>
18
+ - !ruby/object:Gem::Version
19
+ version: '0.10'
20
+ type: :runtime
21
+ prerelease: false
22
+ version_requirements: !ruby/object:Gem::Requirement
23
+ requirements:
24
+ - - ~>
25
+ - !ruby/object:Gem::Version
26
+ version: '0.10'
27
+ - !ruby/object:Gem::Dependency
28
+ name: statsd-ruby
29
+ requirement: !ruby/object:Gem::Requirement
30
+ requirements:
31
+ - - ~>
32
+ - !ruby/object:Gem::Version
33
+ version: '1.2'
34
+ - - '>='
35
+ - !ruby/object:Gem::Version
36
+ version: 1.2.1
37
+ type: :runtime
38
+ prerelease: false
39
+ version_requirements: !ruby/object:Gem::Requirement
40
+ requirements:
41
+ - - ~>
42
+ - !ruby/object:Gem::Version
43
+ version: '1.2'
44
+ - - '>='
45
+ - !ruby/object:Gem::Version
46
+ version: 1.2.1
47
+ - !ruby/object:Gem::Dependency
48
+ name: test-unit
49
+ requirement: !ruby/object:Gem::Requirement
50
+ requirements:
51
+ - - ~>
52
+ - !ruby/object:Gem::Version
53
+ version: '2.1'
54
+ - - '>='
55
+ - !ruby/object:Gem::Version
56
+ version: 2.1.2.0
57
+ type: :development
58
+ prerelease: false
59
+ version_requirements: !ruby/object:Gem::Requirement
60
+ requirements:
61
+ - - ~>
62
+ - !ruby/object:Gem::Version
63
+ version: '2.1'
64
+ - - '>='
65
+ - !ruby/object:Gem::Version
66
+ version: 2.1.2.0
67
+ - !ruby/object:Gem::Dependency
68
+ name: simplecov
69
+ requirement: !ruby/object:Gem::Requirement
70
+ requirements:
71
+ - - ~>
72
+ - !ruby/object:Gem::Version
73
+ version: '0.7'
74
+ - - '>='
75
+ - !ruby/object:Gem::Version
76
+ version: 0.7.1
77
+ type: :development
78
+ prerelease: false
79
+ version_requirements: !ruby/object:Gem::Requirement
80
+ requirements:
81
+ - - ~>
82
+ - !ruby/object:Gem::Version
83
+ version: '0.7'
84
+ - - '>='
85
+ - !ruby/object:Gem::Version
86
+ version: 0.7.1
87
+ description: PostgreSQL Metrics
88
+ email:
89
+ - michael.glaesemann@meetme.com
90
+ executables:
91
+ - pg_metrics_statsd
92
+ extensions: []
93
+ extra_rdoc_files:
94
+ - LICENSE
95
+ - README.markdown
96
+ files:
97
+ - lib/pg_metrics.rb
98
+ - lib/pg_metrics/metrics.rb
99
+ - lib/pg_metrics/pgbouncer_metrics.rb
100
+ - lib/pg_metrics/statsd.rb
101
+ - lib/pg_metrics/version.rb
102
+ - test/helper.rb
103
+ - test/test_statsd.rb
104
+ - CHANGELOG.markdown
105
+ - DEV.markdown
106
+ - Gemfile
107
+ - Gemfile.lock
108
+ - LICENSE
109
+ - README.markdown
110
+ - pg_metrics.gemspec
111
+ - rakefile
112
+ - bin/pg_metrics_statsd
113
+ homepage: https://github.com/MeetMe/pg_metrics
114
+ licenses:
115
+ - MIT
116
+ metadata: {}
117
+ post_install_message:
118
+ rdoc_options: []
119
+ require_paths:
120
+ - lib
121
+ required_ruby_version: !ruby/object:Gem::Requirement
122
+ requirements:
123
+ - - '>='
124
+ - !ruby/object:Gem::Version
125
+ version: '0'
126
+ required_rubygems_version: !ruby/object:Gem::Requirement
127
+ requirements:
128
+ - - '>='
129
+ - !ruby/object:Gem::Version
130
+ version: '0'
131
+ requirements: []
132
+ rubyforge_project:
133
+ rubygems_version: 2.0.3
134
+ signing_key:
135
+ specification_version: 4
136
+ summary: pg_metrics
137
+ test_files:
138
+ - test/helper.rb
139
+ - test/test_statsd.rb