prometheus_exporter 2.0.0 → 2.0.3
- checksums.yaml +4 -4
- data/CHANGELOG +15 -1
- data/README.md +32 -1
- data/lib/prometheus_exporter/instrumentation/active_record.rb +6 -22
- data/lib/prometheus_exporter/instrumentation/method_profiler.rb +61 -24
- data/lib/prometheus_exporter/instrumentation/periodic_stats.rb +62 -0
- data/lib/prometheus_exporter/instrumentation/process.rb +5 -21
- data/lib/prometheus_exporter/instrumentation/puma.rb +7 -12
- data/lib/prometheus_exporter/instrumentation/resque.rb +6 -11
- data/lib/prometheus_exporter/instrumentation/sidekiq.rb +20 -12
- data/lib/prometheus_exporter/instrumentation/sidekiq_process.rb +5 -11
- data/lib/prometheus_exporter/instrumentation/sidekiq_queue.rb +5 -11
- data/lib/prometheus_exporter/instrumentation/sidekiq_stats.rb +5 -11
- data/lib/prometheus_exporter/instrumentation/unicorn.rb +7 -12
- data/lib/prometheus_exporter/instrumentation.rb +1 -0
- data/lib/prometheus_exporter/metric/histogram.rb +3 -3
- data/lib/prometheus_exporter/middleware.rb +8 -6
- data/lib/prometheus_exporter/server/runner.rb +3 -1
- data/lib/prometheus_exporter/server/web_server.rb +3 -1
- data/lib/prometheus_exporter/version.rb +1 -1
- metadata +3 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 07f116c0fdf15835af1b9520ca81b47c1d7a6011b3b8991cf9b569127d7a22fc
+  data.tar.gz: f631d2d3259d72577e896a4f2a650f7af65e70b5703f24a02cbcebadd75f97d1
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 81894939793a45953be6efbf90db9506abb6cc5ca196e03fd0bfe92035430284240820e1311e3f9532933602fa337790eb828e99809f003378cb6290b74f1eba
+  data.tar.gz: 151c49417eee04b2f9b931d9bda160fd19df22d2d9ff474f2f54b57bc828504eaf2d2f047eae2be1f121ce4d0b3ed74e9695959bcd51da8a52949da8a3931604
data/CHANGELOG
CHANGED
@@ -1,6 +1,20 @@
+2.0.3 - 2022-05-23
+
+ - FEATURE: new ping endpoint for keepalive checks
+ - FIX: order histogram correctly for GCP support
+ - FIX: improve sidekiq instrumentation
+
+2.0.2 - 2022-02-25
+
+ - FIX: runner was not requiring unicorn integration correctly leading to a crash
+
+2.0.1 - 2022-02-24
+
+ - FIX: ensure threads do not leak when calling #start repeatedly on instrumentation classes, this is an urgent patch for Puma integration
+
 2.0.0 - 2022-02-18
 
-- FEATURE: Add per worker custom labels
+ - FEATURE: Add per worker custom labels
  - FEATURE: support custom histogram buckets
  - FIX: all metrics are exposing status label, and not only `http_requests_total`
  - BREAKING: rename all `http_duration` metrics to `http_request_duration` to match prometheus official naming conventions (See https://prometheus.io/docs/practices/naming/#metric-names).
data/README.md
CHANGED
@@ -201,6 +201,14 @@ Ensure you run the exporter in a monitored background process:
 $ bundle exec prometheus_exporter
 ```
 
+#### Choosing the style of method patching
+
+By default, `prometheus_exporter` uses `alias_method` to instrument methods used by SQL and Redis as it is the fastest approach (see [this article](https://samsaffron.com/archive/2017/10/18/fastest-way-to-profile-a-method-in-ruby)). You may desire to add additional instrumentation libraries beyond `prometheus_exporter` to your app. This can become problematic if these other libraries instead use `prepend` to instrument methods. To resolve this, you can tell the middleware to instrument using `prepend` by passing an `instrument` option like so:
+
+```ruby
+Rails.application.middleware.unshift PrometheusExporter::Middleware, instrument: :prepend
+```
+
 #### Metrics collected by Rails integration middleware
 
 | Type | Name | Description |
@@ -575,7 +583,10 @@ The easiest way to gather this metrics is to put the following in your `puma.rb`
 # puma.rb config
 after_worker_boot do
   require 'prometheus_exporter/instrumentation'
-  PrometheusExporter::Instrumentation::Puma.start
+  # optional check, avoids spinning up and down threads per worker
+  if !PrometheusExporter::Instrumentation::Puma.started?
+    PrometheusExporter::Instrumentation::Puma.start
+  end
 end
 ```
 
@@ -949,6 +960,26 @@ You can also pass a log level (default is [`Logger::WARN`](https://ruby-doc.org/
 PrometheusExporter::Client.new(log_level: Logger::DEBUG)
 ```
 
+## Docker/Kubernetes Healthcheck
+
+A `/ping` endpoint which only returns `PONG` is available so you can run container healthchecks :
+
+Example:
+
+```yml
+services:
+  rails-exporter:
+    command:
+      - bin/prometheus_exporter
+      - -b
+      - 0.0.0.0
+    healthcheck:
+      test: ["CMD", "curl", "--silent", "--show-error", "--fail", "--max-time", "3", "http://0.0.0.0:9394/ping"]
+      timeout: 3s
+      interval: 10s
+      retries: 5
+```
+
 ## Contributing
 
 Bug reports and pull requests are welcome on GitHub at https://github.com/discourse/prometheus_exporter. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
data/lib/prometheus_exporter/instrumentation/active_record.rb
CHANGED
@@ -2,11 +2,10 @@
 
 # collects stats from currently running process
 module PrometheusExporter::Instrumentation
-  class ActiveRecord
+  class ActiveRecord < PeriodicStats
     ALLOWED_CONFIG_LABELS = %i(database username host port)
 
     def self.start(client: nil, frequency: 30, custom_labels: {}, config_labels: [])
-
       client ||= PrometheusExporter::Client.default
 
       # Not all rails versions support connection pool stats
@@ -20,20 +19,12 @@ module PrometheusExporter::Instrumentation
 
       active_record_collector = new(custom_labels, config_labels)
 
-      stop if @thread
-
-      @thread = Thread.new do
-        while true
-          begin
-            metrics = active_record_collector.collect
-            metrics.each { |metric| client.send_json metric }
-          rescue => e
-            client.logger.error("Prometheus Exporter Failed To Collect Process Stats #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+      worker_loop do
+        metrics = active_record_collector.collect
+        metrics.each { |metric| client.send_json metric }
       end
+
+      super
     end
 
     def self.validate_config_labels(config_labels)
@@ -41,13 +32,6 @@ module PrometheusExporter::Instrumentation
       raise "Invalid Config Labels, available options #{ALLOWED_CONFIG_LABELS}" if (config_labels - ALLOWED_CONFIG_LABELS).size > 0
     end
 
-    def self.stop
-      if t = @thread
-        t.kill
-        @thread = nil
-      end
-    end
-
     def initialize(metric_labels, config_labels)
       @metric_labels = metric_labels
       @config_labels = config_labels
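This refactor changes only how the collection thread is managed; the public entry point is unchanged. For reference, a minimal sketch of starting the collector from a Rails initializer (file name and label values are illustrative):

```ruby
# config/initializers/prometheus.rb -- illustrative sketch
require 'prometheus_exporter/instrumentation'

# Spawns one background thread (now managed by PeriodicStats) that reports
# connection pool stats every 30 seconds to the default exporter client.
PrometheusExporter::Instrumentation::ActiveRecord.start(
  custom_labels: { type: "puma_worker" },  # optional, merged into every metric
  config_labels: [:database, :host]        # must be a subset of ALLOWED_CONFIG_LABELS
)
```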
data/lib/prometheus_exporter/instrumentation/method_profiler.rb
CHANGED
@@ -4,30 +4,14 @@
 module PrometheusExporter::Instrumentation; end
 
 class PrometheusExporter::Instrumentation::MethodProfiler
-  def self.patch(klass, methods, name)
-    patch_source_line = __LINE__ + 3
-    patches = methods.map do |method_name|
-      <<~RUBY
-      unless defined?(#{method_name}__mp_unpatched)
-        alias_method :#{method_name}__mp_unpatched, :#{method_name}
-        def #{method_name}(*args, &blk)
-          unless prof = Thread.current[:_method_profiler]
-            return #{method_name}__mp_unpatched(*args, &blk)
-          end
-          begin
-            start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
-            #{method_name}__mp_unpatched(*args, &blk)
-          ensure
-            data = (prof[:#{name}] ||= {duration: 0.0, calls: 0})
-            data[:duration] += Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
-            data[:calls] += 1
-          end
-        end
-      end
-      RUBY
-    end.join("\n")
-
-    klass.class_eval patches, __FILE__, patch_source_line
+  def self.patch(klass, methods, name, instrument:)
+    if instrument == :alias_method
+      patch_using_alias_method(klass, methods, name)
+    elsif instrument == :prepend
+      patch_using_prepend(klass, methods, name)
+    else
+      raise ArgumentError, "instrument must be :alias_method or :prepend"
+    end
   end
 
   def self.transfer
@@ -55,4 +39,57 @@ class PrometheusExporter::Instrumentation::MethodProfiler
     end
     data
   end
+
+  private
+
+  def self.patch_using_prepend(klass, methods, name)
+    prepend_instument = Module.new
+    patch_source_line = __LINE__ + 3
+    patches = methods.map do |method_name|
+      <<~RUBY
+      def #{method_name}(*args, &blk)
+        unless prof = Thread.current[:_method_profiler]
+          return super
+        end
+        begin
+          start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
+          super
+        ensure
+          data = (prof[:#{name}] ||= {duration: 0.0, calls: 0})
+          data[:duration] += Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
+          data[:calls] += 1
+        end
+      end
+      RUBY
+    end.join("\n")
+
+    prepend_instument.module_eval patches, __FILE__, patch_source_line
+    klass.prepend(prepend_instument)
+  end
+
+  def self.patch_using_alias_method(klass, methods, name)
+    patch_source_line = __LINE__ + 3
+    patches = methods.map do |method_name|
+      <<~RUBY
+      unless defined?(#{method_name}__mp_unpatched)
+        alias_method :#{method_name}__mp_unpatched, :#{method_name}
+        def #{method_name}(*args, &blk)
+          unless prof = Thread.current[:_method_profiler]
+            return #{method_name}__mp_unpatched(*args, &blk)
+          end
+          begin
+            start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
+            #{method_name}__mp_unpatched(*args, &blk)
+          ensure
+            data = (prof[:#{name}] ||= {duration: 0.0, calls: 0})
+            data[:duration] += Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
+            data[:calls] += 1
+          end
+        end
+      end
+      RUBY
+    end.join("\n")
+
+    klass.class_eval patches, __FILE__, patch_source_line
+  end
 end
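`patch` now requires the `instrument:` keyword whether it is reached through the middleware or called directly. A minimal sketch with a hypothetical class (`Widget` and `#fetch` exist only for illustration):

```ruby
require 'prometheus_exporter/instrumentation'

class Widget # hypothetical class, for illustration only
  def fetch
    sleep 0.01
  end
end

profiler = PrometheusExporter::Instrumentation::MethodProfiler
# :prepend keeps other prepend-based instrumentation of the same method working
profiler.patch(Widget, [:fetch], :widget, instrument: :prepend)

profiler.start
Widget.new.fetch
data = profiler.stop
# data[:widget] => { duration: <seconds>, calls: 1 }
```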
data/lib/prometheus_exporter/instrumentation/periodic_stats.rb
ADDED
@@ -0,0 +1,62 @@
+# frozen_string_literal: true
+
+module PrometheusExporter::Instrumentation
+  class PeriodicStats
+
+    def self.start(*args, frequency:, client: nil, **kwargs)
+      client ||= PrometheusExporter::Client.default
+
+      if !(Numeric === frequency)
+        raise ArgumentError.new("Expected frequency to be a number")
+      end
+
+      if frequency < 0
+        raise ArgumentError.new("Expected frequency to be a positive number")
+      end
+
+      if !@worker_loop
+        raise ArgumentError.new("Worker loop was not set")
+      end
+
+      klass = self
+
+      stop
+
+      @stop_thread = false
+
+      @thread = Thread.new do
+        while !@stop_thread
+          begin
+            @worker_loop.call
+          rescue => e
+            client.logger.error("#{klass} Prometheus Exporter Failed To Collect Stats #{e}")
+          ensure
+            sleep frequency
+          end
+        end
+      end
+
+    end
+
+    def self.started?
+      !!@thread&.alive?
+    end
+
+    def self.worker_loop(&blk)
+      @worker_loop = blk
+    end
+
+    def self.stop
+      # to avoid a warning
+      @thread = nil if !defined?(@thread)
+
+      if @thread&.alive?
+        @stop_thread = true
+        @thread.wakeup
+        @thread.join
+      end
+      @thread = nil
+    end
+
+  end
+end
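Every collector below is rewritten on top of this base class with the same contract: register the work with `worker_loop`, then call `super` so `PeriodicStats.start` replaces any existing thread. A sketch of a hypothetical collector built the same way (`MyQueue` and the `"my_queue"` type are illustrative; a matching server-side collector would also be needed):

```ruby
require 'prometheus_exporter/client'
require 'prometheus_exporter/instrumentation'

module PrometheusExporter::Instrumentation
  class MyQueueStats < PeriodicStats # hypothetical collector
    def self.start(client: nil, frequency: 30)
      client ||= PrometheusExporter::Client.default

      worker_loop do
        # MyQueue.depth is illustrative only
        client.send_json(type: "my_queue", depth: MyQueue.depth)
      end

      super
    end
  end
end

# PrometheusExporter::Instrumentation::MyQueueStats.start(frequency: 15)  # starts (or restarts) the thread
# PrometheusExporter::Instrumentation::MyQueueStats.started?              # => true while the thread is alive
# PrometheusExporter::Instrumentation::MyQueueStats.stop                  # joins and clears the thread
```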
data/lib/prometheus_exporter/instrumentation/process.rb
CHANGED
@@ -2,8 +2,7 @@
 
 # collects stats from currently running process
 module PrometheusExporter::Instrumentation
-  class Process
-    @thread = nil if !defined?(@thread)
+  class Process < PeriodicStats
 
     def self.start(client: nil, type: "ruby", frequency: 30, labels: nil)
 
@@ -19,27 +18,12 @@ module PrometheusExporter::Instrumentation
       process_collector = new(metric_labels)
       client ||= PrometheusExporter::Client.default
 
-      stop if @thread
-
-      @thread = Thread.new do
-        while true
-          begin
-            metric = process_collector.collect
-            client.send_json metric
-          rescue => e
-            client.logger.error("Prometheus Exporter Failed To Collect Process Stats #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+      worker_loop do
+        metric = process_collector.collect
+        client.send_json metric
       end
-    end
 
-    def self.stop
-      if t = @thread
-        t.kill
-        @thread = nil
-      end
+      super
     end
 
     def initialize(metric_labels)
data/lib/prometheus_exporter/instrumentation/puma.rb
CHANGED
@@ -4,22 +4,17 @@ require "json"
 
 # collects stats from puma
 module PrometheusExporter::Instrumentation
-  class Puma
+  class Puma < PeriodicStats
     def self.start(client: nil, frequency: 30, labels: {})
       puma_collector = new(labels)
       client ||= PrometheusExporter::Client.default
-      Thread.new do
-        while true
-          begin
-            metric = puma_collector.collect
-            client.send_json metric
-          rescue => e
-            client.logger.error("Prometheus Exporter Failed To Collect Puma Stats #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+
+      worker_loop do
+        metric = puma_collector.collect
+        client.send_json metric
       end
+
+      super
     end
 
     def initialize(metric_labels = {})
data/lib/prometheus_exporter/instrumentation/resque.rb
CHANGED
@@ -2,21 +2,16 @@
 
 # collects stats from resque
 module PrometheusExporter::Instrumentation
-  class Resque
+  class Resque < PeriodicStats
     def self.start(client: nil, frequency: 30)
      resque_collector = new
      client ||= PrometheusExporter::Client.default
-      Thread.new do
-        while true
-          begin
-            client.send_json(resque_collector.collect)
-          rescue => e
-            client.logger.error("Prometheus Exporter Failed To Collect Resque Stats #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+
+      worker_loop do
+        client.send_json(resque_collector.collect)
       end
+
+      super
     end
 
     def collect
data/lib/prometheus_exporter/instrumentation/sidekiq.rb
CHANGED
@@ -16,12 +16,12 @@ module PrometheusExporter::Instrumentation
         job_is_fire_and_forget = job["retry"] == false
 
         worker_class = Object.const_get(job["class"])
-        worker_custom_labels = self.get_worker_custom_labels(worker_class)
+        worker_custom_labels = self.get_worker_custom_labels(worker_class, job)
 
         unless job_is_fire_and_forget
           PrometheusExporter::Client.default.send_json(
             type: "sidekiq",
-            name: job["class"],
+            name: get_name(job["class"], job),
             dead: true,
             custom_labels: worker_custom_labels
           )
@@ -29,12 +29,21 @@ module PrometheusExporter::Instrumentation
       end
     end
 
-    def self.get_worker_custom_labels(worker_class)
-      worker_class.respond_to?(:custom_labels) ? worker_class.custom_labels : {}
+    def self.get_worker_custom_labels(worker_class, msg)
+      return {} unless worker_class.respond_to?(:custom_labels)
+
+      # TODO remove when version 3.0.0 is released
+      method_arity = worker_class.method(:custom_labels).arity
+
+      if method_arity > 0
+        worker_class.custom_labels(msg)
+      else
+        worker_class.custom_labels
+      end
     end
 
-    def initialize(client: nil)
-      @client = client || PrometheusExporter::Client.default
+    def initialize(options = { client: nil })
+      @client = options.fetch(:client, nil) || PrometheusExporter::Client.default
     end
 
     def call(worker, msg, queue)
@@ -51,19 +60,18 @@ module PrometheusExporter::Instrumentation
       duration = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC) - start
       @client.send_json(
         type: "sidekiq",
-        name: get_name(worker, msg),
+        name: self.class.get_name(worker.class.to_s, msg),
         queue: queue,
         success: success,
         shutdown: shutdown,
         duration: duration,
-        custom_labels: self.class.get_worker_custom_labels(worker.class)
+        custom_labels: self.class.get_worker_custom_labels(worker.class, msg)
       )
     end
 
     private
 
-    def get_name(worker, msg)
-      class_name = worker.class.to_s
+    def self.get_name(class_name, msg)
       if class_name == JOB_WRAPPER_CLASS_NAME
         get_job_wrapper_name(msg)
       elsif DELAYED_CLASS_NAMES.include?(class_name)
@@ -73,11 +81,11 @@ module PrometheusExporter::Instrumentation
       end
     end
 
-    def get_job_wrapper_name(msg)
+    def self.get_job_wrapper_name(msg)
       msg['wrapped']
     end
 
-    def get_delayed_name(msg, class_name)
+    def self.get_delayed_name(msg, class_name)
       begin
         # fallback to class_name since we're relying on the internal implementation
         # of the delayed extensions
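With the arity check added above, a worker's `custom_labels` hook may now receive the raw Sidekiq job payload, while the old zero-argument form keeps working. A sketch of a hypothetical worker using the new form:

```ruby
require 'sidekiq'

class EmailJob # hypothetical worker, for illustration only
  include Sidekiq::Worker

  # Arity 1: the collector now passes the job hash, so labels can depend on
  # per-job data. An arity-0 custom_labels is still supported.
  def self.custom_labels(job)
    { mailer: job["args"]&.first.to_s }
  end

  def perform(mailer_name, user_id)
    # ...
  end
end
```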
data/lib/prometheus_exporter/instrumentation/sidekiq_process.rb
CHANGED
@@ -1,22 +1,16 @@
 # frozen_string_literal: true
 
 module PrometheusExporter::Instrumentation
-  class SidekiqProcess
+  class SidekiqProcess < PeriodicStats
     def self.start(client: nil, frequency: 30)
       client ||= PrometheusExporter::Client.default
       sidekiq_process_collector = new
 
-      Thread.new do
-        while true
-          begin
-            client.send_json(sidekiq_process_collector.collect)
-          rescue StandardError => e
-            STDERR.puts("Prometheus Exporter Failed To Collect Sidekiq Processes metrics #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+      worker_loop do
+        client.send_json(sidekiq_process_collector.collect)
       end
+
+      super
     end
 
     def initialize
data/lib/prometheus_exporter/instrumentation/sidekiq_queue.rb
CHANGED
@@ -1,22 +1,16 @@
 # frozen_string_literal: true
 
 module PrometheusExporter::Instrumentation
-  class SidekiqQueue
+  class SidekiqQueue < PeriodicStats
     def self.start(client: nil, frequency: 30, all_queues: false)
       client ||= PrometheusExporter::Client.default
       sidekiq_queue_collector = new(all_queues: all_queues)
 
-      Thread.new do
-        while true
-          begin
-            client.send_json(sidekiq_queue_collector.collect)
-          rescue StandardError => e
-            client.logger.error("Prometheus Exporter Failed To Collect Sidekiq Queue metrics #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+      worker_loop do
+        client.send_json(sidekiq_queue_collector.collect)
       end
+
+      super
     end
 
     def initialize(all_queues: false)
data/lib/prometheus_exporter/instrumentation/sidekiq_stats.rb
CHANGED
@@ -1,22 +1,16 @@
 # frozen_string_literal: true
 
 module PrometheusExporter::Instrumentation
-  class SidekiqStats
+  class SidekiqStats < PeriodicStats
     def self.start(client: nil, frequency: 30)
       client ||= PrometheusExporter::Client.default
       sidekiq_stats_collector = new
 
-      Thread.new do
-        while true
-          begin
-            client.send_json(sidekiq_stats_collector.collect)
-          rescue StandardError => e
-            STDERR.puts("Prometheus Exporter Failed To Collect Sidekiq Stats metrics #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+      worker_loop do
+        client.send_json(sidekiq_stats_collector.collect)
       end
+
+      super
     end
 
     def collect
data/lib/prometheus_exporter/instrumentation/unicorn.rb
CHANGED
@@ -8,22 +8,17 @@ end
 
 module PrometheusExporter::Instrumentation
   # collects stats from unicorn
-  class Unicorn
+  class Unicorn < PeriodicStats
     def self.start(pid_file:, listener_address:, client: nil, frequency: 30)
       unicorn_collector = new(pid_file: pid_file, listener_address: listener_address)
       client ||= PrometheusExporter::Client.default
-      Thread.new do
-        while true
-          begin
-            metric = unicorn_collector.collect
-            client.send_json metric
-          rescue StandardError => e
-            client.logger.error("Prometheus Exporter Failed To Collect Unicorn Stats #{e}")
-          ensure
-            sleep frequency
-          end
-        end
+
+      worker_loop do
+        metric = unicorn_collector.collect
+        client.send_json metric
       end
+
+      super
     end
 
     def initialize(pid_file:, listener_address:)
data/lib/prometheus_exporter/metric/histogram.rb
CHANGED
@@ -19,7 +19,7 @@ module PrometheusExporter::Metric
 
     def initialize(name, help, opts = {})
       super(name, help)
-      @buckets = (opts[:buckets] || self.class.default_buckets).sort.reverse
+      @buckets = (opts[:buckets] || self.class.default_buckets).sort
       reset!
     end
 
@@ -57,11 +57,11 @@ module PrometheusExporter::Metric
         first = false
         count = @counts[labels]
         sum = @sums[labels]
-        text << "#{prefix(@name)}_bucket#{labels_text(with_bucket(labels, "+Inf"))} #{count}\n"
         @buckets.each do |bucket|
           value = @observations[labels][bucket]
           text << "#{prefix(@name)}_bucket#{labels_text(with_bucket(labels, bucket.to_s))} #{value}\n"
         end
+        text << "#{prefix(@name)}_bucket#{labels_text(with_bucket(labels, "+Inf"))} #{count}\n"
         text << "#{prefix(@name)}_count#{labels_text(labels)} #{count}\n"
         text << "#{prefix(@name)}_sum#{labels_text(labels)} #{sum}"
       end
@@ -91,7 +91,7 @@ module PrometheusExporter::Metric
     end
 
     def fill_buckets(value, buckets)
-      @buckets.each do |b|
+      @buckets.reverse.each do |b|
         break if value > b
         buckets[b] += 1
       end
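Taken together, the three histogram edits make the text exposition list buckets in ascending order with the `+Inf` bucket last, which stricter consumers (the GCP case in the changelog) expect. A quick sketch to inspect the rendered order (metric name and observations are illustrative):

```ruby
require 'prometheus_exporter'
require 'prometheus_exporter/metric'

histogram = PrometheusExporter::Metric::Histogram.new(
  "request_duration_seconds", "time spent serving requests",
  buckets: [0.1, 0.5, 1]
)
histogram.observe(0.3)
histogram.observe(2.0)

puts histogram.to_prometheus_text
# The _bucket lines now come out as le="0.1", le="0.5", le="1", then le="+Inf",
# followed by the _count and _sum samples.
```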
data/lib/prometheus_exporter/middleware.rb
CHANGED
@@ -6,23 +6,25 @@ require 'prometheus_exporter/client'
 class PrometheusExporter::Middleware
   MethodProfiler = PrometheusExporter::Instrumentation::MethodProfiler
 
-  def initialize(app, config = { instrument: true, client: nil })
+  def initialize(app, config = { instrument: :alias_method, client: nil })
     @app = app
     @client = config[:client] || PrometheusExporter::Client.default
 
     if config[:instrument]
       if defined? Redis::Client
-        MethodProfiler.patch(Redis::Client, [:call, :call_pipeline], :redis)
+        MethodProfiler.patch(Redis::Client, [
+          :call, :call_pipeline
+        ], :redis, instrument: config[:instrument])
       end
       if defined? PG::Connection
         MethodProfiler.patch(PG::Connection, [
           :exec, :async_exec, :exec_prepared, :send_query_prepared, :query
-        ], :sql)
+        ], :sql, instrument: config[:instrument])
       end
       if defined? Mysql2::Client
-        MethodProfiler.patch(Mysql2::Client, [:query], :sql)
-        MethodProfiler.patch(Mysql2::Statement, [:execute], :sql)
-        MethodProfiler.patch(Mysql2::Result, [:each], :sql)
+        MethodProfiler.patch(Mysql2::Client, [:query], :sql, instrument: config[:instrument])
+        MethodProfiler.patch(Mysql2::Statement, [:execute], :sql, instrument: config[:instrument])
+        MethodProfiler.patch(Mysql2::Result, [:each], :sql, instrument: config[:instrument])
       end
     end
   end
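Outside Rails the same `instrument:` option can be passed when the middleware is mounted in a plain Rack stack; a sketch of a hypothetical `config.ru` (the inline app is illustrative):

```ruby
# config.ru -- illustrative sketch
require 'prometheus_exporter/middleware'

# :alias_method is the default; use :prepend when another gem already
# prepends modules onto Redis::Client, PG::Connection or Mysql2::Client.
use PrometheusExporter::Middleware, instrument: :prepend

run ->(env) { [200, { "Content-Type" => "text/plain" }, ["ok"]] }
```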
data/lib/prometheus_exporter/server/runner.rb
CHANGED
@@ -1,7 +1,6 @@
 # frozen_string_literal: true
 
 require_relative '../client'
-require_relative '../instrumentation/unicorn'
 
 module PrometheusExporter::Server
   class RunnerException < StandardError; end
@@ -39,6 +38,9 @@ module PrometheusExporter::Server
       end
 
       if unicorn_listen_address && unicorn_pid_file
+
+        require_relative '../instrumentation'
+
         local_client = PrometheusExporter::LocalClient.new(collector: collector)
         PrometheusExporter::Instrumentation::Unicorn.start(
           pid_file: unicorn_pid_file,
data/lib/prometheus_exporter/server/web_server.rb
CHANGED
@@ -73,9 +73,11 @@
         end
       elsif req.path == '/send-metrics'
         handle_metrics(req, res)
+      elsif req.path == '/ping'
+        res.body = 'PONG'
       else
         res.status = 404
-        res.body = "Not Found! The Prometheus Ruby Exporter only listens on /metrics and /send-metrics"
+        res.body = "Not Found! The Prometheus Ruby Exporter only listens on /ping, /metrics and /send-metrics"
       end
     end
   end
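Besides the curl-based Docker healthcheck shown in the README hunk, the new endpoint can be probed from Ruby as well; a minimal sketch assuming the exporter's default port 9394:

```ruby
require 'net/http'

# Returns true when the exporter answers its keepalive endpoint with PONG
def exporter_alive?(host: "127.0.0.1", port: 9394)
  response = Net::HTTP.start(host, port, open_timeout: 3, read_timeout: 3) do |http|
    http.get("/ping")
  end
  response.is_a?(Net::HTTPSuccess) && response.body == "PONG"
rescue StandardError
  false
end

puts exporter_alive?
```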
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: prometheus_exporter
 version: !ruby/object:Gem::Version
-  version: 2.0.0
+  version: 2.0.3
 platform: ruby
 authors:
 - Sam Saffron
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2022-
+date: 2022-05-23 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: webrick
@@ -238,6 +238,7 @@ files:
 - lib/prometheus_exporter/instrumentation/delayed_job.rb
 - lib/prometheus_exporter/instrumentation/hutch.rb
 - lib/prometheus_exporter/instrumentation/method_profiler.rb
+- lib/prometheus_exporter/instrumentation/periodic_stats.rb
 - lib/prometheus_exporter/instrumentation/process.rb
 - lib/prometheus_exporter/instrumentation/puma.rb
 - lib/prometheus_exporter/instrumentation/resque.rb