prometheus_exporter 1.0.1 → 2.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 9a885aa655d0bd9d939bef77ec5d7c37027d19882a96b184d0351c49593393b9
- data.tar.gz: bca2080108cb8169d5589ee481fc40b0edb39e890a8a0fba6ec301a987c35396
+ metadata.gz: 5cd7d7181c0dcb2e26efe99605bd72e6f4747394018a74b8eee93a126a10c2ee
+ data.tar.gz: 94d70aa9f50ef97fee06ed8fa5963588ba55ee98ab702d50136abbbba18b9f56
  SHA512:
- metadata.gz: b21fe8ad31aec32b877ab50146759715c41968dd1edc510af7b986436983217fd7eb0c468faf2ceca7897ae855a36a88f3b6aef1e0dcf776846051e4cd35b822
- data.tar.gz: 5a0a61bd914c269385bb6b46df97b2eca5e6fe4573092f10dfed7d92a4f052b62761faa53902bfdf93e2286dc96a13bc804dd56e820dec99c95a00550026b741
+ metadata.gz: 26c52753448d360c229e0827a08f42e8bcc3cbd602caf7c530d0f7306b1c0d848a8604aa2959115c97aae1eaf41ebf1b1b8bad1735ddf89a11c7d4174d62c71c
+ data.tar.gz: c80255a6d646371390f2f3157b736878a1a2df6b0a45c0905ec518debf4fe33f1a4dfaa02203f24141132991ef1f570114417120b25b37d949b0c343da7448bf
@@ -6,7 +6,7 @@ on:
  - main
  pull_request:
  schedule:
- - cron: '0 0 * * 0' # weekly
+ - cron: "0 0 * * 0" # weekly

  jobs:
  build:
@@ -20,7 +20,7 @@ jobs:
  strategy:
  fail-fast: false
  matrix:
- ruby: [2.6, 2.7, 3.0]
+ ruby: ['2.6', '2.7', '3.0', '3.1']
  activerecord: [60, 61]

  steps:
data/CHANGELOG CHANGED
@@ -1,3 +1,18 @@
+ 2.0.2 - 2022-02-25
+
+ - FIX: runner was not requiring unicorn integration correctly, leading to a crash
+
+ 2.0.1 - 2022-02-24
+
+ - FIX: ensure threads do not leak when calling #start repeatedly on instrumentation classes; this is an urgent patch for the Puma integration
+
+ 2.0.0 - 2022-02-18
+
+ - FEATURE: add per-worker custom labels
+ - FEATURE: support custom histogram buckets
+ - FIX: all metrics were exposing the status label, and not only `http_requests_total`
+ - BREAKING: rename all `http_duration` metrics to `http_request_duration` to match Prometheus official naming conventions (see https://prometheus.io/docs/practices/naming/#metric-names).
+
  1.0.1 - 2021-12-22

  - FEATURE: add labels to preflight requests
data/README.md CHANGED
@@ -28,6 +28,7 @@ To learn more see [Instrumenting Rails with Prometheus](https://samsaffron.com/a
  * [Client default labels](#client-default-labels)
  * [Client default host](#client-default-host)
  * [Histogram mode](#histogram-mode)
+ * [Histogram - custom buckets](#histogram-custom-buckets)
  * [Transport concerns](#transport-concerns)
  * [JSON generation and parsing](#json-generation-and-parsing)
  * [Logging](#logging)
@@ -202,13 +203,13 @@ $ bundle exec prometheus_exporter
  #### Metrics collected by Rails integration middleware

- | Type | Name | Description |
- | --- | --- | --- |
- | Counter | `http_requests_total` | Total HTTP requests from web app |
- | Summary | `http_duration_seconds` | Time spent in HTTP reqs in seconds |
- | Summary | `http_redis_duration_seconds | Time spent in HTTP reqs in Redis, in seconds |
- | Summary | `http_sql_duration_seconds | Time spent in HTTP reqs in SQL in seconds |
- | Summary | `http_queue_duration_seconds | Time spent queueing the request in load balancer in seconds |
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `http_requests_total` | Total HTTP requests from web app |
+ | Summary | `http_request_duration_seconds` | Time spent in HTTP reqs in seconds |
+ | Summary | `http_request_redis_duration_seconds` | Time spent in HTTP reqs in Redis, in seconds |
+ | Summary | `http_request_sql_duration_seconds` | Time spent in HTTP reqs in SQL in seconds |
+ | Summary | `http_request_queue_duration_seconds` | Time spent queueing the request in load balancer in seconds |

  All metrics have a `controller` and an `action` label.
  `http_requests_total` additionally has a (HTTP response) `status` label.
@@ -251,7 +252,7 @@ end
  ```
  That way you won't have all metrics labeled with `controller=other` and `action=other`, but have labels such as
  ```
- ruby_http_duration_seconds{path="/api/v1/teams/:id",method="GET",status="200",quantile="0.99"} 0.009880661998977303
+ ruby_http_request_duration_seconds{path="/api/v1/teams/:id",method="GET",status="200",quantile="0.99"} 0.009880661998977303
  ```

  ¹) Only available when Redis is used.
@@ -420,6 +421,18 @@ Sometimes the Sidekiq server shuts down before it can send metrics, that were ge
  end
  ```

+ Custom labels can be added for individual jobs by defining a class method on the job class. These labels will be added to all Sidekiq metrics written by the job:
+
+ ```ruby
+ class WorkerWithCustomLabels
+   def self.custom_labels
+     { my_label: 'value-here', other_label: 'second-val' }
+   end
+
+   def perform; end
+ end
+ ```
+
  ##### Metrics collected by Sidekiq Instrumentation

  **PrometheusExporter::Instrumentation::Sidekiq**
@@ -562,7 +575,10 @@ The easiest way to gather this metrics is to put the following in your `puma.rb`
  # puma.rb config
  after_worker_boot do
  require 'prometheus_exporter/instrumentation'
- PrometheusExporter::Instrumentation::Puma.start
+ # optional check, avoids spinning up and down threads per worker
+ if !PrometheusExporter::Instrumentation::Puma.started?
+ PrometheusExporter::Instrumentation::Puma.start
+ end
  end
  ```
@@ -891,6 +907,26 @@ In histogram mode, the same metrics will be collected but will be reported as hi
  [`histogram_quantile`]: https://prometheus.io/docs/prometheus/latest/querying/functions/#histogram_quantile

+ ### Histogram - custom buckets
+
+ By default, these buckets will be used:
+ ```
+ [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5.0, 10.0].freeze
+ ```
+ If this is not enough, you can specify `default_buckets` like this:
+ ```
+ Histogram.default_buckets = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2, 2.5, 3, 4, 5.0, 10.0, 12, 14, 15, 20, 25].freeze
+ ```
+
+ Buckets specified on the instance take precedence over the default:
+
+ ```
+ Histogram.default_buckets = [0.005, 0.01, 0.5].freeze
+ buckets = [0.1, 0.2, 0.3]
+ histogram = Histogram.new('test_buckets', 'I have specified buckets', buckets: buckets)
+ histogram.buckets # => [0.1, 0.2, 0.3]
+ ```
+
  ## Transport concerns

  Prometheus Exporter handles transport using a simple HTTP protocol. In multi process mode we avoid needing a large number of HTTP request by using chunked encoding to send metrics. This means that a single HTTP channel can deliver 100s or even 1000s of metrics over a single HTTP session to the `/send-metrics` endpoint. All calls to `send` and `send_json` on the `PrometheusExporter::Client` class are **non-blocking** and batched.
@@ -2,11 +2,10 @@
  # collects stats from currently running process
  module PrometheusExporter::Instrumentation
- class ActiveRecord
+ class ActiveRecord < PeriodicStats
  ALLOWED_CONFIG_LABELS = %i(database username host port)

  def self.start(client: nil, frequency: 30, custom_labels: {}, config_labels: [])
-
  client ||= PrometheusExporter::Client.default

  # Not all rails versions support connection pool stats
@@ -20,20 +19,12 @@ module PrometheusExporter::Instrumentation
  active_record_collector = new(custom_labels, config_labels)

- stop if @thread
-
- @thread = Thread.new do
- while true
- begin
- metrics = active_record_collector.collect
- metrics.each { |metric| client.send_json metric }
- rescue => e
- client.logger.error("Prometheus Exporter Failed To Collect Process Stats #{e}")
- ensure
- sleep frequency
- end
- end
+ worker_loop do
+ metrics = active_record_collector.collect
+ metrics.each { |metric| client.send_json metric }
  end
+
+ super
  end

  def self.validate_config_labels(config_labels)
@@ -41,13 +32,6 @@ module PrometheusExporter::Instrumentation
  raise "Invalid Config Labels, available options #{ALLOWED_CONFIG_LABELS}" if (config_labels - ALLOWED_CONFIG_LABELS).size > 0
  end

- def self.stop
- if t = @thread
- t.kill
- @thread = nil
- end
- end
-
  def initialize(metric_labels, config_labels)
  @metric_labels = metric_labels
  @config_labels = config_labels
@@ -0,0 +1,62 @@
+ # frozen_string_literal: true
+
+ module PrometheusExporter::Instrumentation
+ class PeriodicStats
+
+ def self.start(*args, frequency:, client: nil, **kwargs)
+ client ||= PrometheusExporter::Client.default
+
+ if !(Numeric === frequency)
+ raise ArgumentError.new("Expected frequency to be a number")
+ end
+
+ if frequency < 0
+ raise ArgumentError.new("Expected frequency to be a positive number")
+ end
+
+ if !@worker_loop
+ raise ArgumentError.new("Worker loop was not set")
+ end
+
+ klass = self
+
+ stop
+
+ @stop_thread = false
+
+ @thread = Thread.new do
+ while !@stop_thread
+ begin
+ @worker_loop.call
+ rescue => e
+ client.logger.error("#{klass} Prometheus Exporter Failed To Collect Stats #{e}")
+ ensure
+ sleep frequency
+ end
+ end
+ end
+
+ end
+
+ def self.started?
+ !!@thread&.alive?
+ end
+
+ def self.worker_loop(&blk)
+ @worker_loop = blk
+ end
+
+ def self.stop
+ # to avoid a warning
+ @thread = nil if !defined?(@thread)
+
+ if @thread&.alive?
+ @stop_thread = true
+ @thread.wakeup
+ @thread.join
+ end
+ @thread = nil
+ end
+
+ end
+ end
@@ -2,8 +2,7 @@
  # collects stats from currently running process
  module PrometheusExporter::Instrumentation
- class Process
- @thread = nil if !defined?(@thread)
+ class Process < PeriodicStats

  def self.start(client: nil, type: "ruby", frequency: 30, labels: nil)
@@ -19,27 +18,12 @@ module PrometheusExporter::Instrumentation
  process_collector = new(metric_labels)
  client ||= PrometheusExporter::Client.default

- stop if @thread
-
- @thread = Thread.new do
- while true
- begin
- metric = process_collector.collect
- client.send_json metric
- rescue => e
- client.logger.error("Prometheus Exporter Failed To Collect Process Stats #{e}")
- ensure
- sleep frequency
- end
- end
+ worker_loop do
+ metric = process_collector.collect
+ client.send_json metric
  end
- end

- def self.stop
- if t = @thread
- t.kill
- @thread = nil
- end
+ super
  end

  def initialize(metric_labels)
@@ -4,22 +4,17 @@ require "json"
  # collects stats from puma
  module PrometheusExporter::Instrumentation
- class Puma
+ class Puma < PeriodicStats
  def self.start(client: nil, frequency: 30, labels: {})
  puma_collector = new(labels)
  client ||= PrometheusExporter::Client.default
- Thread.new do
- while true
- begin
- metric = puma_collector.collect
- client.send_json metric
- rescue => e
- client.logger.error("Prometheus Exporter Failed To Collect Puma Stats #{e}")
- ensure
- sleep frequency
- end
- end
+
+ worker_loop do
+ metric = puma_collector.collect
+ client.send_json metric
  end
+
+ super
  end

  def initialize(metric_labels = {})
@@ -2,21 +2,16 @@
  # collects stats from resque
  module PrometheusExporter::Instrumentation
- class Resque
+ class Resque < PeriodicStats
  def self.start(client: nil, frequency: 30)
  resque_collector = new
  client ||= PrometheusExporter::Client.default
- Thread.new do
- while true
- begin
- client.send_json(resque_collector.collect)
- rescue => e
- client.logger.error("Prometheus Exporter Failed To Collect Resque Stats #{e}")
- ensure
- sleep frequency
- end
- end
+
+ worker_loop do
+ client.send_json(resque_collector.collect)
  end
+
+ super
  end

  def collect
@@ -15,16 +15,24 @@ module PrometheusExporter::Instrumentation
  -> (job, ex) do
  job_is_fire_and_forget = job["retry"] == false

+ worker_class = Object.const_get(job["class"])
+ worker_custom_labels = self.get_worker_custom_labels(worker_class)
+
  unless job_is_fire_and_forget
  PrometheusExporter::Client.default.send_json(
  type: "sidekiq",
  name: job["class"],
  dead: true,
+ custom_labels: worker_custom_labels
  )
  end
  end
  end

+ def self.get_worker_custom_labels(worker_class)
+ worker_class.respond_to?(:custom_labels) ? worker_class.custom_labels : {}
+ end
+
  def initialize(client: nil)
  @client = client || PrometheusExporter::Client.default
  end
@@ -47,7 +55,8 @@ module PrometheusExporter::Instrumentation
  queue: queue,
  success: success,
  shutdown: shutdown,
- duration: duration
+ duration: duration,
+ custom_labels: self.class.get_worker_custom_labels(worker.class)
  )
  end
@@ -69,19 +78,30 @@ module PrometheusExporter::Instrumentation
  end

  def get_delayed_name(msg, class_name)
- # fallback to class_name since we're relying on the internal implementation
- # of the delayed extensions
- # https://github.com/mperham/sidekiq/blob/master/lib/sidekiq/extensions/class_methods.rb
  begin
+ # fallback to class_name since we're relying on the internal implementation
+ # of the delayed extensions
+ # https://github.com/mperham/sidekiq/blob/master/lib/sidekiq/extensions/class_methods.rb
  (target, method_name, _args) = YAML.load(msg['args'].first) # rubocop:disable Security/YAMLLoad
  if target.class == Class
  "#{target.name}##{method_name}"
  else
  "#{target.class.name}##{method_name}"
  end
- rescue
- class_name
+ rescue Psych::DisallowedClass, ArgumentError
+ parsed = Psych.parse(msg['args'].first)
+ children = parsed.root.children
+ target = (children[0].value || children[0].tag).sub('!', '')
+ method_name = (children[1].value || children[1].tag).sub(':', '')
+
+ if target && method_name
+ "#{target}##{method_name}"
+ else
+ class_name
+ end
  end
+ rescue
+ class_name
  end
  end
  end
@@ -1,22 +1,16 @@
  # frozen_string_literal: true

  module PrometheusExporter::Instrumentation
- class SidekiqProcess
+ class SidekiqProcess < PeriodicStats
  def self.start(client: nil, frequency: 30)
  client ||= PrometheusExporter::Client.default
  sidekiq_process_collector = new

- Thread.new do
- loop do
- begin
- client.send_json(sidekiq_process_collector.collect)
- rescue StandardError => e
- STDERR.puts("Prometheus Exporter Failed To Collect Sidekiq Processes metrics #{e}")
- ensure
- sleep frequency
- end
- end
+ worker_loop do
+ client.send_json(sidekiq_process_collector.collect)
  end
+
+ super
  end

  def initialize
@@ -1,22 +1,16 @@
  # frozen_string_literal: true

  module PrometheusExporter::Instrumentation
- class SidekiqQueue
+ class SidekiqQueue < PeriodicStats
  def self.start(client: nil, frequency: 30, all_queues: false)
  client ||= PrometheusExporter::Client.default
  sidekiq_queue_collector = new(all_queues: all_queues)

- Thread.new do
- loop do
- begin
- client.send_json(sidekiq_queue_collector.collect)
- rescue StandardError => e
- client.logger.error("Prometheus Exporter Failed To Collect Sidekiq Queue metrics #{e}")
- ensure
- sleep frequency
- end
- end
+ worker_loop do
+ client.send_json(sidekiq_queue_collector.collect)
  end
+
+ super
  end

  def initialize(all_queues: false)
@@ -1,22 +1,16 @@
  # frozen_string_literal: true

  module PrometheusExporter::Instrumentation
- class SidekiqStats
+ class SidekiqStats < PeriodicStats
  def self.start(client: nil, frequency: 30)
  client ||= PrometheusExporter::Client.default
  sidekiq_stats_collector = new

- Thread.new do
- loop do
- begin
- client.send_json(sidekiq_stats_collector.collect)
- rescue StandardError => e
- STDERR.puts("Prometheus Exporter Failed To Collect Sidekiq Stats metrics #{e}")
- ensure
- sleep frequency
- end
- end
+ worker_loop do
+ client.send_json(sidekiq_stats_collector.collect)
  end
+
+ super
  end

  def collect
@@ -8,22 +8,17 @@ end
  module PrometheusExporter::Instrumentation
  # collects stats from unicorn
- class Unicorn
+ class Unicorn < PeriodicStats
  def self.start(pid_file:, listener_address:, client: nil, frequency: 30)
  unicorn_collector = new(pid_file: pid_file, listener_address: listener_address)
  client ||= PrometheusExporter::Client.default
- Thread.new do
- loop do
- begin
- metric = unicorn_collector.collect
- client.send_json metric
- rescue StandardError => e
- client.logger.error("Prometheus Exporter Failed To Collect Unicorn Stats #{e}")
- ensure
- sleep frequency
- end
- end
+
+ worker_loop do
+ metric = unicorn_collector.collect
+ client.send_json metric
  end
+
+ super
  end

  def initialize(pid_file:, listener_address:)
@@ -1,6 +1,7 @@
  # frozen_string_literal: true

  require_relative "client"
+ require_relative "instrumentation/periodic_stats"
  require_relative "instrumentation/process"
  require_relative "instrumentation/method_profiler"
  require_relative "instrumentation/sidekiq"
@@ -106,15 +106,8 @@ module PrometheusExporter::Metric
  end
  end

- # when we drop Ruby 2.3 we can drop this
- if "".respond_to? :match?
- def needs_escape?(str)
- str.match?(/[\n"\\]/m)
- end
- else
- def needs_escape?(str)
- !!str.match(/[\n"\\]/m)
- end
+ def needs_escape?(str)
+ str.match?(/[\n"\\]/m)
  end

  end
@@ -5,9 +5,21 @@ module PrometheusExporter::Metric
  DEFAULT_BUCKETS = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5.0, 10.0].freeze

+ @default_buckets = nil if !defined?(@default_buckets)
+
+ def self.default_buckets
+ @default_buckets || DEFAULT_BUCKETS
+ end
+
+ def self.default_buckets=(buckets)
+ @default_buckets = buckets
+ end
+
+ attr_reader :buckets
+
  def initialize(name, help, opts = {})
  super(name, help)
- @buckets = (opts[:buckets] || DEFAULT_BUCKETS).sort.reverse
+ @buckets = (opts[:buckets] || self.class.default_buckets).sort.reverse
  reset!
  end
@@ -36,11 +36,12 @@ class PrometheusExporter::Middleware
  result
  ensure
-
+ status = (result && result[0]) || -1
  obj = {
  type: "web",
  timings: info,
  queue_time: queue_time,
+ status: status,
  default_labels: default_labels(env, result)
  }
  labels = custom_labels(env)
@@ -52,7 +53,6 @@ class PrometheusExporter::Middleware
  end

  def default_labels(env, result)
- status = (result && result[0]) || -1
  params = env["action_dispatch.request.parameters"]
  action = controller = nil
  if params
@@ -67,8 +67,7 @@ class PrometheusExporter::Middleware
  {
  action: action || "other",
- controller: controller || "other",
- status: status
+ controller: controller || "other"
  }
  end
@@ -1,7 +1,6 @@
  # frozen_string_literal: true

  require_relative '../client'
- require_relative '../instrumentation/unicorn'

  module PrometheusExporter::Server
  class RunnerException < StandardError; end
@@ -39,6 +38,9 @@ module PrometheusExporter::Server
  end

  if unicorn_listen_address && unicorn_pid_file
+
+ require_relative '../instrumentation'
+
  local_client = PrometheusExporter::LocalClient.new(collector: collector)
  PrometheusExporter::Instrumentation::Unicorn.start(
  pid_file: unicorn_pid_file,
@@ -5,10 +5,10 @@ module PrometheusExporter::Server
  def initialize
  @metrics = {}
  @http_requests_total = nil
- @http_duration_seconds = nil
- @http_redis_duration_seconds = nil
- @http_sql_duration_seconds = nil
- @http_queue_duration_seconds = nil
+ @http_request_duration_seconds = nil
+ @http_request_redis_duration_seconds = nil
+ @http_request_sql_duration_seconds = nil
+ @http_request_queue_duration_seconds = nil
  end

  def type
@@ -33,23 +33,23 @@ module PrometheusExporter::Server
  "Total HTTP requests from web app."
  )

- @metrics["http_duration_seconds"] = @http_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
- "http_duration_seconds",
+ @metrics["http_request_duration_seconds"] = @http_request_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
+ "http_request_duration_seconds",
  "Time spent in HTTP reqs in seconds."
  )

- @metrics["http_redis_duration_seconds"] = @http_redis_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
- "http_redis_duration_seconds",
+ @metrics["http_request_redis_duration_seconds"] = @http_request_redis_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
+ "http_request_redis_duration_seconds",
  "Time spent in HTTP reqs in Redis, in seconds."
  )

- @metrics["http_sql_duration_seconds"] = @http_sql_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
- "http_sql_duration_seconds",
+ @metrics["http_request_sql_duration_seconds"] = @http_request_sql_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
+ "http_request_sql_duration_seconds",
  "Time spent in HTTP reqs in SQL in seconds."
  )

- @metrics["http_queue_duration_seconds"] = @http_queue_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
- "http_queue_duration_seconds",
+ @metrics["http_request_queue_duration_seconds"] = @http_request_queue_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
+ "http_request_queue_duration_seconds",
  "Time spent queueing the request in load balancer in seconds."
  )
  end
@@ -60,19 +60,19 @@ module PrometheusExporter::Server
  custom_labels = obj['custom_labels']
  labels = custom_labels.nil? ? default_labels : default_labels.merge(custom_labels)

- @http_requests_total.observe(1, labels)
+ @http_requests_total.observe(1, labels.merge(status: obj["status"]))

  if timings = obj["timings"]
- @http_duration_seconds.observe(timings["total_duration"], labels)
+ @http_request_duration_seconds.observe(timings["total_duration"], labels)
  if redis = timings["redis"]
- @http_redis_duration_seconds.observe(redis["duration"], labels)
+ @http_request_redis_duration_seconds.observe(redis["duration"], labels)
  end
  if sql = timings["sql"]
- @http_sql_duration_seconds.observe(sql["duration"], labels)
+ @http_request_sql_duration_seconds.observe(sql["duration"], labels)
  end
  end
  if queue_time = obj["queue_time"]
- @http_queue_duration_seconds.observe(queue_time, labels)
+ @http_request_queue_duration_seconds.observe(queue_time, labels)
  end
  end
  end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module PrometheusExporter
- VERSION = '1.0.1'
+ VERSION = '2.0.2'
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: prometheus_exporter
  version: !ruby/object:Gem::Version
- version: 1.0.1
+ version: 2.0.2
  platform: ruby
  authors:
  - Sam Saffron
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-12-22 00:00:00.000000000 Z
+ date: 2022-02-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: webrick
@@ -238,6 +238,7 @@ files:
  - lib/prometheus_exporter/instrumentation/delayed_job.rb
  - lib/prometheus_exporter/instrumentation/hutch.rb
  - lib/prometheus_exporter/instrumentation/method_profiler.rb
+ - lib/prometheus_exporter/instrumentation/periodic_stats.rb
  - lib/prometheus_exporter/instrumentation/process.rb
  - lib/prometheus_exporter/instrumentation/puma.rb
  - lib/prometheus_exporter/instrumentation/resque.rb