prometheus_exporter 0.5.1 → 0.5.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
- metadata.gz: 4855e62023cc91a24d7f604765a4c34c3cc652421e2f60d6041b8b87376dd74f
- data.tar.gz: 9772f78ab97d6ae4218afefc353660f1c162df5c801aca60f326e0eeda1894c9
+ metadata.gz: 89d503831ed018381a2f32d77b14f04db0533d8f03f19befda4067fe29ad464e
+ data.tar.gz: 31227ad5e110551343e4d0d1837a834333d0203c1df2b6343fe7b5584bb818ac
 SHA512:
- metadata.gz: 2e1450823b8b770a2538891a4315451e5be4dbe6bc3ffe77ba0b0081c6f436b5a2cfe189c4e4c77442bad72180f0ba6144c074378b55c266d5e6c33b4aa40f42
- data.tar.gz: 62457c9bdfaedfc02acbdfd0e472790a4ba5b5c124778f8f58173fc0b98f442d7d23e71fd16345334fe71778e46096559e7c2ae32c94255f155abb0dc3a92619
+ metadata.gz: 7b594c1eec85156ccf1ee08e5ef9c4bc864f266c9d6960b74148dc5b2dc8c2712f37281125d0399928cdd49897b558593f25069d507db190bbe19616a2ad6a93
+ data.tar.gz: 794a3837221c3e16aca9547eb833fade063b501627f6dafc7da8454e85d288d72d4d78adf6fe82a164ff54fa656ee0ea6024a418c328197d6b9aebf1a407c7e3
data/.rubocop.yml CHANGED
@@ -1 +1,2 @@
- inherit_from: https://raw.githubusercontent.com/discourse/discourse/master/.rubocop.yml
+ inherit_gem:
+   rubocop-discourse: default.yml
data/CHANGELOG CHANGED
@@ -1,13 +1,18 @@
- 0.5.1 - 25-02-2019
+ 0.5.2 - 01-07-2020
+
+ - FEATURE: expanded instrumentation for sidekiq
+ - FEATURE: configurable default labels
+
+ 0.5.1 - 25-02-2020
 
 - FEATURE: Allow configuring the default client's host and port via environment variables
 
- 0.5.0 - 14-02-2019
+ 0.5.0 - 14-02-2020
 
 - Breaking change: listen only to localhost by default to prevent unintended insecure configuration
 - FIX: Avoid calling `hostname` aggressively, instead cache it on the exporter instance
 
- 0.4.17 - 13-01-2019
+ 0.4.17 - 13-01-2020
 
 - FEATURE: add support for `to_h` on all metrics which can be used to query existing key/values
 
data/README.md CHANGED
@@ -190,6 +190,23 @@ Ensure you run the exporter in a monitored background process:
 $ bundle exec prometheus_exporter
 ```
 
+ #### Metrics collected by Rails integration middleware
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `http_requests_total` | Total HTTP requests from web app |
+ | Summary | `http_duration_seconds` | Time spent in HTTP reqs in seconds |
+ | Summary | `http_redis_duration_seconds`¹ | Time spent in HTTP reqs in Redis, in seconds |
+ | Summary | `http_sql_duration_seconds`² | Time spent in HTTP reqs in SQL, in seconds |
+ | Summary | `http_queue_duration_seconds`³ | Time spent queueing the request in the load balancer, in seconds |
+
+ All metrics have a `controller` and an `action` label.
+ `http_requests_total` additionally has a (HTTP response) `status` label.
+
+ ¹) Only available when Redis is used.
+ ²) Only available when MySQL or PostgreSQL are used.
+ ³) Only available when [Instrumenting Request Queueing Time](#instrumenting-request-queueing-time) is set up.
+
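These request metrics only appear once the exporter's Rails middleware is installed; the following initializer is a minimal sketch, assuming the `PrometheusExporter::Middleware` setup described earlier in this README.

```ruby
# config/initializers/prometheus.rb -- a minimal sketch, assuming the
# PrometheusExporter::Middleware documented earlier in this README.
unless Rails.env.test?
  require 'prometheus_exporter/middleware'

  # Reports http_requests_total, http_duration_seconds, etc. for every
  # request handled by the app.
  Rails.application.middleware.unshift PrometheusExporter::Middleware
end
```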
 #### Activerecord Connection Pool Metrics
 
 This collects activerecord connection pool metrics.
@@ -244,6 +261,19 @@ Sidekiq.configure_server do |config|
 end
 ```
 
+ ##### Metrics collected by ActiveRecord Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `active_record_connection_pool_connections` | Total connections in pool |
+ | Gauge | `active_record_connection_pool_busy` | Connections in use in pool |
+ | Gauge | `active_record_connection_pool_dead` | Dead connections in pool |
+ | Gauge | `active_record_connection_pool_idle` | Idle connections in pool |
+ | Gauge | `active_record_connection_pool_waiting` | Connection requests waiting |
+ | Gauge | `active_record_connection_pool_size` | Maximum allowed connection pool size |
+
+ All metrics collected by the ActiveRecord integration include at least the following labels: `pid` (of the process the stats were collected in), `pool_name`, any labels included in the `config_labels` option (prefixed with `dbconfig_`, for example `dbconfig_host`), and all custom labels provided with the `custom_labels` option.
+
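The `config_labels` and `custom_labels` options mentioned above are passed when the instrumentation is started; a sketch, assuming the `ActiveRecord.start` call accepts these options as described (the option values here are illustrative).

```ruby
# A sketch of starting the connection pool instrumentation with the label
# options described above; the specific values are illustrative.
require 'prometheus_exporter/instrumentation'

PrometheusExporter::Instrumentation::ActiveRecord.start(
  custom_labels: { type: "web" },       # added verbatim to every metric
  config_labels: [:database, :host]     # reported as dbconfig_database, dbconfig_host
)
```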
 #### Per-process stats
 
 You may also be interested in per-process stats. This collects memory and GC stats:
@@ -260,11 +290,30 @@ end
 # in unicorn/puma/passenger be sure to run a new process instrumenter after fork
 after_fork do
   require 'prometheus_exporter/instrumentation'
-   PrometheusExporter::Instrumentation::Process.start(type:"web")
+   PrometheusExporter::Instrumentation::Process.start(type: "web")
 end
 
 ```
 
+ ##### Metrics collected by Process Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `heap_free_slots` | Free ruby heap slots |
+ | Gauge | `heap_live_slots` | Used ruby heap slots |
+ | Gauge | `v8_heap_size`* | Total JavaScript V8 heap size (bytes) |
+ | Gauge | `v8_used_heap_size`* | Total used JavaScript V8 heap size (bytes) |
+ | Gauge | `v8_physical_size`* | Physical size consumed by V8 heaps |
+ | Gauge | `v8_heap_count`* | Number of V8 contexts running |
+ | Gauge | `rss` | Total RSS used by process |
+ | Counter | `major_gc_ops_total` | Major GC operations by process |
+ | Counter | `minor_gc_ops_total` | Minor GC operations by process |
+ | Counter | `allocated_objects_total` | Total number of allocated objects by process |
+
+ _Metrics marked with * are only collected when `MiniRacer` is defined._
+
+ Metrics collected by Process instrumentation include labels `type` (as given with the `type` option), `pid` (of the process the stats were collected in), and any custom labels given to `Process.start` with the `labels` option.
+
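The `type` and `labels` options referenced here map directly onto `Process.start`, whose signature appears in the `instrumentation/process.rb` hunk further down in this diff; a sketch with custom labels:

```ruby
# A sketch of per-process instrumentation with extra labels; matches the
# Process.start(client:, type:, frequency:, labels:) signature shown later
# in this diff. The label values are illustrative.
require 'prometheus_exporter/instrumentation'

PrometheusExporter::Instrumentation::Process.start(
  type: "sidekiq",                  # becomes the `type` label
  labels: { pool: "low_priority" }  # merged into every sample from this process
)
```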
 #### Sidekiq metrics
 
 Including Sidekiq metrics (how many jobs ran? how many failed? how long did they take? how many are dead? how many were restarted?)
@@ -279,6 +328,17 @@ Sidekiq.configure_server do |config|
 end
 ```
 
+ To monitor Queue size and latency:
+
+ ```ruby
+ Sidekiq.configure_server do |config|
+   config.on :startup do
+     require 'prometheus_exporter/instrumentation'
+     PrometheusExporter::Instrumentation::SidekiqQueue.start
+   end
+ end
+ ```
+
 To monitor Sidekiq process info:
 
 ```ruby
@@ -300,6 +360,35 @@ Sometimes the Sidekiq server shuts down before it can send metrics, that were ge
 end
 ```
 
+ ##### Metrics collected by Sidekiq Instrumentation
+
+ **PrometheusExporter::Instrumentation::Sidekiq**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `sidekiq_job_duration_seconds` | Total time spent in sidekiq jobs |
+ | Counter | `sidekiq_jobs_total` | Total number of sidekiq jobs executed |
+ | Counter | `sidekiq_restarted_jobs_total` | Total number of sidekiq jobs that we restarted because of a sidekiq shutdown |
+ | Counter | `sidekiq_failed_jobs_total` | Total number of failed sidekiq jobs |
+
+ All metrics have a `job_name` label.
+
+ **PrometheusExporter::Instrumentation::Sidekiq.death_handler**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `sidekiq_dead_jobs_total` | Total number of dead sidekiq jobs |
+
+ This metric also has a `job_name` label.
+
+ **PrometheusExporter::Instrumentation::SidekiqQueue**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `sidekiq_queue_backlog_total` | Size of the sidekiq queue |
+ | Gauge | `sidekiq_queue_latency_seconds` | Latency of the sidekiq queue |
+
+ Both metrics will have a `queue` label with the name of the queue.
+
+ _See [Metrics collected by Process Instrumentation](#metrics-collected-by-process-instrumentation) for a list of metrics the Process instrumentation will produce._
+
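`sidekiq_dead_jobs_total` is only emitted once the death handler named in the table above is registered; a sketch, assuming Sidekiq's standard `death_handlers` hook:

```ruby
# A sketch of registering the death handler that feeds sidekiq_dead_jobs_total;
# assumes Sidekiq's config.death_handlers hook.
Sidekiq.configure_server do |config|
  config.death_handlers << PrometheusExporter::Instrumentation::Sidekiq.death_handler
end
```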
 #### Shoryuken metrics
 
 For Shoryuken metrics (how many jobs ran? how many failed? how long did they take? how many were restarted?)
@@ -313,6 +402,17 @@ Shoryuken.configure_server do |config|
 end
 ```
 
+ ##### Metrics collected by Shoryuken Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `shoryuken_job_duration_seconds` | Total time spent in shoryuken jobs |
+ | Counter | `shoryuken_jobs_total` | Total number of shoryuken jobs executed |
+ | Counter | `shoryuken_restarted_jobs_total` | Total number of shoryuken jobs that we restarted because of a shoryuken shutdown |
+ | Counter | `shoryuken_failed_jobs_total` | Total number of failed shoryuken jobs |
+
+ All metrics have labels for `job_name` and `queue_name`.
+
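These counters come from the Shoryuken server middleware referenced in the hunk header above; a sketch of wiring it in, assuming Shoryuken's `server_middleware` chain API:

```ruby
# A sketch of adding the instrumentation to Shoryuken's server middleware
# chain; assumes Shoryuken's config.server_middleware API.
Shoryuken.configure_server do |config|
  config.server_middleware do |chain|
    chain.add PrometheusExporter::Instrumentation::Shoryuken
  end
end
```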
 #### Delayed Job plugin
 
 In an initializer:
@@ -324,6 +424,19 @@ unless Rails.env == "test"
 end
 ```
 
+ ##### Metrics collected by Delayed Job Instrumentation
+
+ | Type | Name | Description | Labels |
+ | --- | --- | --- | --- |
+ | Counter | `delayed_job_duration_seconds` | Total time spent in delayed jobs | `job_name` |
+ | Counter | `delayed_jobs_total` | Total number of delayed jobs executed | `job_name` |
+ | Gauge | `delayed_jobs_enqueued` | Number of enqueued delayed jobs | - |
+ | Gauge | `delayed_jobs_pending` | Number of pending delayed jobs | - |
+ | Counter | `delayed_failed_jobs_total` | Total number of failed delayed jobs executed | `job_name` |
+ | Counter | `delayed_jobs_max_attempts_reached_total` | Total number of delayed jobs that reached max attempts | - |
+ | Summary | `delayed_job_duration_seconds_summary` | Summary of the time it takes jobs to execute | `status` |
+ | Summary | `delayed_job_attempts_summary` | Summary of the number of attempts it takes delayed jobs to succeed | - |
+
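The initializer referenced in the hunk header above registers the Delayed Job plugin that produces these metrics; a sketch, assuming the plugin registration used by this gem:

```ruby
# A sketch of the Delayed Job plugin registration behind the metrics above;
# mirrors the `unless Rails.env == "test"` initializer in the hunk header.
unless Rails.env.test?
  require 'prometheus_exporter/instrumentation'
  Delayed::Worker.plugins << PrometheusExporter::Instrumentation::DelayedJob
end
```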
 #### Hutch Message Processing Tracer
 
 Capture [Hutch](https://github.com/gocardless/hutch) metrics (how many jobs ran? how many failed? how long did they take?)
@@ -335,6 +448,16 @@ unless Rails.env == "test"
 end
 ```
 
+ ##### Metrics collected by Hutch Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `hutch_job_duration_seconds` | Total time spent in hutch jobs |
+ | Counter | `hutch_jobs_total` | Total number of hutch jobs executed |
+ | Counter | `hutch_failed_jobs_total` | Total number of failed hutch jobs executed |
+
+ All metrics have a `job_name` label.
+
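These counters come from the Hutch message-processing tracer; a sketch of enabling it, assuming Hutch's `:tracer` configuration option:

```ruby
# A sketch of enabling the Hutch tracer that produces the metrics above;
# assumes Hutch's :tracer configuration option.
unless Rails.env.test?
  require 'prometheus_exporter/instrumentation'
  Hutch::Config.set(:tracer, PrometheusExporter::Instrumentation::Hutch)
end
```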
 #### Instrumenting Request Queueing Time
 
 Request Queueing is defined as the time it takes for a request to reach your application (instrumented by this `prometheus_exporter`) from farther upstream (as your load balancer). A high queueing time usually means that your backend cannot handle all the incoming requests in time, so they queue up (= you should see if you need to add more capacity).
@@ -359,6 +482,20 @@ after_worker_boot do
 end
 ```
 
+ #### Metrics collected by Puma Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `puma_workers_total` | Number of puma workers |
+ | Gauge | `puma_booted_workers_total` | Number of puma workers booted |
+ | Gauge | `puma_old_workers_total` | Number of old puma workers |
+ | Gauge | `puma_running_threads_total` | Number of puma threads currently running |
+ | Gauge | `puma_request_backlog_total` | Number of requests waiting to be processed by a puma thread |
+ | Gauge | `puma_thread_pool_capacity_total` | Number of puma threads available at current scale |
+ | Gauge | `puma_max_threads_total` | Number of puma threads available at max scale |
+
+ All metrics may have a `phase` label.
+
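These gauges are reported by the Puma instrumentation started in the `after_worker_boot` hook referenced in the hunk header; a sketch of a `config/puma.rb` entry, assuming that setup:

```ruby
# config/puma.rb -- a sketch of starting the Puma instrumentation after
# workers boot; mirrors the after_worker_boot hook in the hunk header above.
after_worker_boot do
  require 'prometheus_exporter/instrumentation'
  PrometheusExporter::Instrumentation::Puma.start
end
```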
 ### Unicorn process metrics
 
 In order to gather metrics from unicorn processes, we use `raindrops`, which exposes `Raindrops::Linux.tcp_listener_stats` to gather information about active workers and queued requests. To start monitoring your unicorn processes, you'll need to know both the path to the unicorn PID file and the listen address (`pid_file` and `listen` in your unicorn config file).
@@ -374,6 +511,14 @@ prometheus_exporter --unicorn-master /var/run/unicorn.pid --unicorn-listen-addre
 
 Note: You must install the `raindrops` gem in your `Gemfile` or locally.
 
+ #### Metrics collected by Unicorn Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `unicorn_workers_total` | Number of unicorn workers |
+ | Gauge | `unicorn_active_workers_total` | Number of active unicorn workers |
+ | Gauge | `unicorn_request_backlog_total` | Number of requests waiting to be processed by a unicorn worker |
+
 ### Custom type collectors
 
 In some cases you may have custom metrics you want to ship to the collector in a batch. In this case you may still be interested in the base collector behavior, but would like to add your own special messages.
@@ -542,6 +687,26 @@ ruby_web_requests{hostname="app-server-01",route="test/route"} 1
 ruby_web_requests{hostname="app-server-01"} 1
 ```
 
+ ### Exporter Process Configuration
+
+ When running the process for `prometheus_exporter` using `bin/prometheus_exporter`, there are several configuration options that
+ can be passed in:
+
+ The following will run the process with:
+ - Port `8080` (default `9394`)
+ - Bind to `0.0.0.0` (default `localhost`)
+ - Timeout of `1 second` for the metrics endpoint (default `2 seconds`)
+ - Metric prefix `foo_` (default `ruby_`)
+ - Default labels `{environment: "integration", foo: "bar"}`
+
+ ```bash
+ prometheus_exporter -p 8080 \
+   -b 0.0.0.0 \
+   -t 1 \
+   --label '{"environment": "integration", "foo": "bar"}' \
+   --prefix 'foo_'
+ ```
+
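The same settings can also be supplied from Ruby via the server runner this release extends (see the `server/runner.rb` hunks below); a sketch, with values matching the CLI example and option names taken from the runner's writers:

```ruby
# A sketch of driving the exporter from Ruby instead of the CLI; the option
# names mirror the attr_writers (including :label) added to Server::Runner
# later in this diff.
require 'prometheus_exporter/server'

runner = PrometheusExporter::Server::Runner.new(
  port: 8080,
  bind: "0.0.0.0",
  timeout: 1,
  prefix: "foo_",
  label: { environment: "integration", foo: "bar" }
)
runner.start
```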
 ### Client default labels
 
 You can specify a default label for instrumentation metrics sent by a specific client. For example:
data/bin/prometheus_exporter CHANGED
@@ -2,6 +2,7 @@
 # frozen_string_literal: true
 
 require 'optparse'
+ require 'json'
 
 require_relative "./../lib/prometheus_exporter"
 require_relative "./../lib/prometheus_exporter/server"
@@ -34,6 +35,9 @@ def run
 opt.on('--prefix METRIC_PREFIX', "Prefix to apply to all metrics (default: #{PrometheusExporter::DEFAULT_PREFIX})") do |o|
   options[:prefix] = o.to_s
 end
+ opt.on('--label METRIC_LABEL', "Label to apply to all metrics (default: #{PrometheusExporter::DEFAULT_LABEL})") do |o|
+   options[:label] = JSON.parse(o.to_s)
+ end
 opt.on('-c', '--collector FILE', String, "(optional) Custom collector to run") do |o|
   custom_collector_filename = o.to_s
 end

data/lib/prometheus_exporter.rb CHANGED
@@ -9,6 +9,7 @@ module PrometheusExporter
 DEFAULT_PORT = 9394
 DEFAULT_BIND_ADDRESS = 'localhost'
 DEFAULT_PREFIX = 'ruby_'
+ DEFAULT_LABEL = {}
 DEFAULT_TIMEOUT = 2
 
 class OjCompat
data/lib/prometheus_exporter/instrumentation.rb CHANGED
@@ -4,6 +4,7 @@ require_relative "client"
 require_relative "instrumentation/process"
 require_relative "instrumentation/method_profiler"
 require_relative "instrumentation/sidekiq"
+ require_relative "instrumentation/sidekiq_queue"
 require_relative "instrumentation/delayed_job"
 require_relative "instrumentation/puma"
 require_relative "instrumentation/hutch"

data/lib/prometheus_exporter/instrumentation/process.rb CHANGED
@@ -3,6 +3,8 @@
 # collects stats from currently running process
 module PrometheusExporter::Instrumentation
   class Process
+     @thread = nil if !defined?(@thread)
+
     def self.start(client: nil, type: "ruby", frequency: 30, labels: nil)
 
       metric_labels =
data/lib/prometheus_exporter/instrumentation/sidekiq_queue.rb ADDED
@@ -0,0 +1,39 @@
+ # frozen_string_literal: true
+
+ module PrometheusExporter::Instrumentation
+   class SidekiqQueue
+     def self.start(client: nil, frequency: 30)
+       client ||= PrometheusExporter::Client.default
+       sidekiq_queue_collector = new
+
+       Thread.new do
+         loop do
+           begin
+             client.send_json(sidekiq_queue_collector.collect)
+           rescue StandardError => e
+             STDERR.puts("Prometheus Exporter Failed To Collect Sidekiq Queue metrics #{e}")
+           ensure
+             sleep frequency
+           end
+         end
+       end
+     end
+
+     def collect
+       {
+         type: 'sidekiq_queue',
+         queues: collect_queue_stats
+       }
+     end
+
+     def collect_queue_stats
+       ::Sidekiq::Queue.all.map do |queue|
+         {
+           backlog_total: queue.size,
+           latency_seconds: queue.latency.to_i,
+           labels: { queue: queue.name }
+         }
+       end
+     end
+   end
+ end
data/lib/prometheus_exporter/metric/base.rb CHANGED
@@ -2,6 +2,10 @@
 
 module PrometheusExporter::Metric
   class Base
+
+     @default_prefix = nil if !defined?(@default_prefix)
+     @default_labels = nil if !defined?(@default_labels)
+
     # prefix applied to all metrics
     def self.default_prefix=(name)
       @default_prefix = name

data/lib/prometheus_exporter/server.rb CHANGED
@@ -5,6 +5,7 @@ require_relative "server/type_collector"
 require_relative "server/web_collector"
 require_relative "server/process_collector"
 require_relative "server/sidekiq_collector"
+ require_relative "server/sidekiq_queue_collector"
 require_relative "server/delayed_job_collector"
 require_relative "server/collector_base"
 require_relative "server/collector"

data/lib/prometheus_exporter/server/collector.rb CHANGED
@@ -13,6 +13,7 @@ module PrometheusExporter::Server
 register_collector(WebCollector.new)
 register_collector(ProcessCollector.new)
 register_collector(SidekiqCollector.new)
+ register_collector(SidekiqQueueCollector.new)
 register_collector(DelayedJobCollector.new)
 register_collector(PumaCollector.new)
 register_collector(HutchCollector.new)
data/lib/prometheus_exporter/server/delayed_job_collector.rb CHANGED
@@ -2,6 +2,17 @@
 
 module PrometheusExporter::Server
   class DelayedJobCollector < TypeCollector
+     def initialize
+       @delayed_jobs_total = nil
+       @delayed_job_duration_seconds = nil
+       @delayed_jobs_total = nil
+       @delayed_failed_jobs_total = nil
+       @delayed_jobs_max_attempts_reached_total = nil
+       @delayed_job_duration_seconds_summary = nil
+       @delayed_job_attempts_summary = nil
+       @delayed_jobs_enqueued = nil
+       @delayed_jobs_pending = nil
+     end
 
     def type
       "delayed_job"

data/lib/prometheus_exporter/server/hutch_collector.rb CHANGED
@@ -2,6 +2,12 @@
 
 module PrometheusExporter::Server
   class HutchCollector < TypeCollector
+     def initialize
+       @hutch_jobs_total = nil
+       @hutch_job_duration_seconds = nil
+       @hutch_jobs_total = nil
+       @hutch_failed_jobs_total = nil
+     end
 
     def type
       "hutch"
data/lib/prometheus_exporter/server/runner.rb CHANGED
@@ -9,6 +9,13 @@ module PrometheusExporter::Server
 
   class Runner
     def initialize(options = {})
+       @timeout = nil
+       @port = nil
+       @bind = nil
+       @collector_class = nil
+       @type_collectors = nil
+       @prefix = nil
+
       options.each do |k, v|
         send("#{k}=", v) if self.class.method_defined?("#{k}=")
       end
@@ -16,6 +23,7 @@ module PrometheusExporter::Server
 
     def start
       PrometheusExporter::Metric::Base.default_prefix = prefix
+       PrometheusExporter::Metric::Base.default_labels = label
 
       register_type_collectors
 
@@ -37,7 +45,7 @@ module PrometheusExporter::Server
     end
 
     attr_accessor :unicorn_listen_address, :unicorn_pid_file
-     attr_writer :prefix, :port, :bind, :collector_class, :type_collectors, :timeout, :verbose, :server_class
+     attr_writer :prefix, :port, :bind, :collector_class, :type_collectors, :timeout, :verbose, :server_class, :label
 
     def prefix
       @prefix || PrometheusExporter::DEFAULT_PREFIX
@@ -76,6 +84,10 @@ module PrometheusExporter::Server
       @_collector ||= collector_class.new
     end
 
+     def label
+       @label ||= PrometheusExporter::DEFAULT_LABEL
+     end
+
     private
 
     def register_type_collectors
data/lib/prometheus_exporter/server/shoryuken_collector.rb CHANGED
@@ -3,6 +3,14 @@
 module PrometheusExporter::Server
   class ShoryukenCollector < TypeCollector
 
+     def initialize
+       @shoryuken_jobs_total = nil
+       @shoryuken_job_duration_seconds = nil
+       @shoryuken_jobs_total = nil
+       @shoryuken_restarted_jobs_total = nil
+       @shoryuken_failed_jobs_total = nil
+     end
+
     def type
       "shoryuken"
     end

data/lib/prometheus_exporter/server/sidekiq_collector.rb CHANGED
@@ -3,6 +3,15 @@
 module PrometheusExporter::Server
   class SidekiqCollector < TypeCollector
 
+     def initialize
+       @sidekiq_jobs_total = nil
+       @sidekiq_job_duration_seconds = nil
+       @sidekiq_jobs_total = nil
+       @sidekiq_restarted_jobs_total = nil
+       @sidekiq_failed_jobs_total = nil
+       @sidekiq_dead_jobs_total = nil
+     end
+
     def type
       "sidekiq"
     end
data/lib/prometheus_exporter/server/sidekiq_queue_collector.rb ADDED
@@ -0,0 +1,45 @@
+ # frozen_string_literal: true
+ module PrometheusExporter::Server
+   class SidekiqQueueCollector < TypeCollector
+     MAX_SIDEKIQ_METRIC_AGE = 60
+
+     SIDEKIQ_QUEUE_GAUGES = {
+       'backlog_total' => 'Size of the sidekiq queue.',
+       'latency_seconds' => 'Latency of the sidekiq queue.',
+     }.freeze
+
+     attr_reader :sidekiq_metrics, :gauges
+
+     def initialize
+       @sidekiq_metrics = []
+       @gauges = {}
+     end
+
+     def type
+       'sidekiq_queue'
+     end
+
+     def metrics
+       sidekiq_metrics.map do |metric|
+         labels = metric.fetch("labels", {})
+         SIDEKIQ_QUEUE_GAUGES.map do |name, help|
+           if (value = metric[name])
+             gauge = gauges[name] ||= PrometheusExporter::Metric::Gauge.new("sidekiq_queue_#{name}", help)
+             gauge.observe(value, labels)
+           end
+         end
+       end
+
+       gauges.values
+     end
+
+     def collect(object)
+       now = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
+       object['queues'].each do |queue|
+         queue["created_at"] = now
+         sidekiq_metrics.delete_if { |metric| metric['created_at'] + MAX_SIDEKIQ_METRIC_AGE < now }
+         sidekiq_metrics << queue
+       end
+     end
+   end
+ end
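For reference, the hash built by `SidekiqQueue#collect` arrives in this collector with string keys, because the client serializes it to JSON before sending; a hypothetical payload (values illustrative) and the series it produces:

```ruby
# A hypothetical payload as seen by SidekiqQueueCollector#collect once
# Client#send_json has round-tripped the instrumentation hash through JSON.
payload = {
  "type"   => "sidekiq_queue",
  "queues" => [
    { "backlog_total" => 12, "latency_seconds" => 3, "labels" => { "queue" => "default" } }
  ]
}
# Each entry is exposed as sidekiq_queue_backlog_total{queue="default"} 12 and
# sidekiq_queue_latency_seconds{queue="default"} 3 (with the configured metric
# prefix, e.g. ruby_, prepended).
```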
data/lib/prometheus_exporter/server/web_collector.rb CHANGED
@@ -4,6 +4,11 @@ module PrometheusExporter::Server
 class WebCollector < TypeCollector
   def initialize
     @metrics = {}
+     @http_requests_total = nil
+     @http_duration_seconds = nil
+     @http_redis_duration_seconds = nil
+     @http_sql_duration_seconds = nil
+     @http_queue_duration_seconds = nil
   end
 
   def type

data/lib/prometheus_exporter/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module PrometheusExporter
-   VERSION = '0.5.1'
+   VERSION = '0.5.2'
 end

data/prometheus_exporter.gemspec CHANGED
@@ -26,7 +26,7 @@ Gem::Specification.new do |spec|
 
 spec.add_development_dependency "rubocop", ">= 0.69"
 spec.add_development_dependency "bundler", "> 1.16"
- spec.add_development_dependency "rake", "~> 10.0"
+ spec.add_development_dependency "rake", "~> 13.0"
 spec.add_development_dependency "minitest", "~> 5.0"
 spec.add_development_dependency "guard", "~> 2.0"
 spec.add_development_dependency "mini_racer", "~> 0.1"
@@ -34,7 +34,7 @@ Gem::Specification.new do |spec|
 spec.add_development_dependency "oj", "~> 3.0"
 spec.add_development_dependency "rack-test", "~> 0.8.3"
 spec.add_development_dependency "minitest-stub-const", "~> 0.6"
- spec.add_development_dependency 'rubocop-discourse', '~> 1.0'
+ spec.add_development_dependency "rubocop-discourse", ">2"
 if !RUBY_ENGINE == 'jruby'
   spec.add_development_dependency "raindrops", "~> 0.19"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: prometheus_exporter
 version: !ruby/object:Gem::Version
-   version: 0.5.1
+   version: 0.5.2
 platform: ruby
 authors:
 - Sam Saffron
- autorequire:
+ autorequire:
 bindir: bin
 cert_chain: []
- date: 2020-02-25 00:00:00.000000000 Z
+ date: 2020-07-01 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: rubocop
@@ -44,14 +44,14 @@ dependencies:
 requirements:
 - - "~>"
   - !ruby/object:Gem::Version
-     version: '10.0'
+     version: '13.0'
 type: :development
 prerelease: false
 version_requirements: !ruby/object:Gem::Requirement
 requirements:
 - - "~>"
   - !ruby/object:Gem::Version
-     version: '10.0'
+     version: '13.0'
 - !ruby/object:Gem::Dependency
   name: minitest
   requirement: !ruby/object:Gem::Requirement
@@ -154,16 +154,16 @@ dependencies:
 name: rubocop-discourse
 requirement: !ruby/object:Gem::Requirement
 requirements:
- - - "~>"
+ - - ">"
   - !ruby/object:Gem::Version
-     version: '1.0'
+     version: '2'
 type: :development
 prerelease: false
 version_requirements: !ruby/object:Gem::Requirement
 requirements:
- - - "~>"
+ - - ">"
   - !ruby/object:Gem::Version
-     version: '1.0'
+     version: '2'
 description: Prometheus metric collector and exporter for Ruby
 email:
 - sam.saffron@gmail.com
@@ -196,6 +196,7 @@ files:
 - lib/prometheus_exporter/instrumentation/puma.rb
 - lib/prometheus_exporter/instrumentation/shoryuken.rb
 - lib/prometheus_exporter/instrumentation/sidekiq.rb
+ - lib/prometheus_exporter/instrumentation/sidekiq_queue.rb
 - lib/prometheus_exporter/instrumentation/unicorn.rb
 - lib/prometheus_exporter/metric.rb
 - lib/prometheus_exporter/metric/base.rb
@@ -215,6 +216,7 @@ files:
 - lib/prometheus_exporter/server/runner.rb
 - lib/prometheus_exporter/server/shoryuken_collector.rb
 - lib/prometheus_exporter/server/sidekiq_collector.rb
+ - lib/prometheus_exporter/server/sidekiq_queue_collector.rb
 - lib/prometheus_exporter/server/type_collector.rb
 - lib/prometheus_exporter/server/unicorn_collector.rb
 - lib/prometheus_exporter/server/web_collector.rb
@@ -242,7 +244,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 version: '0'
 requirements: []
 rubygems_version: 3.0.3
- signing_key:
+ signing_key:
 specification_version: 4
 summary: Prometheus Exporter
 test_files: []