prometheus_exporter 0.7.0 → 2.1.0

Files changed (50)
  1. checksums.yaml +4 -4
  2. data/.github/workflows/ci.yml +82 -25
  3. data/Appraisals +7 -3
  4. data/CHANGELOG +104 -24
  5. data/Dockerfile +9 -0
  6. data/README.md +258 -51
  7. data/bin/prometheus_exporter +19 -6
  8. data/examples/custom_collector.rb +1 -1
  9. data/gemfiles/ar_70.gemfile +5 -0
  10. data/lib/prometheus_exporter/client.rb +48 -23
  11. data/lib/prometheus_exporter/instrumentation/active_record.rb +11 -29
  12. data/lib/prometheus_exporter/instrumentation/delayed_job.rb +5 -2
  13. data/lib/prometheus_exporter/instrumentation/good_job.rb +30 -0
  14. data/lib/prometheus_exporter/instrumentation/method_profiler.rb +63 -23
  15. data/lib/prometheus_exporter/instrumentation/periodic_stats.rb +62 -0
  16. data/lib/prometheus_exporter/instrumentation/process.rb +5 -21
  17. data/lib/prometheus_exporter/instrumentation/puma.rb +34 -27
  18. data/lib/prometheus_exporter/instrumentation/resque.rb +35 -0
  19. data/lib/prometheus_exporter/instrumentation/sidekiq.rb +53 -23
  20. data/lib/prometheus_exporter/instrumentation/sidekiq_process.rb +52 -0
  21. data/lib/prometheus_exporter/instrumentation/sidekiq_queue.rb +32 -24
  22. data/lib/prometheus_exporter/instrumentation/sidekiq_stats.rb +37 -0
  23. data/lib/prometheus_exporter/instrumentation/unicorn.rb +10 -15
  24. data/lib/prometheus_exporter/instrumentation.rb +5 -0
  25. data/lib/prometheus_exporter/metric/base.rb +12 -10
  26. data/lib/prometheus_exporter/metric/gauge.rb +4 -0
  27. data/lib/prometheus_exporter/metric/histogram.rb +15 -3
  28. data/lib/prometheus_exporter/middleware.rb +45 -19
  29. data/lib/prometheus_exporter/server/active_record_collector.rb +9 -12
  30. data/lib/prometheus_exporter/server/collector.rb +4 -0
  31. data/lib/prometheus_exporter/server/delayed_job_collector.rb +24 -18
  32. data/lib/prometheus_exporter/server/good_job_collector.rb +52 -0
  33. data/lib/prometheus_exporter/server/metrics_container.rb +66 -0
  34. data/lib/prometheus_exporter/server/process_collector.rb +8 -13
  35. data/lib/prometheus_exporter/server/puma_collector.rb +14 -12
  36. data/lib/prometheus_exporter/server/resque_collector.rb +50 -0
  37. data/lib/prometheus_exporter/server/runner.rb +14 -3
  38. data/lib/prometheus_exporter/server/sidekiq_collector.rb +1 -1
  39. data/lib/prometheus_exporter/server/sidekiq_process_collector.rb +43 -0
  40. data/lib/prometheus_exporter/server/sidekiq_queue_collector.rb +6 -7
  41. data/lib/prometheus_exporter/server/sidekiq_stats_collector.rb +48 -0
  42. data/lib/prometheus_exporter/server/type_collector.rb +2 -0
  43. data/lib/prometheus_exporter/server/unicorn_collector.rb +32 -33
  44. data/lib/prometheus_exporter/server/web_collector.rb +17 -17
  45. data/lib/prometheus_exporter/server/web_server.rb +72 -41
  46. data/lib/prometheus_exporter/server.rb +4 -0
  47. data/lib/prometheus_exporter/version.rb +1 -1
  48. data/lib/prometheus_exporter.rb +12 -13
  49. data/prometheus_exporter.gemspec +6 -6
  50. metadata +53 -14
data/README.md CHANGED
@@ -5,6 +5,7 @@ Prometheus Exporter allows you to aggregate custom metrics from multiple process
  To learn more see [Instrumenting Rails with Prometheus](https://samsaffron.com/archive/2018/02/02/instrumenting-rails-with-prometheus) (it has pretty pictures!)
 
  * [Requirements](#requirements)
+ * [Migrating from v0.x](#migrating-from-v0x)
  * [Installation](#installation)
  * [Usage](#usage)
  * [Single process mode](#single-process-mode)
@@ -19,21 +20,33 @@ To learn more see [Instrumenting Rails with Prometheus](https://samsaffron.com/a
  * [Hutch metrics](#hutch-message-processing-tracer)
  * [Puma metrics](#puma-metrics)
  * [Unicorn metrics](#unicorn-process-metrics)
+ * [Resque metrics](#resque-metrics)
+ * [GoodJob metrics](#goodjob-metrics)
  * [Custom type collectors](#custom-type-collectors)
  * [Multi process mode with custom collector](#multi-process-mode-with-custom-collector)
  * [GraphQL support](#graphql-support)
  * [Metrics default prefix / labels](#metrics-default-prefix--labels)
  * [Client default labels](#client-default-labels)
  * [Client default host](#client-default-host)
+ * [Histogram mode](#histogram-mode)
+ * [Histogram - custom buckets](#histogram---custom-buckets)
  * [Transport concerns](#transport-concerns)
  * [JSON generation and parsing](#json-generation-and-parsing)
+ * [Logging](#logging)
+ * [Docker Usage](#docker-usage)
  * [Contributing](#contributing)
  * [License](#license)
  * [Code of Conduct](#code-of-conduct)
 
  ## Requirements
 
- Minimum Ruby of version 2.5.0 is required, Ruby 2.4.0 is EOL as of 2020-04-05
+ A minimum Ruby version of 2.6.0 is required; Ruby 2.5.0 has been EOL since March 31st, 2021.
+
+ ## Migrating from v0.x
+
+ There are some major changes in v1.x compared to v0.x.
+
+ - Some metrics were renamed to match the [official Prometheus guide for metric names](https://prometheus.io/docs/practices/naming/#metric-names). (#184)
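For illustration, a few renames of this kind that are visible later in this diff (old name on the left, new name on the right; not an exhaustive list):

```
http_duration_seconds        -> http_request_duration_seconds
sidekiq_queue_backlog_total  -> sidekiq_queue_backlog
puma_workers_total           -> puma_workers
unicorn_workers_total        -> unicorn_workers
```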
 
  ## Installation
 
@@ -176,7 +189,7 @@ gem 'prometheus_exporter'
  In an initializer:
 
  ```ruby
- unless Rails.env == "test"
+ unless Rails.env.test?
    require 'prometheus_exporter/middleware'
 
    # This reports stats per request like HTTP status and timings
@@ -190,15 +203,23 @@ Ensure you run the exporter in a monitored background process:
  $ bundle exec prometheus_exporter
  ```
 
+ #### Choosing the style of method patching
+
+ By default, `prometheus_exporter` uses `alias_method` to instrument methods used by SQL and Redis as it is the fastest approach (see [this article](https://samsaffron.com/archive/2017/10/18/fastest-way-to-profile-a-method-in-ruby)). You may desire to add additional instrumentation libraries beyond `prometheus_exporter` to your app. This can become problematic if these other libraries instead use `prepend` to instrument methods. To resolve this, you can tell the middleware to instrument using `prepend` by passing an `instrument` option like so:
+
+ ```ruby
+ Rails.application.middleware.unshift PrometheusExporter::Middleware, instrument: :prepend
+ ```
+
  #### Metrics collected by Rails integration middleware
 
- | Type | Name | Description |
- | --- | --- | --- |
- | Counter | `http_requests_total` | Total HTTP requests from web app |
- | Summary | `http_duration_seconds` | Time spent in HTTP reqs in seconds |
- | Summary | `http_redis_duration_seconds | Time spent in HTTP reqs in Redis, in seconds |
- | Summary | `http_sql_duration_seconds | Time spent in HTTP reqs in SQL in seconds |
- | Summary | `http_queue_duration_seconds | Time spent queueing the request in load balancer in seconds |
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `http_requests_total` | Total HTTP requests from web app |
+ | Summary | `http_request_duration_seconds` | Time spent in HTTP reqs in seconds |
+ | Summary | `http_request_redis_duration_seconds` | Time spent in HTTP reqs in Redis, in seconds |
+ | Summary | `http_request_sql_duration_seconds` | Time spent in HTTP reqs in SQL, in seconds |
+ | Summary | `http_request_queue_duration_seconds` | Time spent queueing the request in the load balancer, in seconds |
 
  All metrics have a `controller` and an `action` label.
  `http_requests_total` additionally has a (HTTP response) `status` label.
@@ -241,7 +262,7 @@ end
  ```
  That way you won't have all metrics labeled with `controller=other` and `action=other`, but have labels such as
  ```
- ruby_http_duration_seconds{path="/api/v1/teams/:id",method="GET",status="200",quantile="0.99"} 0.009880661998977303
+ ruby_http_request_duration_seconds{path="/api/v1/teams/:id",method="GET",status="200",quantile="0.99"} 0.009880661998977303
  ```
 
  ¹) Only available when Redis is used.
@@ -321,7 +342,7 @@ You may also be interested in per-process stats. This collects memory and GC sta
 
  ```ruby
  # in an initializer
- unless Rails.env == "test"
+ unless Rails.env.test?
    require 'prometheus_exporter/instrumentation'
 
    # this reports basic process stats like RSS and GC info
357
378
 
358
379
  #### Sidekiq metrics
359
380
 
360
- Including Sidekiq metrics (how many jobs ran? how many failed? how long did they take? how many are dead? how many were restarted?)
361
-
362
- ```ruby
363
- Sidekiq.configure_server do |config|
364
- config.server_middleware do |chain|
365
- require 'prometheus_exporter/instrumentation'
366
- chain.add PrometheusExporter::Instrumentation::Sidekiq
367
- end
368
- config.death_handlers << PrometheusExporter::Instrumentation::Sidekiq.death_handler
369
- end
370
- ```
371
-
372
- To monitor Queue size and latency:
381
+ There are different kinds of Sidekiq metrics that can be collected. A recommended setup looks like this:
373
382
 
374
383
  ```ruby
375
384
  Sidekiq.configure_server do |config|
385
+ require 'prometheus_exporter/instrumentation'
386
+ config.server_middleware do |chain|
387
+ chain.add PrometheusExporter::Instrumentation::Sidekiq
388
+ end
389
+ config.death_handlers << PrometheusExporter::Instrumentation::Sidekiq.death_handler
376
390
  config.on :startup do
377
- require 'prometheus_exporter/instrumentation'
391
+ PrometheusExporter::Instrumentation::Process.start type: 'sidekiq'
392
+ PrometheusExporter::Instrumentation::SidekiqProcess.start
378
393
  PrometheusExporter::Instrumentation::SidekiqQueue.start
394
+ PrometheusExporter::Instrumentation::SidekiqStats.start
379
395
  end
380
396
  end
381
397
  ```
382
398
 
383
- To monitor Sidekiq process info:
399
+ * The middleware and death handler will generate job specific metrics (how many jobs ran? how many failed? how long did they take? how many are dead? how many were restarted?).
400
+ * The [`Process`](#per-process-stats) metrics provide basic ruby metrics.
401
+ * The `SidekiqProcess` metrics provide the concurrency and busy metrics for this process.
402
+ * The `SidekiqQueue` metrics provides size and latency for the queues run by this process.
403
+ * The `SidekiqStats` metrics provide general, global Sidekiq stats (size of Scheduled, Retries, Dead queues, total number of jobs, etc).
404
+
405
+ For `SidekiqQueue`, if you run more than one process for the same queues, note that the same metrics will be exposed by all the processes, just like the `SidekiqStats` will if you run more than one process of any kind. You might want use `avg` or `max` when consuming their metrics.
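For example, an illustrative PromQL sketch (not part of the README) that collapses the duplicated per-process series using the `queue` label these metrics carry:

```
max by (queue) (sidekiq_queue_backlog)
avg by (queue) (sidekiq_queue_latency_seconds)
```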
+
+ An alternative would be to expose these metrics in a lone, long-lived process. Using a rake task, for example:
 
  ```ruby
- Sidekiq.configure_server do |config|
-   config.on :startup do
-     require 'prometheus_exporter/instrumentation'
-     PrometheusExporter::Instrumentation::Process.start type: 'sidekiq'
-   end
+ task :sidekiq_metrics do
+   server = PrometheusExporter::Server::WebServer.new
+   server.start
+
+   PrometheusExporter::Client.default = PrometheusExporter::LocalClient.new(collector: server.collector)
+
+   PrometheusExporter::Instrumentation::SidekiqQueue.start(all_queues: true)
+   PrometheusExporter::Instrumentation::SidekiqStats.start
+   sleep
  end
  ```
 
+ The `all_queues` parameter for `SidekiqQueue` will expose metrics for all queues.
+
  Sometimes the Sidekiq server shuts down before it can send metrics generated right before the shutdown to the collector. Especially if you care about the `sidekiq_restarted_jobs_total` metric, it is a good idea to explicitly stop the client:
 
  ```ruby
@@ -401,6 +431,18 @@ Sometimes the Sidekiq server shuts down before it can send metrics, that were ge
  end
  ```
 
+ Custom labels can be added for individual jobs by defining a class method on the job class. These labels will be added to all Sidekiq metrics written by the job:
+
+ ```ruby
+ class WorkerWithCustomLabels
+   def self.custom_labels
+     { my_label: 'value-here', other_label: 'second-val' }
+   end
+
+   def perform; end
+ end
+ ```
+
  ##### Metrics collected by Sidekiq Instrumentation
 
  **PrometheusExporter::Instrumentation::Sidekiq**
@@ -423,11 +465,33 @@ This metric has a `job_name` label and a `queue` label.
  **PrometheusExporter::Instrumentation::SidekiqQueue**
  | Type | Name | Description |
  | --- | --- | --- |
- | Gauge | `sidekiq_queue_backlog_total` | Size of the sidekiq queue |
+ | Gauge | `sidekiq_queue_backlog` | Size of the sidekiq queue |
  | Gauge | `sidekiq_queue_latency_seconds` | Latency of the sidekiq queue |
 
  Both metrics will have a `queue` label with the name of the queue.
 
+ **PrometheusExporter::Instrumentation::SidekiqProcess**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `sidekiq_process_busy` | Number of busy workers for this process |
+ | Gauge | `sidekiq_process_concurrency` | Concurrency for this process |
+
+ Both metrics will include the labels `labels`, `queues`, `quiet`, `tag`, `hostname` and `identity`, as returned by the [Sidekiq Processes API](https://github.com/mperham/sidekiq/wiki/API#processes).
+
+ **PrometheusExporter::Instrumentation::SidekiqStats**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `sidekiq_stats_dead_size` | Size of the dead queue |
+ | Gauge | `sidekiq_stats_enqueued` | Number of enqueued jobs |
+ | Gauge | `sidekiq_stats_failed` | Number of failed jobs |
+ | Gauge | `sidekiq_stats_processed` | Total number of processed jobs |
+ | Gauge | `sidekiq_stats_processes_size` | Number of processes |
+ | Gauge | `sidekiq_stats_retry_size` | Size of the retries queue |
+ | Gauge | `sidekiq_stats_scheduled_size` | Size of the scheduled queue |
+ | Gauge | `sidekiq_stats_workers_size` | Number of jobs actively being processed |
+
+ Based on the [Sidekiq Stats API](https://github.com/mperham/sidekiq/wiki/API#stats).
+
  _See [Metrics collected by Process Instrumentation](#metrics-collected-by-process-instrumentation) for a list of metrics the Process instrumentation will produce._
 
  #### Shoryuken metrics
@@ -459,7 +523,7 @@ All metrics have labels for `job_name` and `queue_name`.
  In an initializer:
 
  ```ruby
- unless Rails.env == "test"
+ unless Rails.env.test?
    require 'prometheus_exporter/instrumentation'
    PrometheusExporter::Instrumentation::DelayedJob.register_plugin
  end
@@ -470,6 +534,7 @@ end
  | Type | Name | Description | Labels |
  | --- | --- | --- | --- |
  | Counter | `delayed_job_duration_seconds` | Total time spent in delayed jobs | `job_name` |
+ | Counter | `delayed_job_latency_seconds_total` | Total delayed jobs latency | `job_name` |
  | Counter | `delayed_jobs_total` | Total number of delayed jobs executed | `job_name` |
  | Gauge | `delayed_jobs_enqueued` | Number of enqueued delayed jobs | - |
  | Gauge | `delayed_jobs_pending` | Number of pending delayed jobs | - |
@@ -478,12 +543,15 @@ end
  | Summary | `delayed_job_duration_seconds_summary` | Summary of the time it takes jobs to execute | `status` |
  | Summary | `delayed_job_attempts_summary` | Summary of the amount of attempts it takes delayed jobs to succeed | - |
 
+ All metrics have labels for `job_name` and `queue_name`.
+ `delayed_job_latency_seconds_total` takes delayed job's [sleep_delay](https://github.com/collectiveidea/delayed_job#:~:text=If%20no%20jobs%20are%20found%2C%20the%20worker%20sleeps%20for%20the%20amount%20of%20time%20specified%20by%20the%20sleep%20delay%20option.%20Set%20Delayed%3A%3AWorker.sleep_delay%20%3D%2060%20for%20a%2060%20second%20sleep%20time.) parameter into account, so be aware of this if you are looking for high latency precision.
+
  #### Hutch Message Processing Tracer
 
  Capture [Hutch](https://github.com/gocardless/hutch) metrics (how many jobs ran? how many failed? how long did they take?)
 
  ```ruby
- unless Rails.env == "test"
+ unless Rails.env.test?
    require 'prometheus_exporter/instrumentation'
    Hutch::Config.set(:tracer, PrometheusExporter::Instrumentation::Hutch)
  end
@@ -505,7 +573,7 @@ Request Queueing is defined as the time it takes for a request to reach your app
 
  As this metric starts before `prometheus_exporter` can handle the request, you must add a specific HTTP header as early in your infrastructure as possible (we recommend your load balancer or reverse proxy).
 
- Configure your HTTP server / load balancer to add a header `X-Request-Start: t=<MSEC>` when passing the request upstream. For more information, please consult your software manual.
+ The Amazon Application Load Balancer [request tracing header](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-request-tracing.html) is natively supported. If you are using another upstream entrypoint, you may configure your HTTP server / load balancer to add a header `X-Request-Start: t=<MSEC>` when passing the request upstream. Keep in mind that the request start time is reported as epoch time (in seconds) and lacks precision, which may introduce additional latency in the reported metrics. For more information, please consult your software manual.
 
  Hint: we aim to be API-compatible with the big APM solutions, so if you've got request queueing time configured for them, it should be expected to also work with `prometheus_exporter`.
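As an illustrative sketch of the header injection described above (assuming nginx as the reverse proxy; any upstream that can inject the header works the same way):

```
# nginx: pass the request start time upstream;
# $msec is the current time in seconds with millisecond resolution
proxy_set_header X-Request-Start "t=${msec}";
```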
 
@@ -519,23 +587,71 @@ The easiest way to gather this metrics is to put the following in your `puma.rb`
  # puma.rb config
  after_worker_boot do
    require 'prometheus_exporter/instrumentation'
-   PrometheusExporter::Instrumentation::Puma.start
+   # optional check, avoids spinning up and down threads per worker
+   if !PrometheusExporter::Instrumentation::Puma.started?
+     PrometheusExporter::Instrumentation::Puma.start
+   end
  end
  ```
 
  #### Metrics collected by Puma Instrumentation
 
- | Type | Name | Description |
- | --- | --- | --- |
- | Gauge | `puma_workers_total` | Number of puma workers |
- | Gauge | `puma_booted_workers_total` | Number of puma workers booted |
- | Gauge | `puma_old_workers_total` | Number of old puma workers |
- | Gauge | `puma_running_threads_total` | Number of puma threads currently running |
- | Gauge | `puma_request_backlog_total` | Number of requests waiting to be processed by a puma thread |
- | Gauge | `puma_thread_pool_capacity_total` | Number of puma threads available at current scale |
- | Gauge | `puma_max_threads_total` | Number of puma threads at available at max scale |
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `puma_workers` | Number of puma workers |
+ | Gauge | `puma_booted_workers` | Number of puma workers booted |
+ | Gauge | `puma_old_workers` | Number of old puma workers |
+ | Gauge | `puma_running_threads` | Number of puma threads currently running |
+ | Gauge | `puma_request_backlog` | Number of requests waiting to be processed by a puma thread |
+ | Gauge | `puma_thread_pool_capacity` | Number of puma threads available at current scale |
+ | Gauge | `puma_max_threads` | Number of puma threads available at max scale |
+
+ All metrics may have a `phase` label and all custom labels provided with the `labels` option.
+
+ ### Resque metrics
+
+ The Resque metrics use the `Resque.info` method, which queries Redis internally. To start monitoring your Resque
+ installation, you'll need to start the instrumentation:
+
+ ```ruby
+ # e.g. config/initializers/resque.rb
+ require 'prometheus_exporter/instrumentation'
+ PrometheusExporter::Instrumentation::Resque.start
+ ```
+
+ #### Metrics collected by Resque Instrumentation
 
- All metrics may have a `phase` label.
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `resque_processed_jobs` | Total number of processed Resque jobs |
+ | Gauge | `resque_failed_jobs` | Total number of failed Resque jobs |
+ | Gauge | `resque_pending_jobs` | Total number of pending Resque jobs |
+ | Gauge | `resque_queues` | Total number of Resque queues |
+ | Gauge | `resque_workers` | Total number of Resque workers running |
+ | Gauge | `resque_working` | Total number of Resque workers working |
+
+ ### GoodJob metrics
+
+ The metrics are generated from the database using the relevant scopes. To start monitoring your GoodJob
+ installation, you'll need to start the instrumentation:
+
+ ```ruby
+ # e.g. config/initializers/good_job.rb
+ require 'prometheus_exporter/instrumentation'
+ PrometheusExporter::Instrumentation::GoodJob.start
+ ```
+
+ #### Metrics collected by GoodJob Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `good_job_scheduled` | Total number of scheduled GoodJob jobs |
+ | Gauge | `good_job_retried` | Total number of retried GoodJob jobs |
+ | Gauge | `good_job_queued` | Total number of queued GoodJob jobs |
+ | Gauge | `good_job_running` | Total number of running GoodJob jobs |
+ | Gauge | `good_job_finished` | Total number of finished GoodJob jobs |
+ | Gauge | `good_job_succeeded` | Total number of succeeded GoodJob jobs |
+ | Gauge | `good_job_discarded` | Total number of discarded GoodJob jobs |
 
  ### Unicorn process metrics
 
@@ -554,11 +670,11 @@ Note: You must install the `raindrops` gem in your `Gemfile` or locally.
 
  #### Metrics collected by Unicorn Instrumentation
 
- | Type | Name | Description |
- | --- | --- | --- |
- | Gauge | `unicorn_workers_total` | Number of unicorn workers |
- | Gauge | `unicorn_active_workers_total` | Number of active unicorn workers |
- | Gauge | `unicorn_request_backlog_total` | Number of requests waiting to be processed by a unicorn worker |
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `unicorn_workers` | Number of unicorn workers |
+ | Gauge | `unicorn_active_workers` | Number of active unicorn workers |
+ | Gauge | `unicorn_request_backlog` | Number of requests waiting to be processed by a unicorn worker |
 
  ### Custom type collectors
 
@@ -743,6 +859,7 @@ Usage: prometheus_exporter [options]
  -c, --collector FILE (optional) Custom collector to run
  -a, --type-collector FILE (optional) Custom type collectors to run in main collector
  -v, --verbose
+ -g, --histogram Use histogram instead of summary for aggregations
  --auth FILE (optional) enable basic authentication using a htpasswd FILE
  --realm REALM (optional) Use REALM for basic authentication (default: "Prometheus Exporter")
  --unicorn-listen-address ADDRESS
@@ -767,6 +884,9 @@ prometheus_exporter -p 8080 \
  --prefix 'foo_'
  ```
 
+ You can use the `-b` option to bind the `prometheus_exporter` web server to any IPv4 interface with `-b 0.0.0.0`,
+ any IPv6 interface with `-b ::`, or `-b ANY` to bind to any IPv4/IPv6 interface available on your host system.
+
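For example, combining the flags documented above:

```bash
# bind to all IPv4 interfaces on the default port
prometheus_exporter -b 0.0.0.0 -p 9394
```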
  #### Enabling Basic Authentication
 
  If you desire authentication on your `/metrics` route, you can enable basic authentication with the `--auth` option.
@@ -813,6 +933,38 @@ http_requests_total{service="app-server-01",app_name="app-01"} 1
 
  By default, `PrometheusExporter::Client.default` connects to `localhost:9394`. If your setup requires this (e.g. when using `docker-compose`), you can change the default host and port by setting the environment variables `PROMETHEUS_EXPORTER_HOST` and `PROMETHEUS_EXPORTER_PORT`.
 
+ ### Histogram mode
+
+ By default, the built-in collectors will report aggregations as summaries. If you need to aggregate metrics across labels, you can switch from summaries to histograms:
+
+ ```
+ $ prometheus_exporter --histogram
+ ```
+
+ In histogram mode, the same metrics will be collected but will be reported as histograms rather than summaries. This sacrifices some precision but allows aggregating metrics across actions and nodes using [`histogram_quantile`].
+
+ [`histogram_quantile`]: https://prometheus.io/docs/prometheus/latest/querying/functions/#histogram_quantile
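For instance, an illustrative PromQL query (not from the README) that computes a 99th-percentile request time aggregated across all actions and nodes from the histogram buckets:

```
histogram_quantile(0.99, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
```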
+
+ ### Histogram - custom buckets
+
+ By default these buckets will be used:
+ ```
+ [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5.0, 10.0].freeze
+ ```
+ If this is not enough, you can specify `default_buckets` like this:
+ ```
+ Histogram.default_buckets = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2, 2.5, 3, 4, 5.0, 10.0, 12, 14, 15, 20, 25].freeze
+ ```
+
+ Buckets specified on the instance take precedence over the default:
+
+ ```
+ Histogram.default_buckets = [0.005, 0.01, 0.5].freeze
+ buckets = [0.1, 0.2, 0.3]
+ histogram = Histogram.new('test_buckets', 'I have specified buckets', buckets: buckets)
+ histogram.buckets # => [0.1, 0.2, 0.3]
+ ```
+
  ## Transport concerns
 
  Prometheus Exporter handles transport using a simple HTTP protocol. In multi process mode we avoid needing a large number of HTTP requests by using chunked encoding to send metrics. This means that a single HTTP channel can deliver 100s or even 1000s of metrics over a single HTTP session to the `/send-metrics` endpoint. All calls to `send` and `send_json` on the `PrometheusExporter::Client` class are **non-blocking** and batched.
@@ -825,6 +977,61 @@ The `PrometheusExporter::Client` class has the method `#send-json`. This method,
 
  When `PrometheusExporter::Server::Collector` parses your JSON, by default it will use the faster Oj deserializer if available. This happens because it only expects a simple Hash out of the box. You can opt in to the default JSON deserializer with `json_serializer: :json`.
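As a minimal sketch of this transport from the client side (the `type` value and payload key here are illustrative; `send_json` is the documented client method):

```ruby
require 'prometheus_exporter/client'

# non-blocking and batched; delivered via chunked encoding to /send-metrics
PrometheusExporter::Client.default.send_json(
  type: "my_custom",  # routed to the type collector registered for "my_custom"
  value: 42
)
```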
 
+ ## Logging
+
+ `PrometheusExporter::Client.default` will export to `STDERR`. To change this, you can pass your own logger:
+ ```ruby
+ PrometheusExporter::Client.new(logger: Rails.logger)
+ PrometheusExporter::Client.new(logger: Logger.new(STDOUT))
+ ```
+
+ You can also pass a log level (default is [`Logger::WARN`](https://ruby-doc.org/stdlib-3.0.1/libdoc/logger/rdoc/Logger.html)):
+ ```ruby
+ PrometheusExporter::Client.new(log_level: Logger::DEBUG)
+ ```
+
+ ## Docker Usage
+
+ You can run the `prometheus_exporter` project using an official Docker image:
+
+ ```bash
+ docker pull discourse/prometheus_exporter:latest
+ # or use a specific version
+ docker pull discourse/prometheus_exporter:x.x.x
+ ```
+
+ Then start the container:
+
+ ```bash
+ docker run -p 9394:9394 discourse/prometheus_exporter
+ ```
+
+ Additional flags can be included:
+
+ ```
+ docker run -p 9394:9394 discourse/prometheus_exporter --verbose --prefix=myapp
+ ```
+
+ ## Docker/Kubernetes Healthcheck
+
+ A `/ping` endpoint, which only returns `PONG`, is available so you can run container healthchecks:
+
+ Example:
+
+ ```yml
+ services:
+   rails-exporter:
+     command:
+       - bin/prometheus_exporter
+       - -b
+       - 0.0.0.0
+     healthcheck:
+       test: ["CMD", "curl", "--silent", "--show-error", "--fail", "--max-time", "3", "http://0.0.0.0:9394/ping"]
+       timeout: 3s
+       interval: 10s
+       retries: 5
+ ```
+
  ## Contributing
 
  Bug reports and pull requests are welcome on GitHub at https://github.com/discourse/prometheus_exporter. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
data/bin/prometheus_exporter CHANGED
@@ -3,12 +3,15 @@
  require 'optparse'
  require 'json'
+ require 'logger'
 
  require_relative "./../lib/prometheus_exporter"
  require_relative "./../lib/prometheus_exporter/server"
 
  def run
-   options = {}
+   options = {
+     logger_path: STDERR
+   }
    custom_collector_filename = nil
    custom_type_collectors_filenames = []
 
@@ -47,6 +50,9 @@ def run
    opt.on('-v', '--verbose') do |o|
      options[:verbose] = true
    end
+   opt.on('-g', '--histogram', "Use histogram instead of summary for aggregations") do |o|
+     options[:histogram] = true
+   end
    opt.on('--auth FILE', String, "(optional) enable basic authentication using a htpasswd FILE") do |o|
      options[:auth] = o
    end
@@ -61,21 +67,28 @@ def run
    opt.on('--unicorn-master PID_FILE', String, '(optional) PID file of unicorn master process to monitor unicorn') do |o|
      options[:unicorn_pid_file] = o
    end
+
+   opt.on('--logger-path PATH', String, '(optional) Path to file for logger output. Defaults to STDERR') do |o|
+     options[:logger_path] = o
+   end
  end.parse!
 
+ logger = Logger.new(options[:logger_path])
+ logger.level = Logger::WARN
+
  if options.has_key?(:realm) && !options.has_key?(:auth)
-   STDERR.puts "[Warn] Providing REALM without AUTH has no effect"
+   logger.warn "Providing REALM without AUTH has no effect"
  end
 
  if options.has_key?(:auth)
    unless File.exist?(options[:auth]) && File.readable?(options[:auth])
-     STDERR.puts "[Error] The AUTH file either doesn't exist or we don't have access to it"
+     logger.fatal "The AUTH file either doesn't exist or we don't have access to it"
      exit 1
    end
  end
 
  if custom_collector_filename
-   eval File.read(custom_collector_filename), nil, File.expand_path(custom_collector_filename)
+   require File.expand_path(custom_collector_filename)
    found = false
 
    base_klass = PrometheusExporter::Server::CollectorBase
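Given the `--logger-path` option and logger wiring added above, an illustrative invocation that sends logger output to a file instead of STDERR:

```bash
prometheus_exporter --logger-path /var/log/prometheus_exporter.log
```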
@@ -88,14 +101,14 @@ def run
    end
 
    if !found
-     STDERR.puts "[Error] Can not find a class inheriting off PrometheusExporter::Server::CollectorBase"
+     logger.fatal "Can not find a class inheriting off PrometheusExporter::Server::CollectorBase"
      exit 1
    end
  end
 
  if custom_type_collectors_filenames.length > 0
    custom_type_collectors_filenames.each do |t|
-     eval File.read(t), nil, File.expand_path(t)
+     require File.expand_path(t)
    end
 
    ObjectSpace.each_object(Class) do |klass|
data/examples/custom_collector.rb CHANGED
@@ -1,6 +1,6 @@
  # frozen_string_literal: true
 
- class MyCustomCollector < PrometheusExporter::Server::Collector
+ class MyCustomCollector < PrometheusExporter::Server::BaseCollector
    def initialize
      @gauge1 = PrometheusExporter::Metric::Gauge.new("thing1", "I am thing 1")
      @gauge2 = PrometheusExporter::Metric::Gauge.new("thing2", "I am thing 2")
data/gemfiles/ar_70.gemfile ADDED
@@ -0,0 +1,5 @@
+ # This file was generated by Appraisal
+
+ source "https://rubygems.org"
+
+ gemspec path: "../"