prometheus_exporter 0.5.0 → 0.7.0

Files changed (39)
  1. checksums.yaml +4 -4
  2. data/.github/workflows/ci.yml +42 -0
  3. data/.gitignore +2 -0
  4. data/.rubocop.yml +7 -1
  5. data/Appraisals +10 -0
  6. data/CHANGELOG +27 -2
  7. data/README.md +257 -5
  8. data/bin/prometheus_exporter +21 -0
  9. data/gemfiles/.bundle/config +2 -0
  10. data/gemfiles/ar_60.gemfile +5 -0
  11. data/gemfiles/ar_61.gemfile +7 -0
  12. data/lib/prometheus_exporter.rb +2 -0
  13. data/lib/prometheus_exporter/client.rb +32 -4
  14. data/lib/prometheus_exporter/instrumentation.rb +1 -0
  15. data/lib/prometheus_exporter/instrumentation/active_record.rb +16 -7
  16. data/lib/prometheus_exporter/instrumentation/process.rb +2 -0
  17. data/lib/prometheus_exporter/instrumentation/sidekiq.rb +44 -3
  18. data/lib/prometheus_exporter/instrumentation/sidekiq_queue.rb +50 -0
  19. data/lib/prometheus_exporter/metric/base.rb +4 -0
  20. data/lib/prometheus_exporter/metric/counter.rb +4 -0
  21. data/lib/prometheus_exporter/metric/gauge.rb +4 -0
  22. data/lib/prometheus_exporter/metric/histogram.rb +6 -0
  23. data/lib/prometheus_exporter/metric/summary.rb +7 -0
  24. data/lib/prometheus_exporter/middleware.rb +26 -8
  25. data/lib/prometheus_exporter/server.rb +1 -0
  26. data/lib/prometheus_exporter/server/active_record_collector.rb +1 -0
  27. data/lib/prometheus_exporter/server/collector.rb +1 -0
  28. data/lib/prometheus_exporter/server/delayed_job_collector.rb +11 -0
  29. data/lib/prometheus_exporter/server/hutch_collector.rb +6 -0
  30. data/lib/prometheus_exporter/server/runner.rb +24 -2
  31. data/lib/prometheus_exporter/server/shoryuken_collector.rb +8 -0
  32. data/lib/prometheus_exporter/server/sidekiq_collector.rb +11 -2
  33. data/lib/prometheus_exporter/server/sidekiq_queue_collector.rb +46 -0
  34. data/lib/prometheus_exporter/server/web_collector.rb +7 -5
  35. data/lib/prometheus_exporter/server/web_server.rb +29 -17
  36. data/lib/prometheus_exporter/version.rb +1 -1
  37. data/prometheus_exporter.gemspec +9 -5
  38. metadata +65 -17
  39. data/.travis.yml +0 -12
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: c77ede3528c6008266620498f698c8a0e9e17bfa0c0691fedf5a3c4c3bdb8ebd
- data.tar.gz: 62f8b159f68036f05b785fea4eebef48df66606b083136eda5c37e4d3c294132
+ metadata.gz: 2f949348bbafc06b8f7a3dd3927693de8fef54090d85a7889afbfbf7dcb167b6
+ data.tar.gz: adeca8000ebb59c7225c1453135900aeab5c1760902edd5b4fecf9fa4b879a57
  SHA512:
- metadata.gz: 2c0699014753102b514f1bd6fa965cd82f6d83c34084daf4a3b07537b9224295a1df02aedce3082588e44f4c001e71ea83178456bd7efd8a8eae836b2fc6e9a7
- data.tar.gz: df738517c22422ba9e4e2e7cfd6ea6bbcf7542175a827b1632e7ff69af4651d4acf45bcec9693c4dd899b9b305b952fac989999ebea23013f3ddad0d3c396e00
+ metadata.gz: 7d3829ad57f03a19f6081c3444f9b13906aedd4fa1c5b99237d454096e0d993c27c2df0993482c421b6034a58a55c9603620bcdf7698248e4ac23a5063d63718
+ data.tar.gz: fb915716acbbb4f24a152c8ebf9eb35bdc97185a7144eb986e50aafad07a36d5cc83cbbbf7c16e8b399207e9c66d90973d2e6f9ddcdc8a17cab797840d13ed8b
data/.github/workflows/ci.yml ADDED
@@ -0,0 +1,42 @@
+ name: Test Exporter
+
+ on:
+   push:
+   pull_request:
+   schedule:
+     - cron: '0 0 * * 0' # weekly
+
+ jobs:
+   build:
+     runs-on: ubuntu-latest
+     name: Ruby ${{ matrix.ruby }}
+     strategy:
+       matrix:
+         ruby: ["2.7", "2.6", "2.5"]
+     steps:
+       - uses: actions/checkout@master
+         with:
+           fetch-depth: 1
+       - uses: actions/setup-ruby@v1
+         with:
+           ruby-version: ${{ matrix.ruby }}
+       - uses: actions/cache@v2
+         with:
+           path: vendor/bundle
+           key: ${{ runner.os }}-${{ matrix.ruby }}-gems-v2-${{ hashFiles('**/Gemfile.lock') }}
+           restore-keys: |
+             ${{ runner.os }}-${{ matrix.ruby }}-gems-v2-
+       - name: Setup gems
+         run: |
+           gem install bundler
+           # for Ruby <= 2.6 , details https://github.com/rubygems/rubygems/issues/3284
+           gem update --system 3.0.8 && gem update --system
+           bundle config path vendor/bundle
+           bundle install --jobs 4
+           bundle exec appraisal install
+       - name: Rubocop
+         run: bundle exec rubocop
+       - name: install gems
+         run: bundle exec appraisal bundle
+       - name: Run tests
+         run: bundle exec appraisal rake
data/.gitignore CHANGED
@@ -7,5 +7,7 @@
  /spec/reports/
  /tmp/
  Gemfile.lock
+ /gemfiles/*.gemfile.lock
+
 
  .rubocop-https---raw-githubusercontent-com-discourse-discourse-master--rubocop-yml
data/.rubocop.yml CHANGED
@@ -1 +1,7 @@
- inherit_from: https://raw.githubusercontent.com/discourse/discourse/master/.rubocop.yml
+ inherit_gem:
+   rubocop-discourse: default.yml
+
+ AllCops:
+   Exclude:
+     - 'gemfiles/**/*'
+     - 'vendor/**/*'
data/Appraisals ADDED
@@ -0,0 +1,10 @@
+ # frozen_string_literal: true
+
+ appraise "ar-60" do
+   # we are using this version as default in gemspec
+   # gem "activerecord", "~> 6.0.0"
+ end
+
+ appraise "ar-61" do
+   gem "activerecord", "~> 6.1.0.rc2"
+ end
data/CHANGELOG CHANGED
@@ -1,9 +1,34 @@
- 0.5.0 - 14-02-2019
+ 0.7.0 - 29-12-2020
+
+ - Dev: Removed support for EOL rubies, only 2.5, 2.6, 2.7 and 3.0 are supported now.
+ - Dev: Better support for Ruby 3.0, explicitly depending on webrick
+ - Dev: Rails 6.1 instrumentation support
+ - FEATURE: clean pattern for overriding middleware labels was introduced (in README)
+ - Fix: Better support for forking
+
+ 0.6.0 - 17-11-2020
+
+ - FEATURE: add support for basic-auth in the prometheus_exporter web server
+
+ 0.5.3 - 29-07-2020
+
+ - FEATURE: added #remove to all metric types so users can remove specific labels if needed
+
+ 0.5.2 - 01-07-2020
+
+ - FEATURE: expanded instrumentation for sidekiq
+ - FEATURE: configurable default labels
+
+ 0.5.1 - 25-02-2020
+
+ - FEATURE: Allow configuring the default client's host and port via environment variables
+
+ 0.5.0 - 14-02-2020
 
  - Breaking change: listen only to localhost by default to prevent unintended insecure configuration
  - FIX: Avoid calling `hostname` aggressively, instead cache it on the exporter instance
 
- 0.4.17 - 13-01-2019
+ 0.4.17 - 13-01-2020
 
  - FEATURE: add support for `to_h` on all metrics which can be used to query existing key/values
 
data/README.md CHANGED
@@ -24,6 +24,7 @@ To learn more see [Instrumenting Rails with Prometheus](https://samsaffron.com/a
  * [GraphQL support](#graphql-support)
  * [Metrics default prefix / labels](#metrics-default-prefix--labels)
  * [Client default labels](#client-default-labels)
+ * [Client default host](#client-default-host)
  * [Transport concerns](#transport-concerns)
  * [JSON generation and parsing](#json-generation-and-parsing)
  * [Contributing](#contributing)
@@ -32,7 +33,7 @@ To learn more see [Instrumenting Rails with Prometheus](https://samsaffron.com/a
 
  ## Requirements
 
- Minimum Ruby of version 2.3.0 is required, Ruby 2.2.0 is EOL as of 2018-03-31
+ Minimum Ruby of version 2.5.0 is required, Ruby 2.4.0 is EOL as of 2020-04-05
 
  ## Installation
 
@@ -189,13 +190,71 @@ Ensure you run the exporter in a monitored background process:
  $ bundle exec prometheus_exporter
  ```
 
+ #### Metrics collected by Rails integration middleware
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `http_requests_total` | Total HTTP requests from web app |
+ | Summary | `http_duration_seconds` | Time spent in HTTP reqs in seconds |
+ | Summary | `http_redis_duration_seconds`¹ | Time spent in HTTP reqs in Redis, in seconds |
+ | Summary | `http_sql_duration_seconds`² | Time spent in HTTP reqs in SQL in seconds |
+ | Summary | `http_queue_duration_seconds`³ | Time spent queueing the request in load balancer in seconds |
+
+ All metrics have a `controller` and an `action` label.
+ `http_requests_total` additionally has a (HTTP response) `status` label.
+
+ To add your own labels to the default metrics, create a subclass of `PrometheusExporter::Middleware`, override `custom_labels`, and use it in your initializer.
+ ```ruby
+ class MyMiddleware < PrometheusExporter::Middleware
+   def custom_labels(env)
+     labels = {}
+
+     if env['HTTP_X_PLATFORM']
+       labels['platform'] = env['HTTP_X_PLATFORM']
+     end
+
+     labels
+   end
+ end
+ ```
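The hunk above shows the subclass but not how it is wired in. A minimal initializer sketch, assuming a standard Rails app and following the `middleware.unshift` pattern the README uses for the stock middleware (`MyMiddleware` is the hypothetical subclass from the snippet above):

```ruby
# config/initializers/prometheus.rb -- illustrative sketch, not part of this diff
unless Rails.env.test?
  require 'prometheus_exporter/middleware'

  # Insert the subclass instead of the stock middleware so the
  # custom_labels override is applied to every instrumented request.
  Rails.application.middleware.unshift MyMiddleware
end
```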
+
+ If you're not using a Rails-like framework, you can override `PrometheusExporter::Middleware#default_labels` to add more relevant labels.
+ For example, you can mimic [prometheus-client](https://github.com/prometheus/client_ruby) labels with code like this:
+ ```ruby
+ class MyMiddleware < PrometheusExporter::Middleware
+   def default_labels(env, result)
+     status = (result && result[0]) || -1
+     path = [env["SCRIPT_NAME"], env["PATH_INFO"]].join
+     {
+       path: strip_ids_from_path(path),
+       method: env["REQUEST_METHOD"],
+       status: status
+     }
+   end
+
+   def strip_ids_from_path(path)
+     path
+       .gsub(%r{/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(/|$)}, '/:uuid\\1')
+       .gsub(%r{/\d+(/|$)}, '/:id\\1')
+   end
+ end
+ ```
+ That way you won't have all metrics labeled with `controller=other` and `action=other`, but will have labels such as
+ ```
+ ruby_http_duration_seconds{path="/api/v1/teams/:id",method="GET",status="200",quantile="0.99"} 0.009880661998977303
+ ```
+
+ ¹) Only available when Redis is used.
+ ²) Only available when MySQL or PostgreSQL are used.
+ ³) Only available when [Instrumenting Request Queueing Time](#instrumenting-request-queueing-time) is set up.
+
  #### Activerecord Connection Pool Metrics
 
  This collects activerecord connection pool metrics.
 
  It supports injection of custom labels and the connection config options (`username`, `database`, `host`, `port`) as labels.
 
- For Puma single mode
+ For Puma single mode
  ```ruby
  #in puma.rb
  require 'prometheus_exporter/instrumentation'
@@ -243,6 +302,19 @@ Sidekiq.configure_server do |config|
  end
  ```
 
+ ##### Metrics collected by ActiveRecord Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `active_record_connection_pool_connections` | Total connections in pool |
+ | Gauge | `active_record_connection_pool_busy` | Connections in use in pool |
+ | Gauge | `active_record_connection_pool_dead` | Dead connections in pool |
+ | Gauge | `active_record_connection_pool_idle` | Idle connections in pool |
+ | Gauge | `active_record_connection_pool_waiting` | Connection requests waiting |
+ | Gauge | `active_record_connection_pool_size` | Maximum allowed connection pool size |
+
+ All metrics collected by the ActiveRecord integration include at least the following labels: `pid` (of the process the stats were collected in), `pool_name`, any labels included in the `config_labels` option (prefixed with `dbconfig_`, example: `dbconfig_host`), and all custom labels provided with the `custom_labels` option.
+
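The `config_labels` and `custom_labels` options referenced above are not exercised in the lines shown here. A minimal sketch of passing them (the label names and values are illustrative):

```ruby
# e.g. in puma.rb or an initializer -- illustrative sketch, not part of this diff
require 'prometheus_exporter/instrumentation'

PrometheusExporter::Instrumentation::ActiveRecord.start(
  custom_labels: { type: "web" },      # added verbatim to every pool metric
  config_labels: [:database, :host]    # emitted as dbconfig_database / dbconfig_host
)
```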
  #### Per-process stats
 
  You may also be interested in per-process stats. This collects memory and GC stats:
@@ -259,11 +331,30 @@ end
  # in unicorn/puma/passenger be sure to run a new process instrumenter after fork
  after_fork do
    require 'prometheus_exporter/instrumentation'
-   PrometheusExporter::Instrumentation::Process.start(type:"web")
+   PrometheusExporter::Instrumentation::Process.start(type: "web")
  end
 
  ```
 
+ ##### Metrics collected by Process Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `heap_free_slots` | Free ruby heap slots |
+ | Gauge | `heap_live_slots` | Used ruby heap slots |
+ | Gauge | `v8_heap_size`* | Total JavaScript V8 heap size (bytes) |
+ | Gauge | `v8_used_heap_size`* | Total used JavaScript V8 heap size (bytes) |
+ | Gauge | `v8_physical_size`* | Physical size consumed by V8 heaps |
+ | Gauge | `v8_heap_count`* | Number of V8 contexts running |
+ | Gauge | `rss` | Total RSS used by process |
+ | Counter | `major_gc_ops_total` | Major GC operations by process |
+ | Counter | `minor_gc_ops_total` | Minor GC operations by process |
+ | Counter | `allocated_objects_total` | Total number of allocated objects by process |
+
+ _Metrics marked with * are only collected when `MiniRacer` is defined._
+
+ Metrics collected by Process instrumentation include labels `type` (as given with the `type` option), `pid` (of the process the stats were collected in), and any custom labels given to `Process.start` with the `labels` option.
+
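The `labels` option mentioned in the paragraph above does not appear in this hunk; a short sketch of how it might be supplied (the extra label is invented):

```ruby
# after fork, in each monitored process -- illustrative sketch, not part of this diff
require 'prometheus_exporter/instrumentation'

PrometheusExporter::Instrumentation::Process.start(
  type: "web",                   # reported as the `type` label
  labels: { pool: "critical" }   # hypothetical custom label on every per-process metric
)
```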
  #### Sidekiq metrics
 
  Including Sidekiq metrics (how many jobs ran? how many failed? how long did they take? how many are dead? how many were restarted?)
@@ -278,6 +369,17 @@ Sidekiq.configure_server do |config|
  end
  ```
 
+ To monitor Queue size and latency:
+
+ ```ruby
+ Sidekiq.configure_server do |config|
+   config.on :startup do
+     require 'prometheus_exporter/instrumentation'
+     PrometheusExporter::Instrumentation::SidekiqQueue.start
+   end
+ end
+ ```
+
  To monitor Sidekiq process info:
 
  ```ruby
@@ -299,6 +401,35 @@ Sometimes the Sidekiq server shuts down before it can send metrics, that were ge
  end
  ```
 
+ ##### Metrics collected by Sidekiq Instrumentation
+
+ **PrometheusExporter::Instrumentation::Sidekiq**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Summary | `sidekiq_job_duration_seconds` | Time spent in sidekiq jobs |
+ | Counter | `sidekiq_jobs_total` | Total number of sidekiq jobs executed |
+ | Counter | `sidekiq_restarted_jobs_total` | Total number of sidekiq jobs that we restarted because of a sidekiq shutdown |
+ | Counter | `sidekiq_failed_jobs_total` | Total number of failed sidekiq jobs |
+
+ All metrics have a `job_name` label and a `queue` label.
+
+ **PrometheusExporter::Instrumentation::Sidekiq.death_handler**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `sidekiq_dead_jobs_total` | Total number of dead sidekiq jobs |
+
+ This metric has a `job_name` label and a `queue` label.
+
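The death handler that produces `sidekiq_dead_jobs_total` is registered outside the lines shown here. A minimal sketch of the registration, following the Sidekiq setup pattern the README documents:

```ruby
# config/initializers/sidekiq.rb -- illustrative sketch, not part of this diff
Sidekiq.configure_server do |config|
  require 'prometheus_exporter/instrumentation'

  # Counts jobs that exhaust their retries and are moved to the dead set.
  config.death_handlers << PrometheusExporter::Instrumentation::Sidekiq.death_handler
end
```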
+ **PrometheusExporter::Instrumentation::SidekiqQueue**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `sidekiq_queue_backlog_total` | Size of the sidekiq queue |
+ | Gauge | `sidekiq_queue_latency_seconds` | Latency of the sidekiq queue |
+
+ Both metrics will have a `queue` label with the name of the queue.
+
+ _See [Metrics collected by Process Instrumentation](#metrics-collected-by-process-instrumentation) for a list of metrics the Process instrumentation will produce._
+
  #### Shoryuken metrics
 
  For Shoryuken metrics (how many jobs ran? how many failed? how long did they take? how many were restarted?)
@@ -312,6 +443,17 @@ Shoryuken.configure_server do |config|
  end
  ```
 
+ ##### Metrics collected by Shoryuken Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `shoryuken_job_duration_seconds` | Total time spent in shoryuken jobs |
+ | Counter | `shoryuken_jobs_total` | Total number of shoryuken jobs executed |
+ | Counter | `shoryuken_restarted_jobs_total` | Total number of shoryuken jobs that we restarted because of a shoryuken shutdown |
+ | Counter | `shoryuken_failed_jobs_total` | Total number of failed shoryuken jobs |
+
+ All metrics have labels for `job_name` and `queue_name`.
+
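The Shoryuken setup these metrics come from is only partially visible in this hunk. A minimal sketch of the server-middleware registration (assuming the documented `PrometheusExporter::Instrumentation::Shoryuken` middleware):

```ruby
# shoryuken initializer -- illustrative sketch, not part of this diff
Shoryuken.configure_server do |config|
  config.server_middleware do |chain|
    require 'prometheus_exporter/instrumentation'
    # Reports duration, success and failure for every processed job.
    chain.add PrometheusExporter::Instrumentation::Shoryuken
  end
end
```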
  #### Delayed Job plugin
 
  In an initializer:
@@ -323,6 +465,19 @@ unless Rails.env == "test"
  end
  ```
 
+ ##### Metrics collected by Delayed Job Instrumentation
+
+ | Type | Name | Description | Labels |
+ | --- | --- | --- | --- |
+ | Counter | `delayed_job_duration_seconds` | Total time spent in delayed jobs | `job_name` |
+ | Counter | `delayed_jobs_total` | Total number of delayed jobs executed | `job_name` |
+ | Gauge | `delayed_jobs_enqueued` | Number of enqueued delayed jobs | - |
+ | Gauge | `delayed_jobs_pending` | Number of pending delayed jobs | - |
+ | Counter | `delayed_failed_jobs_total` | Total number of failed delayed jobs | `job_name` |
+ | Counter | `delayed_jobs_max_attempts_reached_total` | Total number of delayed jobs that reached max attempts | - |
+ | Summary | `delayed_job_duration_seconds_summary` | Summary of the time it takes jobs to execute | `status` |
+ | Summary | `delayed_job_attempts_summary` | Summary of the number of attempts it takes delayed jobs to succeed | - |
+
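Only a fragment of the Delayed Job initializer is visible in the surrounding context. A minimal sketch of the full registration (assuming the documented `register_plugin` call):

```ruby
# config/initializers/prometheus_delayed_job.rb -- illustrative sketch, not part of this diff
unless Rails.env == "test"
  require 'prometheus_exporter/instrumentation'
  # Hooks into Delayed Job's plugin API to report the metrics listed above.
  PrometheusExporter::Instrumentation::DelayedJob.register_plugin
end
```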
  #### Hutch Message Processing Tracer
 
  Capture [Hutch](https://github.com/gocardless/hutch) metrics (how many jobs ran? how many failed? how long did they take?)
@@ -334,6 +489,16 @@ unless Rails.env == "test"
  end
  ```
 
+ ##### Metrics collected by Hutch Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `hutch_job_duration_seconds` | Total time spent in hutch jobs |
+ | Counter | `hutch_jobs_total` | Total number of hutch jobs executed |
+ | Counter | `hutch_failed_jobs_total` | Total number of failed hutch jobs |
+
+ All metrics have a `job_name` label.
+
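The Hutch setup itself sits outside the lines shown. A minimal sketch of hooking the tracer in, assuming the `Hutch::Config` tracer setting the README describes elsewhere (treat this as an illustration, not the gem's canonical snippet):

```ruby
# hutch initializer -- illustrative sketch, not part of this diff
unless Rails.env == "test"
  require 'prometheus_exporter/instrumentation'
  # Hutch invokes the configured tracer around each consumed message.
  Hutch::Config.set(:tracer, PrometheusExporter::Instrumentation::Hutch)
end
```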
  #### Instrumenting Request Queueing Time
 
  Request Queueing is defined as the time it takes for a request to reach your application (instrumented by this `prometheus_exporter`) from farther upstream (as your load balancer). A high queueing time usually means that your backend cannot handle all the incoming requests in time, so they queue up (= you should see if you need to add more capacity).
@@ -358,6 +523,20 @@ after_worker_boot do
  end
  ```
 
+ #### Metrics collected by Puma Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `puma_workers_total` | Number of puma workers |
+ | Gauge | `puma_booted_workers_total` | Number of puma workers booted |
+ | Gauge | `puma_old_workers_total` | Number of old puma workers |
+ | Gauge | `puma_running_threads_total` | Number of puma threads currently running |
+ | Gauge | `puma_request_backlog_total` | Number of requests waiting to be processed by a puma thread |
+ | Gauge | `puma_thread_pool_capacity_total` | Number of puma threads available at current scale |
+ | Gauge | `puma_max_threads_total` | Number of puma threads available at max scale |
+
+ All metrics may have a `phase` label.
+
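The puma.rb setup that feeds these gauges is only partially shown above. A minimal sketch (assuming the documented `Puma.start` instrumentation):

```ruby
# puma.rb -- illustrative sketch, not part of this diff
after_worker_boot do
  require 'prometheus_exporter/instrumentation'
  # Polls Puma.stats and reports the worker/thread/backlog gauges listed above.
  PrometheusExporter::Instrumentation::Puma.start
end
```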
  ### Unicorn process metrics
 
  In order to gather metrics from unicorn processes, we use `rainbows`, which exposes `Rainbows::Linux.tcp_listener_stats` to gather information about active workers and queued requests. To start monitoring your unicorn processes, you'll need to know both the path to unicorn PID file and the listen address (`pid_file` and `listen` in your unicorn config file)
@@ -373,6 +552,14 @@ prometheus_exporter --unicorn-master /var/run/unicorn.pid --unicorn-listen-addre
 
  Note: You must install the `raindrops` gem in your `Gemfile` or locally.
 
+ #### Metrics collected by Unicorn Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `unicorn_workers_total` | Number of unicorn workers |
+ | Gauge | `unicorn_active_workers_total` | Number of active unicorn workers |
+ | Gauge | `unicorn_request_backlog_total` | Number of requests waiting to be processed by a unicorn worker |
+
  ### Custom type collectors
 
  In some cases you may have custom metrics you want to ship to the collector in a batch. In this case you may still be interested in the base collector behavior, but would like to add your own special messages.
@@ -431,8 +618,8 @@ Then you can collect the metrics you need on demand:
 
  ```ruby
  def metrics
-   user_count_gague = PrometheusExporter::Metric::Gauge.new('user_count', 'number of users in the app')
-   user_count_gague.observe User.count
+   user_count_gauge = PrometheusExporter::Metric::Gauge.new('user_count', 'number of users in the app')
+   user_count_gauge.observe User.count
    [user_count_gauge]
  end
  ```
@@ -541,6 +728,68 @@ ruby_web_requests{hostname="app-server-01",route="test/route"} 1
  ruby_web_requests{hostname="app-server-01"} 1
  ```
 
+ ### Exporter Process Configuration
+
+ When running the process for `prometheus_exporter` using `bin/prometheus_exporter`, several configuration
+ options can be passed in:
+
+ ```
+ Usage: prometheus_exporter [options]
+     -p, --port INTEGER               Port exporter should listen on (default: 9394)
+     -b, --bind STRING                IP address exporter should listen on (default: localhost)
+     -t, --timeout INTEGER            Timeout in seconds for metrics endpoint (default: 2)
+         --prefix METRIC_PREFIX       Prefix to apply to all metrics (default: ruby_)
+         --label METRIC_LABEL         Label to apply to all metrics (default: {})
+     -c, --collector FILE             (optional) Custom collector to run
+     -a, --type-collector FILE        (optional) Custom type collectors to run in main collector
+     -v, --verbose
+         --auth FILE                  (optional) enable basic authentication using a htpasswd FILE
+         --realm REALM                (optional) Use REALM for basic authentication (default: "Prometheus Exporter")
+         --unicorn-listen-address ADDRESS
+                                      (optional) Address where unicorn listens on (unix or TCP address)
+         --unicorn-master PID_FILE    (optional) PID file of unicorn master process to monitor unicorn
+ ```
+
+ #### Example
+
+ The following will run the process with:
+ - Port `8080` (default `9394`)
+ - Bind to `0.0.0.0` (default `localhost`)
+ - Timeout in `1 second` for metrics endpoint (default `2 seconds`)
+ - Metric prefix as `foo_` (default `ruby_`)
+ - Default labels as `{environment: "integration", foo: "bar"}`
+
+ ```bash
+ prometheus_exporter -p 8080 \
+   -b 0.0.0.0 \
+   -t 1 \
+   --label '{"environment": "integration", "foo": "bar"}' \
+   --prefix 'foo_'
+ ```
+
+ #### Enabling Basic Authentication
+
+ If you desire authentication on your `/metrics` route, you can enable basic authentication with the `--auth` option.
+
+ ```
+ $ prometheus_exporter --auth my-htpasswd-file
+ ```
+
+ Additionally, the `--realm` option may be used to provide a customized realm for the challenge request.
+
+ Notes:
+
+ * You will need to create a `htpasswd` formatted file beforehand which contains one or more user:password entries
+ * Only the basic `crypt` encryption is currently supported
+
+ A simple `htpasswd` file can be created with the Apache `htpasswd` utility; e.g.:
+
+ ```
+ $ htpasswd -cdb my-htpasswd-file my-user my-unencrypted-password
+ ```
+
+ This will create a file named `my-htpasswd-file` which is suitable for use with the `--auth` option.
+
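Once basic authentication is enabled, scrapes and manual checks must send credentials. For example, using the user created above and the default port (an illustrative check, not part of this diff):

```
$ curl -u my-user:my-unencrypted-password http://localhost:9394/metrics
```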
  ### Client default labels
 
  You can specify a default label for instrumentation metrics sent by a specific client. For example:
@@ -560,6 +809,9 @@ Will result in:
  http_requests_total{controller="home","action"="index",service="app-server-01",app_name="app-01"} 2
  http_requests_total{service="app-server-01",app_name="app-01"} 1
  ```
+ ### Client default host
+
+ By default, `PrometheusExporter::Client.default` connects to `localhost:9394`. If your setup requires a different host or port (e.g. when using `docker-compose`), you can change the default host and port by setting the environment variables `PROMETHEUS_EXPORTER_HOST` and `PROMETHEUS_EXPORTER_PORT`.
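For example, pointing the default client at a collector running elsewhere could look like this (the hostname is invented for illustration; only the two environment variable names come from the text above):

```
$ PROMETHEUS_EXPORTER_HOST=prometheus-exporter PROMETHEUS_EXPORTER_PORT=9394 bundle exec rails server
```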
 
  ## Transport concerns