prometheus_exporter 0.4.17 → 0.6.0

Files changed (35)
  1. checksums.yaml +4 -4
  2. data/.github/workflows/ci.yml +36 -0
  3. data/.rubocop.yml +2 -1
  4. data/CHANGELOG +23 -1
  5. data/README.md +248 -7
  6. data/bin/prometheus_exporter +28 -1
  7. data/lib/prometheus_exporter.rb +14 -0
  8. data/lib/prometheus_exporter/client.rb +31 -3
  9. data/lib/prometheus_exporter/instrumentation.rb +2 -0
  10. data/lib/prometheus_exporter/instrumentation/active_record.rb +2 -13
  11. data/lib/prometheus_exporter/instrumentation/process.rb +3 -12
  12. data/lib/prometheus_exporter/instrumentation/shoryuken.rb +31 -0
  13. data/lib/prometheus_exporter/instrumentation/sidekiq.rb +44 -3
  14. data/lib/prometheus_exporter/instrumentation/sidekiq_queue.rb +50 -0
  15. data/lib/prometheus_exporter/metric/base.rb +4 -0
  16. data/lib/prometheus_exporter/metric/counter.rb +4 -0
  17. data/lib/prometheus_exporter/metric/gauge.rb +4 -0
  18. data/lib/prometheus_exporter/metric/histogram.rb +6 -0
  19. data/lib/prometheus_exporter/metric/summary.rb +7 -0
  20. data/lib/prometheus_exporter/middleware.rb +13 -2
  21. data/lib/prometheus_exporter/server.rb +2 -0
  22. data/lib/prometheus_exporter/server/active_record_collector.rb +1 -0
  23. data/lib/prometheus_exporter/server/collector.rb +2 -0
  24. data/lib/prometheus_exporter/server/delayed_job_collector.rb +11 -0
  25. data/lib/prometheus_exporter/server/hutch_collector.rb +6 -0
  26. data/lib/prometheus_exporter/server/runner.rb +26 -27
  27. data/lib/prometheus_exporter/server/shoryuken_collector.rb +67 -0
  28. data/lib/prometheus_exporter/server/sidekiq_collector.rb +11 -2
  29. data/lib/prometheus_exporter/server/sidekiq_queue_collector.rb +46 -0
  30. data/lib/prometheus_exporter/server/web_collector.rb +5 -0
  31. data/lib/prometheus_exporter/server/web_server.rb +29 -16
  32. data/lib/prometheus_exporter/version.rb +1 -1
  33. data/prometheus_exporter.gemspec +16 -14
  34. metadata +17 -12
  35. data/.travis.yml +0 -12
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: ca59b198973bbfb5df6e5204f7a74138df05759f6a44839a60eaabd1d0d58697
- data.tar.gz: 384a5a138d1eb8c24ccc47984d6f6437bcced89f058cc55460bf3dcb60f326fb
+ metadata.gz: f516c39448418a2216851149f0479f3f9e0f5feadae5a5468e7953fc0551d318
+ data.tar.gz: dcc937b79e05d4cd74a64ab2135425c75c4bba1be053bec99e8db0695a3ac998
  SHA512:
- metadata.gz: 7406bf4ed7f65ee3dd243ac01294737292b721b9cd04f34a93b84200ad3259bc18ee8326aa8f5bea438a6ada13dab42453dd380e0fd2d34008378ecd8f581504
- data.tar.gz: 8834124a60121673f01023a2e564b3afb55f3977d2d0214cccc2b7fb201788c2b15650992c49402180e6468a7676278ce963a0bded8f2fcf6ccc9984937a9199
+ metadata.gz: 5664f24c1c4a1520bafe789e5df8a9f1c40b118933cab9b73baa953f633ec6e81c3a11bbc9fc63097a2a2acdd2390daeade9449e6451cc83207fc2162575b359
+ data.tar.gz: ca4aeedbc6e211818569257e8a3e9fde93be363a2da246abae45749bc9cc232052ac52f0850cd54d0f600904bed193a10db1f76e4fc9567b837b20a78264158b
data/.github/workflows/ci.yml ADDED
@@ -0,0 +1,36 @@
+ name: Test Exporter
+
+ on:
+   push:
+   pull_request:
+   schedule:
+     - cron: '0 0 * * 0' # weekly
+
+ jobs:
+   build:
+     runs-on: ubuntu-latest
+     name: Ruby ${{ matrix.ruby }}
+     strategy:
+       matrix:
+         ruby: ["2.7", "2.6", "2.5", "2.4"]
+     steps:
+       - uses: actions/checkout@master
+         with:
+           fetch-depth: 1
+       - uses: actions/setup-ruby@v1
+         with:
+           ruby-version: ${{ matrix.ruby }}
+       - uses: actions/cache@v2
+         with:
+           path: vendor/bundle
+           key: ${{ runner.os }}-${{ matrix.ruby }}-gems-${{ hashFiles('**/Gemfile.lock') }}
+           restore-keys: |
+             ${{ runner.os }}-${{ matrix.ruby }}-gems-
+       - name: Setup gems
+         run: |
+           bundle config path vendor/bundle
+           bundle install --jobs 4
+       - name: Rubocop
+         run: bundle exec rubocop
+       - name: Run tests
+         run: bundle exec rake
data/.rubocop.yml CHANGED
@@ -1 +1,2 @@
- inherit_from: https://raw.githubusercontent.com/discourse/discourse/master/.rubocop.yml
+ inherit_gem:
+   rubocop-discourse: default.yml
data/CHANGELOG CHANGED
@@ -1,4 +1,26 @@
- 0.4.17 - 13-01-2019
+ 0.6.0 - 10-11-2020
+
+ - FEATURE: add support for basic-auth in the prometheus_exporter web server
+
+ 0.5.3 - 29-07-2020
+
+ - FEATURE: added #remove to all metric types so users can remove specific labels if needed
+
+ 0.5.2 - 01-07-2020
+
+ - FEATURE: expanded instrumentation for sidekiq
+ - FEATURE: configurable default labels
+
+ 0.5.1 - 25-02-2020
+
+ - FEATURE: Allow configuring the default client's host and port via environment variables
+
+ 0.5.0 - 14-02-2020
+
+ - Breaking change: listen only to localhost by default to prevent unintended insecure configuration
+ - FIX: Avoid calling `hostname` aggressively, instead cache it on the exporter instance
+
+ 0.4.17 - 13-01-2020
 
  - FEATURE: add support for `to_h` on all metrics which can be used to query existing key/values
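The 0.5.3 entry above introduces `#remove` on all metric types. A minimal sketch of dropping a labelled series; the metric name and label values here are illustrative, not from the gem:

```ruby
require 'prometheus_exporter/metric'

gauge = PrometheusExporter::Metric::Gauge.new('deploys_active', 'active deploys per region')
gauge.observe(3, region: 'us-east-1')
gauge.observe(1, region: 'eu-west-1')

# stop reporting the eu-west-1 series; other labelled series are untouched
gauge.remove(region: 'eu-west-1')
```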
 
data/README.md CHANGED
@@ -13,6 +13,7 @@ To learn more see [Instrumenting Rails with Prometheus](https://samsaffron.com/a
  * [Rails integration](#rails-integration)
  * [Per-process stats](#per-process-stats)
  * [Sidekiq metrics](#sidekiq-metrics)
+ * [Shoryuken metrics](#shoryuken-metrics)
  * [ActiveRecord Connection Pool Metrics](#activerecord-connection-pool-metrics)
  * [Delayed Job plugin](#delayed-job-plugin)
  * [Hutch metrics](#hutch-message-processing-tracer)
@@ -23,6 +24,7 @@ To learn more see [Instrumenting Rails with Prometheus](https://samsaffron.com/a
  * [GraphQL support](#graphql-support)
  * [Metrics default prefix / labels](#metrics-default-prefix--labels)
  * [Client default labels](#client-default-labels)
+ * [Client default host](#client-default-host)
  * [Transport concerns](#transport-concerns)
  * [JSON generation and parsing](#json-generation-and-parsing)
  * [Contributing](#contributing)
@@ -62,8 +64,9 @@ require 'prometheus_exporter/server'
  require 'prometheus_exporter/client'
  require 'prometheus_exporter/instrumentation'
 
+ # bind is the address on which the webserver will listen
  # port is the port that will provide the /metrics route
- server = PrometheusExporter::Server::WebServer.new port: 12345
+ server = PrometheusExporter::Server::WebServer.new bind: 'localhost', port: 12345
  server.start
 
  # wire up a default local client
@@ -115,7 +118,7 @@ In some cases (for example, unicorn or puma clusters) you may want to aggregate
 
  Simplest way to achieve this is to use the built-in collector.
 
- First, run an exporter on your desired port (we use the default port of 9394):
+ First, run an exporter on your desired port (here we use the defaults: bind to localhost and port 9394):
 
  ```
  $ prometheus_exporter
@@ -187,13 +190,45 @@ Ensure you run the exporter in a monitored background process:
  $ bundle exec prometheus_exporter
  ```
 
+ #### Metrics collected by Rails integration middleware
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `http_requests_total` | Total HTTP requests from web app |
+ | Summary | `http_duration_seconds` | Time spent in HTTP reqs in seconds |
+ | Summary | `http_redis_duration_seconds`¹ | Time spent in HTTP reqs in Redis, in seconds |
+ | Summary | `http_sql_duration_seconds`² | Time spent in HTTP reqs in SQL, in seconds |
+ | Summary | `http_queue_duration_seconds`³ | Time spent queueing the request in the load balancer, in seconds |
+
+ All metrics have a `controller` and an `action` label.
+ `http_requests_total` additionally has an (HTTP response) `status` label.
+
+ To add your own labels to the default metrics, create a subclass of `PrometheusExporter::Middleware`, override `custom_labels`, and use it in your initializer.
+ ```ruby
+ class MyMiddleware < PrometheusExporter::Middleware
+   def custom_labels(env)
+     labels = {}
+
+     if env['HTTP_X_PLATFORM']
+       labels['platform'] = env['HTTP_X_PLATFORM']
+     end
+
+     labels
+   end
+ end
+ ```
+
+ ¹) Only available when Redis is used.
+ ²) Only available when MySQL or PostgreSQL are used.
+ ³) Only available when [Instrumenting Request Queueing Time](#instrumenting-request-queueing-time) is set up.
+
  #### Activerecord Connection Pool Metrics
 
  This collects activerecord connection pool metrics.
 
  It supports injection of custom labels and the connection config options (`username`, `database`, `host`, `port`) as labels.
 
- For Puma single mode
+ For Puma single mode
  ```ruby
  #in puma.rb
  require 'prometheus_exporter/instrumentation'
@@ -219,7 +254,7 @@ end
  For Unicorn / Passenger
 
  ```ruby
- after_fork do
+ after_fork do |_server, _worker|
    require 'prometheus_exporter/instrumentation'
    PrometheusExporter::Instrumentation::ActiveRecord.start(
      custom_labels: { type: "unicorn_worker" }, #optional params
@@ -241,6 +276,19 @@ Sidekiq.configure_server do |config|
  end
  ```
 
+ ##### Metrics collected by ActiveRecord Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `active_record_connection_pool_connections` | Total connections in pool |
+ | Gauge | `active_record_connection_pool_busy` | Connections in use in pool |
+ | Gauge | `active_record_connection_pool_dead` | Dead connections in pool |
+ | Gauge | `active_record_connection_pool_idle` | Idle connections in pool |
+ | Gauge | `active_record_connection_pool_waiting` | Connection requests waiting |
+ | Gauge | `active_record_connection_pool_size` | Maximum allowed connection pool size |
+
+ All metrics collected by the ActiveRecord integration include at least the following labels: `pid` (of the process the stats were collected in), `pool_name`, any labels included in the `config_labels` option (prefixed with `dbconfig_`, example: `dbconfig_host`), and all custom labels provided with the `custom_labels` option.
+
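As an aside on the `config_labels` and `custom_labels` options referenced above, a minimal sketch of starting the ActiveRecord instrumentation with both (the values shown are illustrative):

```ruby
# in an initializer, or inside after_fork for forking servers
require 'prometheus_exporter/instrumentation'

PrometheusExporter::Instrumentation::ActiveRecord.start(
  custom_labels: { type: "web" },      # added verbatim to every connection pool metric
  config_labels: [:database, :host]    # exposed as dbconfig_database, dbconfig_host
)
```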
  #### Per-process stats
 
  You may also be interested in per-process stats. This collects memory and GC stats:
@@ -257,11 +305,30 @@ end
  # in unicorn/puma/passenger be sure to run a new process instrumenter after fork
  after_fork do
    require 'prometheus_exporter/instrumentation'
-   PrometheusExporter::Instrumentation::Process.start(type:"web")
+   PrometheusExporter::Instrumentation::Process.start(type: "web")
  end
 
  ```
 
+ ##### Metrics collected by Process Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `heap_free_slots` | Free ruby heap slots |
+ | Gauge | `heap_live_slots` | Used ruby heap slots |
+ | Gauge | `v8_heap_size`* | Total JavaScript V8 heap size (bytes) |
+ | Gauge | `v8_used_heap_size`* | Total used JavaScript V8 heap size (bytes) |
+ | Gauge | `v8_physical_size`* | Physical size consumed by V8 heaps |
+ | Gauge | `v8_heap_count`* | Number of V8 contexts running |
+ | Gauge | `rss` | Total RSS used by process |
+ | Counter | `major_gc_ops_total` | Major GC operations by process |
+ | Counter | `minor_gc_ops_total` | Minor GC operations by process |
+ | Counter | `allocated_objects_total` | Total number of allocated objects by process |
+
+ _Metrics marked with * are only collected when `MiniRacer` is defined._
+
+ Metrics collected by Process instrumentation include labels `type` (as given with the `type` option), `pid` (of the process the stats were collected in), and any custom labels given to `Process.start` with the `labels` option.
+
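The paragraph above mentions the `labels` option on `Process.start`; a minimal sketch, with an illustrative label value:

```ruby
# after_fork in unicorn/puma/passenger, or an initializer for non-forking servers
require 'prometheus_exporter/instrumentation'

# `type` becomes the type label; `labels` adds extra labels to every per-process metric
PrometheusExporter::Instrumentation::Process.start(
  type: "web",
  labels: { hostname: "web-01" }
)
```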
  #### Sidekiq metrics
  Including Sidekiq metrics (how many jobs ran? how many failed? how long did they take? how many are dead? how many were restarted?)
 
@@ -276,6 +343,17 @@ Sidekiq.configure_server do |config|
  end
  ```
 
+ To monitor Queue size and latency:
+
+ ```ruby
+ Sidekiq.configure_server do |config|
+   config.on :startup do
+     require 'prometheus_exporter/instrumentation'
+     PrometheusExporter::Instrumentation::SidekiqQueue.start
+   end
+ end
+ ```
+
  To monitor Sidekiq process info:
  ```ruby
  PrometheusExporter::Instrumentation::Process.start(
@@ -297,6 +375,59 @@ Sometimes the Sidekiq server shuts down before it can send metrics, that were ge
  end
  ```
 
+ ##### Metrics collected by Sidekiq Instrumentation
+
+ **PrometheusExporter::Instrumentation::Sidekiq**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Summary | `sidekiq_job_duration_seconds` | Time spent in sidekiq jobs |
+ | Counter | `sidekiq_jobs_total` | Total number of sidekiq jobs executed |
+ | Counter | `sidekiq_restarted_jobs_total` | Total number of sidekiq jobs that we restarted because of a sidekiq shutdown |
+ | Counter | `sidekiq_failed_jobs_total` | Total number of failed sidekiq jobs |
+
+ All metrics have a `job_name` label and a `queue` label.
+
+ **PrometheusExporter::Instrumentation::Sidekiq.death_handler**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `sidekiq_dead_jobs_total` | Total number of dead sidekiq jobs |
+
+ This metric has a `job_name` label and a `queue` label.
+
+ **PrometheusExporter::Instrumentation::SidekiqQueue**
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `sidekiq_queue_backlog_total` | Size of the sidekiq queue |
+ | Gauge | `sidekiq_queue_latency_seconds` | Latency of the sidekiq queue |
+
+ Both metrics will have a `queue` label with the name of the queue.
+
+ _See [Metrics collected by Process Instrumentation](#metrics-collected-by-process-instrumentation) for a list of metrics the Process instrumentation will produce._
+
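The `sidekiq_dead_jobs_total` metric above is only reported once the death handler is registered with Sidekiq; a minimal sketch, following Sidekiq's standard `death_handlers` API:

```ruby
Sidekiq.configure_server do |config|
  require 'prometheus_exporter/instrumentation'
  # report sidekiq_dead_jobs_total when a job is moved to the dead set
  config.death_handlers << PrometheusExporter::Instrumentation::Sidekiq.death_handler
end
```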
+ #### Shoryuken metrics
+
+ For Shoryuken metrics (how many jobs ran? how many failed? how long did they take? how many were restarted?)
+
+ ```ruby
+ Shoryuken.configure_server do |config|
+   config.server_middleware do |chain|
+     require 'prometheus_exporter/instrumentation'
+     chain.add PrometheusExporter::Instrumentation::Shoryuken
+   end
+ end
+ ```
+
+ ##### Metrics collected by Shoryuken Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `shoryuken_job_duration_seconds` | Total time spent in shoryuken jobs |
+ | Counter | `shoryuken_jobs_total` | Total number of shoryuken jobs executed |
+ | Counter | `shoryuken_restarted_jobs_total` | Total number of shoryuken jobs that we restarted because of a shoryuken shutdown |
+ | Counter | `shoryuken_failed_jobs_total` | Total number of failed shoryuken jobs |
+
+ All metrics have labels for `job_name` and `queue_name`.
+
  #### Delayed Job plugin
  In an initializer:
 
@@ -308,6 +439,19 @@ unless Rails.env == "test"
  end
  ```
 
+ ##### Metrics collected by Delayed Job Instrumentation
+
+ | Type | Name | Description | Labels |
+ | --- | --- | --- | --- |
+ | Counter | `delayed_job_duration_seconds` | Total time spent in delayed jobs | `job_name` |
+ | Counter | `delayed_jobs_total` | Total number of delayed jobs executed | `job_name` |
+ | Gauge | `delayed_jobs_enqueued` | Number of enqueued delayed jobs | - |
+ | Gauge | `delayed_jobs_pending` | Number of pending delayed jobs | - |
+ | Counter | `delayed_failed_jobs_total` | Total number of failed delayed jobs executed | `job_name` |
+ | Counter | `delayed_jobs_max_attempts_reached_total` | Total number of delayed jobs that reached max attempts | - |
+ | Summary | `delayed_job_duration_seconds_summary` | Summary of the time it takes jobs to execute | `status` |
+ | Summary | `delayed_job_attempts_summary` | Summary of the amount of attempts it takes delayed jobs to succeed | - |
+
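The initializer that registers the plugin is only partially visible in the surrounding context, so here is a minimal sketch of the usual wiring (guarded to stay out of the test environment; the file name is illustrative):

```ruby
# e.g. config/initializers/prometheus.rb
unless Rails.env == "test"
  require 'prometheus_exporter/instrumentation'
  # registers the Delayed::Job plugin that reports the metrics listed above
  PrometheusExporter::Instrumentation::DelayedJob.register_plugin
end
```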
  #### Hutch Message Processing Tracer
  Capture [Hutch](https://github.com/gocardless/hutch) metrics (how many jobs ran? how many failed? how long did they take?)
 
@@ -319,6 +463,16 @@ unless Rails.env == "test"
  end
  ```
 
+ ##### Metrics collected by Hutch Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Counter | `hutch_job_duration_seconds` | Total time spent in hutch jobs |
+ | Counter | `hutch_jobs_total` | Total number of hutch jobs executed |
+ | Counter | `hutch_failed_jobs_total` | Total number of failed hutch jobs executed |
+
+ All metrics have a `job_name` label.
+
  #### Instrumenting Request Queueing Time
 
  Request Queueing is defined as the time it takes for a request to reach your application (instrumented by this `prometheus_exporter`) from farther upstream (as your load balancer). A high queueing time usually means that your backend cannot handle all the incoming requests in time, so they queue up (= you should see if you need to add more capacity).
@@ -343,6 +497,20 @@ after_worker_boot do
  end
  ```
 
+ #### Metrics collected by Puma Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `puma_workers_total` | Number of puma workers |
+ | Gauge | `puma_booted_workers_total` | Number of puma workers booted |
+ | Gauge | `puma_old_workers_total` | Number of old puma workers |
+ | Gauge | `puma_running_threads_total` | Number of puma threads currently running |
+ | Gauge | `puma_request_backlog_total` | Number of requests waiting to be processed by a puma thread |
+ | Gauge | `puma_thread_pool_capacity_total` | Number of puma threads available at current scale |
+ | Gauge | `puma_max_threads_total` | Number of puma threads available at max scale |
+
+ All metrics may have a `phase` label.
+
  ### Unicorn process metrics
 
  In order to gather metrics from unicorn processes, we use `rainbows`, which exposes `Rainbows::Linux.tcp_listener_stats` to gather information about active workers and queued requests. To start monitoring your unicorn processes, you'll need to know both the path to unicorn PID file and the listen address (`pid_file` and `listen` in your unicorn config file)
@@ -358,6 +526,14 @@ prometheus_exporter --unicorn-master /var/run/unicorn.pid --unicorn-listen-addre
 
  Note: You must install the `raindrops` gem in your `Gemfile` or locally.
 
+ #### Metrics collected by Unicorn Instrumentation
+
+ | Type | Name | Description |
+ | --- | --- | --- |
+ | Gauge | `unicorn_workers_total` | Number of unicorn workers |
+ | Gauge | `unicorn_active_workers_total` | Number of active unicorn workers |
+ | Gauge | `unicorn_request_backlog_total` | Number of requests waiting to be processed by a unicorn worker |
+
  ### Custom type collectors
 
  In some cases you may have custom metrics you want to ship to the collector in a batch. In this case you may still be interested in the base collector behavior, but would like to add your own special messages.
@@ -416,8 +592,8 @@ Then you can collect the metrics you need on demand:
 
  ```ruby
  def metrics
-   user_count_gague = PrometheusExporter::Metric::Gauge.new('user_count', 'number of users in the app')
-   user_count_gague.observe User.count
+   user_count_gauge = PrometheusExporter::Metric::Gauge.new('user_count', 'number of users in the app')
+   user_count_gauge.observe User.count
    [user_count_gauge]
  end
  ```
@@ -526,6 +702,68 @@ ruby_web_requests{hostname="app-server-01",route="test/route"} 1
  ruby_web_requests{hostname="app-server-01"} 1
  ```
 
+ ### Exporter Process Configuration
+
+ When running the process for `prometheus_exporter` using `bin/prometheus_exporter`, there are several configurations that
+ can be passed in:
+
+ ```
+ Usage: prometheus_exporter [options]
+     -p, --port INTEGER               Port exporter should listen on (default: 9394)
+     -b, --bind STRING                IP address exporter should listen on (default: localhost)
+     -t, --timeout INTEGER            Timeout in seconds for metrics endpoint (default: 2)
+         --prefix METRIC_PREFIX       Prefix to apply to all metrics (default: ruby_)
+         --label METRIC_LABEL         Label to apply to all metrics (default: {})
+     -c, --collector FILE             (optional) Custom collector to run
+     -a, --type-collector FILE        (optional) Custom type collectors to run in main collector
+     -v, --verbose
+         --auth FILE                  (optional) enable basic authentication using a htpasswd FILE
+         --realm REALM                (optional) Use REALM for basic authentication (default: "Prometheus Exporter")
+         --unicorn-listen-address ADDRESS
+                                      (optional) Address where unicorn listens on (unix or TCP address)
+         --unicorn-master PID_FILE    (optional) PID file of unicorn master process to monitor unicorn
+ ```
+
+ #### Example
+
+ The following will run the process at
+ - Port `8080` (default `9394`)
+ - Bind to `0.0.0.0` (default `localhost`)
+ - Timeout in `1 second` for metrics endpoint (default `2 seconds`)
+ - Metric prefix as `foo_` (default `ruby_`)
+ - Default labels as `{environment: "integration", foo: "bar"}`
+
+ ```bash
+ prometheus_exporter -p 8080 \
+   -b 0.0.0.0 \
+   -t 1 \
+   --label '{"environment": "integration", "foo": "bar"}' \
+   --prefix 'foo_'
+ ```
+
+ #### Enabling Basic Authentication
+
+ If you desire authentication on your `/metrics` route, you can enable basic authentication with the `--auth` option.
+
+ ```
+ $ prometheus_exporter --auth my-htpasswd-file
+ ```
+
+ Additionally, the `--realm` option may be used to provide a customized realm for the challenge request.
+
+ Notes:
+
+ * You will need to create an `htpasswd`-formatted file beforehand which contains one or more user:password entries
+ * Only the basic `crypt` encryption is currently supported
+
+ A simple `htpasswd` file can be created with the Apache `htpasswd` utility; e.g.:
+
+ ```
+ $ htpasswd -cdb my-htpasswd-file my-user my-unencrypted-password
+ ```
+
+ This will create a file named `my-htpasswd-file` which is suitable for use with the `--auth` option.
+
  ### Client default labels
 
  You can specify a default label for instrumentation metrics sent by a specific client. For example:
@@ -545,6 +783,9 @@ Will result in:
  http_requests_total{controller="home","action"="index",service="app-server-01",app_name="app-01"} 2
  http_requests_total{service="app-server-01",app_name="app-01"} 1
  ```
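The client configuration that produces labels like these sits in unchanged context above this hunk; for reference, a minimal sketch of setting default labels on a client (the label values mirror the sample output above):

```ruby
require 'prometheus_exporter/client'

PrometheusExporter::Client.default = PrometheusExporter::Client.new(
  custom_labels: { service: 'app-server-01', app_name: 'app-01' }
)
```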
+ ### Client default host
+
+ By default, `PrometheusExporter::Client.default` connects to `localhost:9394`. If your setup requires a different host or port (e.g. when using `docker-compose`), you can change the defaults by setting the environment variables `PROMETHEUS_EXPORTER_HOST` and `PROMETHEUS_EXPORTER_PORT`.
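A minimal sketch of the two ways to point the default client somewhere other than `localhost:9394`; the host name `exporter` is illustrative:

```ruby
# 1) via the environment variables mentioned above, e.g. in docker-compose:
#      PROMETHEUS_EXPORTER_HOST=exporter
#      PROMETHEUS_EXPORTER_PORT=9394

# 2) or by building a client explicitly and making it the default
require 'prometheus_exporter/client'

PrometheusExporter::Client.default =
  PrometheusExporter::Client.new(host: 'exporter', port: 9394)
```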
 
  ## Transport concerns