prometheus-client 0.11.0.pre.alpha.1 → 3.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: a558dc9a9c12b948a111369fd378ef1f41135695b0f7fb30eb2691a86a05ea3f
-  data.tar.gz: 0d8554c3e1a4cec8e6db066158247c7a1c8e0fc86653e9a252bc3c08c04d1618
+  metadata.gz: b7017e1e3c284f9558d758e91d4855ad97b5df63fdef115cee035aa0948aed30
+  data.tar.gz: 52a7abaadf1addf7bf9303d73a31a14338429882b488eb80c2144a6817cfdb7d
 SHA512:
-  metadata.gz: 91b4d4bb9f606b33f77bc0fab83a523d0cf61d6f875f5c42de738513aebc2e6a44994c9098c103b51b54eb2c80957c82616aab4fadb07e583c1b5b0ef3821697
-  data.tar.gz: d956b78c30fff9db52e45f9826f739f1088501ebd12ad71b016e0d58abe3262c320180fd68ba74dbc898a279141e6e27b9f51d2d8471b0d28f7e5bb5eef95595
+  metadata.gz: 1e3b2ff9368dacbdc175ea8cb709e1ad7be73b0d8e644468f65bf36f44b517798a28d7f5dab73b538519fc1a918f977e083e101986a761fdf2fe42ab719fcc1d
+  data.tar.gz: 7fb5a1887a88f7687f2af5ef4839e26030da25cc0c51a03445bbb43706caf8d2231c65108cf3265cc024bc286d5dc8a38ec5d29145b39b2f0cbd0277f29f1972
data/README.md CHANGED
@@ -5,11 +5,17 @@ through a HTTP interface. Intended to be used together with a
 [Prometheus server][1].
 
 [![Gem Version][4]](http://badge.fury.io/rb/prometheus-client)
-[![Build Status][3]](http://travis-ci.org/prometheus/client_ruby)
-[![Coverage Status][7]](https://coveralls.io/r/prometheus/client_ruby)
+[![Build Status][3]](https://circleci.com/gh/prometheus/client_ruby/tree/master.svg?style=svg)
 
 ## Usage
 
+### Installation
+
+For a global installation run `gem install prometheus-client`.
+
+If you're using [Bundler](https://bundler.io/), add `gem "prometheus-client"` to your `Gemfile`.
+Make sure to run `bundle install` afterwards.
+
 ### Overview
 
 ```ruby
@@ -64,7 +70,7 @@ integrated [example application](examples/rack/README.md).
 The Ruby client can also be used to push its collected metrics to a
 [Pushgateway][8]. This comes in handy with batch jobs or in other scenarios
 where it's not possible or feasible to let a Prometheus server scrape a Ruby
-process. TLS and basic access authentication are supported.
+process. TLS and HTTP basic authentication are supported.
 
 ```ruby
 require 'prometheus/client'
@@ -74,18 +80,59 @@ registry = Prometheus::Client.registry
 # ... register some metrics, set/increment/observe/etc. their values
 
 # push the registry state to the default gateway
-Prometheus::Client::Push.new('my-batch-job').add(registry)
+Prometheus::Client::Push.new(job: 'my-batch-job').add(registry)
+
+# optional: specify a grouping key that uniquely identifies a job instance, and a gateway.
+#
+# Note: the labels you use in the grouping key must not conflict with labels set on the
+# metrics being pushed. If they do, an error will be raised.
+Prometheus::Client::Push.new(
+  job: 'my-batch-job',
+  gateway: 'https://example.domain:1234',
+  grouping_key: { instance: 'some-instance', extra_key: 'foobar' }
+).add(registry)
+
+# If you want to replace any previously pushed metrics for a given grouping key,
+# use the #replace method.
+#
+# Unlike #add, this will completely replace the metrics under the specified grouping key
+# (i.e. anything currently present in the pushgateway for the specified grouping key, but
+# not present in the registry for that grouping key, will be removed).
+#
+# See https://github.com/prometheus/pushgateway#put-method for a full explanation.
+Prometheus::Client::Push.new(job: 'my-batch-job').replace(registry)
+
+# If you want to delete all previously pushed metrics for a given grouping key,
+# use the #delete method.
+Prometheus::Client::Push.new(job: 'my-batch-job').delete
+```
 
-# optional: specify the instance name (instead of IP) and gateway.
-Prometheus::Client::Push.new('my-batch-job', 'foobar', 'https://example.domain:1234').add(registry)
+#### Basic authentication
 
-# If you want to replace any previously pushed metrics for a given instance,
-# use the #replace method.
-Prometheus::Client::Push.new('my-batch-job').replace(registry)
+By design, `Prometheus::Client::Push` doesn't read credentials for HTTP basic
+authentication when they are passed in via the gateway URL using the
+`http://user:password@example.com:9091` syntax, and will in fact raise an error if they're
+supplied that way.
 
-# If you want to delete all previously pushed metrics for a given instance,
-# use the #delete method.
-Prometheus::Client::Push.new('my-batch-job').delete
+The reason for this is that when using that syntax, the username and password
+have to follow the usual rules for URL encoding of characters [per RFC
+3986](https://datatracker.ietf.org/doc/html/rfc3986#section-2.1).
+
+Rather than place the burden of correctly performing that encoding on users of this gem,
+we decided to have a separate method for supplying HTTP basic authentication credentials,
+with no requirement to URL encode the characters in them.
+
+Instead of passing credentials like this:
+
+```ruby
+push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://user:password@localhost:9091")
+```
+
+please pass them like this:
+
+```ruby
+push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://localhost:9091")
+push.basic_auth("user", "password")
 ```
 
 ## Metrics
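The encoding burden that `basic_auth` sidesteps can be seen with a quick standalone check using Ruby's stdlib `ERB::Util.url_encode` (the password here is made up for illustration):

```ruby
require 'erb'

# A password containing URL-delimiter characters must be percent-encoded
# before it can legally appear in the userinfo part of a URL (RFC 3986).
password = "p@ss:word/1"
encoded  = ERB::Util.url_encode(password)

puts encoded # "p%40ss%3Aword%2F1"

# With push.basic_auth("user", password), no such encoding is needed.
```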
@@ -151,6 +198,11 @@ histogram.get(labels: { service: 'users' })
 # => { 0.005 => 3, 0.01 => 15, 0.025 => 18, ..., 2.5 => 42, 5 => 42, 10 => 42 }
 ```
 
+Histograms provide default buckets of `[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`.
+
+You can specify your own buckets, either explicitly, or using the `Histogram.linear_buckets`
+or `Histogram.exponential_buckets` methods to define regularly spaced buckets.
+
 ### Summary
 
 Summary, similar to histograms, is an accumulator for samples. It captures
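The two bucket helpers compute evenly spaced boundaries; a standalone sketch mirroring the one-line implementations that appear later in this diff:

```ruby
# Mirrors Histogram.linear_buckets / Histogram.exponential_buckets from this release.
def linear_buckets(start:, width:, count:)
  count.times.map { |idx| start.to_f + idx * width }
end

def exponential_buckets(start:, factor: 2, count:)
  count.times.map { |idx| start.to_f * factor ** idx }
end

linear_buckets(start: 1, width: 2, count: 4)       # => [1.0, 3.0, 5.0, 7.0]
exponential_buckets(start: 0.1, count: 4)          # => [0.1, 0.2, 0.4, 0.8]
```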
@@ -254,6 +306,12 @@ class MyComponent
 end
 ```
 
+### `init_label_set`
+
+The time series of a metric are not initialized until something happens. For counters, for example, this means that the time series do not exist until the counter is incremented for the first time.
+
+To get around this problem, the client provides the `init_label_set` method, which can be used to initialise the time series of a metric for a given label set.
+
 ### Reserved labels
 
 The following labels are reserved by the client library, and attempting to use them in a
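The effect can be sketched with a plain Hash standing in for a counter's backing store (a deliberate simplification; the real store is pluggable, as described later in this README):

```ruby
store = {}  # hypothetical stand-in for a counter's backing store, keyed by label set

# Before the first increment, the time series simply doesn't exist,
# so nothing is exported for it on a scrape:
store.key?({ service: "users" })  # => false

# init_label_set(service: "users") pre-creates the series at zero, so scrapes
# see the series immediately, e.g. my_counter_total{service="users"} 0:
store[{ service: "users" }] = 0.0
store[{ service: "users" }]  # => 0.0
```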
@@ -271,7 +329,7 @@ is stored in a global Data Store object, rather than in the metric objects thems
 (This "storage" is ephemeral, generally in-memory, it's not "long-term storage")
 
 The main reason to do this is that different applications may have different requirements
-for their metrics storage. Application running in pre-fork servers (like Unicorn, for
+for their metrics storage. Applications running in pre-fork servers (like Unicorn, for
 example), require a shared store between all the processes, to be able to report coherent
 numbers. At the same time, other applications may not have this requirement but be very
 sensitive to performance, and would prefer instead a simpler, faster store.
@@ -307,11 +365,11 @@ When instantiating metrics, there is an optional `store_settings` attribute. Thi
 to set up store-specific settings for each metric. For most stores, this is not used, but
 for multi-process stores, this is used to specify how to aggregate the values of each
 metric across multiple processes. For the most part, this is used for Gauges, to specify
-whether you want to report the `SUM`, `MAX` or `MIN` value observed across all processes.
-For almost all other cases, you'd leave the default (`SUM`). More on this on the
-*Aggregation* section below.
+whether you want to report the `SUM`, `MAX`, `MIN`, or `MOST_RECENT` value observed across
+all processes. For almost all other cases, you'd leave the default (`SUM`). More on this
+in the *Aggregation* section below.
 
-Other custom stores may also accept extra parameters besides `:aggregation`. See the
+Custom stores may also accept extra parameters besides `:aggregation`. See the
 documentation of each store for more details.
 
 ### Built-in stores
@@ -326,26 +384,73 @@ There are 3 built-in stores, with different trade-offs:
 it's absolutely not thread safe.
 - **DirectFileStore**: Stores data in binary files, one file per process and per metric.
 This is generally the recommended store to use with pre-fork servers and other
-"multi-process" scenarios.
-
-Each metric gets a file for each process, and manages its contents by storing keys and
-binary floats next to them, and updating the offsets of those Floats directly. When
-exporting metrics, it will find all the files that apply to each metric, read them,
-and aggregate them.
-
-In order to do this, each Metric needs an `:aggregation` setting, specifying how
-to aggregate the multiple possible values we can get for each labelset. By default,
-they are `SUM`med, which is what most use-cases call for (counters and histograms,
-for example). However, for Gauges, it's possible to set `MAX` or `MIN` as aggregation,
-to get the highest/lowest value of all the processes / threads.
-
-Even though this store saves data on disk, it's still much faster than would probably be
-expected, because the files are never actually `fsync`ed, so the store never blocks
-while waiting for disk. The kernel's page cache is incredibly efficient in this regard.
-
-If in doubt, check the benchmark scripts described in the documentation for creating
-your own stores and run them in your particular runtime environment to make sure this
-provides adequate performance.
+"multi-process" scenarios. There are some important caveats to using this store, so
+please read the section below.
+
+### `DirectFileStore` caveats and things to keep in mind
+
+Each metric gets a file for each process, and manages its contents by storing keys and
+binary floats next to them, and updating the offsets of those Floats directly. When
+exporting metrics, it will find all the files that apply to each metric, read them,
+and aggregate them.
+
+**Aggregation of metrics**: Since there will be several files per metric (one per process),
+these need to be aggregated to present a coherent view to Prometheus. Depending on your
+use case, you may need to control how this works. When using this store,
+each Metric allows you to specify an `:aggregation` setting, defining how
+to aggregate the multiple possible values we can get for each labelset. By default,
+Counters, Histograms and Summaries are `SUM`med, and Gauges report all their values (one
+for each process), tagged with a `pid` label. You can also select `SUM`, `MAX`, `MIN`, or
+`MOST_RECENT` for your gauges, depending on your use case.
+
+Please note that the `MOST_RECENT` aggregation only works for gauges, and it does not
+allow the use of `increment` / `decrement`; you can only use `set`.
+
+**Memory Usage**: When scraped by Prometheus, this store will read all these files, get all
+the values and aggregate them. We have noticed this can have a noticeable effect on memory
+usage for your app. We recommend you test this in a realistic usage scenario to make sure
+you won't hit any memory limits your app may have.
+
+**Resetting your metrics on each run**: You should also make sure that the directory where
+you store your metric files (specified when initializing the `DirectFileStore`) is emptied
+when your app starts. Otherwise, each app run will continue exporting the metrics from the
+previous run.
+
+If you have this issue, one way to do this is to run code similar to this as part of your
+initialization:
+
+```ruby
+Dir["#{app_path}/tmp/prometheus/*.bin"].each do |file_path|
+  File.unlink(file_path)
+end
+```
+
+If you are running in pre-fork servers (such as Unicorn or Puma with multiple processes),
+make sure you do this **before** the server forks. Otherwise, each child process may delete
+files created by other processes on *this* run, instead of deleting old files.
+
+**Declare metrics before fork**: As well as deleting files before your process forks, you
+should make sure to declare your metrics before forking too. Because the metric registry
+is held in memory, any metrics declared after forking will only be present in child
+processes where the code declaring them ran, and as a result may not be consistently
+exported when scraped (i.e. they will only appear when a child process that declared them
+is scraped).
+
+If you're absolutely sure that every child process will run the metric declaration code,
+then you won't run into this issue, but the simplest approach is to declare the metrics
+before forking.
+
+**Large numbers of files**: Because there is an individual file per metric and per process
+(which is done to optimize for observation performance), you may end up with a large number
+of files. We don't currently have a solution for this problem, but we're working on it.
+
+**Performance**: Even though this store saves data on disk, it's still much faster than
+would probably be expected, because the files are never actually `fsync`ed, so the store
+never blocks while waiting for disk. The kernel's page cache is incredibly efficient in
+this regard. If in doubt, check the benchmark scripts described in the documentation for
+creating your own stores and run them in your particular runtime environment to make sure
+this provides adequate performance.
+
 
 ### Building your own store, and stores other than the built-in ones.
 
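The cleanup step can be exercised end to end with a throwaway directory (the directory and file names here are hypothetical stand-ins for whatever `dir:` you pass to `DirectFileStore`); in a real app, run the deletion in the master process before any workers fork:

```ruby
require "tmpdir"
require "fileutils"

# Hypothetical stand-in for the directory passed to DirectFileStore.
metrics_dir = Dir.mktmpdir("prometheus")

# Simulate a stale metric file left over from a previous run (made-up name).
FileUtils.touch(File.join(metrics_dir, "metric_mycounter___12345.bin"))

# The cleanup from the README: delete every .bin file before the server forks.
Dir[File.join(metrics_dir, "*.bin")].each { |file_path| File.unlink(file_path) }

Dir[File.join(metrics_dir, "*.bin")]  # => []
```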
@@ -364,16 +469,16 @@ If you are in a multi-process environment (such as pre-fork servers like Unicorn
 process will probably keep their own counters, which need to be aggregated when receiving
 a Prometheus scrape, to report coherent total numbers.
 
-For Counters and Histograms (and quantile-less Summaries), this is simply a matter of
+For Counters, Histograms, and quantile-less Summaries, this is simply a matter of
 summing the values of each process.
 
 For Gauges, however, this may not be the right thing to do, depending on what they're
 measuring. You might want to take the maximum or minimum value observed in any process,
-rather than the sum of all of them. You may also want to export each process's individual
-value.
+rather than the sum of all of them. By default, we export each process's individual
+value, with a `pid` label identifying each one.
 
-In those cases, you should use the `store_settings` parameter when registering the
-metric, to specify an `:aggregation` setting.
+If these defaults don't work for your use case, you should use the `store_settings`
+parameter when registering the metric, to specify an `:aggregation` setting.
 
 ```ruby
 free_disk_space = registry.gauge(:free_disk_space_bytes,
@@ -187,7 +187,7 @@ has created a good amount of research, benchmarks, and experimental stores, whic
 weren't useful to include in this repo, but may be a useful resource or starting point
 if you are building your own store.
 
-Check out the [GoCardless Data Stores Experiments](gocardless/prometheus-client-ruby-data-stores-experiments)
+Check out the [GoCardless Data Stores Experiments](https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments)
 repository for these.
 
 ## Sample, imaginary multi-process Data Store
@@ -18,14 +18,18 @@ module Prometheus
 #
 # In order to do this, each Metric needs an `:aggregation` setting, specifying how
 # to aggregate the multiple possible values we can get for each labelset. By default,
-# they are `SUM`med, which is what most use cases call for (counters and histograms,
-# for example).
-# However, for Gauges, it's possible to set `MAX` or `MIN` as aggregation, to get
-# the highest value of all the processes / threads.
+# Counters, Histograms and Summaries get `SUM`med, and Gauges will report `ALL`
+# values, tagging each one with a `pid` label.
+# For Gauges, it's also possible to set `SUM`, `MAX` or `MIN` as aggregation, to get
+# the sum, highest, or lowest value across all the processes / threads.
+#
+# Before using this Store, please read the "`DirectFileStore` caveats and things to
+# keep in mind" section of the main README in this repository. It includes a number
+# of important things to keep in mind.
 
 class DirectFileStore
 class InvalidStoreSettingsError < StandardError; end
-AGGREGATION_MODES = [MAX = :max, MIN = :min, SUM = :sum, ALL = :all]
+AGGREGATION_MODES = [MAX = :max, MIN = :min, SUM = :sum, ALL = :all, MOST_RECENT = :most_recent]
 DEFAULT_METRIC_SETTINGS = { aggregation: SUM }
 DEFAULT_GAUGE_SETTINGS = { aggregation: ALL }
@@ -41,7 +45,7 @@ module Prometheus
 end
 
 settings = default_settings.merge(metric_settings)
-validate_metric_settings(settings)
+validate_metric_settings(metric_type, settings)
 
 MetricStore.new(metric_name: metric_name,
 store_settings: @store_settings,
@@ -50,7 +54,7 @@ module Prometheus
 
 private
 
-def validate_metric_settings(metric_settings)
+def validate_metric_settings(metric_type, metric_settings)
 unless metric_settings.has_key?(:aggregation) &&
 AGGREGATION_MODES.include?(metric_settings[:aggregation])
 raise InvalidStoreSettingsError,
@@ -61,6 +65,11 @@ module Prometheus
 raise InvalidStoreSettingsError,
 "Only :aggregation setting can be specified"
 end
+
+if metric_settings[:aggregation] == MOST_RECENT && metric_type != :gauge
+raise InvalidStoreSettingsError,
+"Only :gauge metrics support :most_recent aggregation"
+end
 end
 
 class MetricStore
@@ -70,6 +79,7 @@ module Prometheus
 @metric_name = metric_name
 @store_settings = store_settings
 @values_aggregation_mode = metric_settings[:aggregation]
+@store_opened_by_pid = nil
 
 @lock = Monitor.new
 end
@@ -96,6 +106,12 @@ module Prometheus
 end
 
 def increment(labels:, by: 1)
+if @values_aggregation_mode == DirectFileStore::MOST_RECENT
+raise InvalidStoreSettingsError,
+"The :most_recent aggregation does not support the use of increment"\
+"/decrement"
+end
+
 key = store_key(labels)
 in_process_sync do
 value = internal_store.read_value(key)
@@ -117,7 +133,7 @@ module Prometheus
 stores_for_metric.each do |file_path|
 begin
 store = FileMappedDict.new(file_path, true)
-store.all_values.each do |(labelset_qs, v)|
+store.all_values.each do |(labelset_qs, v, ts)|
 # Labels come as a query string, and CGI::parse returns arrays for each key
 # "foo=bar&x=y" => { "foo" => ["bar"], "x" => ["y"] }
 # Turn the keys back into symbols, and remove the arrays
@@ -125,7 +141,7 @@ module Prometheus
 [k.to_sym, vs.first]
 end.to_h
 
-stores_data[label_set] << v
+stores_data[label_set] << [v, ts]
 end
 ensure
 store.close if store
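The query-string round-trip described in the comments above relies only on stdlib `CGI` and can be checked in isolation:

```ruby
require "cgi"

labelset_qs = "foo=bar&x=y"

# CGI.parse returns arrays for each key: { "foo" => ["bar"], "x" => ["y"] }
parsed = CGI.parse(labelset_qs)

# Turn the keys back into symbols, and unwrap the single-element arrays:
label_set = parsed.map { |k, vs| [k.to_sym, vs.first] }.to_h
label_set  # => { foo: "bar", x: "y" }
```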
@@ -177,30 +193,41 @@ module Prometheus
 end
 
 def aggregate_values(values)
-if @values_aggregation_mode == SUM
-values.inject { |sum, element| sum + element }
-elsif @values_aggregation_mode == MAX
-values.max
-elsif @values_aggregation_mode == MIN
-values.min
-elsif @values_aggregation_mode == ALL
-values.first
+# Each entry in the `values` array is a tuple of `value` and `timestamp`,
+# so for all aggregations except `MOST_RECENT`, we need to only take the
+# first value in each entry and ignore the second.
+if @values_aggregation_mode == MOST_RECENT
+latest_tuple = values.max { |a,b| a[1] <=> b[1] }
+latest_tuple.first # return the value without the timestamp
 else
-raise InvalidStoreSettingsError,
-"Invalid Aggregation Mode: #{ @values_aggregation_mode }"
+values = values.map(&:first) # Discard timestamps
+
+if @values_aggregation_mode == SUM
+values.inject { |sum, element| sum + element }
+elsif @values_aggregation_mode == MAX
+values.max
+elsif @values_aggregation_mode == MIN
+values.min
+elsif @values_aggregation_mode == ALL
+values.first
+else
+raise InvalidStoreSettingsError,
+"Invalid Aggregation Mode: #{ @values_aggregation_mode }"
+end
 end
 end
 end
 
 private_constant :MetricStore
 
-# A dict of doubles, backed by an file we access directly a a byte array.
+# A dict of doubles, backed by a file we access directly as a byte array.
 #
 # The file starts with a 4 byte int, indicating how much of it is used.
 # Then 4 bytes of padding.
 # There's then a number of entries, consisting of a 4 byte int which is the
 # size of the next field, a utf-8 encoded string key, padding to an 8 byte
-# alignment, and then a 8 byte float which is the value.
+# alignment, and then an 8 byte float which is the value, and then an 8 byte
+# float which is the unix timestamp when the value was set.
 class FileMappedDict
 INITIAL_FILE_SIZE = 1024*1024
 
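The new tuple-based aggregation can be exercised standalone; each entry pairs a value with the timestamp it was written (the numbers here are made up):

```ruby
# [value, timestamp] tuples, e.g. one per process
values = [[10.0, 100.0], [25.0, 300.0], [7.0, 200.0]]

# MOST_RECENT: pick the value carrying the highest timestamp
most_recent = values.max { |a, b| a[1] <=> b[1] }.first  # => 25.0

# Every other mode discards the timestamps first, then aggregates the values
sum = values.map(&:first).inject { |acc, v| acc + v }    # => 42.0
```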
@@ -231,8 +258,8 @@ module Prometheus
 with_file_lock do
 @positions.map do |key, pos|
 @f.seek(pos)
-value = @f.read(8).unpack('d')[0]
-[key, value]
+value, timestamp = @f.read(16).unpack('dd')
+[key, value, timestamp]
 end
 end
 end
@@ -252,9 +279,10 @@ module Prometheus
 init_value(key)
 end
 
+now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
 pos = @positions[key]
 @f.seek(pos)
-@f.write([value].pack('d'))
+@f.write([value, now].pack('dd'))
 @f.flush
 end
@@ -295,7 +323,7 @@ module Prometheus
 def init_value(key)
 # Pad to be 8-byte aligned.
 padded = key + (' ' * (8 - (key.length + 4) % 8))
-value = [padded.length, padded, 0.0].pack("lA#{padded.length}d")
+value = [padded.length, padded, 0.0, 0.0].pack("lA#{padded.length}dd")
 while @used + value.length > @capacity
 @capacity *= 2
 resize_file(@capacity)
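The on-disk entry layout (4-byte length, padded key, then the two 8-byte floats added in this release) can be verified with a quick `Array#pack` experiment:

```ruby
key    = "mykey"
# Pad so that the floats after the 4-byte length field are 8-byte aligned.
padded = key + (' ' * (8 - (key.length + 4) % 8))
entry  = [padded.length, padded, 0.0, 0.0].pack("lA#{padded.length}dd")

padded.length   # => 12 (5 key bytes + 7 padding spaces)
entry.bytesize  # => 32 (4-byte length + 12-byte key + two 8-byte floats)
```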
@@ -306,7 +334,7 @@ module Prometheus
 @f.seek(0)
 @f.write([@used].pack('l'))
 @f.flush
-@positions[key] = @used - 8
+@positions[key] = @used - 16
 end
 
 # Read position of all keys. No locking is performed.
@@ -316,7 +344,7 @@ module Prometheus
 padded_len = @f.read(4).unpack('l')[0]
 key = @f.read(padded_len).unpack("A#{padded_len}")[0].strip
 @positions[key] = @f.pos
-@f.seek(8, :CUR)
+@f.seek(16, :CUR)
 end
 end
 end
@@ -6,7 +6,7 @@ module Prometheus
 module Client
 # A histogram samples observations (usually things like request durations
 # or response sizes) and counts them in configurable buckets. It also
-# provides a sum of all observed values.
+# provides a total count and sum of all observed values.
 class Histogram < Metric
 # DEFAULT_BUCKETS are the default Histogram buckets. The default buckets
 # are tailored to broadly measure the response time (in seconds) of a
@@ -33,21 +33,41 @@ module Prometheus
 store_settings: store_settings)
 end
 
+def self.linear_buckets(start:, width:, count:)
+count.times.map { |idx| start.to_f + idx * width }
+end
+
+def self.exponential_buckets(start:, factor: 2, count:)
+count.times.map { |idx| start.to_f * factor ** idx }
+end
+
 def with_labels(labels)
-self.class.new(name,
-docstring: docstring,
-labels: @labels,
-preset_labels: preset_labels.merge(labels),
-buckets: @buckets,
-store_settings: @store_settings)
+new_metric = self.class.new(name,
+docstring: docstring,
+labels: @labels,
+preset_labels: preset_labels.merge(labels),
+buckets: @buckets,
+store_settings: @store_settings)
+
+# The new metric needs to use the same store as the "main" declared one, otherwise
+# any observations on that copy with the pre-set labels won't actually be exported.
+new_metric.replace_internal_store(@store)
+
+new_metric
 end
 
 def type
 :histogram
 end
 
+# Records a given value. The recorded value is usually positive
+# or zero. A negative value is accepted but prevents current
+# versions of Prometheus from properly detecting counter resets
+# in the sum of observations. See
+# https://prometheus.io/docs/practices/histograms/#count-and-sum-of-observations
+# for details.
 def observe(value, labels: {})
-bucket = buckets.find {|upper_limit| upper_limit > value }
+bucket = buckets.find {|upper_limit| upper_limit >= value }
 bucket = "+Inf" if bucket.nil?
 
 base_label_set = label_set_for(labels)
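The change from `>` to `>=` means an observation exactly equal to a bucket's upper bound now lands in that bucket, matching Prometheus's `le` (less-than-or-equal) semantics:

```ruby
buckets = [0.1, 0.25, 0.5, 1]

# New behaviour: a 0.25s observation is counted in the 0.25 bucket.
new_bucket = buckets.find { |upper_limit| upper_limit >= 0.25 }  # => 0.25

# Old behaviour: the same observation spilled into the next bucket up.
old_bucket = buckets.find { |upper_limit| upper_limit > 0.25 }   # => 0.5
```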
@@ -81,19 +101,29 @@ module Prometheus
 
 # Returns all label sets with their values expressed as hashes with their buckets
 def values
-v = @store.all_values
+values = @store.all_values
 
-result = v.each_with_object({}) do |(label_set, v), acc|
+result = values.each_with_object({}) do |(label_set, v), acc|
 actual_label_set = label_set.reject{|l| l == :le }
 acc[actual_label_set] ||= @buckets.map{|b| [b.to_s, 0.0]}.to_h
 acc[actual_label_set][label_set[:le].to_s] = v
 end
 
-result.each do |(label_set, v)|
+result.each do |(_label_set, v)|
 accumulate_buckets(v)
 end
 end
 
+def init_label_set(labels)
+base_label_set = label_set_for(labels)
+
+@store.synchronize do
+(buckets + ["+Inf", "sum"]).each do |bucket|
+@store.set(labels: base_label_set.merge(le: bucket.to_s), val: 0)
+end
+end
+end
+
 private
 
 # Modifies the passed in parameter
@@ -7,6 +7,7 @@ module Prometheus
 class LabelSetValidator
 # TODO: we might allow setting :instance in the future
 BASE_RESERVED_LABELS = [:job, :instance, :pid].freeze
+LABEL_NAME_REGEX = /\A[a-zA-Z_][a-zA-Z0-9_]*\Z/
 
 class LabelSetError < StandardError; end
 class InvalidLabelSetError < LabelSetError; end
@@ -59,9 +60,15 @@ module Prometheus
 end
 
 def validate_name(key)
-return true unless key.to_s.start_with?('__')
+if key.to_s.start_with?('__')
+raise ReservedLabelError, "label #{key} must not start with __"
+end
+
+unless key.to_s =~ LABEL_NAME_REGEX
+raise InvalidLabelError, "label name must match /#{LABEL_NAME_REGEX}/"
+end
 
-raise ReservedLabelError, "label #{key} must not start with __"
+true
 end
 
 def validate_reserved_key(key)
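The added `LABEL_NAME_REGEX` enforces the Prometheus label-name grammar (letters, digits, and underscores, not starting with a digit), which can be checked directly:

```ruby
LABEL_NAME_REGEX = /\A[a-zA-Z_][a-zA-Z0-9_]*\Z/

!!("http_status" =~ LABEL_NAME_REGEX)  # => true
!!("_private"    =~ LABEL_NAME_REGEX)  # => true
!!("9lives"      =~ LABEL_NAME_REGEX)  # => false (can't start with a digit)
!!("foo-bar"     =~ LABEL_NAME_REGEX)  # => false (hyphens aren't allowed)
```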
@@ -7,7 +7,7 @@ module Prometheus
 module Client
 # Metric
 class Metric
-attr_reader :name, :docstring, :preset_labels
+attr_reader :name, :docstring, :labels, :preset_labels
 
 def initialize(name,
 docstring:,
@@ -29,18 +29,28 @@ module Prometheus
 @docstring = docstring
 @preset_labels = stringify_values(preset_labels)
 
+@all_labels_preset = false
+if preset_labels.keys.length == labels.length
+@validator.validate_labelset!(preset_labels)
+@all_labels_preset = true
+end
+
 @store = Prometheus::Client.config.data_store.for_metric(
 name,
 metric_type: type,
 metric_settings: store_settings
 )
 
-if preset_labels.keys.length == labels.length
-@validator.validate_labelset!(preset_labels)
-@all_labels_preset = true
-end
+# WARNING: Our internal store can be replaced later by `with_labels`
+# Everything we do after this point needs to still work if @store gets replaced
+init_label_set({}) if labels.empty?
+end
+
+protected def replace_internal_store(new_store)
+@store = new_store
 end
 
+
 # Returns the value for the given label set
 def get(labels: {})
 label_set = label_set_for(labels)
@@ -48,11 +58,21 @@ module Prometheus
 end
 
 def with_labels(labels)
-self.class.new(name,
-docstring: docstring,
-labels: @labels,
-preset_labels: preset_labels.merge(labels),
-store_settings: @store_settings)
+new_metric = self.class.new(name,
+docstring: docstring,
+labels: @labels,
+preset_labels: preset_labels.merge(labels),
+store_settings: @store_settings)
+
+# The new metric needs to use the same store as the "main" declared one, otherwise
+# any observations on that copy with the pre-set labels won't actually be exported.
+new_metric.replace_internal_store(@store)
+
+new_metric
+end
+
+def init_label_set(labels)
+@store.set(labels: label_set_for(labels), val: 0)
 end
 
 # Returns all label sets with their values
@@ -1,11 +1,15 @@
 # encoding: UTF-8
 
+require 'base64'
 require 'thread'
 require 'net/http'
 require 'uri'
+require 'erb'
+require 'set'
 
 require 'prometheus/client'
 require 'prometheus/client/formats/text'
+require 'prometheus/client/label_set_validator'
 
 module Prometheus
 # Client is a ruby implementation for a Prometheus compatible client.
@@ -13,23 +17,41 @@ module Prometheus
 # Push implements a simple way to transmit a given registry to a given
 # Pushgateway.
 class Push
+class HttpError < StandardError; end
+class HttpRedirectError < HttpError; end
+class HttpClientError < HttpError; end
+class HttpServerError < HttpError; end
+
 DEFAULT_GATEWAY = 'http://localhost:9091'.freeze
 PATH = '/metrics/job/%s'.freeze
-INSTANCE_PATH = '/metrics/job/%s/instance/%s'.freeze
 SUPPORTED_SCHEMES = %w(http https).freeze
 
-attr_reader :job, :instance, :gateway, :path
+attr_reader :job, :gateway, :path
+
+def initialize(job:, gateway: DEFAULT_GATEWAY, grouping_key: {}, **kwargs)
+raise ArgumentError, "job cannot be nil" if job.nil?
+raise ArgumentError, "job cannot be empty" if job.empty?
+@validator = LabelSetValidator.new(expected_labels: grouping_key.keys)
+@validator.validate_symbols!(grouping_key)
 
-def initialize(job, instance = nil, gateway = nil)
 @mutex = Mutex.new
 @job = job
-@instance = instance
 @gateway = gateway || DEFAULT_GATEWAY
-@path = build_path(job, instance)
+@grouping_key = grouping_key
+@path = build_path(job, grouping_key)
+
 @uri = parse("#{@gateway}#{@path}")
+validate_no_basic_auth!(@uri)
 
 @http = Net::HTTP.new(@uri.host, @uri.port)
 @http.use_ssl = (@uri.scheme == 'https')
+@http.open_timeout = kwargs[:open_timeout] if kwargs[:open_timeout]
+@http.read_timeout = kwargs[:read_timeout] if kwargs[:read_timeout]
+end
+
+def basic_auth(user, password)
+@user = user
+@password = password
 end
 
 def add(registry)
@@ -64,26 +86,118 @@ module Prometheus
64
86
  raise ArgumentError, "#{url} is not a valid URL: #{e}"
65
87
  end
66
88
 
67
- def build_path(job, instance)
68
- if instance
69
- format(INSTANCE_PATH, URI.escape(job), URI.escape(instance))
70
- else
71
- format(PATH, URI.escape(job))
89
+ def build_path(job, grouping_key)
90
+ path = format(PATH, ERB::Util::url_encode(job))
91
+
92
+ grouping_key.each do |label, value|
93
+ if value.include?('/')
94
+ encoded_value = Base64.urlsafe_encode64(value)
95
+ path += "/#{label}@base64/#{encoded_value}"
96
+ # While it's valid for the urlsafe_encode64 function to return an
97
+ # empty string when the input string is empty, it doesn't work for
98
+ # our specific use case as we're putting the result into a URL path
99
+ # segment. A double slash (`//`) can be normalised away by HTTP
100
+ # libraries, proxies, and web servers.
101
+ #
102
+ # For empty strings, we use a single padding character (`=`) as the
103
+ # value.
104
+ #
105
+ # See the pushgateway docs for more details:
106
+ #
107
+ # https://github.com/prometheus/pushgateway/blob/6393a901f56d4dda62cd0f6ab1f1f07c495b6354/README.md#url
108
+ elsif value.empty?
109
+ path += "/#{label}@base64/="
110
+ else
111
+ path += "/#{label}/#{ERB::Util::url_encode(value)}"
112
+ end
72
113
  end
114
+
115
+ path
73
116
  end
74
117
 
75
118
  def request(req_class, registry = nil)
119
+ validate_no_label_clashes!(registry) if registry
120
+
76
121
  req = req_class.new(@uri)
77
122
  req.content_type = Formats::Text::CONTENT_TYPE
78
- req.basic_auth(@uri.user, @uri.password) if @uri.user
123
+ req.basic_auth(@user, @password) if @user
79
124
  req.body = Formats::Text.marshal(registry) if registry
80
125
 
81
- @http.request(req)
126
+ response = @http.request(req)
127
+ validate_response!(response)
128
+
129
+ response
82
130
  end
83
131
 
84
132
  def synchronize
85
133
  @mutex.synchronize { yield }
86
134
  end
135
+
136
+ def validate_no_basic_auth!(uri)
137
+ if uri.user || uri.password
138
+ raise ArgumentError, <<~EOF
139
+ Setting Basic Auth credentials in the gateway URL is not supported, please call the `basic_auth` method.
140
+
141
+ Received username `#{uri.user}` in gateway URL. Instead of passing
142
+ Basic Auth credentials like this:
143
+
144
+ ```
145
+ push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://user:password@localhost:9091")
146
+ ```
147
+
148
+ please pass them like this:
149
+
150
+ ```
151
+ push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://localhost:9091")
152
+ push.basic_auth("user", "password")
153
+ ```
154
+
155
+ While URLs do support passing Basic Auth credentials using the
156
+ `http://user:password@example.com/` syntax, the username and
157
+ password in that syntax have to follow the usual rules for URL
158
+ encoding of characters per RFC 3986
159
+ (https://datatracker.ietf.org/doc/html/rfc3986#section-2.1).
160
+
161
+ Rather than place the burden of correctly performing that encoding
162
+ on users of this gem, we decided to have a separate method for
163
+ supplying Basic Auth credentials, with no requirement to URL encode
164
+ the characters in them.
165
+ EOF
166
+ end
167
+ end
168
+
169
+ def validate_no_label_clashes!(registry)
170
+ # There's nothing to check if we don't have a grouping key
171
+ return if @grouping_key.empty?
172
+
173
+ # We could be doing a lot of comparisons, so let's do them against a
174
+ # set rather than an array
175
+ grouping_key_labels = @grouping_key.keys.to_set
176
+
177
+ registry.metrics.each do |metric|
178
+ metric.labels.each do |label|
179
+ if grouping_key_labels.include?(label)
180
+ raise LabelSetValidator::InvalidLabelSetError,
181
+ "label :#{label} from grouping key collides with label of the " \
182
+ "same name from metric :#{metric.name} and would overwrite it"
183
+ end
184
+ end
185
+ end
186
+ end
187
+
188
+ def validate_response!(response)
189
+ status = Integer(response.code)
190
+ if status >= 300
191
+ message = "status: #{response.code}, message: #{response.message}, body: #{response.body}"
192
+ if status <= 399
193
+ raise HttpRedirectError, message
194
+ elsif status <= 499
195
+ raise HttpClientError, message
196
+ else
197
+ raise HttpServerError, message
198
+ end
199
+ end
200
+ end
87
201
  end
88
202
  end
89
203
  end
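The grouping-key URL encoding introduced above can be sketched standalone (hypothetical `build_push_path` helper mirroring the new `build_path` rules, not the gem's API):

```ruby
require 'base64'
require 'erb'

# Mirrors the new build_path rules: values containing a slash are base64
# encoded behind an `@base64` marker segment, empty values become a single
# `=` padding character so the URL never contains `//`, and everything
# else is plain URL-encoded.
def build_push_path(job, grouping_key = {})
  path = "/metrics/job/#{ERB::Util.url_encode(job)}"
  grouping_key.each do |label, value|
    path +=
      if value.include?('/')
        "/#{label}@base64/#{Base64.urlsafe_encode64(value)}"
      elsif value.empty?
        "/#{label}@base64/="
      else
        "/#{label}/#{ERB::Util.url_encode(value)}"
      end
  end
  path
end

puts build_push_path('batch-job', env: 'prod', path: '/var/tmp')
# /metrics/job/batch-job/env/prod/path@base64/L3Zhci90bXA=
```

The `=` special case exists because `Base64.urlsafe_encode64("")` is the empty string, which would otherwise produce a `//` path segment that proxies and HTTP libraries may normalise away.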
data/lib/prometheus/client/registry.rb CHANGED
@@ -22,7 +22,7 @@ module Prometheus
        name = metric.name
 
        @mutex.synchronize do
-          if exist?(name.to_sym)
+          if @metrics.key?(name.to_sym)
            raise AlreadyRegisteredError, "#{name} has already been registered"
          end
          @metrics[name.to_sym] = metric
@@ -73,15 +73,15 @@ module Prometheus
      end
 
      def exist?(name)
-        @metrics.key?(name)
+        @mutex.synchronize { @metrics.key?(name) }
      end
 
      def get(name)
-        @metrics[name.to_sym]
+        @mutex.synchronize { @metrics[name.to_sym] }
      end
 
      def metrics
-        @metrics.values
+        @mutex.synchronize { @metrics.values }
      end
    end
  end
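The registry changes above make every read of the backing hash go through the mutex; note that `register` now checks `@metrics.key?` directly instead of calling `exist?`, because Ruby's `Mutex` is not reentrant and the nested `synchronize` would deadlock. A minimal sketch of the pattern (hypothetical `TinyRegistry`, not the gem's class):

```ruby
require 'thread'

# Minimal sketch of the thread-safety pattern above: one Mutex guards
# every read and write of the backing hash. `register` checks
# @metrics.key? directly rather than calling exist?, because Ruby's
# Mutex is not reentrant.
class TinyRegistry
  class AlreadyRegisteredError < StandardError; end

  def initialize
    @mutex = Mutex.new
    @metrics = {}
  end

  def register(name, metric)
    @mutex.synchronize do
      if @metrics.key?(name.to_sym)
        raise AlreadyRegisteredError, "#{name} has already been registered"
      end
      @metrics[name.to_sym] = metric
    end
  end

  def exist?(name)
    @mutex.synchronize { @metrics.key?(name) }
  end

  def get(name)
    @mutex.synchronize { @metrics[name.to_sym] }
  end

  def metrics
    @mutex.synchronize { @metrics.values }
  end
end
```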
data/lib/prometheus/client/summary.rb CHANGED
@@ -11,7 +11,12 @@ module Prometheus
        :summary
      end
 
-      # Records a given value.
+      # Records a given value. The recorded value is usually positive
+      # or zero. A negative value is accepted but prevents current
+      # versions of Prometheus from properly detecting counter resets
+      # in the sum of observations. See
+      # https://prometheus.io/docs/practices/histograms/#count-and-sum-of-observations
+      # for details.
      def observe(value, labels: {})
        base_label_set = label_set_for(labels)
 
@@ -36,15 +41,24 @@ module Prometheus
 
      # Returns all label sets with their values expressed as hashes with their sum/count
      def values
-        v = @store.all_values
+        values = @store.all_values
 
-        v.each_with_object({}) do |(label_set, v), acc|
+        values.each_with_object({}) do |(label_set, v), acc|
          actual_label_set = label_set.reject{|l| l == :quantile }
          acc[actual_label_set] ||= { "count" => 0.0, "sum" => 0.0 }
          acc[actual_label_set][label_set[:quantile]] = v
        end
      end
 
+      def init_label_set(labels)
+        base_label_set = label_set_for(labels)
+
+        @store.synchronize do
+          @store.set(labels: base_label_set.merge(quantile: "count"), val: 0)
+          @store.set(labels: base_label_set.merge(quantile: "sum"), val: 0)
+        end
+      end
+
      private
 
      def reserved_labels
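The `values` aggregation above folds per-quantile store entries into one count/sum hash per real label set. A standalone sketch of that fold (hypothetical `summary_values` helper, not the gem's API):

```ruby
# Standalone sketch of the `values` aggregation above: store entries are
# keyed by a label set that includes a :quantile pseudo-label ("count" or
# "sum"); folding strips that key and groups the numbers per remaining
# label set.
def summary_values(all_values)
  all_values.each_with_object({}) do |(label_set, v), acc|
    actual_label_set = label_set.reject { |k, _| k == :quantile }
    acc[actual_label_set] ||= { "count" => 0.0, "sum" => 0.0 }
    acc[actual_label_set][label_set[:quantile]] = v
  end
end
```

This is also why the new `init_label_set` seeds both the `count` and `sum` entries: a label set initialised to zero shows up in the export before the first observation.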
data/lib/prometheus/client/version.rb CHANGED
@@ -2,6 +2,6 @@
 
 module Prometheus
   module Client
-    VERSION = '0.11.0-alpha.1'
+    VERSION = '3.0.0'
   end
 end
data/lib/prometheus/middleware/collector.rb CHANGED
@@ -67,15 +67,17 @@ module Prometheus
      end
 
      def record(env, code, duration)
+        path = generate_path(env)
+
        counter_labels = {
          code: code,
          method: env['REQUEST_METHOD'].downcase,
-          path: strip_ids_from_path(env['PATH_INFO']),
+          path: strip_ids_from_path(path),
        }
 
        duration_labels = {
          method: env['REQUEST_METHOD'].downcase,
-          path: strip_ids_from_path(env['PATH_INFO']),
+          path: strip_ids_from_path(path),
        }
 
        @requests.increment(labels: counter_labels)
@@ -85,10 +87,58 @@ module Prometheus
        nil
      end
 
+      # While `PATH_INFO` is framework agnostic, and works for any Rack app, some Ruby web
+      # frameworks pass a more useful piece of information into the request env - the
+      # route that the request matched.
+      #
+      # This means that rather than using our generic `:id` and `:uuid` replacements in
+      # the `path` label for any path segments that look like dynamic IDs, we can put the
+      # actual route that matched in there, with correctly named parameters. For example,
+      # if a Sinatra app defined a route like:
+      #
+      #     get "/foo/:bar" do
+      #       ...
+      #     end
+      #
+      # instead of containing `/foo/:id`, the `path` label would contain `/foo/:bar`.
+      #
+      # Sadly, Rails is a notable exception, and (as far as I can tell at the time of
+      # writing) doesn't provide this info in the request env.
+      def generate_path(env)
+        if env['sinatra.route']
+          route = env['sinatra.route'].partition(' ').last
+        elsif env['grape.routing_args']
+          # We are deep in the weeds of an object that Grape passes into the request env,
+          # but don't document any explicit guarantees about. Let's have a fallback in
+          # case they change it down the line.
+          #
+          # This code would be neater with the safe navigation operator (`&.`) here rather
+          # than the much more verbose `respond_to?` calls, but unlike Rails' `try`
+          # method, it still raises an error if the object is non-nil, but doesn't respond
+          # to the method being called on it.
+          route = nil
+
+          route_info = env.dig('grape.routing_args', :route_info)
+          if route_info.respond_to?(:pattern)
+            pattern = route_info.pattern
+            if pattern.respond_to?(:origin)
+              route = pattern.origin
+            end
+          end
+
+          # Fall back to PATH_INFO if Grape change the structure of `grape.routing_args`
+          route ||= env['PATH_INFO']
+        else
+          route = env['PATH_INFO']
+        end
+
+        [env['SCRIPT_NAME'], route].join
+      end
+
      def strip_ids_from_path(path)
        path
-          .gsub(%r{/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(/|$)}, '/:uuid\\1')
-          .gsub(%r{/\d+(/|$)}, '/:id\\1')
+          .gsub(%r{/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(?=/|$)}, '/:uuid\\1')
+          .gsub(%r{/\d+(?=/|$)}, '/:id\\1')
      end
    end
  end
 end
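The switch from the capturing group `(/|$)` to the zero-width lookahead `(?=/|$)` fixes back-to-back ID segments: the old pattern consumed the trailing slash, so the next segment could never match. A standalone sketch of both variants (hypothetical `strip_ids`/`strip_ids_old` helpers; with a lookahead the replacement no longer needs a backreference, so the sketch drops `\1`):

```ruby
# Demonstrates the regex fix above. The old capturing group (/|$) consumed
# the trailing slash, so adjacent numeric segments were only half-replaced;
# the zero-width lookahead (?=/|$) leaves the slash in place for the next
# match.
def strip_ids(path)
  path
    .gsub(%r{/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(?=/|$)}, '/:uuid')
    .gsub(%r{/\d+(?=/|$)}, '/:id')
end

def strip_ids_old(path)
  path.gsub(%r{/\d+(/|$)}, '/:id\\1')
end

puts strip_ids_old('/orders/123/456') # /orders/:id/456 -- second ID missed
puts strip_ids('/orders/123/456')     # /orders/:id/:id
```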
data/lib/prometheus/middleware/exporter.rb CHANGED
@@ -21,11 +21,12 @@ module Prometheus
        @app = app
        @registry = options[:registry] || Client.registry
        @path = options[:path] || '/metrics'
+        @port = options[:port]
        @acceptable = build_dictionary(FORMATS, FALLBACK)
      end
 
      def call(env)
-        if env['PATH_INFO'] == @path
+        if metrics_port?(env['SERVER_PORT']) && env['PATH_INFO'] == @path
          format = negotiate(env, @acceptable)
          format ? respond_with(format) : not_acceptable(FORMATS)
        else
@@ -86,6 +87,10 @@ module Prometheus
          memo[format::MEDIA_TYPE] = format
        end
      end
+
+      def metrics_port?(request_port)
+        @port.nil? || @port.to_s == request_port
+      end
    end
  end
 end
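The new `:port` option restricts the metrics endpoint to a chosen port. A minimal sketch of that check (hypothetical `metrics_request?` helper, not the middleware itself):

```ruby
# Sketch of the port restriction added above: serve metrics only when no
# port is configured, or when the request arrived on the configured one
# (Rack's SERVER_PORT is a string, hence the to_s comparison).
def metrics_request?(env, path: '/metrics', port: nil)
  port_ok = port.nil? || port.to_s == env['SERVER_PORT']
  port_ok && env['PATH_INFO'] == path
end
```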
metadata CHANGED
@@ -1,16 +1,16 @@
 --- !ruby/object:Gem::Specification
 name: prometheus-client
 version: !ruby/object:Gem::Version
-  version: 0.11.0.pre.alpha.1
+  version: 3.0.0
 platform: ruby
 authors:
 - Ben Kochie
 - Chris Sinjakli
 - Daniel Magliola
-autorequire: 
+autorequire:
 bindir: bin
 cert_chain: []
-date: 2019-10-28 00:00:00.000000000 Z
+date: 2022-02-05 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: benchmark-ips
@@ -40,10 +40,10 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
-description: 
+description:
 email:
 - superq@gmail.com
-- chris@gocardless.com
+- chris@sinjakli.co.uk
 - dmagliola@crystalgears.com
 executables: []
 extensions: []
@@ -73,7 +73,7 @@ homepage: https://github.com/prometheus/client_ruby
 licenses:
 - Apache 2.0
 metadata: {}
-post_install_message: 
+post_install_message:
 rdoc_options: []
 require_paths:
 - lib
@@ -84,13 +84,12 @@ required_ruby_version: !ruby/object:Gem::Requirement
       version: '0'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - ">"
+  - - ">="
     - !ruby/object:Gem::Version
-      version: 1.3.1
+      version: '0'
 requirements: []
-rubyforge_project: 
-rubygems_version: 2.7.6
-signing_key: 
+rubygems_version: 3.2.32
+signing_key:
 specification_version: 4
 summary: A suite of instrumentation metric primitives that can be exposed through
   a web services interface.