prometheus-client 2.0.0 → 2.1.0
- checksums.yaml +4 -4
- data/README.md +26 -5
- data/lib/prometheus/client/data_stores/direct_file_store.rb +47 -23
- data/lib/prometheus/client/histogram.rb +11 -3
- data/lib/prometheus/client/metric.rb +6 -5
- data/lib/prometheus/client/push.rb +2 -2
- data/lib/prometheus/client/summary.rb +2 -2
- data/lib/prometheus/client/version.rb +1 -1
- metadata +3 -4
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 26ff1f489c371d04487bbd7fe478ac70f43ed79f7c462d18b46d9475136f63f4
+  data.tar.gz: ddc03d2b15acfa3bf44ebc4661dd7653c2fd45a951d8ef2e2b015469a73d89c1
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 54dc0dedfb667d7bf49c909fdf043209b5508473876b6d5df019db192dd5d788397424972e5ba8c6eab6e4162940bff1d43313dc8ea12379d84018b910840044
+  data.tar.gz: 98d74089ed076100bc845721c318944bd620b19a9228405b0160d81c248ea6023eab79c60019eea661c92ace20e7f59cc05b237f6bbbe43fd27500427242dd5a
data/README.md
CHANGED
@@ -158,6 +158,11 @@ histogram.get(labels: { service: 'users' })
 # => { 0.005 => 3, 0.01 => 15, 0.025 => 18, ..., 2.5 => 42, 5 => 42, 10 => 42 }
 ```
 
+Histograms provide default buckets of `[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`.
+
+You can specify your own buckets, either explicitly, or using the `Histogram.linear_buckets`
+or `Histogram.exponential_buckets` methods to define regularly spaced buckets.
+
 ### Summary
 
 Summary, similar to histograms, is an accumulator for samples. It captures
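For instance, a histogram with custom buckets might be declared like the sketch below. This is illustrative, not taken from the gem's docs: the metric name is invented, and the `buckets:` keyword on the constructor is an assumption (only `Histogram.linear_buckets` itself comes from this release, see the `histogram.rb` hunk further down).

```ruby
require "prometheus/client"
require "prometheus/client/histogram"

# Sketch: assumes Histogram.new accepts a `buckets:` keyword alongside
# `docstring:` and `labels:`.
latency = Prometheus::Client::Histogram.new(
  :service_latency_seconds,
  docstring: "Latency of calls to downstream services",
  labels:    [:service],
  buckets:   Prometheus::Client::Histogram.linear_buckets(start: 0.1, width: 0.1, count: 10)
)

latency.observe(0.37, labels: { service: "users" })
```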
@@ -320,9 +325,9 @@ When instantiating metrics, there is an optional `store_settings` attribute. Thi
 to set up store-specific settings for each metric. For most stores, this is not used, but
 for multi-process stores, this is used to specify how to aggregate the values of each
 metric across multiple processes. For the most part, this is used for Gauges, to specify
-whether you want to report the `SUM`, `MAX` or `
-For almost all other cases, you'd leave the default (`SUM`). More on this
-*Aggregation* section below.
+whether you want to report the `SUM`, `MAX`, `MIN`, or `MOST_RECENT` value observed across
+all processes. For almost all other cases, you'd leave the default (`SUM`). More on this
+in the *Aggregation* section below.
 
 Custom stores may also accept extra parameters besides `:aggregation`. See the
 documentation of each store for more details.
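As a concrete illustration of `store_settings`, a gauge that should report the maximum value seen across all processes could be declared as follows. This is a minimal sketch: the metric name and docstring are invented, and it assumes the `DirectFileStore` (or another multi-process store that understands `:aggregation`) has been configured as the data store.

```ruby
require "prometheus/client"
require "prometheus/client/gauge"

free_space = Prometheus::Client::Gauge.new(
  :free_disk_space_bytes,
  docstring: "Free disk space, in bytes",
  store_settings: { aggregation: :max } # passed through to the data store
)

free_space.set(1_024_000_000)
```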
@@ -355,8 +360,11 @@ use case, you may need to control how this works. When using this store,
 each Metric allows you to specify an `:aggregation` setting, defining how
 to aggregate the multiple possible values we can get for each labelset. By default,
 Counters, Histograms and Summaries are `SUM`med, and Gauges report all their values (one
-for each process), tagged with a `pid` label. You can also select `SUM`, `MAX` or
-for your gauges, depending on your use case.
+for each process), tagged with a `pid` label. You can also select `SUM`, `MAX`, `MIN`, or
+`MOST_RECENT` for your gauges, depending on your use case.
+
+Please note that the `MOST_RECENT` aggregation only works for gauges, and it does not
+allow the use of `increment` / `decrement`; you can only use `set`.
 
 **Memory Usage**: When scraped by Prometheus, this store will read all these files, get all
 the values and aggregate them. We have noticed this can have a noticeable effect on memory
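A sketch of what that looks like in practice (metric name and values invented): with the `DirectFileStore` configured, a gauge declared with `:most_recent` can only be written with `set`.

```ruby
require "prometheus/client"
require "prometheus/client/gauge"
require "prometheus/client/data_stores/direct_file_store"

Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: "/tmp/prometheus")

last_run = Prometheus::Client::Gauge.new(
  :last_batch_run_timestamp_seconds,
  docstring: "Unix timestamp of the last completed batch run",
  store_settings: { aggregation: :most_recent }
)

last_run.set(Time.now.to_f)  # fine
last_run.increment           # raises InvalidStoreSettingsError (see the store changes below)
```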
@@ -368,6 +376,19 @@ you store your metric files (specified when initializing the `DirectFileStore`)
 when your app starts. Otherwise, each app run will continue exporting the metrics from the
 previous run.
 
+If you have this issue, one way to do this is to run code similar to this as part of your
+initialization:
+
+```ruby
+Dir["#{app_path}/tmp/prometheus/*.bin"].each do |file_path|
+  File.unlink(file_path)
+end
+```
+
+If you are running in pre-fork servers (such as Unicorn or Puma with multiple processes),
+make sure you do this **before** the server forks. Otherwise, each child process may delete
+files created by other processes on *this* run, instead of deleting old files.
+
 **Large numbers of files**: Because there is an individual file per metric and per process
 (which is done to optimize for observation performance), you may end up with a large number
 of files. We don't currently have a solution for this problem, but we're working on it.
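For a pre-fork server, one place to run that cleanup is code the master process evaluates before it forks workers. The sketch below assumes a Puma cluster and that `DirectFileStore` was initialized with `dir: "/tmp/prometheus"`; both are illustrative assumptions, not from this diff.

```ruby
# config/puma.rb (hypothetical) — evaluated once by the master process, before workers fork
metrics_dir = "/tmp/prometheus" # must match the dir: passed to DirectFileStore.new
Dir[File.join(metrics_dir, "*.bin")].each { |file_path| File.unlink(file_path) }
```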
data/lib/prometheus/client/data_stores/direct_file_store.rb
CHANGED
@@ -29,7 +29,7 @@ module Prometheus
 
 class DirectFileStore
   class InvalidStoreSettingsError < StandardError; end
-  AGGREGATION_MODES = [MAX = :max, MIN = :min, SUM = :sum, ALL = :all]
+  AGGREGATION_MODES = [MAX = :max, MIN = :min, SUM = :sum, ALL = :all, MOST_RECENT = :most_recent]
   DEFAULT_METRIC_SETTINGS = { aggregation: SUM }
   DEFAULT_GAUGE_SETTINGS = { aggregation: ALL }
 
@@ -45,7 +45,7 @@ module Prometheus
   end
 
   settings = default_settings.merge(metric_settings)
-  validate_metric_settings(settings)
+  validate_metric_settings(metric_type, settings)
 
   MetricStore.new(metric_name: metric_name,
                   store_settings: @store_settings,
@@ -54,7 +54,7 @@ module Prometheus
 
 private
 
-def validate_metric_settings(metric_settings)
+def validate_metric_settings(metric_type, metric_settings)
   unless metric_settings.has_key?(:aggregation) &&
       AGGREGATION_MODES.include?(metric_settings[:aggregation])
     raise InvalidStoreSettingsError,
@@ -65,6 +65,11 @@ module Prometheus
     raise InvalidStoreSettingsError,
           "Only :aggregation setting can be specified"
   end
+
+  if metric_settings[:aggregation] == MOST_RECENT && metric_type != :gauge
+    raise InvalidStoreSettingsError,
+          "Only :gauge metrics support :most_recent aggregation"
+  end
 end
 
 class MetricStore
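So, for example, asking the store for a counter with `:most_recent` is now rejected up front. A sketch (the metric name and the `dir:` path are invented; `for_metric`'s keyword arguments appear in the `metric.rb` hunk further down):

```ruby
require "prometheus/client/data_stores/direct_file_store"

store = Prometheus::Client::DataStores::DirectFileStore.new(dir: "/tmp/prometheus")

store.for_metric(:jobs_processed_total,
                 metric_type: :counter,
                 metric_settings: { aggregation: :most_recent })
# => raises DirectFileStore::InvalidStoreSettingsError,
#    "Only :gauge metrics support :most_recent aggregation"
```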
@@ -74,6 +79,7 @@ module Prometheus
   @metric_name = metric_name
   @store_settings = store_settings
   @values_aggregation_mode = metric_settings[:aggregation]
+  @store_opened_by_pid = nil
 
   @lock = Monitor.new
 end
@@ -100,6 +106,12 @@ module Prometheus
 end
 
 def increment(labels:, by: 1)
+  if @values_aggregation_mode == DirectFileStore::MOST_RECENT
+    raise InvalidStoreSettingsError,
+          "The :most_recent aggregation does not support the use of increment"\
+          "/decrement"
+  end
+
   key = store_key(labels)
   in_process_sync do
     value = internal_store.read_value(key)
@@ -121,7 +133,7 @@ module Prometheus
 stores_for_metric.each do |file_path|
   begin
     store = FileMappedDict.new(file_path, true)
-    store.all_values.each do |(labelset_qs, v)|
+    store.all_values.each do |(labelset_qs, v, ts)|
       # Labels come as a query string, and CGI::parse returns arrays for each key
       # "foo=bar&x=y" => { "foo" => ["bar"], "x" => ["y"] }
       # Turn the keys back into symbols, and remove the arrays
@@ -129,7 +141,7 @@ module Prometheus
       [k.to_sym, vs.first]
     end.to_h
 
-    stores_data[label_set] << v
+    stores_data[label_set] << [v, ts]
   end
 ensure
   store.close if store
@@ -181,30 +193,41 @@ module Prometheus
   end
 
   def aggregate_values(values)
-
-
-
-
-
-
-    elsif @values_aggregation_mode == ALL
-      values.first
+    # Each entry in the `values` array is a tuple of `value` and `timestamp`,
+    # so for all aggregations except `MOST_RECENT`, we need to only take the
+    # first value in each entry and ignore the second.
+    if @values_aggregation_mode == MOST_RECENT
+      latest_tuple = values.max { |a,b| a[1] <=> b[1] }
+      latest_tuple.first # return the value without the timestamp
     else
-
-
+      values = values.map(&:first) # Discard timestamps
+
+      if @values_aggregation_mode == SUM
+        values.inject { |sum, element| sum + element }
+      elsif @values_aggregation_mode == MAX
+        values.max
+      elsif @values_aggregation_mode == MIN
+        values.min
+      elsif @values_aggregation_mode == ALL
+        values.first
+      else
+        raise InvalidStoreSettingsError,
+              "Invalid Aggregation Mode: #{ @values_aggregation_mode }"
+      end
     end
   end
 end
 
 private_constant :MetricStore
 
-# A dict of doubles, backed by an file we access directly
+# A dict of doubles, backed by an file we access directly as a byte array.
 #
 # The file starts with a 4 byte int, indicating how much of it is used.
 # Then 4 bytes of padding.
 # There's then a number of entries, consisting of a 4 byte int which is the
 # size of the next field, a utf-8 encoded string key, padding to an 8 byte
-# alignment, and then a 8 byte float which is the value
+# alignment, and then a 8 byte float which is the value, and then a 8 byte
+# float which is the unix timestamp when the value was set.
 class FileMappedDict
   INITIAL_FILE_SIZE = 1024*1024
 
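In other words, each per-process entry now carries a timestamp alongside the value, and only `MOST_RECENT` actually uses it. A toy illustration of the aggregation logic (plain Ruby, not the gem's API):

```ruby
# [value, timestamp] pairs, one per process file
values = [[10.0, 1_593_000_000.1], [12.0, 1_593_000_003.7], [7.0, 1_592_999_998.9]]

values.max { |a, b| a[1] <=> b[1] }.first  # MOST_RECENT => 12.0
values.map(&:first).sum                    # SUM         => 29.0
values.map(&:first).max                    # MAX         => 12.0
```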
@@ -235,8 +258,8 @@ module Prometheus
 with_file_lock do
   @positions.map do |key, pos|
     @f.seek(pos)
-    value = @f.read(
-    [key, value]
+    value, timestamp = @f.read(16).unpack('dd')
+    [key, value, timestamp]
   end
 end
 end
@@ -256,9 +279,10 @@ module Prometheus
   init_value(key)
 end
 
+now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
 pos = @positions[key]
 @f.seek(pos)
-@f.write([value].pack('
+@f.write([value, now].pack('dd'))
 @f.flush
 end
 
@@ -299,7 +323,7 @@ module Prometheus
 def init_value(key)
   # Pad to be 8-byte aligned.
   padded = key + (' ' * (8 - (key.length + 4) % 8))
-  value = [padded.length, padded, 0.0].pack("lA#{padded.length}
+  value = [padded.length, padded, 0.0, 0.0].pack("lA#{padded.length}dd")
   while @used + value.length > @capacity
     @capacity *= 2
     resize_file(@capacity)
@@ -310,7 +334,7 @@ module Prometheus
 @f.seek(0)
 @f.write([@used].pack('l'))
 @f.flush
-@positions[key] = @used -
+@positions[key] = @used - 16
 end
 
 # Read position of all keys. No locking is performed.
@@ -320,7 +344,7 @@ module Prometheus
 padded_len = @f.read(4).unpack('l')[0]
 key = @f.read(padded_len).unpack("A#{padded_len}")[0].strip
 @positions[key] = @f.pos
-@f.seek(
+@f.seek(16, :CUR)
 end
 end
 end
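Putting the `FileMappedDict` changes together: each entry on disk now ends in two 8-byte floats (value, then timestamp) instead of one, which is why the reads and seeks above moved from 8 to 16 bytes. A standalone sketch of the layout described in the file-format comment earlier (plain `Array#pack`, not the gem's internals):

```ruby
# One entry: 4-byte length, key padded to 8-byte alignment, 8-byte value, 8-byte timestamp
key    = "foo=bar&x=y"
padded = key + (" " * (8 - (key.length + 4) % 8))
entry  = [padded.length, padded, 12.5, 1_593_000_000.0].pack("lA#{padded.length}dd")

len              = entry.unpack1("l")                   # => 12
value, timestamp = entry[(4 + len)..-1].unpack("dd")    # => [12.5, 1593000000.0]
```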
data/lib/prometheus/client/histogram.rb
CHANGED
@@ -33,6 +33,14 @@ module Prometheus
                      store_settings: store_settings)
 end
 
+def self.linear_buckets(start:, width:, count:)
+  count.times.map { |idx| start.to_f + idx * width }
+end
+
+def self.exponential_buckets(start:, factor: 2, count:)
+  count.times.map { |idx| start.to_f * factor ** idx }
+end
+
 def with_labels(labels)
   self.class.new(name,
                  docstring: docstring,
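These two helpers are pure functions, so their output is easy to check:

```ruby
require "prometheus/client/histogram"

Prometheus::Client::Histogram.linear_buckets(start: 1, width: 2, count: 4)
# => [1.0, 3.0, 5.0, 7.0]

Prometheus::Client::Histogram.exponential_buckets(start: 0.1, count: 5)
# => [0.1, 0.2, 0.4, 0.8, 1.6]   (factor defaults to 2)
```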
@@ -81,15 +89,15 @@ module Prometheus
 
 # Returns all label sets with their values expressed as hashes with their buckets
 def values
-
+  values = @store.all_values
 
-  result =
+  result = values.each_with_object({}) do |(label_set, v), acc|
     actual_label_set = label_set.reject{|l| l == :le }
     acc[actual_label_set] ||= @buckets.map{|b| [b.to_s, 0.0]}.to_h
     acc[actual_label_set][label_set[:le].to_s] = v
   end
 
-  result.each do |(
+  result.each do |(_label_set, v)|
     accumulate_buckets(v)
   end
 end
data/lib/prometheus/client/metric.rb
CHANGED
@@ -29,16 +29,17 @@ module Prometheus
   @docstring = docstring
   @preset_labels = stringify_values(preset_labels)
 
+  @all_labels_preset = false
+  if preset_labels.keys.length == labels.length
+    @validator.validate_labelset!(preset_labels)
+    @all_labels_preset = true
+  end
+
   @store = Prometheus::Client.config.data_store.for_metric(
     name,
     metric_type: type,
     metric_settings: store_settings
   )
-
-  if preset_labels.keys.length == labels.length
-    @validator.validate_labelset!(preset_labels)
-    @all_labels_preset = true
-  end
 end
 
 # Returns the value for the given label set
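This moves the `@all_labels_preset` bookkeeping ahead of the store setup, so a fully-preset labelset is validated before the metric's store is created. For context, preset labels are what `with_labels` produces; a sketch with invented names:

```ruby
require "prometheus/client"
require "prometheus/client/counter"

responses = Prometheus::Client::Counter.new(
  :http_responses_total,
  docstring: "HTTP responses, by status code",
  labels: [:code]
)

ok = responses.with_labels(code: "200") # all labels preset => labelset validated on creation
ok.increment
```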
data/lib/prometheus/client/push.rb
CHANGED
@@ -66,9 +66,9 @@ module Prometheus
 
 def build_path(job, instance)
   if instance
-    format(INSTANCE_PATH,
+    format(INSTANCE_PATH, CGI::escape(job), CGI::escape(instance))
   else
-    format(PATH,
+    format(PATH, CGI::escape(job))
   end
 end
 
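The practical effect is that job and instance names are now URL-escaped before being interpolated into the push path, so values containing spaces or other reserved characters no longer produce malformed URLs. For example:

```ruby
require "cgi"

CGI.escape("batch job")  # => "batch+job"
CGI.escape("host:9091")  # => "host%3A9091"
```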
data/lib/prometheus/client/summary.rb
CHANGED
@@ -36,9 +36,9 @@ module Prometheus
 
 # Returns all label sets with their values expressed as hashes with their sum/count
 def values
-
+  values = @store.all_values
 
-
+  values.each_with_object({}) do |(label_set, v), acc|
     actual_label_set = label_set.reject{|l| l == :quantile }
     acc[actual_label_set] ||= { "count" => 0.0, "sum" => 0.0 }
     acc[actual_label_set][label_set[:quantile]] = v
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: prometheus-client
 version: !ruby/object:Gem::Version
-  version: 2.0.0
+  version: 2.1.0
 platform: ruby
 authors:
 - Ben Kochie
@@ -10,7 +10,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2020-
+date: 2020-06-29 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: benchmark-ips
@@ -88,8 +88,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-
-rubygems_version: 2.7.6
+rubygems_version: 3.1.2
 signing_key:
 specification_version: 4
 summary: A suite of instrumentation metric primitives that can be exposed through a
|