prometheus-client 1.0.0 → 4.1.0
- checksums.yaml +4 -4
- data/LICENSE +201 -0
- data/README.md +147 -63
- data/lib/prometheus/client/data_stores/README.md +1 -1
- data/lib/prometheus/client/data_stores/direct_file_store.rb +63 -25
- data/lib/prometheus/client/histogram.rb +41 -11
- data/lib/prometheus/client/label_set_validator.rb +10 -4
- data/lib/prometheus/client/metric.rb +30 -10
- data/lib/prometheus/client/push.rb +126 -12
- data/lib/prometheus/client/registry.rb +4 -4
- data/lib/prometheus/client/summary.rb +17 -3
- data/lib/prometheus/client/version.rb +1 -1
- data/lib/prometheus/middleware/collector.rb +12 -4
- data/lib/prometheus/middleware/exporter.rb +8 -3
- metadata +10 -10
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 47fef80d530cba8aed214132cb1eb512c7094f9e0daa89db9d5db3f2f0393980
+  data.tar.gz: 98cfad337c9ff3907ba519fbd355e81697108af3251ec574fbb2c14bdd35af2c
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 844a860cf2090a1833a0f3a24d0c7629a761391f69aac8ebea5550463a2219947144438caae36662038cdf242747eea0afe42f3fbe2e86e2e4177f75ee1e50bf
+  data.tar.gz: 3429071dd55ee6f0493790b8860639e653e0e0bb4ddfdc1dffc30c051a66df598272ea2056e9ef87e0c67eb964606cfeba29c9faf05408f2e40e48ab09e31fbc
data/LICENSE
ADDED
@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
data/README.md
CHANGED
@@ -5,11 +5,17 @@ through a HTTP interface. Intended to be used together with a
 [Prometheus server][1].
 
 [![Gem Version][4]](http://badge.fury.io/rb/prometheus-client)
-[![Build Status][3]](
-[![Coverage Status][7]](https://coveralls.io/r/prometheus/client_ruby)
+[![Build Status][3]](https://circleci.com/gh/prometheus/client_ruby/tree/main.svg?style=svg)
 
 ## Usage
 
+### Installation
+
+For a global installation run `gem install prometheus-client`.
+
+If you're using [Bundler](https://bundler.io/) add `gem "prometheus-client"` to your `Gemfile`.
+Make sure to run `bundle install` afterwards.
+
 ### Overview
 
 ```ruby
@@ -50,11 +56,11 @@ use Rack::Deflater
 use Prometheus::Middleware::Collector
 use Prometheus::Middleware::Exporter
 
-run ->(_) { [200, {'
+run ->(_) { [200, {'content-type' => 'text/html'}, ['OK']] }
 ```
 
 Start the server and have a look at the metrics endpoint:
-[http://localhost:
+[http://localhost:5123/metrics](http://localhost:5123/metrics).
 
 For further instructions and other scripts to get started, have a look at the
 integrated [example application](examples/rack/README.md).
@@ -64,7 +70,7 @@ integrated [example application](examples/rack/README.md).
 The Ruby client can also be used to push its collected metrics to a
 [Pushgateway][8]. This comes in handy with batch jobs or in other scenarios
 where it's not possible or feasible to let a Prometheus server scrape a Ruby
-process. TLS and basic
+process. TLS and HTTP basic authentication are supported.
 
 ```ruby
 require 'prometheus/client'
@@ -74,18 +80,59 @@ registry = Prometheus::Client.registry
 # ... register some metrics, set/increment/observe/etc. their values
 
 # push the registry state to the default gateway
-Prometheus::Client::Push.new('my-batch-job').add(registry)
+Prometheus::Client::Push.new(job: 'my-batch-job').add(registry)
+
+# optional: specify a grouping key that uniquely identifies a job instance, and gateway.
+#
+# Note: the labels you use in the grouping key must not conflict with labels set on the
+# metrics being pushed. If they do, an error will be raised.
+Prometheus::Client::Push.new(
+  job: 'my-batch-job',
+  gateway: 'https://example.domain:1234',
+  grouping_key: { instance: 'some-instance', extra_key: 'foobar' }
+).add(registry)
+
+# If you want to replace any previously pushed metrics for a given grouping key,
+# use the #replace method.
+#
+# Unlike #add, this will completely replace the metrics under the specified grouping key
+# (i.e. anything currently present in the pushgateway for the specified grouping key, but
+# not present in the registry for that grouping key will be removed).
+#
+# See https://github.com/prometheus/pushgateway#put-method for a full explanation.
+Prometheus::Client::Push.new(job: 'my-batch-job').replace(registry)
+
+# If you want to delete all previously pushed metrics for a given grouping key,
+# use the #delete method.
+Prometheus::Client::Push.new(job: 'my-batch-job').delete
+```
 
-
-Prometheus::Client::Push.new('my-batch-job', 'foobar', 'https://example.domain:1234').add(registry)
+#### Basic authentication
 
-
-
-
+By design, `Prometheus::Client::Push` doesn't read credentials for HTTP basic
+authentication when they are passed in via the gateway URL using the
+`http://user:password@example.com:9091` syntax, and will in fact raise an error if they're
+supplied that way.
 
-
-
-
+The reason for this is that when using that syntax, the username and password
+have to follow the usual rules for URL encoding of characters [per RFC
+3986](https://datatracker.ietf.org/doc/html/rfc3986#section-2.1).
+
+Rather than place the burden of correctly performing that encoding on users of this gem,
+we decided to have a separate method for supplying HTTP basic authentication credentials,
+with no requirement to URL encode the characters in them.
+
+Instead of passing credentials like this:
+
+```ruby
+push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://user:password@localhost:9091")
+```
+
+please pass them like this:
+
+```ruby
+push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://localhost:9091")
+push.basic_auth("user", "password")
 ```
 
 ## Metrics
@@ -151,6 +198,11 @@ histogram.get(labels: { service: 'users' })
 # => { 0.005 => 3, 0.01 => 15, 0.025 => 18, ..., 2.5 => 42, 5 => 42, 10 => 42 }
 ```
 
+Histograms provide default buckets of `[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`
+
+You can specify your own buckets, either explicitly, or using the `Histogram.linear_buckets`
+or `Histogram.exponential_buckets` methods to define regularly spaced buckets.
+
 ### Summary
 
 Summary, similar to histograms, is an accumulator for samples. It captures
@@ -175,17 +227,17 @@ summary_value['count'] # => 100
 All metrics can have labels, allowing grouping of related time series.
 
 Labels are an extremely powerful feature, but one that must be used with care.
-Refer to the best practices on [naming](https://prometheus.io/docs/practices/naming/) and 
+Refer to the best practices on [naming](https://prometheus.io/docs/practices/naming/) and
 [labels](https://prometheus.io/docs/practices/instrumentation/#use-labels).
 
-Most importantly, avoid labels that can have a large number of possible values (high 
+Most importantly, avoid labels that can have a large number of possible values (high
 cardinality). For example, an HTTP Status Code is a good label. A User ID is **not**.
 
 Labels are specified optionally when updating metrics, as a hash of `label_name => value`.
-Refer to [the Prometheus documentation](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels) 
+Refer to [the Prometheus documentation](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
 as to what's a valid `label_name`.
 
-In order for a metric to accept labels, their names must be specified when first initializing 
+In order for a metric to accept labels, their names must be specified when first initializing
 the metric. Then, when the metric is updated, all the specified labels must be present.
 
 Example:
@@ -203,8 +255,8 @@ You can also "pre-set" some of these label values, if they'll always be the same
 need to specify them every time:
 
 ```ruby
-https_requests_total = Counter.new(:http_requests_total,
-                                   docstring: '...', 
+https_requests_total = Counter.new(:http_requests_total,
+                                   docstring: '...',
                                    labels: [:service, :status_code],
                                    preset_labels: { service: "my_service" })
 
@@ -219,7 +271,7 @@ with a subset (or full set) of labels set, so that you can increment / observe t
 without having to specify the labels for every call.
 
 Moreover, if all the labels the metric can take have been pre-set, validation of the labels
-is done on the call to `with_labels`, and then skipped for each observation, which can 
+is done on the call to `with_labels`, and then skipped for each observation, which can
 lead to performance improvements. If you are incrementing a counter in a fast loop, you
 definitely want to be doing this.
 
@@ -230,8 +282,8 @@ Examples:
 
 ```ruby
 # in the metric definition:
-records_processed_total = registry.counter.new(:records_processed_total,
-                                               docstring: '...', 
+records_processed_total = registry.counter.new(:records_processed_total,
+                                               docstring: '...',
                                                labels: [:service, :component],
                                                preset_labels: { service: "my_service" })
 
@@ -244,16 +296,22 @@ class MyComponent
   def metric
     @metric ||= records_processed_total.with_labels(component: "my_component")
   end
-
+
   def process
     records.each do |record|
       # process the record
-      metric.increment 
+      metric.increment
     end
   end
 end
 ```
 
+### `init_label_set`
+
+The time series of a metric are not initialized until something happens. For counters, for example, this means that the time series do not exist until the counter is incremented for the first time.
+
+To get around this problem the client provides the `init_label_set` method that can be used to initialise the time series of a metric for a given label set.
+
 ### Reserved labels
 
 The following labels are reserved by the client library, and attempting to use them in a
@@ -266,7 +324,7 @@ metric definition will result in a
 
 ## Data Stores
 
-The data for all the metrics (the internal counters associated with each labelset) 
+The data for all the metrics (the internal counters associated with each labelset)
 is stored in a global Data Store object, rather than in the metric objects themselves.
 (This "storage" is ephemeral, generally in-memory, it's not "long-term storage")
 
@@ -276,12 +334,12 @@ example), require a shared store between all the processes, to be able to report
 numbers. At the same time, other applications may not have this requirement but be very
 sensitive to performance, and would prefer instead a simpler, faster store.
 
-By having a standardized and simple interface that metrics use to access this store, 
+By having a standardized and simple interface that metrics use to access this store,
 we abstract away the details of storing the data from the specific needs of each metric.
-This allows us to then simply swap around the stores based on the needs of different 
-applications, with no changes to the rest of the client. 
+This allows us to then simply swap around the stores based on the needs of different
+applications, with no changes to the rest of the client.
 
-The client provides 3 built-in stores, but if neither of these is ideal for your 
+The client provides 3 built-in stores, but if neither of these is ideal for your
 requirements, you can easily make your own store and use that instead. More on this below.
 
 ### Configuring which store to use.
@@ -299,7 +357,7 @@ NOTE: You **must** make sure to set the `data_store` before initializing any met
 If using Rails, you probably want to set up your Data Store on `config/application.rb`,
 or `config/environments/*`, both of which run before `config/initializers/*`
 
-Also note that `config.data_store` is set to an *instance* of a `DataStore`, not to the 
+Also note that `config.data_store` is set to an *instance* of a `DataStore`, not to the
 class. This is so that the stores can receive parameters. Most of the built-in stores
 don't require any, but `DirectFileStore` does, for example.
 
@@ -307,9 +365,9 @@ When instantiating metrics, there is an optional `store_settings` attribute. Thi
 to set up store-specific settings for each metric. For most stores, this is not used, but
 for multi-process stores, this is used to specify how to aggregate the values of each
 metric across multiple processes. For the most part, this is used for Gauges, to specify
-whether you want to report the `SUM`, `MAX` or `
-For almost all other cases, you'd leave the default (`SUM`). More on this
-*Aggregation* section below.
+whether you want to report the `SUM`, `MAX`, `MIN`, or `MOST_RECENT` value observed across
+all processes. For almost all other cases, you'd leave the default (`SUM`). More on this
+on the *Aggregation* section below.
 
 Custom stores may also accept extra parameters besides `:aggregation`. See the
 documentation of each store for more details.
@@ -318,52 +376,79 @@ documentation of each store for more details.
|
|
318
376
|
|
319
377
|
There are 3 built-in stores, with different trade-offs:
|
320
378
|
|
321
|
-
- **Synchronized**: Default store. Thread safe, but not suitable for multi-process
|
379
|
+
- **Synchronized**: Default store. Thread safe, but not suitable for multi-process
|
322
380
|
scenarios (e.g. pre-fork servers, like Unicorn). Stores data in Hashes, with all accesses
|
323
|
-
protected by Mutexes.
|
381
|
+
protected by Mutexes.
|
324
382
|
- **SingleThreaded**: Fastest store, but only suitable for single-threaded scenarios.
|
325
|
-
This store does not make any effort to synchronize access to its internal hashes, so
|
383
|
+
This store does not make any effort to synchronize access to its internal hashes, so
|
326
384
|
it's absolutely not thread safe.
|
327
385
|
- **DirectFileStore**: Stores data in binary files, one file per process and per metric.
|
328
|
-
This is generally the recommended store to use with pre-fork servers and other
|
386
|
+
This is generally the recommended store to use with pre-fork servers and other
|
329
387
|
"multi-process" scenarios. There are some important caveats to using this store, so
|
330
388
|
please read on the section below.
|
331
389
|
|
332
390
|
### `DirectFileStore` caveats and things to keep in mind
|
333
391
|
|
334
392
|
Each metric gets a file for each process, and manages its contents by storing keys and
|
335
|
-
binary floats next to them, and updating the offsets of those Floats directly. When
|
336
|
-
exporting metrics, it will find all the files that apply to each metric, read them,
|
393
|
+
binary floats next to them, and updating the offsets of those Floats directly. When
|
394
|
+
exporting metrics, it will find all the files that apply to each metric, read them,
|
337
395
|
and aggregate them.
|
338
396
|
|
339
397
|
**Aggregation of metrics**: Since there will be several files per metrics (one per process),
|
340
398
|
these need to be aggregated to present a coherent view to Prometheus. Depending on your
|
341
|
-
use case, you may need to control how this works. When using this store,
|
399
|
+
use case, you may need to control how this works. When using this store,
|
342
400
|
each Metric allows you to specify an `:aggregation` setting, defining how
|
343
401
|
to aggregate the multiple possible values we can get for each labelset. By default,
|
344
402
|
Counters, Histograms and Summaries are `SUM`med, and Gauges report all their values (one
|
345
|
-
for each process), tagged with a `pid` label. You can also select `SUM`, `MAX` or
|
346
|
-
for your gauges, depending on your use case.
|
403
|
+
for each process), tagged with a `pid` label. You can also select `SUM`, `MAX`, `MIN`, or
|
404
|
+
`MOST_RECENT` for your gauges, depending on your use case.
|
405
|
+
|
406
|
+
Please note that the `MOST_RECENT` aggregation only works for gauges, and it does not
|
407
|
+
allow the use of `increment` / `decrement`, you can only use `set`.
|
347
408
|
|
348
409
|
**Memory Usage**: When scraped by Prometheus, this store will read all these files, get all
|
349
410
|
the values and aggregate them. We have notice this can have a noticeable effect on memory
|
350
411
|
usage for your app. We recommend you test this in a realistic usage scenario to make sure
|
351
412
|
you won't hit any memory limits your app may have.
|
352
413
|
|
353
|
-
**Resetting your metrics on each run**: You should also make sure that the directory where
|
354
|
-
you store your metric files (specified when initializing the `DirectFileStore`) is emptied
|
355
|
-
when your app starts. Otherwise, each app run will continue exporting the metrics from the
|
356
|
-
previous run.
|
414
|
+
**Resetting your metrics on each run**: You should also make sure that the directory where
|
415
|
+
you store your metric files (specified when initializing the `DirectFileStore`) is emptied
|
416
|
+
when your app starts. Otherwise, each app run will continue exporting the metrics from the
|
417
|
+
previous run.
|
418
|
+
|
419
|
+
If you have this issue, one way to do this is to run code similar to this as part of you
|
420
|
+
initialization:
|
421
|
+
|
422
|
+
```ruby
|
423
|
+
Dir["#{app_path}/tmp/prometheus/*.bin"].each do |file_path|
|
424
|
+
File.unlink(file_path)
|
425
|
+
end
|
426
|
+
```
|
427
|
+
|
428
|
+
If you are running in pre-fork servers (such as Unicorn or Puma with multiple processes),
|
429
|
+
make sure you do this **before** the server forks. Otherwise, each child process may delete
|
430
|
+
files created by other processes on *this* run, instead of deleting old files.

**Declare metrics before fork**: As well as deleting files before your process forks, you
should make sure to declare your metrics before forking too. Because the metric registry
is held in memory, any metrics declared after forking will only be present in child
processes where the code declaring them ran, and as a result may not be consistently
exported when scraped (i.e. they will only appear when a child process that declared them
is scraped).

If you're absolutely sure that every child process will run the metric declaration code,
then you won't run into this issue, but the simplest approach is to declare the metrics
before forking.
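To see why declaration order matters, here is a toy illustration (plain Ruby, not the gem's API; the Hash stands in for the in-memory registry) showing that only state created before `fork` exists in every child:

```ruby
# Toy stand-in for a metric registry: a Hash held in process memory.
registry = { requests_total: 0 } # declared BEFORE forking: copied into every child

reader, writer = IO.pipe

pid = Process.fork do
  reader.close
  # Declared AFTER forking: exists only in this child's copy of memory.
  registry[:child_only_metric] = 0
  writer.puts(registry.keys.sort.join(","))
  writer.close
  exit!(0) # skip at_exit handlers in the child
end

writer.close
child_keys = reader.read.strip
Process.wait(pid)

puts child_keys              # prints "child_only_metric,requests_total"
puts registry.keys.join(",") # prints "requests_total": the parent never saw the late metric
```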

**Large numbers of files**: Because there is an individual file per metric and per process
(which is done to optimize for observation performance), you may end up with a large number
of files. We don't currently have a solution for this problem, but we're working on it.

**Performance**: Even though this store saves data on disk, it's still much faster than
you would probably expect, because the files are never actually `fsync`ed, so the store
never blocks while waiting for disk. The kernel's page cache is incredibly efficient in
this regard. If in doubt, check the benchmark scripts described in the documentation for
creating your own stores and run them in your particular runtime environment to make sure
this provides adequate performance.
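The `fsync` claim is easy to check empirically; here is a rough sketch (timings vary by machine and filesystem, and the 1,000/100 iteration counts and 32-byte payload are arbitrary choices for the demonstration):

```ruby
require "benchmark"
require "tempfile"

file = Tempfile.new("store_demo")
payload = "0" * 32 # roughly the size of one numeric record

buffered = Benchmark.realtime do
  1_000.times { file.write(payload) }
  file.flush # hand the data to the OS page cache; still no fsync
end

synced = Benchmark.realtime do
  100.times { file.write(payload); file.fsync } # force every write to disk
end

# On most machines the fsync'd loop is orders of magnitude slower per write,
# which is why skipping fsync keeps observations cheap.
puts format("buffered: %.6fs for 1000 writes", buffered)
puts format("fsync'd:  %.6fs for 100 writes", synced)
```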

If none of these stores is suitable for your requirements, you can easily make your own.

The interface and requirements of Stores are specified in detail in the `README.md`
in the `client/data_stores` directory. This thoroughly documents how to make your own
store.

There are also links there to non-built-in stores created by others that may be useful,

If you are in a multi-process environment (such as pre-fork servers like Unicorn), each
process will probably keep their own counters, which need to be aggregated when receiving
a Prometheus scrape, to report coherent total numbers.

For Counters, Histograms and quantile-less Summaries, this is simply a matter of
summing the values of each process.

For Gauges, however, this may not be the right thing to do, depending on what they're
measuring. You might want to take the maximum or minimum value observed in any process,
rather than the sum of all of them. By default, we export each process's individual
value, with a `pid` label identifying each one.

If these defaults don't work for your use case, you should use the `store_settings`
parameter when registering the metric, to specify an `:aggregation` setting:

```ruby
free_disk_space = registry.gauge(:free_disk_space_bytes,
                                 # ...
                                 store_settings: { aggregation: :max })
```

NOTE: This will only work if the store you're using supports the `:aggregation` setting.
Of the built-in stores, only `DirectFileStore` does.

Also note that the `:aggregation` setting works for all metric types, not just for gauges.
It would be unusual to use it for anything other than gauges, but if your use case
requires it, the store will respect your aggregation wishes.
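As a plain-Ruby illustration of what these aggregation modes compute (the per-process values are invented; this is not the store's internal code), given one exported value per process:

```ruby
# One observed value per process, keyed by pid (numbers are made up).
values_by_pid = { 101 => 512, 102 => 768, 103 => 640 }

sum = values_by_pid.values.sum # the right choice for counters and histograms
max = values_by_pid.values.max # often the right choice for gauges
min = values_by_pid.values.min

puts sum # prints 1920
puts max # prints 768
puts min # prints 512
```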
## Tests

[1]: https://github.com/prometheus/prometheus
[2]: http://rack.github.io/
[3]: https://circleci.com/gh/prometheus/client_ruby/tree/main.svg?style=svg
[4]: https://badge.fury.io/rb/prometheus-client.svg
[8]: https://github.com/prometheus/pushgateway
[9]: lib/prometheus/middleware/exporter.rb
[10]: lib/prometheus/middleware/collector.rb

data/lib/prometheus/client/data_stores/README.md

has created a good amount of research, benchmarks, and experimental stores, which
weren't useful to include in this repo, but may be a useful resource or starting point
if you are building your own store.

Check out the [GoCardless Data Stores Experiments](https://github.com/gocardless/prometheus-client-ruby-data-stores-experiments)
repository for these.

## Sample, imaginary multi-process Data Store