ezmetrics 1.1.0 → 2.0.0

Files changed (5)
  1. checksums.yaml +4 -4
  2. data/README.md +243 -32
  3. data/lib/ezmetrics.rb +177 -45
  4. data/lib/ezmetrics/benchmark.rb +44 -28
  5. metadata +2 -2
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 5ba2e24bca76b2786c1264b779e72764e123094150855cae241d673505df95ad
-   data.tar.gz: 3c297b92a53c51b8b881fbbafb15929b9bac36eb53164c3faab37c2caa72db02
+   metadata.gz: 16333a0b9644e92d3f3648530e2217d6cc76eb2c427a63cf315d254f2e081cb6
+   data.tar.gz: 5dc280c1ec8594856addced752296731f9dfdc9ad7bae84625541d959f50f2d7
  SHA512:
-   metadata.gz: 4d1e8118d48cac15dca166c6c4f0d8e38344b28bfcc85343fa87ff4877d34bc7f12b9841241de563723261df5a48210cf1838c09eece3bc15d5f2dde6a58c8c2
-   data.tar.gz: 8e62b7b53f73a6343c8fd6f11e0e8a9f856314dd262c10611cd9bf061636c8801255de58818a1736069679525bed73c949e6f61778910de8a306da30c352e2a2
+   metadata.gz: 48c142669ed79439784f8ed4c12d658169a51b125c2dbfd9e1324271d388fc69df3fe6c960b73970898ab088ca000335a405877c44722f717c38b2169a7c0bc4
+   data.tar.gz: c2503cd1900b0723db05e72dae3ee3c7633e46dabde7083f218fae8e0c3354c78edf1ab549b7e308d356017b171dc24da540d93fa4fe1b067f948de8df02e521
data/README.md CHANGED
@@ -2,7 +2,7 @@

  [![Gem Version](https://badge.fury.io/rb/ezmetrics.svg)](https://badge.fury.io/rb/ezmetrics)

- A simple tool for capturing and displaying Rails metrics.
+ Simple, lightweight and fast metrics aggregation for Rails.

  ## Installation

@@ -10,6 +10,16 @@ A simple tool for capturing and displaying Rails metrics.
  gem 'ezmetrics'
  ```

+ ## Available metrics
+
+ | Type       | Aggregate functions               |
+ |:----------:|:---------------------------------:|
+ | `duration` | `avg`, `max`, `percentile`        |
+ | `views`    | `avg`, `max`, `percentile`        |
+ | `db`       | `avg`, `max`, `percentile`        |
+ | `queries`  | `avg`, `max`, `percentile`        |
+ | `requests` | `all`, `2xx`, `3xx`, `4xx`, `5xx` |
+
  ## Usage

  ### Getting started
@@ -27,32 +37,42 @@ and stores them for the timeframe you specified, 60 seconds by default.
  You can change the timeframe according to your needs and save the metrics by calling the `log` method:

  ```ruby
- # Store the metrics for 60 seconds (default behaviour)
- EZmetrics.new.log(duration: 100.5, views: 40.7, db: 59.8, queries: 4, status: 200)
+ # Store the metrics for 60 seconds (default behaviour)
+ EZmetrics.new.log(
+   duration: 100.5,
+   views: 40.7,
+   db: 59.8,
+   queries: 4,
+   status: 200
+ )
  ```

- or
-
  ```ruby
- # Store the metrics for 10 minutes
- EZmetrics.new(10.minutes).log(duration: 100.5, views: 40.7, db: 59.8, queries: 4, status: 200)
+ # Store the metrics for 10 minutes
+ EZmetrics.new(10.minutes).log(
+   duration: 100.5,
+   views: 40.7,
+   db: 59.8,
+   queries: 4,
+   status: 200
+ )
  ```

- For displaying metrics you need call `show` method:
+ ---
+
+ For displaying metrics you need to call the `show` method:

  ```ruby
- # Aggregate and show metrics for last 60 seconds (default behaviour)
- EZmetrics.new.show
+ # Aggregate and show metrics for last 60 seconds (default behaviour)
+ EZmetrics.new.show
  ```

- or
-
  ```ruby
- # Aggregate and show metrics for last 10 minutes
- EZmetrics.new(10.minutes).show
+ # Aggregate and show metrics for last 10 minutes
+ EZmetrics.new(10.minutes).show
  ```

- > Please note that you can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.
+ You can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.

  ### Capture metrics

@@ -121,11 +141,52 @@ This will return a hash with the following structure:
  }
  ```

- ### Output configuration
+ ---
+
+ If you prefer a single-level object - you can change the default output structure by calling `.flatten` before `.show`
+
+ ```ruby
+ EZmetrics.new(1.hour).flatten.show(db: :avg, duration: [:avg, :max])
+ ```
+
+ ```ruby
+ {
+   db_avg: 182,
+   duration_avg: 205,
+   duration_max: 5171
+ }
+ ```
+
+ ---
+
+ Same for [partitioned aggregation](#partitioning)
+
+ ```ruby
+ EZmetrics.new(1.hour).partition_by(:minute).flatten.show(db: :avg, duration: [:avg, :max])
+ ```
+
+ ```ruby
+ [
+   {
+     timestamp: 1575242880,
+     db_avg: 387,
+     duration_avg: 477,
+     duration_max: 8566
+   },
+   {
+     timestamp: 1575242940,
+     db_avg: 123,
+     duration_avg: 234,
+     duration_max: 3675
+   }
+ ]
+ ```
+
+ ### Aggregation

- The output can be easily configured by specifying aggregation options as in the following examples:
+ The aggregation can be easily configured by specifying aggregation options as in the following examples:

- 1. Single
+ **1. Single**

  ```ruby
  EZmetrics.new.show(duration: :max)
@@ -139,7 +200,9 @@ EZmetrics.new.show(duration: :max)
  }
  ```

- 2. Multiple
+ ---
+
+ **2. Multiple**

  ```ruby
  EZmetrics.new.show(queries: [:max, :avg])
@@ -154,7 +217,9 @@ EZmetrics.new.show(queries: [:max, :avg])
  }
  ```

- 3. Requests
+ ---
+
+ **3. Requests**

  ```ruby
  EZmetrics.new.show(requests: true)
@@ -174,7 +239,9 @@ EZmetrics.new.show(requests: true)
  }
  ```

- 4. Combined
+ ---
+
+ **4. Combined**

  ```ruby
  EZmetrics.new.show(views: :avg, db: [:avg, :max], requests: true)
@@ -201,15 +268,108 @@ EZmetrics.new.show(views: :avg, db: [:avg, :max], requests: true)
  }
  ```

- ### Performance
+ ---
+
+ **5. Percentile**
+
+ This feature is available since version `2.0.0`.
+
+ By default, percentile aggregation is turned off because it requires storing each value of all metrics.
+
+ To enable this feature - you need to set `store_each_value: true` when saving the metrics:
+
+ ```ruby
+ EZmetrics.new.log(
+   duration: 100.5,
+   views: 40.7,
+   db: 59.8,
+   queries: 4,
+   status: 200,
+   store_each_value: true
+ )
+ ```
+
+ The aggregation syntax has the following format `metrics_type: :percentile_{number}` where `number` is any integer in the 1..99 range.
+
+
+ ```ruby
+ EZmetrics.new.show(db: [:avg, :percentile_90, :percentile_95], duration: :percentile_99)
+ ```
+
+ ```ruby
+ {
+   db: {
+     avg: 155,
+     percentile_90: 205,
+     percentile_95: 215
+   },
+   duration: {
+     percentile_99: 236
+   }
+ }
+ ```

- The implementation is based on **Redis** commands such as:

- - [`get`](https://redis.io/commands/get)
- - [`mget`](https://redis.io/commands/mget)
- - [`setex`](https://redis.io/commands/setex)
+ ### Partitioning

- which are extremely fast.
+ If you want to visualize your metrics by using a **line chart**, you will need to use partitioning.
+
+ To aggregate metrics partitioned by a unit of time, you need to call `.partition_by({time_unit})` before calling `.show`
+
+ ```ruby
+ # Aggregate metrics for last hour, partition by minute
+ EZmetrics.new(1.hour).partition_by(:minute).show(duration: [:avg, :max], db: :avg)
+ ```
+
+ This will return an array of objects with the following structure:
+
+ ```ruby
+ [
+   {
+     timestamp: # UNIX timestamp
+     data: # a hash with aggregated metrics
+   }
+ ]
+ ```
+
+ like in the example below:
+
+ ```ruby
+ [
+   {
+     timestamp: 1575242880,
+     data: {
+       duration: {
+         avg: 477,
+         max: 8566
+       },
+       db: {
+         avg: 387
+       }
+     }
+   },
+   {
+     timestamp: 1575242940,
+     data: {
+       duration: {
+         avg: 234,
+         max: 3675
+       },
+       db: {
+         avg: 123
+       }
+     }
+   }
+ ]
+ ```
+
+ Available time units for partitioning: `second`, `minute`, `hour`, `day`. Default: `minute`.
+
+ ### Performance
+
+ The aggregation speed relies on the performance of **Redis** (data storage) and **Oj** (json serialization/parsing).
+
+ #### Simple aggregation

  You can check the **aggregation** time by running:

@@ -217,12 +377,63 @@ You can check the **aggregation** time by running:
  EZmetrics::Benchmark.new.measure_aggregation
  ```

- The result of running this benchmark on a _2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM_:
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0                |
+ | 1 hour   | 0.02               |
+ | 12 hours | 0.22               |
+ | 24 hours | 0.61               |
+ | 48 hours | 1.42               |
+
+ ---
+
+ To check the **partitioned aggregation** time you need to run:
+
+ ```ruby
+ EZmetrics::Benchmark.new.measure_aggregation(:minute)
+ ```

  | Interval | Duration (seconds) |
  | :------: | :----------------: |
  | 1 minute | 0.0                |
- | 1 hour   | 0.05               |
- | 12 hours | 0.66               |
- | 24 hours | 1.83               |
- | 48 hours | 4.06               |
+ | 1 hour   | 0.02               |
+ | 12 hours | 0.25               |
+ | 24 hours | 0.78               |
+ | 48 hours | 1.75               |
+
+ ---
+
+ #### Percentile aggregation
+
+ You can check the **percentile aggregation** time by running:
+
+ ```ruby
+ EZmetrics::Benchmark.new(true).measure_aggregation
+ ```
+
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0                |
+ | 1 hour   | 0.14               |
+ | 12 hours | 2.11               |
+ | 24 hours | 5.85               |
+ | 48 hours | 14.1               |
+
+ ---
+
+ To check the **partitioned aggregation** time for percentile you need to run:
+
+ ```ruby
+ EZmetrics::Benchmark.new(true).measure_aggregation(:minute)
+ ```
+
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0                |
+ | 1 hour   | 0.16               |
+ | 12 hours | 1.97               |
+ | 24 hours | 5.85               |
+ | 48 hours | 13.9               |
+
+
+ The benchmarks above were run on a _2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM_
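Taken together, the logging and aggregation calls documented in the README above compose as in the following sketch. It is illustrative only: the numbers are made up, and it assumes a Rails console (for `1.hour`) with a local Redis reachable by the gem.

```ruby
require "ezmetrics"

# Log one request's metrics for a 1-hour window; store_each_value enables
# the :percentile_* aggregations described above (values are illustrative).
EZmetrics.new(1.hour).log(
  duration: 120.3,
  views: 35.1,
  db: 64.8,
  queries: 5,
  status: 200,
  store_each_value: true
)

# Aggregate the same window per minute, with flat keys and a p95 duration.
EZmetrics.new(1.hour).partition_by(:minute).flatten.show(
  duration: [:avg, :percentile_95],
  db: :avg,
  requests: true
)
# => [{ timestamp: ..., duration_avg: ..., duration_percentile_95: ..., db_avg: ..., requests_all: ..., ... }, ...]
```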
data/lib/ezmetrics.rb CHANGED
@@ -5,25 +5,27 @@ require "oj"
  class EZmetrics
    METRICS = [:duration, :views, :db, :queries].freeze
    AGGREGATION_FUNCTIONS = [:max, :avg].freeze
+   PARTITION_UNITS = [:second, :minute, :hour, :day].freeze

    def initialize(interval_seconds=60)
      @interval_seconds = interval_seconds.to_i
-     @redis = Redis.new
-     @storage_key = "ez-metrics"
+     @redis = Redis.new(driver: :hiredis)
+     @schema = redis_schema
    end

-   def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200})
+   def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200, store_each_value: false})
      @safe_payload = {
-       duration: payload[:duration].to_f,
-       views: payload[:views].to_f,
-       db: payload[:db].to_f,
-       queries: payload[:queries].to_i,
-       status: payload[:status].to_i
+       duration: payload[:duration].to_f,
+       views: payload[:views].to_f,
+       db: payload[:db].to_f,
+       queries: payload[:queries].to_i,
+       status: payload[:status].to_i,
+       store_each_value: payload[:store_each_value].to_s == "true"
      }

      this_second = Time.now.to_i
      status_group = "#{payload[:status].to_s[0]}xx"
-     @this_second_metrics = redis.get("#{storage_key}:#{this_second}")
+     @this_second_metrics = redis.get(this_second)

      if this_second_metrics
        @this_second_metrics = Oj.load(this_second_metrics)
@@ -31,12 +33,14 @@ class EZmetrics
        METRICS.each do |metrics_type|
          update_sum(metrics_type)
          update_max(metrics_type)
+         store_value(metrics_type) if safe_payload[:store_each_value]
        end

-       this_second_metrics["statuses"]["all"] += 1
-       this_second_metrics["statuses"][status_group] += 1
+       this_second_metrics[schema["all"]] += 1
+       this_second_metrics[schema[status_group]] += 1
      else
        @this_second_metrics = {
+         "second" => this_second,
          "duration_sum" => safe_payload[:duration],
          "duration_max" => safe_payload[:duration],
          "views_sum" => safe_payload[:views],
@@ -45,50 +49,78 @@ class EZmetrics
          "db_max" => safe_payload[:db],
          "queries_sum" => safe_payload[:queries],
          "queries_max" => safe_payload[:queries],
-         "statuses" => { "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0, "all" => 1 }
+         "2xx" => 0,
+         "3xx" => 0,
+         "4xx" => 0,
+         "5xx" => 0,
+         "all" => 1
        }

-       this_second_metrics["statuses"][status_group] = 1
+       if safe_payload[:store_each_value]
+         this_second_metrics.merge!(
+           "duration_values" => [safe_payload[:duration]],
+           "views_values" => [safe_payload[:views]],
+           "db_values" => [safe_payload[:db]],
+           "queries_values" => [safe_payload[:queries]]
+         )
+       end
+
+       this_second_metrics[status_group] = 1
+
+       @this_second_metrics = this_second_metrics.values
      end

-     redis.setex("#{storage_key}:#{this_second}", interval_seconds, Oj.dump(this_second_metrics))
+     redis.setex(this_second, interval_seconds, Oj.dump(this_second_metrics))
      true
    rescue => error
      formatted_error(error)
    end

    def show(options=nil)
-     @options = options || default_options
-     interval_start = Time.now.to_i - interval_seconds
-     interval_keys = (interval_start..Time.now.to_i).to_a.map { |second| "#{storage_key}:#{second}" }
-     @interval_metrics = redis.mget(interval_keys).compact.map { |hash| Oj.load(hash) }
+     @options = options || default_options
+     partitioned_metrics ? aggregate_partitioned_data : aggregate_data
+   end

-     return {} unless interval_metrics.any?
+   def flatten
+     @flat = true
+     self
+   end

-     @requests = interval_metrics.map { |hash| hash["statuses"]["all"] }.compact.sum
+   def partition_by(time_unit=:minute)
+     time_unit = PARTITION_UNITS.include?(time_unit) ? time_unit : :minute
+     @partitioned_metrics = interval_metrics.group_by { |array| second_to_partition_unit(time_unit, array[schema["second"]]) }
+     self
+   end
+
+   private
+
+   attr_reader :redis, :interval_seconds, :interval_metrics, :requests, :flat, :schema,
+               :storage_key, :safe_payload, :this_second_metrics, :partitioned_metrics, :options
+
+   def aggregate_data
+     return {} unless interval_metrics.any?
+     @requests = interval_metrics.sum { |array| array[schema["all"]] }
      build_result
    rescue
      {}
    end

-   private
-
-   attr_reader :redis, :interval_seconds, :interval_metrics, :requests,
-               :storage_key, :safe_payload, :this_second_metrics, :options
+   def aggregate_partitioned_data
+     partitioned_metrics.map do |partition, metrics|
+       @interval_metrics = metrics
+       @requests = interval_metrics.sum { |array| array[schema["all"]] }
+       METRICS.each { |metrics_type| instance_variable_set("@sorted_#{metrics_type}_values", nil) }
+       flat ? { timestamp: partition, **build_result } : { timestamp: partition, data: build_result }
+     end
+   rescue
+     self
+   end

    def build_result
      result = {}

      if options[:requests]
-       result[:requests] = {
-         all: requests,
-         grouped: {
-           "2xx" => count("2xx"),
-           "3xx" => count("3xx"),
-           "4xx" => count("4xx"),
-           "5xx" => count("5xx")
-         }
-       }
+       append_requests_to_result(result, { all: requests, grouped: count_all_status_groups })
      end

      options.each do |metrics, aggregation_functions|
@@ -97,8 +129,8 @@ class EZmetrics
        next unless aggregation_functions.any?

        aggregation_functions.each do |aggregation_function|
-         result[metrics] ||= {}
-         result[metrics][aggregation_function] = aggregate(metrics, aggregation_function)
+         aggregated_metrics = aggregate(metrics, aggregation_function)
+         append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
        end
      end
      result
@@ -106,31 +138,131 @@ class EZmetrics
      result
    end

+   def append_requests_to_result(result, aggregated_requests)
+     return result[:requests] = aggregated_requests unless flat
+
+     result[:requests_all] = aggregated_requests[:all]
+     aggregated_requests[:grouped].each do |group, counter|
+       result[:"requests_#{group}"] = counter
+     end
+   end
+
+   def append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
+     return result[:"#{metrics}_#{aggregation_function}"] = aggregated_metrics if flat
+
+     result[metrics] ||= {}
+     result[metrics][aggregation_function] = aggregated_metrics
+   end
+
+   def second_to_partition_unit(time_unit, second)
+     return second if time_unit == :second
+     time = Time.at(second)
+     return (time - time.sec - time.min * 60 - time.hour * 3600).to_i if time_unit == :day
+     return (time - time.sec - time.min * 60).to_i if time_unit == :hour
+     (time - time.sec).to_i
+   end
+
+   def interval_metrics
+     @interval_metrics ||= begin
+       interval_start = Time.now.to_i - interval_seconds
+       interval_keys = (interval_start..Time.now.to_i).to_a
+       redis.mget(interval_keys).compact.map { |array| Oj.load(array) }
+     end
+   end
+
    def aggregate(metrics, aggregation_function)
-     return unless AGGREGATION_FUNCTIONS.include?(aggregation_function)
-     return avg("#{metrics}_sum".to_sym) if aggregation_function == :avg
-     return max("#{metrics}_max".to_sym) if aggregation_function == :max
+     return avg("#{metrics}_sum") if aggregation_function == :avg
+     return max("#{metrics}_max") if aggregation_function == :max
+
+     percentile = aggregation_function.match(/percentile_(?<value>\d+)/)
+
+     if percentile && percentile["value"]
+       sorted_values = send("sorted_#{metrics}_values")
+       percentile(sorted_values, percentile["value"].to_i)&.round
+     end
+   end
+
+   METRICS.each do |metrics|
+     define_method "sorted_#{metrics}_values" do
+       instance_variable_get("@sorted_#{metrics}_values") || instance_variable_set(
+         "@sorted_#{metrics}_values", interval_metrics.map { |array| array[schema["#{metrics}_values"]] }.flatten.compact
+       )
+     end
+   end
+
+   def redis_schema
+     [
+       "second",
+       "duration_sum",
+       "duration_max",
+       "views_sum",
+       "views_max",
+       "db_sum",
+       "db_max",
+       "queries_sum",
+       "queries_max",
+       "2xx",
+       "3xx",
+       "4xx",
+       "5xx",
+       "all",
+       "duration_values",
+       "views_values",
+       "db_values",
+       "queries_values"
+     ].each_with_index.inject({}){ |result, pair| result[pair[0]] = pair[1] ; result }
    end

    def update_sum(metrics)
-     this_second_metrics["#{metrics}_sum"] += safe_payload[metrics.to_sym]
+     this_second_metrics[schema["#{metrics}_sum"]] += safe_payload[metrics]
+   end
+
+   def store_value(metrics)
+     this_second_metrics[schema["#{metrics}_values"]] << safe_payload[metrics]
    end

    def update_max(metrics)
-     max_value = [safe_payload[metrics.to_sym], this_second_metrics["#{metrics}_max"]].max
-     this_second_metrics["#{metrics}_max"] = max_value
+     max_value = [safe_payload[metrics], this_second_metrics[schema["#{metrics}_max"]]].max
+     this_second_metrics[schema["#{metrics}_max"]] = max_value
    end

    def avg(metrics)
-     (interval_metrics.map { |h| h[metrics.to_s] }.sum.to_f / requests).round
+     (interval_metrics.sum { |array| array[schema[metrics]] }.to_f / requests).round
    end

    def max(metrics)
-     interval_metrics.map { |h| h[metrics.to_s] }.max.round
+     interval_metrics.max { |array| array[schema[metrics]] }[schema[metrics]].round
+   end
+
+   def percentile(array, pcnt)
+     sorted_array = array.sort
+
+     return nil if array.length == 0
+
+     rank = (pcnt.to_f / 100) * (array.length + 1)
+     whole = rank.truncate
+
+     # if has fractional part
+     if whole != rank
+       s0 = sorted_array[whole - 1]
+       s1 = sorted_array[whole]
+
+       f = (rank - rank.truncate).abs
+
+       return (f * (s1 - s0)) + s0
+     else
+       return sorted_array[whole - 1]
+     end
    end

-   def count(group)
-     interval_metrics.map { |h| h["statuses"][group.to_s] }.sum
+   def count_all_status_groups
+     interval_metrics.inject({ "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0 }) do |result, array|
+       result["2xx"] += array[schema["2xx"]]
+       result["3xx"] += array[schema["3xx"]]
+       result["4xx"] += array[schema["4xx"]]
+       result["5xx"] += array[schema["5xx"]]
+       result
+     end
    end

    def default_options
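The `percentile` helper added above uses rank interpolation: rank = p/100 × (n + 1), with linear interpolation between the two neighbouring sorted values when the rank is fractional. Below is a standalone sketch of that calculation with a small worked example (names chosen here for illustration; EZmetrics itself additionally rounds the result via `&.round`).

```ruby
# Standalone sketch of the rank interpolation performed by EZmetrics#percentile.
def percentile(values, pcnt)
  sorted = values.sort
  return nil if values.empty?

  rank  = (pcnt.to_f / 100) * (values.length + 1)
  whole = rank.truncate

  # Rank lands exactly on an element: return it.
  return sorted[whole - 1] if whole == rank

  # Otherwise interpolate between the two neighbouring values.
  lower, upper = sorted[whole - 1], sorted[whole]
  lower + (rank - whole) * (upper - lower)
end

percentile([100, 200, 300, 400], 50) # => 250.0 (halfway between 200 and 300)
percentile([100, 200, 300, 400], 40) # => 200   (rank 2.0 lands exactly on the second value)
```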
data/lib/ezmetrics/benchmark.rb CHANGED
@@ -2,12 +2,13 @@ require "benchmark"

  class EZmetrics::Benchmark

-   def initialize
-     @start = Time.now.to_i
-     @redis = Redis.new
-     @durations = []
-     @iterations = 3
-     @intervals = {
+   def initialize(store_each_value=false)
+     @store_each_value = store_each_value
+     @start = Time.now.to_i
+     @redis = Redis.new(driver: :hiredis)
+     @durations = []
+     @iterations = 1
+     @intervals = {
        "1.minute" => 60,
        "1.hour " => 3600,
        "12.hours" => 43200,
@@ -16,11 +17,11 @@ class EZmetrics::Benchmark
      }
    end

-   def measure_aggregation
+   def measure_aggregation(partition_by=nil)
      write_metrics
      print_header
      intervals.each do |interval, seconds|
-       result = measure_aggregation_time(interval, seconds)
+       result = measure_aggregation_time(interval, seconds, partition_by)
        print_row(result)
      end
      cleanup_metrics
@@ -29,43 +30,58 @@ class EZmetrics::Benchmark

    private

-   attr_reader :start, :redis, :durations, :intervals, :iterations
+   attr_reader :start, :redis, :durations, :intervals, :iterations, :store_each_value

    def write_metrics
      seconds = intervals.values.max
      seconds.times do |i|
        second = start - i
        payload = {
-         "duration_sum" => rand(10000),
-         "duration_max" => rand(10000),
-         "views_sum" => rand(1000),
-         "views_max" => rand(1000),
-         "db_sum" => rand(8000),
-         "db_max" => rand(8000),
-         "queries_sum" => rand(100),
-         "queries_max" => rand(100),
-         "statuses" => {
-           "2xx" => rand(10),
-           "3xx" => rand(10),
-           "4xx" => rand(10),
-           "5xx" => rand(10),
-           "all" => rand(40)
-         }
+         "second" => second,
+         "duration_sum" => rand(10000),
+         "duration_max" => rand(10000),
+         "views_sum" => rand(1000),
+         "views_max" => rand(1000),
+         "db_sum" => rand(8000),
+         "db_max" => rand(8000),
+         "queries_sum" => rand(100),
+         "queries_max" => rand(100),
+         "2xx" => rand(1..10),
+         "3xx" => rand(1..10),
+         "4xx" => rand(1..10),
+         "5xx" => rand(1..10),
+         "all" => rand(1..40)
        }
-       redis.setex("ez-metrics:#{second}", seconds, Oj.dump(payload))
+
+       if store_each_value
+         payload.merge!(
+           "duration_values" => Array.new(100) { rand(10..60000) },
+           "views_values" => Array.new(100) { rand(10..60000) },
+           "db_values" => Array.new(100) { rand(10..60000) },
+           "queries_values" => Array.new(10) { rand(1..60) }
+         )
+       end
+       redis.setex(second, seconds, Oj.dump(payload.values))
      end
      nil
    end

    def cleanup_metrics
      interval_start = Time.now.to_i - intervals.values.max - 100
-     interval_keys = (interval_start..Time.now.to_i).to_a.map { |second| "ez-metrics:#{second}" }
+     interval_keys = (interval_start..Time.now.to_i).to_a
      redis.del(interval_keys)
    end

-   def measure_aggregation_time(interval, seconds)
+   def measure_aggregation_time(interval, seconds, partition_by)
      iterations.times do
-       durations << ::Benchmark.measure { EZmetrics.new(seconds).show }.real
+       durations << ::Benchmark.measure do
+         ezmetrics = EZmetrics.new(seconds)
+         if store_each_value
+           partition_by ? ezmetrics.partition_by(partition_by).show(db: :percentile_90) : ezmetrics.show(db: :percentile_90)
+         else
+           partition_by ? ezmetrics.partition_by(partition_by).show : ezmetrics.show
+         end
+       end.real
      end

      return {
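For context, both the gem and the benchmark above now store one positional array per second under its UNIX-timestamp key, with `redis_schema` supplying the name-to-index mapping. The sketch below shows how such a row can be read back; it assumes a local Redis and the field order shown in `redis_schema`.

```ruby
require "oj"
require "redis"

# Name => index map matching the order defined in EZmetrics#redis_schema.
SCHEMA = %w[
  second duration_sum duration_max views_sum views_max db_sum db_max
  queries_sum queries_max 2xx 3xx 4xx 5xx all
  duration_values views_values db_values queries_values
].each_with_index.to_h

redis = Redis.new
raw   = redis.get(Time.now.to_i)   # one second's row, e.g. "[1575242880,100.5,...]"
if raw
  row = Oj.load(raw)
  row[SCHEMA["duration_sum"]]      # field access goes through the schema index
  row[SCHEMA["all"]]               # number of requests logged in that second
end
```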
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ezmetrics
  version: !ruby/object:Gem::Version
-   version: 1.1.0
+   version: 2.0.0
  platform: ruby
  authors:
  - Nicolae Rotaru
@@ -66,7 +66,7 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: '3.5'
- description: A simple tool for capturing and displaying Rails metrics.
+ description: Simple, lightweight and fast metrics aggregation for Rails.
  email: nyku.rn@gmail.com
  executables: []
  extensions: []