ezmetrics 1.1.1 → 2.0.1

Files changed (5)
  1. checksums.yaml +4 -4
  2. data/README.md +229 -16
  3. data/lib/ezmetrics.rb +172 -35
  4. data/lib/ezmetrics/benchmark.rb +43 -27
  5. metadata +2 -2
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 518d9350ed026801311bdc695857fdd057856f6b7fe40715bf8072c990e8e5b8
- data.tar.gz: 7d36fbd3e63994789ce8ac04212fb44c30041fd33f7173fbfdf1a6257beea99a
+ metadata.gz: 6dca2c75bfaae789d33af8f319e5c6ff86772f480d4d2f6549f044f4bfe12586
+ data.tar.gz: 259881b87a073eaceb729b402166d185b3d66ee48af1727e27cc78a8ae5c226c
  SHA512:
- metadata.gz: 42e990916d474ed52f08330bd7210b4c6f464a0199354cfc1fe3710817b9d57790d445d0d1f88c8fccd6daeb627638a80dd8b578c685544f6a4f9c7765522a5f
- data.tar.gz: 5b49eec021acaef39af2c9401681fa85f02d368f36231a60a64faf595e5b0c1432ce07a67fb86ce9c8b3041835fd4480da86ea5d91f7ca0939ef5b40592e6a49
+ metadata.gz: 119325aef351ecbb8cf54a80941474904b727207ed658298f04e225bb54c99f2e47509a42f076fd1ddbe034f86d48db2b9692dbd5498c1e594903253aa4f65d6
+ data.tar.gz: fe97199f23bfac6eddf90755bfbcf74ff524b6735cf340b1dcbb625c9f66dfa4bd021733dccb7ba0d7c63aca6cb6ae6dd5523ba478fbd7ca22f1f91b33c3d661
data/README.md CHANGED
@@ -2,7 +2,7 @@
 
  [![Gem Version](https://badge.fury.io/rb/ezmetrics.svg)](https://badge.fury.io/rb/ezmetrics)
 
- A simple tool for capturing and displaying Rails metrics.
+ Simple, lightweight and fast metrics aggregation for Rails.
 
  ## Installation
 
@@ -10,6 +10,16 @@ A simple tool for capturing and displaying Rails metrics.
  gem 'ezmetrics'
  ```
 
+ ## Available metrics
+
+ | Type       | Aggregate functions               |
+ |:----------:|:---------------------------------:|
+ | `duration` | `avg`, `max`, `percentile`        |
+ | `views`    | `avg`, `max`, `percentile`        |
+ | `db`       | `avg`, `max`, `percentile`        |
+ | `queries`  | `avg`, `max`, `percentile`        |
+ | `requests` | `all`, `2xx`, `3xx`, `4xx`, `5xx` |
+
  ## Usage
 
  ### Getting started
@@ -27,13 +37,25 @@ and stores them for the timeframe you specified, 60 seconds by default.
  You can change the timeframe according to your needs and save the metrics by calling the `log` method:
 
  ```ruby
- # Store the metrics for 60 seconds (default behaviour)
- EZmetrics.new.log(duration: 100.5, views: 40.7, db: 59.8, queries: 4, status: 200)
+ # Store the metrics for 60 seconds (default behaviour)
+ EZmetrics.new.log(
+   duration: 100.5,
+   views:    40.7,
+   db:       59.8,
+   queries:  4,
+   status:   200
+ )
  ```
 
  ```ruby
- # Store the metrics for 10 minutes
- EZmetrics.new(10.minutes).log(duration: 100.5, views: 40.7, db: 59.8, queries: 4, status: 200)
+ # Store the metrics for 10 minutes
+ EZmetrics.new(10.minutes).log(
+   duration: 100.5,
+   views:    40.7,
+   db:       59.8,
+   queries:  4,
+   status:   200
+ )
  ```
 
  ---
@@ -41,16 +63,16 @@ You can change the timeframe according to your needs and save the metrics by cal
  For displaying metrics you need to call the `show` method:
 
  ```ruby
- # Aggregate and show metrics for last 60 seconds (default behaviour)
- EZmetrics.new.show
+ # Aggregate and show metrics for last 60 seconds (default behaviour)
+ EZmetrics.new.show
  ```
 
  ```ruby
- # Aggregate and show metrics for last 10 minutes
- EZmetrics.new(10.minutes).show
+ # Aggregate and show metrics for last 10 minutes
+ EZmetrics.new(10.minutes).show
  ```
 
- > Please note that you can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.
+ You can combine these timeframes, for example: store for 10 minutes, display for 5 minutes.
 
  ### Capture metrics
 
@@ -119,7 +141,48 @@ This will return a hash with the following structure:
  }
  ```
 
- ### Aggregation options
+ ---
+
+ If you prefer a single-level object, you can change the default output structure by calling `.flatten` before `.show`:
+
+ ```ruby
+ EZmetrics.new(1.hour).flatten.show(db: :avg, duration: [:avg, :max])
+ ```
+
+ ```ruby
+ {
+   db_avg:       182,
+   duration_avg: 205,
+   duration_max: 5171
+ }
+ ```
+
+ ---
+
+ The same applies to [partitioned aggregation](#partitioning):
+
+ ```ruby
+ EZmetrics.new(1.hour).partition_by(:minute).flatten.show(db: :avg, duration: [:avg, :max])
+ ```
+
+ ```ruby
+ [
+   {
+     timestamp:    1575242880,
+     db_avg:       387,
+     duration_avg: 477,
+     duration_max: 8566
+   },
+   {
+     timestamp:    1575242940,
+     db_avg:       123,
+     duration_avg: 234,
+     duration_max: 3675
+   }
+ ]
+ ```
+
+ ### Aggregation
 
  The aggregation can be easily configured by specifying aggregation options as in the following examples:
 
@@ -205,22 +268,172 @@ EZmetrics.new.show(views: :avg, db: [:avg, :max], requests: true)
  }
  ```
 
+ ---
+
+ **5. Percentile**
+
+ This feature is available since version `2.0.0`.
+
+ By default, percentile aggregation is turned off because it requires storing every value of each metric.
+
+ To enable this feature, set `store_each_value: true` when saving the metrics:
+
+ ```ruby
+ EZmetrics.new.log(
+   duration:         100.5,
+   views:            40.7,
+   db:               59.8,
+   queries:          4,
+   status:           200,
+   store_each_value: true
+ )
+ ```
+
+ The aggregation syntax has the format `metrics_type: :percentile_{number}`, where `number` is any integer in the 1..99 range.
+
+ ```ruby
+ EZmetrics.new.show(db: [:avg, :percentile_90, :percentile_95], duration: :percentile_99)
+ ```
+
+ ```ruby
+ {
+   db: {
+     avg:           155,
+     percentile_90: 205,
+     percentile_95: 215
+   },
+   duration: {
+     percentile_99: 236
+   }
+ }
+ ```
+
+ ### Partitioning
+
+ If you want to visualize your metrics with a **line chart**, you will need partitioning.
+
+ To aggregate metrics partitioned by a unit of time, call `.partition_by({time_unit})` before calling `.show`:
+
+ ```ruby
+ # Aggregate metrics for last hour, partition by minute
+ EZmetrics.new(1.hour).partition_by(:minute).show(duration: [:avg, :max], db: :avg)
+ ```
+
+ This will return an array of objects with the following structure:
+
+ ```ruby
+ [
+   {
+     timestamp: # UNIX timestamp
+     data:      # a hash with aggregated metrics
+   }
+ ]
+ ```
+
+ like in the example below:
+
+ ```ruby
+ [
+   {
+     timestamp: 1575242880,
+     data: {
+       duration: {
+         avg: 477,
+         max: 8566
+       },
+       db: {
+         avg: 387
+       }
+     }
+   },
+   {
+     timestamp: 1575242940,
+     data: {
+       duration: {
+         avg: 234,
+         max: 3675
+       },
+       db: {
+         avg: 123
+       }
+     }
+   }
+ ]
+ ```
+
+ Available time units for partitioning: `second`, `minute`, `hour`, `day`. Default: `minute`.
+
  ### Performance
 
  The aggregation speed relies on the performance of **Redis** (data storage) and **Oj** (JSON serialization/parsing).
 
+ #### Simple aggregation
+
  You can check the **aggregation** time by running:
 
  ```ruby
  EZmetrics::Benchmark.new.measure_aggregation
  ```
 
- The result of running this benchmark on a _2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM_:
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0                |
+ | 1 hour   | 0.02               |
+ | 12 hours | 0.22               |
+ | 24 hours | 0.61               |
+ | 48 hours | 1.42               |
+
+ ---
+
+ To check the **partitioned aggregation** time, run:
+
+ ```ruby
+ EZmetrics::Benchmark.new.measure_aggregation(:minute)
+ ```
+
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0                |
+ | 1 hour   | 0.02               |
+ | 12 hours | 0.25               |
+ | 24 hours | 0.78               |
+ | 48 hours | 1.75               |
+
+ ---
+
+ #### Percentile aggregation
+
+ You can check the **percentile aggregation** time by running:
+
+ ```ruby
+ EZmetrics::Benchmark.new(true).measure_aggregation
+ ```
 
  | Interval | Duration (seconds) |
  | :------: | :----------------: |
  | 1 minute | 0.0                |
- | 1 hour   | 0.04               |
- | 12 hours | 0.49               |
- | 24 hours | 1.51               |
- | 48 hours | 3.48               |
+ | 1 hour   | 0.14               |
+ | 12 hours | 2.11               |
+ | 24 hours | 5.85               |
+ | 48 hours | 14.1               |
+
+ ---
+
+ To check the **partitioned aggregation** time for percentiles, run:
+
+ ```ruby
+ EZmetrics::Benchmark.new(true).measure_aggregation(:minute)
+ ```
+
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0                |
+ | 1 hour   | 0.16               |
+ | 12 hours | 1.97               |
+ | 24 hours | 5.85               |
+ | 48 hours | 13.9               |
+
+ The benchmarks above were run on a _2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM_.
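
For context on where these README examples fit in a real application: the values passed to `log` line up with what Rails' `process_action.action_controller` notification already exposes. The sketch below is illustrative only and is not part of this diff (the gem's own "Capture metrics" section, unchanged in this release, covers the recommended setup); the initializer path is an assumption and the `queries` counter is omitted for brevity.

```ruby
# Hypothetical config/initializers/ezmetrics.rb (illustrative sketch, not from this diff).
ActiveSupport::Notifications.subscribe("process_action.action_controller") do |_name, started, finished, _id, payload|
  EZmetrics.new.log(
    duration: (finished - started) * 1000.0,  # total request time in milliseconds
    views:    payload[:view_runtime].to_f,    # view rendering time reported by Rails
    db:       payload[:db_runtime].to_f,      # ActiveRecord time reported by Rails
    status:   payload[:status].to_i
  )
end
```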
data/lib/ezmetrics.rb CHANGED
@@ -5,19 +5,22 @@ require "oj"
  class EZmetrics
    METRICS = [:duration, :views, :db, :queries].freeze
    AGGREGATION_FUNCTIONS = [:max, :avg].freeze
+   PARTITION_UNITS = [:second, :minute, :hour, :day].freeze
 
    def initialize(interval_seconds=60)
      @interval_seconds = interval_seconds.to_i
-     @redis = Redis.new
+     @redis  = Redis.new(driver: :hiredis)
+     @schema = redis_schema
    end
 
-   def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200})
+   def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200, store_each_value: false})
      @safe_payload = {
-       duration: payload[:duration].to_f,
-       views:    payload[:views].to_f,
-       db:       payload[:db].to_f,
-       queries:  payload[:queries].to_i,
-       status:   payload[:status].to_i
+       duration:         payload[:duration].to_f,
+       views:            payload[:views].to_f,
+       db:               payload[:db].to_f,
+       queries:          payload[:queries].to_i,
+       status:           payload[:status].to_i,
+       store_each_value: payload[:store_each_value].to_s == "true"
      }
 
      this_second = Time.now.to_i
@@ -30,12 +33,14 @@ class EZmetrics
        METRICS.each do |metrics_type|
          update_sum(metrics_type)
          update_max(metrics_type)
+         store_value(metrics_type) if safe_payload[:store_each_value]
        end
 
-       this_second_metrics["statuses"]["all"] += 1
-       this_second_metrics["statuses"][status_group] += 1
+       this_second_metrics[schema["all"]] += 1
+       this_second_metrics[schema[status_group]] += 1
      else
        @this_second_metrics = {
+         "second" => this_second,
          "duration_sum" => safe_payload[:duration],
          "duration_max" => safe_payload[:duration],
          "views_sum" => safe_payload[:views],
@@ -44,10 +49,25 @@ class EZmetrics
          "db_max" => safe_payload[:db],
          "queries_sum" => safe_payload[:queries],
          "queries_max" => safe_payload[:queries],
-         "statuses" => { "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0, "all" => 1 }
+         "2xx" => 0,
+         "3xx" => 0,
+         "4xx" => 0,
+         "5xx" => 0,
+         "all" => 1
        }
 
-       this_second_metrics["statuses"][status_group] = 1
+       if safe_payload[:store_each_value]
+         this_second_metrics.merge!(
+           "duration_values" => [safe_payload[:duration]],
+           "views_values" => [safe_payload[:views]],
+           "db_values" => [safe_payload[:db]],
+           "queries_values" => [safe_payload[:queries]]
+         )
+       end
+
+       this_second_metrics[status_group] = 1
+
+       @this_second_metrics = this_second_metrics.values
      end
 
      redis.setex(this_second, interval_seconds, Oj.dump(this_second_metrics))
@@ -57,28 +77,51 @@ class EZmetrics
    end
 
    def show(options=nil)
-     @options = options || default_options
-     interval_start = Time.now.to_i - interval_seconds
-     interval_keys = (interval_start..Time.now.to_i).to_a
-     @interval_metrics = redis.mget(interval_keys).compact.map { |hash| Oj.load(hash) }
+     @options = options || default_options
+     partitioned_metrics ? aggregate_partitioned_data : aggregate_data
+   end
 
-     return {} unless interval_metrics.any?
+   def flatten
+     @flat = true
+     self
+   end
 
-     @requests = interval_metrics.sum { |hash| hash["statuses"]["all"] }
+   def partition_by(time_unit=:minute)
+     time_unit = PARTITION_UNITS.include?(time_unit) ? time_unit : :minute
+     @partitioned_metrics = interval_metrics.group_by { |array| second_to_partition_unit(time_unit, array[schema["second"]]) }
+     self
+   end
+
+   private
+
+   attr_reader :redis, :interval_seconds, :interval_metrics, :requests, :flat, :schema,
+               :storage_key, :safe_payload, :this_second_metrics, :partitioned_metrics, :options
+
+   def aggregate_data
+     return {} unless interval_metrics.any?
+     @requests = interval_metrics.sum { |array| array[schema["all"]] }
      build_result
    rescue
      {}
    end
 
-   private
-
-   attr_reader :redis, :interval_seconds, :interval_metrics, :requests,
-               :storage_key, :safe_payload, :this_second_metrics, :options
+   def aggregate_partitioned_data
+     partitioned_metrics.map do |partition, metrics|
+       @interval_metrics = metrics
+       @requests = interval_metrics.sum { |array| array[schema["all"]] }
+       METRICS.each { |metrics_type| instance_variable_set("@sorted_#{metrics_type}_values", nil) }
+       flat ? { timestamp: partition, **build_result } : { timestamp: partition, data: build_result }
+     end
+   rescue
+     self
+   end
 
    def build_result
      result = {}
 
-     result[:requests] = { all: requests, grouped: count_all_status_groups } if options[:requests]
+     if options[:requests]
+       append_requests_to_result(result, { all: requests, grouped: count_all_status_groups })
+     end
 
      options.each do |metrics, aggregation_functions|
        next unless METRICS.include?(metrics)
@@ -86,8 +129,8 @@ class EZmetrics
        next unless aggregation_functions.any?
 
        aggregation_functions.each do |aggregation_function|
-         result[metrics] ||= {}
-         result[metrics][aggregation_function] = aggregate(metrics, aggregation_function)
+         aggregated_metrics = aggregate(metrics, aggregation_function)
+         append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
        end
      end
      result
@@ -95,35 +138,129 @@ class EZmetrics
      result
    end
 
+   def append_requests_to_result(result, aggregated_requests)
+     return result[:requests] = aggregated_requests unless flat
+
+     result[:requests_all] = aggregated_requests[:all]
+     aggregated_requests[:grouped].each do |group, counter|
+       result[:"requests_#{group}"] = counter
+     end
+   end
+
+   def append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
+     return result[:"#{metrics}_#{aggregation_function}"] = aggregated_metrics if flat
+
+     result[metrics] ||= {}
+     result[metrics][aggregation_function] = aggregated_metrics
+   end
+
+   def second_to_partition_unit(time_unit, second)
+     return second if time_unit == :second
+     time = Time.at(second)
+     return (time - time.sec - time.min * 60 - time.hour * 3600).to_i if time_unit == :day
+     return (time - time.sec - time.min * 60).to_i if time_unit == :hour
+     (time - time.sec).to_i
+   end
+
+   def interval_metrics
+     @interval_metrics ||= begin
+       interval_start = Time.now.to_i - interval_seconds
+       interval_keys = (interval_start..Time.now.to_i).to_a
+       redis.mget(interval_keys).compact.map { |array| Oj.load(array) }
+     end
+   end
+
    def aggregate(metrics, aggregation_function)
-     return unless AGGREGATION_FUNCTIONS.include?(aggregation_function)
      return avg("#{metrics}_sum") if aggregation_function == :avg
      return max("#{metrics}_max") if aggregation_function == :max
+
+     percentile = aggregation_function.match(/percentile_(?<value>\d+)/)
+
+     if percentile && percentile["value"]
+       sorted_values = send("sorted_#{metrics}_values")
+       percentile(sorted_values, percentile["value"].to_i)&.round
+     end
+   end
+
+   METRICS.each do |metrics|
+     define_method "sorted_#{metrics}_values" do
+       instance_variable_get("@sorted_#{metrics}_values") || instance_variable_set(
+         "@sorted_#{metrics}_values", interval_metrics.map { |array| array[schema["#{metrics}_values"]] }.flatten.compact
+       )
+     end
+   end
+
+   def redis_schema
+     [
+       "second",
+       "duration_sum",
+       "duration_max",
+       "views_sum",
+       "views_max",
+       "db_sum",
+       "db_max",
+       "queries_sum",
+       "queries_max",
+       "2xx",
+       "3xx",
+       "4xx",
+       "5xx",
+       "all",
+       "duration_values",
+       "views_values",
+       "db_values",
+       "queries_values"
+     ].each_with_index.inject({}){ |result, pair| result[pair[0]] = pair[1] ; result }
    end
 
    def update_sum(metrics)
-     this_second_metrics["#{metrics}_sum"] += safe_payload[metrics]
+     this_second_metrics[schema["#{metrics}_sum"]] += safe_payload[metrics]
+   end
+
+   def store_value(metrics)
+     this_second_metrics[schema["#{metrics}_values"]] << safe_payload[metrics]
    end
 
    def update_max(metrics)
-     max_value = [safe_payload[metrics], this_second_metrics["#{metrics}_max"]].max
-     this_second_metrics["#{metrics}_max"] = max_value
+     max_value = [safe_payload[metrics], this_second_metrics[schema["#{metrics}_max"]]].max
+     this_second_metrics[schema["#{metrics}_max"]] = max_value
    end
 
    def avg(metrics)
-     (interval_metrics.sum { |h| h[metrics] }.to_f / requests).round
+     (interval_metrics.sum { |array| array[schema[metrics]] }.to_f / requests).round
    end
 
    def max(metrics)
-     interval_metrics.max { |h| h[metrics] }[metrics].round
+     interval_metrics.max { |array| array[schema[metrics]] }[schema[metrics]].round
+   end
+
+   def percentile(array, pcnt)
+     sorted_array = array.sort
+
+     return nil if array.length == 0
+
+     rank = (pcnt.to_f / 100) * (array.length + 1)
+     whole = rank.truncate
+
+     # if has fractional part
+     if whole != rank
+       s0 = sorted_array[whole - 1]
+       s1 = sorted_array[whole]
+
+       f = (rank - rank.truncate).abs
+
+       return (f * (s1 - s0)) + s0
+     else
+       return sorted_array[whole - 1]
+     end
    end
 
    def count_all_status_groups
-     interval_metrics.inject({ "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0 }) do |result, h|
-       result["2xx"] += h["statuses"]["2xx"]
-       result["3xx"] += h["statuses"]["3xx"]
-       result["4xx"] += h["statuses"]["4xx"]
-       result["5xx"] += h["statuses"]["5xx"]
+     interval_metrics.inject({ "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0 }) do |result, array|
+       result["2xx"] += array[schema["2xx"]]
+       result["3xx"] += array[schema["3xx"]]
+       result["4xx"] += array[schema["4xx"]]
+       result["5xx"] += array[schema["5xx"]]
        result
      end
    end
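
The `percentile` helper added above uses rank interpolation: the rank is `p/100 * (n + 1)`, and when it falls between two sorted values the result is interpolated linearly between them. A standalone restatement of that arithmetic (same logic as in the diff, shown here only to make the numbers concrete):

```ruby
# Same rank-interpolation arithmetic as EZmetrics#percentile above, restated standalone.
def percentile_of(values, pcnt)
  sorted = values.sort
  return nil if sorted.empty?

  rank  = (pcnt.to_f / 100) * (sorted.length + 1) # e.g. 90th percentile of 10 values => rank 9.9
  whole = rank.truncate

  return sorted[whole - 1] if whole == rank       # rank lands exactly on an element

  lower = sorted[whole - 1]
  upper = sorted[whole]
  lower + (rank - whole) * (upper - lower)        # interpolate between the two neighbours
end

durations = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
percentile_of(durations, 90) # => 99.0 (0.9 of the way from 90 to 100)
```

Note that with this formula the rank can exceed the number of samples for high percentiles on very small datasets (for example, the 99th percentile of 10 values gives rank 10.89, so `s1` is `nil`); in the class above the resulting error is absorbed by the surrounding `rescue` clauses rather than raised to the caller.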
data/lib/ezmetrics/benchmark.rb CHANGED
@@ -2,12 +2,13 @@ require "benchmark"
 
  class EZmetrics::Benchmark
 
-   def initialize
-     @start = Time.now.to_i
-     @redis = Redis.new
-     @durations = []
-     @iterations = 3
-     @intervals = {
+   def initialize(store_each_value=false)
+     @store_each_value = store_each_value
+     @start            = Time.now.to_i
+     @redis            = Redis.new(driver: :hiredis)
+     @durations        = []
+     @iterations       = 1
+     @intervals        = {
        "1.minute" => 60,
        "1.hour " => 3600,
        "12.hours" => 43200,
@@ -16,11 +17,11 @@ class EZmetrics::Benchmark
      }
    end
 
-   def measure_aggregation
+   def measure_aggregation(partition_by=nil)
      write_metrics
      print_header
      intervals.each do |interval, seconds|
-       result = measure_aggregation_time(interval, seconds)
+       result = measure_aggregation_time(interval, seconds, partition_by)
        print_row(result)
      end
      cleanup_metrics
@@ -29,30 +30,38 @@ class EZmetrics::Benchmark
 
    private
 
-   attr_reader :start, :redis, :durations, :intervals, :iterations
+   attr_reader :start, :redis, :durations, :intervals, :iterations, :store_each_value
 
    def write_metrics
      seconds = intervals.values.max
      seconds.times do |i|
        second = start - i
        payload = {
-         "duration_sum" => rand(10000),
-         "duration_max" => rand(10000),
-         "views_sum" => rand(1000),
-         "views_max" => rand(1000),
-         "db_sum" => rand(8000),
-         "db_max" => rand(8000),
-         "queries_sum" => rand(100),
-         "queries_max" => rand(100),
-         "statuses" => {
-           "2xx" => rand(10),
-           "3xx" => rand(10),
-           "4xx" => rand(10),
-           "5xx" => rand(10),
-           "all" => rand(40)
-         }
+         "second" => second,
+         "duration_sum" => rand(10000),
+         "duration_max" => rand(10000),
+         "views_sum" => rand(1000),
+         "views_max" => rand(1000),
+         "db_sum" => rand(8000),
+         "db_max" => rand(8000),
+         "queries_sum" => rand(100),
+         "queries_max" => rand(100),
+         "2xx" => rand(1..10),
+         "3xx" => rand(1..10),
+         "4xx" => rand(1..10),
+         "5xx" => rand(1..10),
+         "all" => rand(1..40)
        }
-       redis.setex(second, seconds, Oj.dump(payload))
+
+       if store_each_value
+         payload.merge!(
+           "duration_values" => Array.new(100) { rand(10..60000) },
+           "views_values" => Array.new(100) { rand(10..60000) },
+           "db_values" => Array.new(100) { rand(10..60000) },
+           "queries_values" => Array.new(10) { rand(1..60) }
+         )
+       end
+       redis.setex(second, seconds, Oj.dump(payload.values))
      end
      nil
    end
@@ -63,9 +72,16 @@ class EZmetrics::Benchmark
      redis.del(interval_keys)
    end
 
-   def measure_aggregation_time(interval, seconds)
+   def measure_aggregation_time(interval, seconds, partition_by)
      iterations.times do
-       durations << ::Benchmark.measure { EZmetrics.new(seconds).show }.real
+       durations << ::Benchmark.measure do
+         ezmetrics = EZmetrics.new(seconds)
+         if store_each_value
+           partition_by ? ezmetrics.partition_by(partition_by).show(db: :percentile_90) : ezmetrics.show(db: :percentile_90)
+         else
+           partition_by ? ezmetrics.partition_by(partition_by).show : ezmetrics.show
+         end
+       end.real
      end
 
      return {
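
The storage format change behind many of the edits above: each second is now serialized as a bare array of values (`Oj.dump(payload.values)`) rather than a nested hash, and `redis_schema` maps field names to array positions for lookups. A minimal round-trip sketch under the same conventions (it assumes a locally reachable Redis; the numbers are made up, and the `*_values` entries are omitted since they only exist when `store_each_value: true`):

```ruby
require "redis"
require "oj"

# Field order mirrors EZmetrics#redis_schema, which maps each name to its array index.
SCHEMA = %w[
  second duration_sum duration_max views_sum views_max db_sum db_max
  queries_sum queries_max 2xx 3xx 4xx 5xx all
  duration_values views_values db_values queries_values
].each_with_index.to_h

second  = Time.now.to_i
payload = {
  "second" => second,
  "duration_sum" => 100.5, "duration_max" => 100.5,
  "views_sum"    => 40.7,  "views_max"    => 40.7,
  "db_sum"       => 59.8,  "db_max"       => 59.8,
  "queries_sum"  => 4,     "queries_max"  => 4,
  "2xx" => 1, "3xx" => 0, "4xx" => 0, "5xx" => 0, "all" => 1
}

redis = Redis.new
redis.setex(second, 60, Oj.dump(payload.values)) # store only the values, keyed by the second

row = Oj.load(redis.get(second))                 # => a plain array, no field names
row[SCHEMA["duration_max"]]                      # => 100.5, recovered via the schema index
```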
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ezmetrics
  version: !ruby/object:Gem::Version
-   version: 1.1.1
+   version: 2.0.1
  platform: ruby
  authors:
  - Nicolae Rotaru
@@ -66,7 +66,7 @@ dependencies:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.5'
- description: A simple tool for capturing and displaying Rails metrics.
+ description: Simple, lightweight and fast metrics aggregation for Rails.
  email: nyku.rn@gmail.com
  executables: []
  extensions: []