ezmetrics 1.0.6 → 1.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (5)
  1. checksums.yaml +5 -5
  2. data/README.md +230 -27
  3. data/lib/ezmetrics.rb +126 -78
  4. data/lib/ezmetrics/benchmark.rb +30 -22
  5. metadata +31 -4
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
- metadata.gz: dc9e49c32e99f5d45f45ca6d3d467e53c09b8502
- data.tar.gz: ce9f340d9f70ddfd7aebbac487e2d1b4b094527f
+ SHA256:
+ metadata.gz: 408332cfc86d05b75d8b6004098d736ad60a624189f46022af822fdf6c24960d
+ data.tar.gz: 6d4ef89fa6a80e28cb443e16f1528b351a93bf7bf50e2a8fc756dc943a25f11f
  SHA512:
- metadata.gz: 295ae4b987d80ec5feb6cbb21e0f5dee0ea6523d140e50840b5637b57c3263f712b5c798eed3c69794fb8618a5236e7f6005a1fee34f4e2f6e8812f97ab636e7
- data.tar.gz: 8c9523d1385036e83a1eaea18e275c91f2865c391ad60f50a2af76b865843338e1d4408e8f08f037c2af14ba552181f7a82e85be7ea5b7242755f2c4070fb418
+ metadata.gz: 99a4ac48560305d2967cdcc8b248978ee3f0748ab1fb7da2830a3804d732d29664f25bfdc4bd2bfa50c449994913ecd455362f57af1f39f0f45e904e36876c12
+ data.tar.gz: c53617fbc324765b9b388b1280b6d363a3707b002a2718cd0467a16ff24708a8adbf81a812aca111a643924de4834a155efdef252911195570837fdb8f6a9dce
data/README.md CHANGED
@@ -2,8 +2,7 @@

  [![Gem Version](https://badge.fury.io/rb/ezmetrics.svg)](https://badge.fury.io/rb/ezmetrics)

- A simple tool for capturing and displaying Rails metrics.
-
+ Simple, lightweight and fast metrics aggregation for Rails.

  ## Installation

@@ -16,6 +15,7 @@ gem 'ezmetrics'
  ### Getting started

  This tool captures and aggregates Rails application metrics such as
+
  - `duration`
  - `views`
  - `db`
@@ -27,32 +27,42 @@ and stores them for the timeframe you specified, 60 seconds by default.
  You can change the timeframe according to your needs and save the metrics by calling `log` method:

  ```ruby
- # Store the metrics for 60 seconds (default behaviour)
- EZmetrics.new.log(duration: 100.5, views: 40.7, db: 59.8, queries: 4, status: 200)
+ # Store the metrics for 60 seconds (default behaviour)
+ EZmetrics.new.log(
+ duration: 100.5,
+ views: 40.7,
+ db: 59.8,
+ queries: 4,
+ status: 200
+ )
  ```
- or

  ```ruby
- # Store the metrics for 10 minutes
- EZmetrics.new(10.minutes).log(duration: 100.5, views: 40.7, db: 59.8, queries: 4, status: 200)
+ # Store the metrics for 10 minutes
+ EZmetrics.new(10.minutes).log(
+ duration: 100.5,
+ views: 40.7,
+ db: 59.8,
+ queries: 4,
+ status: 200
+ )
  ```

- For displaying metrics you need call `show` method:
+ ---
+
+ For displaying metrics you need to call `show` method:

  ```ruby
- # Aggregate and show metrics for last 60 seconds (default behaviour)
- EZmetrics.new.show
+ # Aggregate and show metrics for last 60 seconds (default behaviour)
+ EZmetrics.new.show
  ```

- or
-
  ```ruby
- # Aggregate and show metrics for last 10 minutes
- EZmetrics.new(10.minutes).show
+ # Aggregate and show metrics for last 10 minutes
+ EZmetrics.new(10.minutes).show
  ```

- > Please note that you can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.
-
+ You can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.
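A minimal sketch of that combination, using the gem's own `log`/`show` API from the examples above (the 10-minute and 5-minute values are only illustrative):

```ruby
# Keep 10 minutes of per-second metrics around when logging...
EZmetrics.new(10.minutes).log(
  duration: 100.5,
  views:    40.7,
  db:       59.8,
  queries:  4,
  status:   200
)

# ...but aggregate only the most recent 5 minutes when displaying
EZmetrics.new(5.minutes).show
```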

  ### Capture metrics
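The body of this section is unchanged between 1.0.6 and 1.2.2, so the diff does not show it. For context, a minimal sketch of feeding `log` from a Rails notification subscriber; the wiring below is an assumption for illustration, not the gem's documented snippet, and `queries` is left at 0 because the default `process_action.action_controller` payload carries no query count:

```ruby
# config/initializers/ezmetrics.rb (hypothetical location)
ActiveSupport::Notifications.subscribe("process_action.action_controller") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  EZmetrics.new.log(
    duration: event.duration.to_f,               # total request time (ms)
    views:    event.payload[:view_runtime].to_f, # view rendering time (ms)
    db:       event.payload[:db_runtime].to_f,   # ActiveRecord time (ms)
    queries:  0,                                 # query counting would need its own subscriber
    status:   event.payload[:status] || 500      # status is nil when an exception is raised
  )
end
```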
@@ -121,15 +131,191 @@ This will return a hash with the following structure:
  }
  ```

- ### Performance
+ ---
+
+ If you prefer a single level object - you can change the default output structure by calling `.flatten` before `.show`
+
+ ```ruby
+ EZmetrics.new(1.hour).flatten.show(db: :avg, duration: [:avg, :max])
+ ```
+
+ ```ruby
+ {
+ db_avg: 182,
+ duration_avg: 205,
+ duration_max: 5171
+ }
+ ```
+
+ ---
+
+ Same for [partitioned aggregation](#partitioning)
+
+ ```ruby
+ EZmetrics.new(1.hour).partition_by(:minute).flatten.show(db: :avg, duration: [:avg, :max])
+ ```
+
+ ```ruby
+ [
+ {
+ timestamp: 1575242880,
+ db_avg: 387,
+ duration_avg: 477,
+ duration_max: 8566
+ },
+ {
+ timestamp: 1575242940,
+ db_avg: 123,
+ duration_avg: 234,
+ duration_max: 3675
+ }
+ ]
+ ```
+
+ ### Aggregation
+
+ The aggregation can be easily configured by specifying aggregation options as in the following examples:
+
+ **1. Single**
+
+ ```ruby
+ EZmetrics.new.show(duration: :max)
+ ```
+
+ ```ruby
+ {
+ duration: {
+ max: 9675
+ }
+ }
+ ```
+
+ ---
+
+ **2. Multiple**
+
+ ```ruby
+ EZmetrics.new.show(queries: [:max, :avg])
+ ```
+
+ ```ruby
+ {
+ queries: {
+ max: 76,
+ avg: 26
+ }
+ }
+ ```
+
+ ---
+
+ **3. Requests**
+
+ ```ruby
+ EZmetrics.new.show(requests: true)
+ ```
+
+ ```ruby
+ {
+ requests: {
+ all: 2000,
+ grouped: {
+ "2xx" => 1900,
+ "3xx" => 15,
+ "4xx" => 80,
+ "5xx" => 5
+ }
+ }
+ }
+ ```
+
+ ---
+
+ **4. Combined**
+
+ ```ruby
+ EZmetrics.new.show(views: :avg, db: [:avg, :max], requests: true)
+ ```
+
+ ```ruby
+ {
+ views: {
+ avg: 12
+ },
+ db: {
+ avg: 155,
+ max: 4382
+ },
+ requests: {
+ all: 2000,
+ grouped: {
+ "2xx" => 1900,
+ "3xx" => 15,
+ "4xx" => 80,
+ "5xx" => 5
+ }
+ }
+ }
+ ```
+
+ ### Partitioning
+
+ If you want to visualize your metrics by using a **line chart**, you will need to use partitioning.
+
+ To aggregate metrics, partitioned by a unit of time you need to call `.partition_by({time_unit})` before calling `.show`
+
+ ```ruby
+ # Aggregate metrics for last hour, partition by minute
+ EZmetrics.new(1.hour).partition_by(:minute).show(duration: [:avg, :max], db: :avg)
+ ```

- The implementation is based on **Redis** commands such as:
+ This will return an array of objects with the following structure:

- [`get`](https://redis.io/commands/get)
- [`mget`](https://redis.io/commands/mget)
- [`setex`](https://redis.io/commands/setex)
+ ```ruby
+ [
+ {
+ timestamp: # UNIX timestamp
+ data: # a hash with aggregated metrics
+ }
+ ]
+ ```

- which are extremely fast.
+ like in the example below:
+
+ ```ruby
+ [
+ {
+ timestamp: 1575242880,
+ data: {
+ duration: {
+ avg: 477,
+ max: 8566
+ },
+ db: {
+ avg: 387
+ }
+ }
+ },
+ {
+ timestamp: 1575242940,
+ data: {
+ duration: {
+ avg: 234,
+ max: 3675
+ },
+ db: {
+ avg: 123
+ }
+ }
+ }
+ ]
+ ```
+
+ Available time units for partitioning: `second`, `minute`, `hour`, `day`. Default: `minute`.
+
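For example, a chart series can be derived from the flattened, partitioned output with a small mapping step (a sketch; the `[timestamp, value]` pair format is an assumption about your charting library):

```ruby
partitions = EZmetrics.new(1.hour).partition_by(:minute).flatten.show(duration: :avg)

# => [[Time, Integer], ...] pairs ready to hand to a chart
series = partitions.map { |p| [Time.at(p[:timestamp]), p[:duration_avg]] }
```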
+ ### Performance
+
+ The aggregation speed relies on the performance of **Redis** (data storage) and **Oj** (json serialization/parsing).

  You can check the **aggregation** time by running:

@@ -137,11 +323,28 @@ You can check the **aggregation** time by running:
  EZmetrics::Benchmark.new.measure_aggregation
  ```

- The result of running this benchmark on a *2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM*:
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0 |
+ | 1 hour | 0.04 |
+ | 12 hours | 0.49 |
+ | 24 hours | 1.51 |
+ | 48 hours | 3.48 |
+
+ ---
+
+ To check the **partitioned aggregation** time you need to run:
+
+ ```ruby
+ EZmetrics::Benchmark.new.measure_aggregation(:minute)
+ ```

  | Interval | Duration (seconds) |
- |:--------:|:------------------:|
+ | :------: | :----------------: |
  | 1 minute | 0.0 |
- | 1 hour | 0.11 |
- | 12 hours | 1.6 |
- | 24 hours | 3.5 |
+ | 1 hour | 0.04 |
+ | 12 hours | 0.53 |
+ | 24 hours | 1.59 |
+ | 48 hours | 3.51 |
+
+ The benchmarks above were run on a _2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM_
data/lib/ezmetrics.rb CHANGED
@@ -1,11 +1,15 @@
- require "redis" unless defined?(Redis)
- require "json" unless defined?(JSON)
+ require "redis"
+ require "redis/connection/hiredis"
+ require "oj"

  class EZmetrics
+ METRICS = [:duration, :views, :db, :queries].freeze
+ AGGREGATION_FUNCTIONS = [:max, :avg].freeze
+ PARTITION_UNITS = [:second, :minute, :hour, :day].freeze
+
  def initialize(interval_seconds=60)
  @interval_seconds = interval_seconds.to_i
  @redis = Redis.new
- @storage_key = "ez-metrics"
  end

  def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200})
@@ -19,12 +23,12 @@ class EZmetrics

  this_second = Time.now.to_i
  status_group = "#{payload[:status].to_s[0]}xx"
- @this_second_metrics = redis.get("#{storage_key}:#{this_second}")
+ @this_second_metrics = redis.get(this_second)

  if this_second_metrics
- @this_second_metrics = JSON.parse(this_second_metrics)
+ @this_second_metrics = Oj.load(this_second_metrics)

- [:duration, :views, :db, :queries].each do |metrics_type|
+ METRICS.each do |metrics_type|
  update_sum(metrics_type)
  update_max(metrics_type)
  end
@@ -33,6 +37,7 @@ class EZmetrics
  this_second_metrics["statuses"][status_group] += 1
  else
  @this_second_metrics = {
+ "second" => this_second,
  "duration_sum" => safe_payload[:duration],
  "duration_max" => safe_payload[:duration],
  "views_sum" => safe_payload[:views],
@@ -47,112 +52,155 @@ class EZmetrics
  this_second_metrics["statuses"][status_group] = 1
  end

- redis.setex("#{storage_key}:#{this_second}", interval_seconds, JSON.generate(this_second_metrics))
-
+ redis.setex(this_second, interval_seconds, Oj.dump(this_second_metrics))
  true
  rescue => error
  formatted_error(error)
  end

- def show
- interval_start = Time.now.to_i - interval_seconds
- interval_keys = (interval_start..Time.now.to_i).to_a.map { |second| "#{storage_key}:#{second}" }
- @interval_metrics = redis.mget(interval_keys).compact.map { |hash| JSON.parse(hash) }
+ def show(options=nil)
+ @options = options || default_options
+ partitioned_metrics ? aggregate_partitioned_data : aggregate_data
+ end
+
+ def flatten
+ @flat = true
+ self
+ end
+
+ def partition_by(time_unit=:minute)
+ time_unit = PARTITION_UNITS.include?(time_unit) ? time_unit : :minute
+ @partitioned_metrics = interval_metrics.group_by { |h| second_to_partition_unit(time_unit, h["second"]) }
+ self
+ end

- return empty_metrics_object unless interval_metrics.any?
+ private

- @requests = interval_metrics.map { |hash| hash["statuses"]["all"] }.compact.sum
+ attr_reader :redis, :interval_seconds, :interval_metrics, :requests, :flat,
+ :storage_key, :safe_payload, :this_second_metrics, :partitioned_metrics, :options

- metrics_object
+ def aggregate_data
+ return {} unless interval_metrics.any?
+ @requests = interval_metrics.sum { |hash| hash["statuses"]["all"] }
+ build_result
  rescue
- empty_metrics_object
+ {}
  end

- private
+ def aggregate_partitioned_data
+ partitioned_metrics.map do |partition, metrics|
+ @interval_metrics = metrics
+ @requests = interval_metrics.sum { |hash| hash["statuses"]["all"] }
+ flat ? { timestamp: partition, **build_result } : { timestamp: partition, data: build_result }
+ end
+ rescue
+ new(options)
+ end
+
+ def build_result
+ result = {}
+
+ if options[:requests]
+ append_requests_to_result(result, { all: requests, grouped: count_all_status_groups })
+ end
+
+ options.each do |metrics, aggregation_functions|
+ next unless METRICS.include?(metrics)
+ aggregation_functions = [aggregation_functions] unless aggregation_functions.is_a?(Array)
+ next unless aggregation_functions.any?
+
+ aggregation_functions.each do |aggregation_function|
+ aggregated_metrics = aggregate(metrics, aggregation_function)
+ append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
+ end
+ end
+ result
+ ensure
+ result
+ end
+
+ def append_requests_to_result(result, aggregated_requests)
+ return result[:requests] = aggregated_requests unless flat
+
+ result[:requests_all] = aggregated_requests[:all]
+ aggregated_requests[:grouped].each do |group, counter|
+ result[:"requests_#{group}"] = counter
+ end
+ end

- attr_reader :redis, :interval_seconds, :interval_metrics, :requests, :storage_key,
- :safe_payload, :this_second_metrics
+ def append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
+ return result[:"#{metrics}_#{aggregation_function}"] = aggregated_metrics if flat
+
+ result[metrics] ||= {}
+ result[metrics][aggregation_function] = aggregated_metrics
+ end
+
+ def second_to_partition_unit(time_unit, second)
+ return second if time_unit == :second
+ time = Time.at(second)
+ return (time - time.sec - time.min * 60 - time.hour * 3600).to_i if time_unit == :day
+ return (time - time.sec - time.min * 60).to_i if time_unit == :hour
+ (time - time.sec).to_i
+ end
+
+ def interval_metrics
+ @interval_metrics ||= begin
+ interval_start = Time.now.to_i - interval_seconds
+ interval_keys = (interval_start..Time.now.to_i).to_a
+ redis.mget(interval_keys).compact.map { |hash| Oj.load(hash) }
+ end
+ end
+
+ def aggregate(metrics, aggregation_function)
+ return unless AGGREGATION_FUNCTIONS.include?(aggregation_function)
+ return avg("#{metrics}_sum") if aggregation_function == :avg
+ return max("#{metrics}_max") if aggregation_function == :max
+ end

  def update_sum(metrics)
- this_second_metrics["#{metrics}_sum"] += safe_payload[metrics.to_sym]
+ this_second_metrics["#{metrics}_sum"] += safe_payload[metrics]
  end

  def update_max(metrics)
- max_value = [safe_payload[metrics.to_sym], this_second_metrics["#{metrics}_max"]].max
+ max_value = [safe_payload[metrics], this_second_metrics["#{metrics}_max"]].max
  this_second_metrics["#{metrics}_max"] = max_value
  end

  def avg(metrics)
- (interval_metrics.map { |h| h[metrics.to_s] }.sum.to_f / requests).round
+ (interval_metrics.sum { |h| h[metrics] }.to_f / requests).round
  end

  def max(metrics)
- interval_metrics.map { |h| h[metrics.to_s] }.max.round
+ interval_metrics.max_by { |h| h[metrics] }[metrics].round
  end

- def count(group)
- interval_metrics.map { |h| h["statuses"][group.to_s] }.sum
- end
-
- def formatted_error(error)
- {
- error: error.class.name,
- message: error.message,
- backtrace: error.backtrace.reject { |line| line.match(/ruby|gems/) }
- }
+ def count_all_status_groups
+ interval_metrics.inject({ "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0 }) do |result, h|
+ result["2xx"] += h["statuses"]["2xx"]
+ result["3xx"] += h["statuses"]["3xx"]
+ result["4xx"] += h["statuses"]["4xx"]
+ result["5xx"] += h["statuses"]["5xx"]
+ result
+ end
  end

- def metrics_object
+ def default_options
  {
- duration: {
- avg: avg(:duration_sum),
- max: max(:duration_max)
- },
- views: {
- avg: avg(:views_sum),
- max: max(:views_max)
- },
- db: {
- avg: avg(:db_sum),
- max: max(:db_max)
- },
- queries: {
- avg: avg(:queries_sum),
- max: max(:queries_max)
- },
- requests: {
- all: requests,
- grouped: {
- "2xx" => count("2xx"),
- "3xx" => count("3xx"),
- "4xx" => count("4xx"),
- "5xx" => count("5xx")
- }
- }
+ duration: AGGREGATION_FUNCTIONS,
+ views: AGGREGATION_FUNCTIONS,
+ db: AGGREGATION_FUNCTIONS,
+ queries: AGGREGATION_FUNCTIONS,
+ requests: true
  }
  end

- def empty_metrics_object
+ def formatted_error(error)
  {
- duration: {
- avg: 0,
- max: 0
- },
- views: {
- avg: 0,
- max: 0
- },
- db: {
- avg: 0,
- max: 0
- },
- queries: {
- avg: 0,
- max: 0
- },
- requests: {}
+ error: error.class.name,
+ message: error.message,
+ backtrace: error.backtrace.reject { |line| line.match(/ruby|gems/) }
  }
  end
  end

- require "ezmetrics/benchmark"
+ require "ezmetrics/benchmark"
data/lib/ezmetrics/benchmark.rb CHANGED
@@ -11,15 +11,16 @@ class EZmetrics::Benchmark
  "1.minute" => 60,
  "1.hour " => 3600,
  "12.hours" => 43200,
- "24.hours" => 86400
+ "24.hours" => 86400,
+ "48.hours" => 172800
  }
  end

- def measure_aggregation
+ def measure_aggregation(partition_by=nil)
  write_metrics
  print_header
  intervals.each do |interval, seconds|
- result = measure_aggregation_time(interval, seconds)
+ result = measure_aggregation_time(interval, seconds, partition_by)
  print_row(result)
  end
  cleanup_metrics
@@ -35,36 +36,43 @@ class EZmetrics::Benchmark
  seconds.times do |i|
  second = start - i
  payload = {
- "duration_sum" => rand(10000),
- "duration_max" => rand(10000),
- "views_sum" => rand(1000),
- "views_max" => rand(1000),
- "db_sum" => rand(8000),
- "db_max" => rand(8000),
- "queries_sum" => rand(100),
- "queries_max" => rand(100),
- "statuses" => {
- "2xx" => rand(10),
- "3xx" => rand(10),
- "4xx" => rand(10),
- "5xx" => rand(10),
- "all" => rand(40)
+ "second" => second,
+ "duration_sum" => rand(10000),
+ "duration_max" => rand(10000),
+ "views_sum" => rand(1000),
+ "views_max" => rand(1000),
+ "db_sum" => rand(8000),
+ "db_max" => rand(8000),
+ "queries_sum" => rand(100),
+ "queries_max" => rand(100),
+ "statuses" => {
+ "2xx" => rand(1..10),
+ "3xx" => rand(1..10),
+ "4xx" => rand(1..10),
+ "5xx" => rand(1..10),
+ "all" => rand(1..40)
  }
  }
- redis.setex("ez-metrics:#{second}", seconds, JSON.generate(payload))
+ redis.setex(second, seconds, Oj.dump(payload))
  end
  nil
  end

  def cleanup_metrics
  interval_start = Time.now.to_i - intervals.values.max - 100
- interval_keys = (interval_start..Time.now.to_i).to_a.map { |second| "ez-metrics:#{second}" }
+ interval_keys = (interval_start..Time.now.to_i).to_a
  redis.del(interval_keys)
  end

- def measure_aggregation_time(interval, seconds)
+ def measure_aggregation_time(interval, seconds, partition_by)
  iterations.times do
- durations << ::Benchmark.measure { EZmetrics.new(seconds).show }.real
+ durations << ::Benchmark.measure do
+ if partition_by
+ EZmetrics.new(seconds).partition_by(partition_by).show
+ else
+ EZmetrics.new(seconds).show
+ end
+ end.real
  end

  return {
@@ -84,4 +92,4 @@ class EZmetrics::Benchmark
  def print_footer
  print "#{'─'*31}\n"
  end
- end
+ end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ezmetrics
  version: !ruby/object:Gem::Version
- version: 1.0.6
+ version: 1.2.2
  platform: ruby
  authors:
  - Nicolae Rotaru
@@ -24,6 +24,34 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '4.0'
+ - !ruby/object:Gem::Dependency
+ name: hiredis
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 0.6.3
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 0.6.3
+ - !ruby/object:Gem::Dependency
+ name: oj
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '3.10'
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '3.10'
  - !ruby/object:Gem::Dependency
  name: rspec
  requirement: !ruby/object:Gem::Requirement
@@ -38,7 +66,7 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '3.5'
- description: A simple tool for capturing and displaying Rails metrics.
+ description: Simple, lightweight and fast metrics aggregation for Rails.
  email: nyku.rn@gmail.com
  executables: []
  extensions: []
@@ -67,8 +95,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.6.13
+ rubygems_version: 3.0.6
  signing_key:
  specification_version: 4
  summary: Rails metrics aggregation tool.