ezmetrics 1.2.0 → 2.0.2

Files changed (5):
  1. checksums.yaml +4 -4
  2. data/README.md +163 -31
  3. data/lib/ezmetrics.rb +140 -36
  4. data/lib/ezmetrics/benchmark.rb +36 -27
  5. metadata +3 -4
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 0affbdfcda841ffe4eec9105ded5526b13d209ccf0aa564e1b84c91651079d0e
-  data.tar.gz: 8c0f3879ee0f2cedeba7f6b0526f679ef9962e2270c7639837821f226a0f3580
+  metadata.gz: df2d108f37b4f0a5988f72ad465c531d0559ef651c78e3f58aa72f7c9fcc0a3c
+  data.tar.gz: a1b871a56189e2035289a976e508dd437125c8a6bc9fa29dfbb47402096faf1b
 SHA512:
-  metadata.gz: 28d6883722e140005b24071ab637104d6d0e16e22a75fffac734b242fafbb88c1ecdbfd4d4d44a11439a8cbac5d0ad5dcfba705ed26a706f6abb83f55f3f5513
-  data.tar.gz: b02ddcabf2669f9458a21dd37e0495218eedff6f15047c923b81d6fbcd311ce34ed2d27c7ea664a97aebaf806eac653dd4ec09fb3ee526607da796ec4c3a54f2
+  metadata.gz: ad8ce6744522f55b65029bffeb8d56491ce188c29870a5560389ea44795131999bbe6f9d11d3a444831c49c2c0aad2f2b8bca448cdd2c18c5a11ba8d04216889
+  data.tar.gz: 80c76c1709db877c71063650db012b423116232a32f6ff5d9c17262e43e80e8696d4a0fc687b798291551ca5f0f6b84f3f7ae7a635ecad644f09628daf842cd1
data/README.md CHANGED
@@ -10,6 +10,16 @@ Simple, lightweight and fast metrics aggregation for Rails.
 gem 'ezmetrics'
 ```
 
+## Available metrics
+
+| Type       | Aggregate functions               |
+|:----------:|:---------------------------------:|
+| `duration` | `avg`, `max`, `percentile`        |
+| `views`    | `avg`, `max`, `percentile`        |
+| `db`       | `avg`, `max`, `percentile`        |
+| `queries`  | `avg`, `max`, `percentile`        |
+| `requests` | `all`, `2xx`, `3xx`, `4xx`, `5xx` |
+
 ## Usage
 
 ### Getting started
@@ -27,25 +37,25 @@ and stores them for the timeframe you specified, 60 seconds by default.
 You can change the timeframe according to your needs and save the metrics by calling `log` method:
 
 ```ruby
-# Store the metrics for 60 seconds (default behaviour)
-EZmetrics.new.log(
-  duration: 100.5,
-  views: 40.7,
-  db: 59.8,
-  queries: 4,
-  status: 200
-)
+# Store the metrics for 60 seconds (default behaviour)
+EZmetrics.new.log(
+  duration: 100.5,
+  views:    40.7,
+  db:       59.8,
+  queries:  4,
+  status:   200
+)
 ```
 
 ```ruby
-# Store the metrics for 10 minutes
-EZmetrics.new(10.minutes).log(
-  duration: 100.5,
-  views: 40.7,
-  db: 59.8,
-  queries: 4,
-  status: 200
-)
+# Store the metrics for 10 minutes
+EZmetrics.new(10.minutes).log(
+  duration: 100.5,
+  views:    40.7,
+  db:       59.8,
+  queries:  4,
+  status:   200
+)
 ```
 
 ---
@@ -53,13 +63,13 @@ You can change the timeframe according to your needs and save the metrics by cal
 For displaying metrics you need to call `show` method:
 
 ```ruby
-# Aggregate and show metrics for last 60 seconds (default behaviour)
-EZmetrics.new.show
+# Aggregate and show metrics for last 60 seconds (default behaviour)
+EZmetrics.new.show
 ```
 
 ```ruby
-# Aggregate and show metrics for last 10 minutes
-EZmetrics.new(10.minutes).show
+# Aggregate and show metrics for last 10 minutes
+EZmetrics.new(10.minutes).show
 ```
 
 You can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.
@@ -131,6 +141,47 @@ This will return a hash with the following structure:
 }
 ```
 
+---
+
+If you prefer a single level object - you can change the default output structure by calling `.flatten` before `.show`
+
+```ruby
+EZmetrics.new(1.hour).flatten.show(db: :avg, duration: [:avg, :max])
+```
+
+```ruby
+{
+  db_avg: 182,
+  duration_avg: 205,
+  duration_max: 5171
+}
+```
+
+---
+
+Same for [partitioned aggregation](#partitioning)
+
+```ruby
+EZmetrics.new(1.hour).partition_by(:minute).flatten.show(db: :avg, duration: [:avg, :max])
+```
+
+```ruby
+[
+  {
+    timestamp: 1575242880,
+    db_avg: 387,
+    duration_avg: 477,
+    duration_max: 8566
+  },
+  {
+    timestamp: 1575242940,
+    db_avg: 123,
+    duration_avg: 234,
+    duration_max: 3675
+  }
+]
+```
+
 ### Aggregation
 
 The aggregation can be easily configured by specifying aggregation options as in the following examples:
@@ -217,13 +268,57 @@ EZmetrics.new.show(views: :avg, :db: [:avg, :max], requests: true)
 }
 ```
 
+---
+
+**5. Percentile**
+
+This feature is available since version `2.0.0`.
+
+By default percentile aggregation is turned off because it requires to store each value of all metrics.
+
+To enable this feature - you need to set `store_each_value: true` when saving the metrics:
+
+```ruby
+EZmetrics.new.log(
+  duration: 100.5,
+  views: 40.7,
+  db: 59.8,
+  queries: 4,
+  status: 200,
+  store_each_value: true
+)
+```
+
+The aggregation syntax has the following format `metrics_type: :percentile_{number}` where `number` is any integer in the 1..99 range.
+
+```ruby
+EZmetrics.new.show(db: [:avg, :percentile_90, :percentile_95], duration: :percentile_99)
+```
+
+```ruby
+{
+  db: {
+    avg: 155,
+    percentile_90: 205,
+    percentile_95: 215
+  },
+  duration: {
+    percentile_99: 236
+  }
+}
+```
+
 ### Partitioning
 
-To aggregate metrics, partitioned by a unit of time you need to call `partition_by({time_unit})` before calling `show`
+If you want to visualize your metrics by using a **line chart**, you will need to use partitioning.
+
+To aggregate metrics, partitioned by a unit of time you need to call `.partition_by({time_unit})` before calling `.show`
 
 ```ruby
-# Aggregate metrics for last hour, partition by minute
-EZmetrics.new(1.hour).partition_by(:minute).show(duration: [:avg, :max], db: :avg)
+# Aggregate metrics for last hour, partition by minute
+EZmetrics.new(1.hour).partition_by(:minute).show(duration: [:avg, :max], db: :avg)
 ```
 
 This will return an array of objects with the following structure:
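As an editorial aside, the partitioning described in this hunk groups per-second entries by their timestamp truncated to the partition unit. A simplified sketch of the idea (illustrative sample data, not the gem's code, which truncates via `Time.at`):

```ruby
# Simplified sketch: group per-second samples into minutes by zeroing
# the seconds component of each epoch timestamp.
samples = [
  { second: 1575242883, duration: 120 },
  { second: 1575242891, duration: 95 },
  { second: 1575242941, duration: 300 }
]

by_minute = samples.group_by { |s| s[:second] - s[:second] % 60 }
# => two partitions: 1575242880 (two samples) and 1575242940 (one sample)
```

Each partition key then becomes the `timestamp` of one aggregated entry in the result array.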
@@ -274,6 +369,8 @@ Available time units for partitioning: `second`, `minute`, `hour`, `day`. Defaul
 
 The aggregation speed relies on the performance of **Redis** (data storage) and **Oj** (json serialization/parsing).
 
+#### Simple aggregation
+
 You can check the **aggregation** time by running:
 
 ```ruby
@@ -283,10 +380,10 @@ EZmetrics::Benchmark.new.measure_aggregation
 | Interval | Duration (seconds) |
 | :------: | :----------------: |
 | 1 minute | 0.0 |
-| 1 hour | 0.04 |
-| 12 hours | 0.49 |
-| 24 hours | 1.51 |
-| 48 hours | 3.48 |
+| 1 hour | 0.02 |
+| 12 hours | 0.22 |
+| 24 hours | 0.61 |
+| 48 hours | 1.42 |
 
 ---
 
@@ -299,9 +396,44 @@ EZmetrics::Benchmark.new.measure_aggregation(:minute)
 | Interval | Duration (seconds) |
 | :------: | :----------------: |
 | 1 minute | 0.0 |
-| 1 hour | 0.05 |
-| 12 hours | 0.74 |
-| 24 hours | 2.12 |
-| 48 hours | 4.85 |
+| 1 hour | 0.02 |
+| 12 hours | 0.25 |
+| 24 hours | 0.78 |
+| 48 hours | 1.75 |
+
+---
+
+#### Percentile aggregation
+
+You can check the **percentile aggregation** time by running:
+
+```ruby
+EZmetrics::Benchmark.new(true).measure_aggregation
+```
+
+| Interval | Duration (seconds) |
+| :------: | :----------------: |
+| 1 minute | 0.0 |
+| 1 hour | 0.14 |
+| 12 hours | 2.11 |
+| 24 hours | 5.85 |
+| 48 hours | 14.1 |
+
+---
+
+To check the **partitioned aggregation** time for percentile you need to run:
+
+```ruby
+EZmetrics::Benchmark.new(true).measure_aggregation(:minute)
+```
+
+| Interval | Duration (seconds) |
+| :------: | :----------------: |
+| 1 minute | 0.0 |
+| 1 hour | 0.16 |
+| 12 hours | 1.97 |
+| 24 hours | 5.85 |
+| 48 hours | 13.9 |
+
 
 The benchmarks above were run on a _2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM_
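As an editorial aside before the code diff that follows: the central storage change in 2.0 is that each second's metrics are no longer serialized as a JSON hash but as a positional JSON array, with a shared name-to-index schema used for reads and writes. A minimal sketch of that idea (simplified field list, not the gem's actual code):

```ruby
# Minimal sketch of positional storage: a fixed field order lets each
# second be serialized as a compact array instead of a keyed hash.
FIELDS = ["second", "duration_sum", "duration_max", "all"].freeze
SCHEMA = FIELDS.each_with_index.to_h   # {"second"=>0, "duration_sum"=>1, ...}

metrics = { "second" => 1575242880, "duration_sum" => 100.5, "duration_max" => 100.5, "all" => 1 }

# Writing: serialize only the values, in schema order.
packed = metrics.values               # [1575242880, 100.5, 100.5, 1]

# Reading: look positions up through the schema instead of string keys.
duration_sum = packed[SCHEMA["duration_sum"]]   # 100.5
```

This trades hash-key repetition in every Redis value for a fixed schema compiled once per process, which is what the `redis_schema` method below provides.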
data/lib/ezmetrics.rb CHANGED
@@ -9,16 +9,18 @@ class EZmetrics
 
   def initialize(interval_seconds=60)
     @interval_seconds = interval_seconds.to_i
-    @redis = Redis.new
+    @redis = Redis.new(driver: :hiredis)
+    @schema = redis_schema
   end
 
-  def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200})
+  def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200, store_each_value: false})
     @safe_payload = {
-      duration: payload[:duration].to_f,
-      views: payload[:views].to_f,
-      db: payload[:db].to_f,
-      queries: payload[:queries].to_i,
-      status: payload[:status].to_i
+      duration: payload[:duration].to_f,
+      views: payload[:views].to_f,
+      db: payload[:db].to_f,
+      queries: payload[:queries].to_i,
+      status: payload[:status].to_i,
+      store_each_value: payload[:store_each_value].to_s == "true"
     }
 
     this_second = Time.now.to_i
@@ -31,10 +33,11 @@ class EZmetrics
       METRICS.each do |metrics_type|
         update_sum(metrics_type)
         update_max(metrics_type)
+        store_value(metrics_type) if safe_payload[:store_each_value]
       end
 
-      this_second_metrics["statuses"]["all"] += 1
-      this_second_metrics["statuses"][status_group] += 1
+      this_second_metrics[schema["all"]] += 1
+      this_second_metrics[schema[status_group]] += 1
     else
       @this_second_metrics = {
         "second" => this_second,
@@ -46,10 +49,25 @@ class EZmetrics
         "db_max" => safe_payload[:db],
         "queries_sum" => safe_payload[:queries],
         "queries_max" => safe_payload[:queries],
-        "statuses" => { "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0, "all" => 1 }
+        "2xx" => 0,
+        "3xx" => 0,
+        "4xx" => 0,
+        "5xx" => 0,
+        "all" => 1
       }
 
-      this_second_metrics["statuses"][status_group] = 1
+      if safe_payload[:store_each_value]
+        this_second_metrics.merge!(
+          "duration_values" => [safe_payload[:duration]],
+          "views_values" => [safe_payload[:views]],
+          "db_values" => [safe_payload[:db]],
+          "queries_values" => [safe_payload[:queries]]
+        )
+      end
+
+      this_second_metrics[status_group] = 1
+
+      @this_second_metrics = this_second_metrics.values
     end
 
     redis.setex(this_second, interval_seconds, Oj.dump(this_second_metrics))
@@ -63,20 +81,25 @@ class EZmetrics
     partitioned_metrics ? aggregate_partitioned_data : aggregate_data
   end
 
+  def flatten
+    @flat = true
+    self
+  end
+
   def partition_by(time_unit=:minute)
     time_unit = PARTITION_UNITS.include?(time_unit) ? time_unit : :minute
-    @partitioned_metrics = interval_metrics.group_by { |h| second_to_partition_unit(time_unit, h["second"]) }
+    @partitioned_metrics = interval_metrics.group_by { |array| second_to_partition_unit(time_unit, array[schema["second"]]) }
     self
   end
 
   private
 
-  attr_reader :redis, :interval_seconds, :interval_metrics, :requests,
+  attr_reader :redis, :interval_seconds, :interval_metrics, :requests, :flat, :schema,
     :storage_key, :safe_payload, :this_second_metrics, :partitioned_metrics, :options
 
   def aggregate_data
     return {} unless interval_metrics.any?
-    @requests = interval_metrics.sum { |hash| hash["statuses"]["all"] }
+    @requests = interval_metrics.sum { |array| array[schema["all"]] }
     build_result
   rescue
     {}
@@ -85,17 +108,20 @@ class EZmetrics
   def aggregate_partitioned_data
     partitioned_metrics.map do |partition, metrics|
       @interval_metrics = metrics
-      @requests = interval_metrics.sum { |hash| hash["statuses"]["all"] }
-      { timestamp: partition, data: build_result }
+      @requests = interval_metrics.sum { |array| array[schema["all"]] }
+      METRICS.each { |metrics_type| instance_variable_set("@sorted_#{metrics_type}_values", nil) }
+      flat ? { timestamp: partition, **build_result } : { timestamp: partition, data: build_result }
     end
   rescue
-    new(options)
+    self
   end
 
   def build_result
     result = {}
 
-    result[:requests] = { all: requests, grouped: count_all_status_groups } if options[:requests]
+    if options[:requests]
+      append_requests_to_result(result, { all: requests, grouped: count_all_status_groups })
+    end
 
     options.each do |metrics, aggregation_functions|
       next unless METRICS.include?(metrics)
@@ -103,8 +129,8 @@ class EZmetrics
       next unless aggregation_functions.any?
 
       aggregation_functions.each do |aggregation_function|
-        result[metrics] ||= {}
-        result[metrics][aggregation_function] = aggregate(metrics, aggregation_function)
+        aggregated_metrics = aggregate(metrics, aggregation_function)
+        append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
       end
     end
     result
@@ -112,51 +138,129 @@ class EZmetrics
     result
   end
 
+  def append_requests_to_result(result, aggregated_requests)
+    return result[:requests] = aggregated_requests unless flat
+
+    result[:requests_all] = aggregated_requests[:all]
+    aggregated_requests[:grouped].each do |group, counter|
+      result[:"requests_#{group}"] = counter
+    end
+  end
+
+  def append_metrics_to_result(result, metrics, aggregation_function, aggregated_metrics)
+    return result[:"#{metrics}_#{aggregation_function}"] = aggregated_metrics if flat
+
+    result[metrics] ||= {}
+    result[metrics][aggregation_function] = aggregated_metrics
+  end
+
   def second_to_partition_unit(time_unit, second)
     return second if time_unit == :second
-    time_unit_depth = { minute: 4, hour: 3, day: 2 }
-    reset_depth = time_unit_depth[time_unit]
-    time_to_array = Time.at(second).to_a[0..5].reverse
-    Time.new(*time_to_array[0..reset_depth]).to_i
+    time = Time.at(second)
+    return (time - time.sec - time.min * 60 - time.hour * 3600).to_i if time_unit == :day
+    return (time - time.sec - time.min * 60).to_i if time_unit == :hour
+    (time - time.sec).to_i
   end
 
   def interval_metrics
     @interval_metrics ||= begin
       interval_start = Time.now.to_i - interval_seconds
       interval_keys = (interval_start..Time.now.to_i).to_a
-      redis.mget(interval_keys).compact.map { |hash| Oj.load(hash) }
+      redis.mget(interval_keys).compact.map { |array| Oj.load(array) }
     end
   end
 
   def aggregate(metrics, aggregation_function)
-    return unless AGGREGATION_FUNCTIONS.include?(aggregation_function)
     return avg("#{metrics}_sum") if aggregation_function == :avg
     return max("#{metrics}_max") if aggregation_function == :max
+
+    percentile = aggregation_function.match(/percentile_(?<value>\d+)/)
+
+    if percentile && percentile["value"].to_i.between?(1, 99)
+      sorted_values = send("sorted_#{metrics}_values")
+      percentile(sorted_values, percentile["value"].to_i)
+    end
+  end
+
+  METRICS.each do |metrics|
+    define_method "sorted_#{metrics}_values" do
+      instance_variable_get("@sorted_#{metrics}_values") || instance_variable_set(
+        "@sorted_#{metrics}_values", interval_metrics.map { |array| array[schema["#{metrics}_values"]] }.flatten.compact.sort
+      )
+    end
+  end
+
+  def redis_schema
+    [
+      "second",
+      "duration_sum",
+      "duration_max",
+      "views_sum",
+      "views_max",
+      "db_sum",
+      "db_max",
+      "queries_sum",
+      "queries_max",
+      "2xx",
+      "3xx",
+      "4xx",
+      "5xx",
+      "all",
+      "duration_values",
+      "views_values",
+      "db_values",
+      "queries_values"
+    ].each_with_index.inject({}){ |result, pair| result[pair[0]] = pair[1] ; result }
   end
 
   def update_sum(metrics)
-    this_second_metrics["#{metrics}_sum"] += safe_payload[metrics]
+    this_second_metrics[schema["#{metrics}_sum"]] += safe_payload[metrics]
+  end
+
+  def store_value(metrics)
+    this_second_metrics[schema["#{metrics}_values"]] << safe_payload[metrics]
   end
 
   def update_max(metrics)
-    max_value = [safe_payload[metrics], this_second_metrics["#{metrics}_max"]].max
-    this_second_metrics["#{metrics}_max"] = max_value
+    max_value = [safe_payload[metrics], this_second_metrics[schema["#{metrics}_max"]]].max
+    this_second_metrics[schema["#{metrics}_max"]] = max_value
   end
 
   def avg(metrics)
-    (interval_metrics.sum { |h| h[metrics] }.to_f / requests).round
+    (interval_metrics.sum { |array| array[schema[metrics]] }.to_f / requests).round
   end
 
   def max(metrics)
-    interval_metrics.max { |h| h[metrics] }[metrics].round
+    interval_metrics.max { |array| array[schema[metrics]] }[schema[metrics]].round
+  end
+
+  def percentile(sorted_array, pcnt)
+    array_length = sorted_array.length
+
+    return "not enough data (requests: #{array_length}, required: #{pcnt})" if array_length < pcnt
+
+    rank = (pcnt.to_f / 100) * (array_length + 1)
+    whole = rank.truncate
+
+    # if has fractional part
+    if whole != rank
+      s0 = sorted_array[whole - 1]
+      s1 = sorted_array[whole]
+
+      f = (rank - rank.truncate).abs
+
+      return ((f * (s1 - s0)) + s0)&.round
+    else
+      return (sorted_array[whole - 1])&.round
+    end
   end
 
   def count_all_status_groups
-    interval_metrics.inject({ "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0 }) do |result, h|
-      result["2xx"] += h["statuses"]["2xx"]
-      result["3xx"] += h["statuses"]["3xx"]
-      result["4xx"] += h["statuses"]["4xx"]
-      result["5xx"] += h["statuses"]["5xx"]
+    interval_metrics.inject({ "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0 }) do |result, array|
+      result["2xx"] += array[schema["2xx"]]
+      result["3xx"] += array[schema["3xx"]]
+      result["4xx"] += array[schema["4xx"]]
+      result["5xx"] += array[schema["5xx"]]
       result
     end
   end
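The `percentile` method added in this hunk computes the exclusive rank `p/100 * (n + 1)` over the sorted values and linearly interpolates when the rank falls between two elements. A self-contained sketch of the same calculation (with the guard clause simplified to return `nil` instead of the gem's "not enough data" message):

```ruby
# Standalone sketch of rank-with-interpolation percentile
# (exclusive method: rank = p/100 * (n + 1)).
def percentile(sorted_array, pcnt)
  n = sorted_array.length
  return nil if n < pcnt # the gem returns a "not enough data" string here

  rank = (pcnt.to_f / 100) * (n + 1)
  whole = rank.truncate
  fraction = rank - whole

  if fraction.zero?
    # Rank lands exactly on an element (1-based), so no interpolation.
    sorted_array[whole - 1].round
  else
    # Interpolate between the two neighboring elements.
    s0 = sorted_array[whole - 1]
    s1 = sorted_array[whole]
    (fraction * (s1 - s0) + s0).round
  end
end

percentile((1..99).to_a, 50) # => 50
percentile((1..99).to_a, 90) # => 90
```

The `n < pcnt` guard explains why percentiles need `store_each_value: true` and a reasonable number of requests: a 99th percentile is only meaningful once at least 99 samples exist.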
data/lib/ezmetrics/benchmark.rb CHANGED
@@ -2,12 +2,13 @@ require "benchmark"
 
 class EZmetrics::Benchmark
 
-  def initialize
-    @start = Time.now.to_i
-    @redis = Redis.new
-    @durations = []
-    @iterations = 3
-    @intervals = {
+  def initialize(store_each_value=false)
+    @store_each_value = store_each_value
+    @start = Time.now.to_i
+    @redis = Redis.new(driver: :hiredis)
+    @durations = []
+    @iterations = 1
+    @intervals = {
       "1.minute" => 60,
       "1.hour " => 3600,
       "12.hours" => 43200,
@@ -29,31 +30,38 @@ class EZmetrics::Benchmark
 
   private
 
-  attr_reader :start, :redis, :durations, :intervals, :iterations
+  attr_reader :start, :redis, :durations, :intervals, :iterations, :store_each_value
 
   def write_metrics
     seconds = intervals.values.max
     seconds.times do |i|
       second = start - i
       payload = {
-        "second" => second,
-        "duration_sum" => rand(10000),
-        "duration_max" => rand(10000),
-        "views_sum" => rand(1000),
-        "views_max" => rand(1000),
-        "db_sum" => rand(8000),
-        "db_max" => rand(8000),
-        "queries_sum" => rand(100),
-        "queries_max" => rand(100),
-        "statuses" => {
-          "2xx" => rand(1..10),
-          "3xx" => rand(1..10),
-          "4xx" => rand(1..10),
-          "5xx" => rand(1..10),
-          "all" => rand(1..40)
-        }
+        "second" => second,
+        "duration_sum" => rand(10000),
+        "duration_max" => rand(10000),
+        "views_sum" => rand(1000),
+        "views_max" => rand(1000),
+        "db_sum" => rand(8000),
+        "db_max" => rand(8000),
+        "queries_sum" => rand(100),
+        "queries_max" => rand(100),
+        "2xx" => rand(1..10),
+        "3xx" => rand(1..10),
+        "4xx" => rand(1..10),
+        "5xx" => rand(1..10),
+        "all" => rand(1..40)
       }
-      redis.setex(second, seconds, Oj.dump(payload))
+
+      if store_each_value
+        payload.merge!(
+          "duration_values" => Array.new(100) { rand(10..60000) },
+          "views_values" => Array.new(100) { rand(10..60000) },
+          "db_values" => Array.new(100) { rand(10..60000) },
+          "queries_values" => Array.new(10) { rand(1..60) }
+        )
+      end
+      redis.setex(second, seconds, Oj.dump(payload.values))
     end
     nil
   end
@@ -67,10 +75,11 @@ class EZmetrics::Benchmark
   def measure_aggregation_time(interval, seconds, partition_by)
     iterations.times do
       durations << ::Benchmark.measure do
-        if partition_by
-          EZmetrics.new(seconds).partition_by(partition_by).show
+        ezmetrics = EZmetrics.new(seconds)
+        if store_each_value
+          partition_by ? ezmetrics.partition_by(partition_by).show(db: :percentile_90) : ezmetrics.show(db: :percentile_90)
         else
-          EZmetrics.new(seconds).show
+          partition_by ? ezmetrics.partition_by(partition_by).show : ezmetrics.show
         end
      end.real
    end
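The timing loop above uses Ruby's stdlib `Benchmark`, keeping the `real` (wall-clock) component of each run. The pattern in isolation, with a placeholder workload standing in for the aggregation call:

```ruby
require "benchmark"

# Collect wall-clock durations across iterations, as the benchmark
# class above does, then average them.
durations = []
3.times do
  durations << Benchmark.measure { 100_000.times { |i| i * i } }.real
end

average_duration = durations.sum / durations.size
```

Wall-clock time (rather than CPU time) is the right measure here because the dominant costs are Redis round-trips and JSON parsing, not pure computation.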
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: ezmetrics
 version: !ruby/object:Gem::Version
-  version: 1.2.0
+  version: 2.0.2
 platform: ruby
 authors:
 - Nicolae Rotaru
@@ -76,11 +76,10 @@ files:
 - README.md
 - lib/ezmetrics.rb
 - lib/ezmetrics/benchmark.rb
-homepage: https://nyku.github.io/ezmetrics
+homepage: https://github.com/nyku/ezmetrics
 licenses:
 - GPL-3.0
-metadata:
-  source_code_uri: https://github.com/nyku/ezmetrics
+metadata: {}
 post_install_message:
 rdoc_options: []
 require_paths: