ezmetrics 1.0.4 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (5)
  1. checksums.yaml +5 -5
  2. data/README.md +213 -19
  3. data/lib/ezmetrics.rb +131 -83
  4. data/lib/ezmetrics/benchmark.rb +95 -0
  5. metadata +35 -6
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
-   metadata.gz: 55610b1357968e974ca2e525a6d09bd2245c8f5e
-   data.tar.gz: efef7c5d22cc0c4d9eb27e91b2149a23e3eb4832
+ SHA256:
+   metadata.gz: 0affbdfcda841ffe4eec9105ded5526b13d209ccf0aa564e1b84c91651079d0e
+   data.tar.gz: 8c0f3879ee0f2cedeba7f6b0526f679ef9962e2270c7639837821f226a0f3580
  SHA512:
-   metadata.gz: a161d7d0715d82958a89131e31c3344dcd4553d78909406bedd0041c0d91e05486be64b2c7c382783b540ca837a57bc97763cc9fc1a88a821f7e19083b22e0ce
-   data.tar.gz: 32073d20b467a1c2e98b0efe561f8513d7e2de23dd75740c42cc3628a62112b1ee69658d533f2b91c99453c375556815dda4515d159b90d7ccf1b7b9cba602e8
+   metadata.gz: 28d6883722e140005b24071ab637104d6d0e16e22a75fffac734b242fafbb88c1ecdbfd4d4d44a11439a8cbac5d0ad5dcfba705ed26a706f6abb83f55f3f5513
+   data.tar.gz: b02ddcabf2669f9458a21dd37e0495218eedff6f15047c923b81d6fbcd311ce34ed2d27c7ea664a97aebaf806eac653dd4ec09fb3ee526607da796ec4c3a54f2
data/README.md CHANGED
@@ -2,8 +2,7 @@

  [![Gem Version](https://badge.fury.io/rb/ezmetrics.svg)](https://badge.fury.io/rb/ezmetrics)

- A simple tool for capturing and displaying Rails metrics.
-
+ Simple, lightweight and fast metrics aggregation for Rails.

  ## Installation

@@ -15,45 +14,59 @@ gem 'ezmetrics'

  ### Getting started

- This tool captures and aggregates metrics such as
- - `status`
+ This tool captures and aggregates Rails application metrics such as
+
  - `duration`
- - `queries`
+ - `views`
  - `db`
+ - `queries`
+ - `status`

- for a 60 seconds timeframe by default.
+ and stores them for the timeframe you specified, 60 seconds by default.

- You can change the timeframe according to your needs and save the metrics by calling `log` method:
+ You can change the timeframe according to your needs and save the metrics by calling `log` method:

  ```ruby
  # Store the metrics for 60 seconds (default behaviour)
- EZmetrics.new.log(status: 200, db: 300.45, duration: 320.45, queries: 5)
+ EZmetrics.new.log(
+   duration: 100.5,
+   views: 40.7,
+   db: 59.8,
+   queries: 4,
+   status: 200
+ )
  ```
- or

  ```ruby
  # Store the metrics for 10 minutes
- EZmetrics.new(10.minutes).log(status: 200, db: 300.45, duration: 320.45, queries: 5)
+ EZmetrics.new(10.minutes).log(
+   duration: 100.5,
+   views: 40.7,
+   db: 59.8,
+   queries: 4,
+   status: 200
+ )
  ```

- For displaying metrics you need call `show` method:
+ ---
+
+ For displaying metrics you need to call `show` method:

  ```ruby
  # Aggregate and show metrics for last 60 seconds (default behaviour)
- EZmetrics.new.show
+ EZmetrics.new.show
  ```

- or
-
  ```ruby
  # Aggregate and show metrics for last 10 minutes
  EZmetrics.new(10.minutes).show
  ```

- > Please note that you can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.
+ You can combine these timeframes, for example - store for 10 minutes, display for 5 minutes.

+ ### Capture metrics

- ### Add an initializer to your Rails application
+ Just add an initializer to your application:

  ```ruby
  # config/initializers/ezmetrics.rb
@@ -69,16 +82,19 @@ end
  ActiveSupport::Notifications.subscribe("process_action.action_controller") do |*args|
    event = ActiveSupport::Notifications::Event.new(*args)
    EZmetrics.new.log(
-     queries: Thread.current[:queries].to_i,
-     db: event.payload[:db_runtime].to_f,
      duration: event.duration.to_f,
-     status: event.payload[:status].to_i || 500
+     views: event.payload[:view_runtime].to_f,
+     db: event.payload[:db_runtime].to_f,
+     status: event.payload[:status].to_i || 500,
+     queries: Thread.current[:queries].to_i,
    )
  end
  ```

  ### Display metrics

+ As simple as:
+
  ```ruby
  EZmetrics.new.show
  ```
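The `queries:` value in the hunk above reads a per-request counter from `Thread.current[:queries]`, which the initializer is expected to maintain elsewhere (outside this hunk). A minimal sketch of such a counter, assuming a standard `sql.active_record` subscriber and a reset at the start of each request; the exact subscriber and filtering in the gem's README may differ:

```ruby
# Hypothetical companion subscriber (not part of this diff): keeps a per-request
# SQL query count in Thread.current[:queries] for the process_action hook above.
ActiveSupport::Notifications.subscribe("start_processing.action_controller") do |*args|
  Thread.current[:queries] = 0 # reset the counter when a request starts
end

ActiveSupport::Notifications.subscribe("sql.active_record") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  # Ignore cached queries and schema lookups; count everything else.
  unless event.payload[:cached] || event.payload[:name] == "SCHEMA"
    Thread.current[:queries] = Thread.current[:queries].to_i + 1
  end
end
```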
@@ -91,6 +107,10 @@ This will return a hash with the following structure:
      avg: 5569,
      max: 9675
    },
+   views: {
+     avg: 12,
+     max: 240
+   },
    db: {
      avg: 155,
      max: 4382
@@ -111,3 +131,177 @@ This will return a hash with the following structure:
  }
  ```

+ ### Aggregation
+
+ The aggregation can be easily configured by specifying aggregation options as in the following examples:
+
+ **1. Single**
+
+ ```ruby
+ EZmetrics.new.show(duration: :max)
+ ```
+
+ ```ruby
+ {
+   duration: {
+     max: 9675
+   }
+ }
+ ```
+
+ ---
+
+ **2. Multiple**
+
+ ```ruby
+ EZmetrics.new.show(queries: [:max, :avg])
+ ```
+
+ ```ruby
+ {
+   queries: {
+     max: 76,
+     avg: 26
+   }
+ }
+ ```
+
+ ---
+
+ **3. Requests**
+
+ ```ruby
+ EZmetrics.new.show(requests: true)
+ ```
+
+ ```ruby
+ {
+   requests: {
+     all: 2000,
+     grouped: {
+       "2xx" => 1900,
+       "3xx" => 15,
+       "4xx" => 80,
+       "5xx" => 5
+     }
+   }
+ }
+ ```
+
+ ---
+
+ **4. Combined**
+
+ ```ruby
+ EZmetrics.new.show(views: :avg, db: [:avg, :max], requests: true)
+ ```
+
+ ```ruby
+ {
+   views: {
+     avg: 12
+   },
+   db: {
+     avg: 155,
+     max: 4382
+   },
+   requests: {
+     all: 2000,
+     grouped: {
+       "2xx" => 1900,
+       "3xx" => 15,
+       "4xx" => 80,
+       "5xx" => 5
+     }
+   }
+ }
+ ```
+
+ ### Partitioning
+
+ To aggregate metrics, partitioned by a unit of time, you need to call `partition_by({time_unit})` before calling `show`:
+
+ ```ruby
+ # Aggregate metrics for last hour, partition by minute
+ EZmetrics.new(1.hour).partition_by(:minute).show(duration: [:avg, :max], db: :avg)
+ ```
+
+ This will return an array of objects with the following structure:
+
+ ```ruby
+ [
+   {
+     timestamp: # UNIX timestamp
+     data: # a hash with aggregated metrics
+   }
+ ]
+ ```
+
+ like in the example below:
+
+ ```ruby
+ [
+   {
+     timestamp: 1575242880,
+     data: {
+       duration: {
+         avg: 477,
+         max: 8566
+       },
+       db: {
+         avg: 387
+       }
+     }
+   },
+   {
+     timestamp: 1575242940,
+     data: {
+       duration: {
+         avg: 234,
+         max: 3675
+       },
+       db: {
+         avg: 123
+       }
+     }
+   }
+ ]
+ ```
+
+ Available time units for partitioning: `second`, `minute`, `hour`, `day`. Default: `minute`.
+
+ ### Performance
+
+ The aggregation speed relies on the performance of **Redis** (data storage) and **Oj** (json serialization/parsing).
+
+ You can check the **aggregation** time by running:
+
+ ```ruby
+ EZmetrics::Benchmark.new.measure_aggregation
+ ```
+
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0 |
+ | 1 hour | 0.04 |
+ | 12 hours | 0.49 |
+ | 24 hours | 1.51 |
+ | 48 hours | 3.48 |
+
+ ---
+
+ To check the **partitioned aggregation** time you need to run:
+
+ ```ruby
+ EZmetrics::Benchmark.new.measure_aggregation(:minute)
+ ```
+
+ | Interval | Duration (seconds) |
+ | :------: | :----------------: |
+ | 1 minute | 0.0 |
+ | 1 hour | 0.05 |
+ | 12 hours | 0.74 |
+ | 24 hours | 2.12 |
+ | 48 hours | 4.85 |
+
+ The benchmarks above were run on a _2017 Macbook Pro 2.9 GHz Intel Core i7 with 16 GB of RAM_
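A note on the bare `EZmetrics.new.show` call used throughout the README: per the `default_options` hash added in the new `lib/ezmetrics.rb` (shown in the next file of this diff), calling `show` with no arguments asks for `:max` and `:avg` on every metric plus the request counts, so the two calls below should be equivalent:

```ruby
# Equivalent calls: show() falls back to default_options,
# which requests :max and :avg for every metric plus request counts.
EZmetrics.new.show
EZmetrics.new.show(
  duration: [:max, :avg],
  views:    [:max, :avg],
  db:       [:max, :avg],
  queries:  [:max, :avg],
  requests: true
)
```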
data/lib/ezmetrics.rb CHANGED

@@ -1,135 +1,183 @@
- require "redis" unless defined?(Redis)
- require "json" unless defined?(JSON)
+ require "redis"
+ require "redis/connection/hiredis"
+ require "oj"

  class EZmetrics
+   METRICS = [:duration, :views, :db, :queries].freeze
+   AGGREGATION_FUNCTIONS = [:max, :avg].freeze
+   PARTITION_UNITS = [:second, :minute, :hour, :day].freeze
+
    def initialize(interval_seconds=60)
      @interval_seconds = interval_seconds.to_i
      @redis = Redis.new
-     @storage_key = "ez-metrics"
    end

-   def log(payload={db: 0.0, duration: 0.0, queries: 0, status: 200})
-     payload = {
+   def log(payload={duration: 0.0, views: 0.0, db: 0.0, queries: 0, status: 200})
+     @safe_payload = {
+       duration: payload[:duration].to_f,
+       views: payload[:views].to_f,
        db: payload[:db].to_f,
        queries: payload[:queries].to_i,
-       duration: payload[:duration].to_f,
        status: payload[:status].to_i
      }

-     this_second = Time.now.to_i
-     status_group = "#{payload[:status].to_s[0]}xx"
-     this_second_metrics = redis.get("#{storage_key}:#{this_second}")
+     this_second = Time.now.to_i
+     status_group = "#{payload[:status].to_s[0]}xx"
+     @this_second_metrics = redis.get(this_second)

      if this_second_metrics
-       this_second_metrics = JSON.parse(this_second_metrics)
-       this_second_metrics["db_sum"] += payload[:db]
-       this_second_metrics["queries_sum"] += payload[:queries]
-       this_second_metrics["duration_sum"] += payload[:duration]
+       @this_second_metrics = Oj.load(this_second_metrics)
+
+       METRICS.each do |metrics_type|
+         update_sum(metrics_type)
+         update_max(metrics_type)
+       end
+
        this_second_metrics["statuses"]["all"] += 1
        this_second_metrics["statuses"][status_group] += 1
-       this_second_metrics["db_max"] = [payload[:db], this_second_metrics["db_max"]].max
-       this_second_metrics["queries_max"] = [payload[:queries], this_second_metrics["queries_max"]].max
-       this_second_metrics["duration_max"] = [payload[:duration], this_second_metrics["duration_max"]].max
      else
-       this_second_metrics = {
-         "db_sum" => payload[:db],
-         "db_max" => payload[:db],
-         "queries_sum" => payload[:queries],
-         "queries_max" => payload[:queries],
-         "duration_sum" => payload[:duration],
-         "duration_max" => payload[:duration],
+       @this_second_metrics = {
+         "second" => this_second,
+         "duration_sum" => safe_payload[:duration],
+         "duration_max" => safe_payload[:duration],
+         "views_sum" => safe_payload[:views],
+         "views_max" => safe_payload[:views],
+         "db_sum" => safe_payload[:db],
+         "db_max" => safe_payload[:db],
+         "queries_sum" => safe_payload[:queries],
+         "queries_max" => safe_payload[:queries],
          "statuses" => { "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0, "all" => 1 }
        }

        this_second_metrics["statuses"][status_group] = 1
      end

-     redis.setex("#{storage_key}:#{this_second}", interval_seconds, JSON.generate(this_second_metrics))
-
+     redis.setex(this_second, interval_seconds, Oj.dump(this_second_metrics))
      true
    rescue => error
      formatted_error(error)
    end

-   def show
-     interval_start = Time.now.to_i - interval_seconds
-     interval_keys = (interval_start..Time.now.to_i).to_a.map { |second| "#{storage_key}:#{second}" }
-     @interval_metrics = redis.mget(interval_keys).compact.map { |hash| JSON.parse(hash) }
+   def show(options=nil)
+     @options = options || default_options
+     partitioned_metrics ? aggregate_partitioned_data : aggregate_data
+   end

-     return empty_metrics_object unless interval_metrics.any?
+   def partition_by(time_unit=:minute)
+     time_unit = PARTITION_UNITS.include?(time_unit) ? time_unit : :minute
+     @partitioned_metrics = interval_metrics.group_by { |h| second_to_partition_unit(time_unit, h["second"]) }
+     self
+   end

-     @requests = interval_metrics.map { |hash| hash["statuses"]["all"] }.compact.sum
+   private

-     metrics_object
+   attr_reader :redis, :interval_seconds, :interval_metrics, :requests,
+     :storage_key, :safe_payload, :this_second_metrics, :partitioned_metrics, :options
+
+   def aggregate_data
+     return {} unless interval_metrics.any?
+     @requests = interval_metrics.sum { |hash| hash["statuses"]["all"] }
+     build_result
    rescue
-     empty_metrics_object
+     {}
    end

-   private
+   def aggregate_partitioned_data
+     partitioned_metrics.map do |partition, metrics|
+       @interval_metrics = metrics
+       @requests = interval_metrics.sum { |hash| hash["statuses"]["all"] }
+       { timestamp: partition, data: build_result }
+     end
+   rescue
+     new(options)
+   end

-   attr_reader :redis, :interval_seconds, :interval_metrics, :requests, :storage_key
+   def build_result
+     result = {}
+
+     result[:requests] = { all: requests, grouped: count_all_status_groups } if options[:requests]
+
+     options.each do |metrics, aggregation_functions|
+       next unless METRICS.include?(metrics)
+       aggregation_functions = [aggregation_functions] unless aggregation_functions.is_a?(Array)
+       next unless aggregation_functions.any?
+
+       aggregation_functions.each do |aggregation_function|
+         result[metrics] ||= {}
+         result[metrics][aggregation_function] = aggregate(metrics, aggregation_function)
+       end
+     end
+     result
+   ensure
+     result
+   end
+
+   def second_to_partition_unit(time_unit, second)
+     return second if time_unit == :second
+     time_unit_depth = { minute: 4, hour: 3, day: 2 }
+     reset_depth = time_unit_depth[time_unit]
+     time_to_array = Time.at(second).to_a[0..5].reverse
+     Time.new(*time_to_array[0..reset_depth]).to_i
+   end
+
+   def interval_metrics
+     @interval_metrics ||= begin
+       interval_start = Time.now.to_i - interval_seconds
+       interval_keys = (interval_start..Time.now.to_i).to_a
+       redis.mget(interval_keys).compact.map { |hash| Oj.load(hash) }
+     end
+   end
+
+   def aggregate(metrics, aggregation_function)
+     return unless AGGREGATION_FUNCTIONS.include?(aggregation_function)
+     return avg("#{metrics}_sum") if aggregation_function == :avg
+     return max("#{metrics}_max") if aggregation_function == :max
+   end
+
+   def update_sum(metrics)
+     this_second_metrics["#{metrics}_sum"] += safe_payload[metrics]
+   end
+
+   def update_max(metrics)
+     max_value = [safe_payload[metrics], this_second_metrics["#{metrics}_max"]].max
+     this_second_metrics["#{metrics}_max"] = max_value
+   end

    def avg(metrics)
-     (interval_metrics.map { |h| h[metrics.to_s] }.sum.to_f / requests).round
+     (interval_metrics.sum { |h| h[metrics] }.to_f / requests).round
    end

    def max(metrics)
-     interval_metrics.map { |h| h[metrics.to_s] }.max.round
+     interval_metrics.max { |h| h[metrics] }[metrics].round
    end

-   def count(group)
-     interval_metrics.map { |h| h["statuses"][group.to_s] }.sum
+   def count_all_status_groups
+     interval_metrics.inject({ "2xx" => 0, "3xx" => 0, "4xx" => 0, "5xx" => 0 }) do |result, h|
+       result["2xx"] += h["statuses"]["2xx"]
+       result["3xx"] += h["statuses"]["3xx"]
+       result["4xx"] += h["statuses"]["4xx"]
+       result["5xx"] += h["statuses"]["5xx"]
+       result
+     end
    end

-   def formatted_error(error)
+   def default_options
      {
-       error: error.class.name,
-       message: error.message,
-       backtrace: error.backtrace.reject { |line| line.match(/ruby|gems/) }
+       duration: AGGREGATION_FUNCTIONS,
+       views: AGGREGATION_FUNCTIONS,
+       db: AGGREGATION_FUNCTIONS,
+       queries: AGGREGATION_FUNCTIONS,
+       requests: true
      }
    end

-   def metrics_object
+   def formatted_error(error)
      {
-       duration: {
-         avg: avg(:duration_sum),
-         max: max(:duration_max)
-       },
-       db: {
-         avg: avg(:db_sum),
-         max: max(:db_max)
-       },
-       queries: {
-         avg: avg(:queries_sum),
-         max: max(:queries_max)
-       },
-       requests: {
-         all: requests,
-         grouped: {
-           "2xx" => count("2xx"),
-           "3xx" => count("3xx"),
-           "4xx" => count("4xx"),
-           "5xx" => count("5xx")
-         }
-       }
+       error: error.class.name,
+       message: error.message,
+       backtrace: error.backtrace.reject { |line| line.match(/ruby|gems/) }
      }
    end
+ end

-   def empty_metrics_object
-     {
-       duration: {
-         avg: 0,
-         max: 0
-       },
-       db: {
-         avg: 0,
-         max: 0
-       },
-       queries: {
-         avg: 0,
-         max: 0
-       },
-       requests: {}
-     }
-   end
- end
+ require "ezmetrics/benchmark"
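The new implementation keys Redis entries by the epoch second itself (`redis.setex(this_second, ...)`) and stores only per-second sums, maxima and status counts; `show` then derives averages by dividing the summed values by the total request count. A standalone sketch of that arithmetic with invented numbers (written with `max_by` for clarity; this is not the gem's own code):

```ruby
# Two per-second hashes, as redis.mget would return them after Oj.load (values invented)
interval_metrics = [
  { "duration_sum" => 300.0, "duration_max" => 180.0, "statuses" => { "all" => 3 } },
  { "duration_sum" => 100.0, "duration_max" => 90.0,  "statuses" => { "all" => 2 } }
]

requests     = interval_metrics.sum { |h| h["statuses"]["all"] }                       # => 5
duration_avg = (interval_metrics.sum { |h| h["duration_sum"] } / requests).round       # => 80
duration_max = interval_metrics.max_by { |h| h["duration_max"] }["duration_max"].round # => 180
```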
data/lib/ezmetrics/benchmark.rb ADDED

@@ -0,0 +1,95 @@
+ require "benchmark"
+
+ class EZmetrics::Benchmark
+
+   def initialize
+     @start = Time.now.to_i
+     @redis = Redis.new
+     @durations = []
+     @iterations = 3
+     @intervals = {
+       "1.minute" => 60,
+       "1.hour " => 3600,
+       "12.hours" => 43200,
+       "24.hours" => 86400,
+       "48.hours" => 172800
+     }
+   end
+
+   def measure_aggregation(partition_by=nil)
+     write_metrics
+     print_header
+     intervals.each do |interval, seconds|
+       result = measure_aggregation_time(interval, seconds, partition_by)
+       print_row(result)
+     end
+     cleanup_metrics
+     print_footer
+   end
+
+   private
+
+   attr_reader :start, :redis, :durations, :intervals, :iterations
+
+   def write_metrics
+     seconds = intervals.values.max
+     seconds.times do |i|
+       second = start - i
+       payload = {
+         "second" => second,
+         "duration_sum" => rand(10000),
+         "duration_max" => rand(10000),
+         "views_sum" => rand(1000),
+         "views_max" => rand(1000),
+         "db_sum" => rand(8000),
+         "db_max" => rand(8000),
+         "queries_sum" => rand(100),
+         "queries_max" => rand(100),
+         "statuses" => {
+           "2xx" => rand(1..10),
+           "3xx" => rand(1..10),
+           "4xx" => rand(1..10),
+           "5xx" => rand(1..10),
+           "all" => rand(1..40)
+         }
+       }
+       redis.setex(second, seconds, Oj.dump(payload))
+     end
+     nil
+   end
+
+   def cleanup_metrics
+     interval_start = Time.now.to_i - intervals.values.max - 100
+     interval_keys = (interval_start..Time.now.to_i).to_a
+     redis.del(interval_keys)
+   end
+
+   def measure_aggregation_time(interval, seconds, partition_by)
+     iterations.times do
+       durations << ::Benchmark.measure do
+         if partition_by
+           EZmetrics.new(seconds).partition_by(partition_by).show
+         else
+           EZmetrics.new(seconds).show
+         end
+       end.real
+     end
+
+     return {
+       interval: interval.gsub(".", " "),
+       duration: (durations.sum.to_f / iterations).round(2)
+     }
+   end
+
+   def print_header
+     print "\n#{'─'*31}\n| Interval | Duration (seconds)\n#{'─'*31}\n"
+   end
+
+   def print_row(result)
+     print "| #{result[:interval]} | #{result[:duration]}\n"
+   end
+
+   def print_footer
+     print "#{'─'*31}\n"
+   end
+ end
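Per the README's Performance section, this class is meant to be invoked from a console in an application where `EZmetrics`, Redis and Oj are already loaded. Note that `write_metrics` seeds up to 48 hours' worth of synthetic per-second keys into the same Redis database before `cleanup_metrics` deletes them, so it is best kept away from production data.

```ruby
# Plain aggregation benchmark
EZmetrics::Benchmark.new.measure_aggregation

# Partitioned aggregation benchmark, grouped by minute
EZmetrics::Benchmark.new.measure_aggregation(:minute)
```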
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ezmetrics
  version: !ruby/object:Gem::Version
-   version: 1.0.4
+   version: 1.2.0
  platform: ruby
  authors:
  - Nicolae Rotaru
@@ -24,6 +24,34 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: '4.0'
+ - !ruby/object:Gem::Dependency
+   name: hiredis
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.6.3
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.6.3
+ - !ruby/object:Gem::Dependency
+   name: oj
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.10'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.10'
  - !ruby/object:Gem::Dependency
    name: rspec
    requirement: !ruby/object:Gem::Requirement
@@ -38,7 +66,7 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: '3.5'
- description: A simple tool for capturing and displaying Rails metrics.
+ description: Simple, lightweight and fast metrics aggregation for Rails.
  email: nyku.rn@gmail.com
  executables: []
  extensions: []
@@ -47,10 +75,12 @@ files:
  - LICENSE
  - README.md
  - lib/ezmetrics.rb
- homepage: https://github.com/nyku/ezmetrics
+ - lib/ezmetrics/benchmark.rb
+ homepage: https://nyku.github.io/ezmetrics
  licenses:
  - GPL-3.0
- metadata: {}
+ metadata:
+   source_code_uri: https://github.com/nyku/ezmetrics
  post_install_message:
  rdoc_options: []
  require_paths:
@@ -66,8 +96,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
    - !ruby/object:Gem::Version
      version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.6.13
+ rubygems_version: 3.0.6
  signing_key:
  specification_version: 4
  summary: Rails metrics aggregation tool.
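Since the new metadata declares hiredis (~> 0.6.3) and oj (~> 3.10) as runtime dependencies, upgrading should not require anything beyond bumping the gem itself; a minimal Gemfile entry, assuming Bundler, might look like:

```ruby
# Gemfile
gem "ezmetrics", "~> 1.2"
```

Bundler resolves hiredis and oj transitively from the gemspec shown above.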