benchmark-perf 0.5.0 → 0.6.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 23be9b198b35a0cd2ba9fce98fe449cc11bf603d85361fd296fce4d25c4e6fcb
4
- data.tar.gz: 4e88b12e133ad01323eca2dbae4aa1a36cfcc44390ef9883785fa9df14009618
3
+ metadata.gz: c08c05dd2a152d8300d4973077f12564e771b2086c0deb4978a67b513d4364c5
4
+ data.tar.gz: 9bc2a4b6745a303ec61e21e474f347591b9c32298dbddbff99e6812c847bdd80
5
5
  SHA512:
6
- metadata.gz: 8cc2086bbfcedfa67021a9cd6fdc9ea0a6b04bbedf1457cd8a1921d28dde118a328b95568a0d361313ae0c4f16f18187fc023f76e6f82b185cb28355859d941b
7
- data.tar.gz: '0962f8911d08ec0be76097fc7bb7a225031ed943214c6be85dfebd909e606168463061013bd22cfc2e18c782861f6e3b5f12c51f935eec300e5923b63a8e0016'
6
+ metadata.gz: 543685961edfd985bdbba67b79e11f742211dc0a74d4a2d5ce8b3c66b603b06e44bc5f29545eb52fbf8763199c99198f5bf75d5bc589e2ec5f8c24512aa7be5c
7
+ data.tar.gz: 8f5b0ac7b3dacbb1a24dcdef567c421a8c3d1e0adaad7de4436848f17164787488c1d4b81f6662329eba93b1fd7077d285321f37cc768e82994b7a0fa0f3bd0f
data/CHANGELOG.md CHANGED
@@ -1,5 +1,24 @@
1
1
  # Change log
2
2
 
3
+ ## [v0.6.0] - 2020-02-24
4
+
5
+ ### Added
6
+ * Add Clock for monotonic time measuring
7
+ * Add Stats for arithmetic operations
8
+ * Add Perf#ips and Perf#cpu helper methods
9
+ * Add IPSResult to capture measurements and stats for iterations
10
+ * Add CPUResult to capture measurements and stats for execution
11
+
12
+ ### Changed
13
+ * Change to fix Ruby 2.7 warnings
14
+ * Change to remove benchmark requirement
15
+ * Change to remove #assert_perform_ips & #assert_perform_under
16
+ * Change module name from ExecutionTime to Execution
17
+ * Change Iteration#run to measure only work elapsed time
18
+
19
+ ### Fixed
20
+ * Fix Iteration#run providing no measurements when warmup time exceeds bench time
21
+
3
22
  ## [v0.5.0] - 2019-04-21
4
23
 
5
24
  ### Added
@@ -59,6 +78,8 @@
59
78
 
60
79
  Initial release
61
80
 
81
+ [v0.6.0]: https://github.com/piotrmurach/benchmark-perf/compare/v0.5.0...v0.6.0
82
+ [v0.5.0]: https://github.com/piotrmurach/benchmark-perf/compare/v0.4.0...v0.5.0
62
83
  [v0.4.0]: https://github.com/piotrmurach/benchmark-perf/compare/v0.3.0...v0.4.0
63
84
  [v0.3.0]: https://github.com/piotrmurach/benchmark-perf/compare/v0.2.1...v0.3.0
64
85
  [v0.2.1]: https://github.com/piotrmurach/benchmark-perf/compare/v0.2.0...v0.2.1
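
To illustrate the new `Perf#cpu` and `Perf#ips` helpers listed in the changelog above, here is a minimal sketch based on the README changes in this release (the commented values describe what each call returns, not measured output):

```ruby
require "benchmark-perf"

# Average execution time of the block (new Perf.cpu helper, returns a CPUResult)
cpu = Benchmark::Perf.cpu(repeat: 5) { "foo" * 100 }
cpu.avg   # average time in seconds
cpu.stdev # standard deviation in seconds
cpu.dt    # total elapsed time in seconds

# Iterations per second (new Perf.ips helper, returns an IPSResult)
ips = Benchmark::Perf.ips(time: 2, warmup: 1) { "foo" + "bar" }
ips.avg   # average iterations per second
ips.iter  # number of iterations performed
```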
data/README.md CHANGED
@@ -43,44 +43,73 @@ Or install it yourself as:
43
43
 
44
44
  ## 1. Usage
45
45
 
46
- To see how long it takes to execute a piece of code:
46
+ To see how long it takes to execute a piece of code, do:
47
47
 
48
48
  ```ruby
49
- mean, stddev = Benchmark::Perf::ExecutionTime.run { ... }
49
+ result = Benchmark::Perf.cpu { ... }
50
50
  ```
51
51
 
52
- or to see how many iterations per second a piece of code can achieve:
52
+ The result provides the following information:
53
53
 
54
54
  ```ruby
55
- mean, stddev, iter, elapsed_time = Benchmark::Perf::Iteration.run { ... }
55
+ result.avg # => average time in sec
56
+ result.stdev # => standard deviation in sec
57
+ result.dt # => elapsed time in sec
58
+ ```
59
+
60
+ Or to see how many iterations per second a piece of code can achieve, do:
61
+
62
+ ```ruby
63
+ result = Benchmark::Perf.ips { ... }
64
+ ```
65
+
66
+ Then you can query the result for:
67
+
68
+ ```ruby
69
+ result.avg # => average ips
70
+ result.stdev # => ips standard deviation
71
+ result.iter # => number of iterations
72
+ result.dt # => elapsed time
56
73
  ```
57
74
 
58
75
  ## 2. API
59
76
 
60
77
  ### 2.1 Execution time
61
78
 
62
- By default `1` measurement is taken, and `1` warmup cycle is run. If you need to change number of measurements taken use `:repeat`:
79
+ By default `1` measurement is taken, and before that `1` warmup cycle is run.
80
+
81
+ If you need to change how many measurements are taken, use the `:repeat` option:
82
+
83
+ ```ruby
84
+ result = Benchmark::Perf.cpu(repeat: 10) { ... }
85
+ ```
86
+
87
+ Then you can query the result for the following information:
63
88
 
64
89
  ```ruby
65
- mean, std_dev = Benchmark::Perf::ExecutionTime.run(repeat: 10) { ... }
90
+ result.avg # => average time in sec
91
+ result.stdev # => standard deviation in sec
92
+ result.dt # => elapsed time in sec
66
93
  ```
67
94
 
68
- And to change number of warmup cycles use `:warmup` keyword like so:
95
+ Increasing the number of measurements will lead to more stable results at the price of longer runtime.
96
+
97
+ To change how many warmup cycles are done before measuring, use the `:warmup` option like so:
69
98
 
70
99
  ```ruby
71
- Benchmark::Perf::ExecutionTime.run(warmup: 2) { ... }
100
+ Benchmark::Perf.cpu(warmup: 2) { ... }
72
101
  ```
73
102
 
74
- If you're interested in having debug output to see exact measurements for each iteration specify stream with `:io`:
103
+ If you're interested in debug output showing the exact time for each measurement sample, use the `:io` option and pass an alternative stream:
75
104
 
76
105
  ```ruby
77
- Benchmark::Perf::ExecutionTime.run(io: $stdout) { ... }
106
+ Benchmark::Perf.cpu(io: $stdout) { ... }
78
107
  ```
79
108
 
80
- By default all measurements are done in subprocess to isolate them from other process activities. This may have negative consequences, for example when your code uses database connections and transactions. To switch this behaviour off use `:subprocess` option.
109
+ By default all measurements are done in a subprocess to isolate the measured code from other process activities. This may have unintended consequences, for example, when the code uses database connections and transactions, it may lead to lost connections. To switch off running in a subprocess, use the `:subprocess` option:
81
110
 
82
111
  ```ruby
83
- Benchmark::Perf::ExeuctionTime.run(subprocess: false) { ... }
112
+ Benchmark::Perf.cpu(subprocess: false) { ... }
84
113
  ```
85
114
 
86
115
  Or use the environment variable `RUN_IN_SUBPROCESS` to toggle the behaviour.
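
The deleted specs further down stub `ENV["RUN_IN_SUBPROCESS"]` to the string `"false"` to disable forking, so the environment-variable route might look like the sketch below (the accepted values are an assumption based on those specs):

```ruby
# Assumed equivalent of passing subprocess: false
ENV["RUN_IN_SUBPROCESS"] = "false"

Benchmark::Perf.cpu { "x" * 1_024 }
```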
@@ -90,19 +119,34 @@ Or use the environment variable `RUN_IN_SUBPROCESS` to toggle the behaviour.
90
119
  In order to check how many iterations per second a given code takes do:
91
120
 
92
121
  ```ruby
93
- mean, stddev, iter, elapsed_time = Benchmark::Perf::Iteration.run { ... }
122
+ result = Benchmark::Perf.ips { ... }
123
+ ```
124
+
125
+ The result contains measurements that you can query:
126
+
127
+ ```ruby
128
+ result.avg # => average ips
129
+ result.stdev # => ips standard deviation
130
+ result.iter # => number of iterations
131
+ result.dt # => elapsed time
132
+ ```
133
+
134
+ Alternatively, the result can be deconstructed into variables:
135
+
136
+ ```ruby
137
+ avg, stdev, iter, dt = *result
94
138
  ```
95
139
 
96
- By default `1` second is spent warming up Ruby VM, you change this passing `:warmup` :
140
+ By default `1` second is spent warming up the Ruby VM. You can change this with the `:warmup` option, which expects a time value in seconds:
97
141
 
98
142
  ```ruby
99
- Benchmark::Perf::Itertion.run(warmup: 1.45) { ... } # 1.45 second
143
+ Benchmark::Perf.ips(warmup: 1.45) { ... } # 1.45 second
100
144
  ```
101
145
 
102
- The measurements as sampled for `2` seconds, you can change this value to increase precision using `:time`:
146
+ The measurements are sampled for `2` seconds by default. You can change this value to increase precision using the `:time` option:
103
147
 
104
148
  ```ruby
105
- Benchmark::Perf::Iteration.run(time: 3.5) { ... } # 3.5 seconds
149
+ Benchmark::Perf.ips(time: 3.5) { ... } # 3.5 seconds
106
150
  ```
107
151
 
108
152
  ## Contributing
data/lib/benchmark/perf.rb CHANGED
@@ -1,106 +1,48 @@
1
1
  # frozen_string_literal: true
2
2
 
3
- require 'benchmark'
4
-
5
- require_relative 'perf/execution_time'
6
- require_relative 'perf/iteration'
7
- require_relative 'perf/version'
3
+ require_relative "perf/execution"
4
+ require_relative "perf/iteration"
5
+ require_relative "perf/version"
8
6
 
9
7
  module Benchmark
10
8
  module Perf
11
- # Calculate arithemtic average of measurements
12
- #
13
- # @param [Array[Float]] measurements
14
- #
15
- # @return [Float]
16
- # the average of given measurements
17
- #
18
- # @api public
19
- def average(measurements)
20
- return 0 if measurements.empty?
21
-
22
- measurements.reduce(&:+).to_f / measurements.size
23
- end
24
- module_function :average
25
-
26
- # Calculate variance of measurements
27
- #
28
- # @param [Array[Float]] measurements
9
+ # Measure how many iterations the work can complete in a second
29
10
  #
30
- # @return [Float]
11
+ # @example
12
+ # Benchmark::Perf.ips { "foo" + "bar" }
31
13
  #
32
- # @api public
33
- def variance(measurements)
34
- return 0 if measurements.empty?
35
-
36
- avg = average(measurements)
37
- total = measurements.reduce(0) do |sum, x|
38
- sum + (x - avg)**2
39
- end
40
- total.to_f / measurements.size
41
- end
42
- module_function :variance
43
-
44
- # Calculate standard deviation
14
+ # @param [Numeric] time
15
+ # the time to run measurements in seconds
16
+ # @param [Numeric] warmup
17
+ # the warmup time in seconds
45
18
  #
46
- # @param [Array[Float]] measurements
19
+ # @return [Benchmark::Perf::IPSResult]
20
+ # the average, standard deviation, iterations and time
47
21
  #
48
22
  # @api public
49
- def std_dev(measurements)
50
- return 0 if measurements.empty?
51
-
52
- Math.sqrt(variance(measurements))
23
+ def ips(**options, &work)
24
+ Iteration.run(**options, &work)
53
25
  end
54
- module_function :std_dev
26
+ module_function :ips
55
27
 
56
- # Run given work and gather time statistics
57
- #
58
- # @param [Float] threshold
28
+ # Measure execution time (a.k.a. CPU time) of a given work
59
29
  #
60
- # @return [Boolean]
61
- #
62
- # @api public
63
- def assert_perform_under(threshold, options = {}, &work)
64
- actual, _ = ExecutionTime.run(options, &work)
65
- actual <= threshold
66
- end
67
- module_function :assert_perform_under
68
-
69
- # Assert work is performed within expected iterations per second
30
+ # @example
31
+ # Benchmark::Perf.cpu { "foo" + "bar" }
70
32
  #
71
- # @param [Integer] iterations
33
+ # @param [Numeric] time
34
+ # the time to run measurements in seconds
35
+ # @param [Numeric] warmup
36
+ # the warmup time in seconds
37
+ # @param [Integer] repeat
38
+ # how many times to repeat measurements
72
39
  #
73
- # @return [Boolean]
40
+ # @return [Benchmark::Perf::CPUResult]
74
41
  #
75
42
  # @api public
76
- def assert_perform_ips(iterations, options = {}, &work)
77
- mean, stddev, _ = Iteration.run(options, &work)
78
- iterations <= (mean + 3 * stddev)
79
- end
80
- module_function :assert_perform_ips
81
-
82
- if defined?(Process::CLOCK_MONOTONIC)
83
- # Object representing current time
84
- def time_now
85
- Process.clock_gettime Process::CLOCK_MONOTONIC
86
- end
87
- else
88
- # Object represeting current time
89
- def time_now
90
- Time.now
91
- end
92
- end
93
- module_function :time_now
94
-
95
- # Measure time elapsed with a monotonic clock
96
- #
97
- # @public
98
- def clock_time
99
- before = time_now
100
- yield
101
- after = time_now
102
- after - before
43
+ def cpu(**options, &work)
44
+ Execution.run(**options, &work)
103
45
  end
104
- module_function :clock_time
46
+ module_function :cpu
105
47
  end # Perf
106
48
  end # Benchmark
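
As the rewritten `lib/benchmark/perf.rb` above shows, `Perf.ips` and `Perf.cpu` are thin module functions that forward their keyword options to `Iteration.run` and `Execution.run`. A small sketch of calling them with the options documented in the README:

```ruby
require "benchmark-perf"

# Forwards time:/warmup: to Iteration.run
Benchmark::Perf.ips(time: 3, warmup: 1) { "foo" + "bar" }

# Forwards repeat:/warmup:/io:/subprocess: to Execution.run
Benchmark::Perf.cpu(repeat: 5, subprocess: false) { "foo" + "bar" }
```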
data/lib/benchmark/perf/clock.rb ADDED
@@ -0,0 +1,67 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Benchmark
4
+ module Perf
5
+ # Clock that represents monotonic time
6
+ module Clock
7
+ # Microseconds per second
8
+ MICROSECONDS_PER_SECOND = 1_000_000
9
+
10
+ # Microseconds per 100ms
11
+ MICROSECONDS_PER_100MS = 100_000
12
+
13
+ class_definition = Class.new do
14
+ def initialize
15
+ super()
16
+ @last_time = Time.now.to_f
17
+ @lock = Mutex.new
18
+ end
19
+
20
+ if defined?(Process::CLOCK_MONOTONIC)
21
+ # @api private
22
+ def now
23
+ Process.clock_gettime(Process::CLOCK_MONOTONIC)
24
+ end
25
+ else
26
+ # @api private
27
+ def now
28
+ @lock.synchronize do
29
+ current = Time.now.to_f
30
+ if @last_time < current
31
+ @last_time = current
32
+ else # clock moved back in time
33
+ @last_time += 0.000_001
34
+ end
35
+ end
36
+ end
37
+ end
38
+ end
39
+
40
+ MONOTONIC_CLOCK = class_definition.new
41
+ private_constant :MONOTONIC_CLOCK
42
+
43
+ # Current monotonic time
44
+ #
45
+ # @return [Float]
46
+ #
47
+ # @api public
48
+ def now
49
+ MONOTONIC_CLOCK.now
50
+ end
51
+ module_function :now
52
+
53
+ # Measure time elapsed with a monotonic clock
54
+ #
55
+ # @return [Float]
56
+ #
57
+ # @public
58
+ def measure
59
+ before = now
60
+ yield
61
+ after = now
62
+ after - before
63
+ end
64
+ module_function :measure
65
+ end # Clock
66
+ end # Perf
67
+ end # Benchmark
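
The new `Clock` module is used internally by `Execution` and `Iteration`, but its `now` and `measure` module functions can also be called directly; a minimal sketch:

```ruby
require "benchmark-perf"

# Monotonic timestamp in seconds (falls back to Time.now where
# Process::CLOCK_MONOTONIC is unavailable)
timestamp = Benchmark::Perf::Clock.now

# Elapsed time of the block measured with the monotonic clock
elapsed = Benchmark::Perf::Clock.measure { 1_000.times { "foo" + "bar" } }
```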
data/lib/benchmark/perf/cpu_result.rb ADDED
@@ -0,0 +1,78 @@
1
+ # frozen_string_literal: true
2
+
3
+ require_relative "stats"
4
+
5
+ module Benchmark
6
+ module Perf
7
+ class CPUResult
8
+ # Indicate no value
9
+ NO_VALUE = Module.new
10
+
11
+ # Create storage for cpu time results
12
+ #
13
+ # @api private
14
+ def initialize
15
+ @avg = NO_VALUE
16
+ @stdev = NO_VALUE
17
+ @dt = NO_VALUE
18
+ @measurements = []
19
+ end
20
+
21
+ # @api private
22
+ def add(time_s)
23
+ @measurements << time_s
24
+ @avg = NO_VALUE
25
+ @stdev = NO_VALUE
26
+ @dt = NO_VALUE
27
+ end
28
+ alias << add
29
+
30
+ # Average execution time
31
+ #
32
+ # @return [Float]
33
+ #
34
+ # @api public
35
+ def avg
36
+ return @avg unless @avg == NO_VALUE
37
+
38
+ @avg = Stats.average(@measurements)
39
+ end
40
+
41
+ # The execution time standard deviation
42
+ #
43
+ # @return [Float]
44
+ #
45
+ # @api public
46
+ def stdev
47
+ return @stdev unless @stdev == NO_VALUE
48
+
49
+ @stdev = Stats.stdev(@measurements)
50
+ end
51
+
52
+ # The time elapsed
53
+ #
54
+ # @return [Float]
55
+ #
56
+ # @api public
57
+ def dt
58
+ return @dt unless @dt == NO_VALUE
59
+
60
+ @dt = @measurements.reduce(0, :+)
61
+ end
62
+ alias elapsed_time dt
63
+
64
+ # @api public
65
+ def to_a
66
+ [avg, stdev, dt]
67
+ end
68
+ alias to_ary to_a
69
+
70
+ # A string representation of this instance
71
+ #
72
+ # @api public
73
+ def inspect
74
+ "#<#{self.class.name} @avg=#{avg} @stdev=#{stdev} @dt=#{dt}>"
75
+ end
76
+ end # CPUResult
77
+ end # Perf
78
+ end # Benchmark
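
A sketch of querying the new `CPUResult` returned by `Benchmark::Perf.cpu`, including the `#elapsed_time` alias and array deconstruction via `#to_ary` shown above:

```ruby
result = Benchmark::Perf.cpu(repeat: 3) { "x" * 1_024 }

result.avg          # average time per measurement in seconds
result.stdev        # standard deviation in seconds
result.elapsed_time # alias for #dt, the total measured time

# CPUResult#to_ary allows deconstruction into plain variables
avg, stdev, dt = *result
```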
data/lib/benchmark/perf/execution.rb CHANGED
@@ -1,11 +1,14 @@
1
1
  # frozen_string_literal: true
2
2
 
3
+ require_relative "clock"
4
+ require_relative "cpu_result"
5
+
3
6
  module Benchmark
4
7
  module Perf
5
8
  # Measure length of time the work could take on average
6
9
  #
7
10
  # @api public
8
- module ExecutionTime
11
+ module Execution
9
12
  # Check if measurements need to run in subprocess
10
13
  #
11
14
  # @api private
@@ -69,7 +72,7 @@ module Benchmark
69
72
 
70
73
  warmup.times do
71
74
  run_in_subprocess(io: io, subprocess: subprocess) do
72
- Perf.clock_time(&work)
75
+ Clock.measure(&work)
73
76
  end
74
77
  end
75
78
  end
@@ -81,7 +84,7 @@ module Benchmark
81
84
  # how many times to repeat the code measuremenets
82
85
  #
83
86
  # @example
84
- # ExecutionTime.run(times: 10) { ... }
87
+ # Execution.run(repeat: 10) { ... }
85
88
  #
86
89
  # @return [Array[Float, Float]]
87
90
  # average and standard deviation
@@ -89,19 +92,21 @@ module Benchmark
89
92
  # @api public
90
93
  def run(repeat: 1, io: nil, warmup: 1, subprocess: true, &work)
91
94
  check_greater(repeat, 0)
92
- measurements = []
95
+
96
+ result = CPUResult.new
93
97
 
94
98
  run_warmup(warmup: warmup, io: io, subprocess: subprocess, &work)
95
99
 
96
100
  repeat.times do
97
101
  GC.start
98
- measurements << run_in_subprocess(io: io, subprocess: subprocess) do
99
- Perf.clock_time(&work)
102
+ result << run_in_subprocess(io: io, subprocess: subprocess) do
103
+ Clock.measure(&work)
100
104
  end
101
105
  end
106
+
102
107
  io.puts if io
103
108
 
104
- [Perf.average(measurements), Perf.std_dev(measurements)]
109
+ result
105
110
  end
106
111
  module_function :run
107
112
 
data/lib/benchmark/perf/ips_result.rb ADDED
@@ -0,0 +1,85 @@
1
+ # frozen_string_literal: true
2
+
3
+ require_relative "stats"
4
+
5
+ module Benchmark
6
+ module Perf
7
+ class IPSResult
8
+ # Indicate no value
9
+ NO_VALUE = Module.new
10
+
11
+ attr_reader :ips
12
+
13
+ attr_reader :iter
14
+
15
+ # Create storage for ips results
16
+ #
17
+ # @api private
18
+ def initialize
19
+ @avg = NO_VALUE
20
+ @stdev = NO_VALUE
21
+ @dt = NO_VALUE
22
+ @measurements = []
23
+ @ips = []
24
+ @iter = 0
25
+ end
26
+
27
+ # @api private
28
+ def add(time_s, cycles_in_100ms)
29
+ @measurements << time_s
30
+ @iter += cycles_in_100ms
31
+ @ips << cycles_in_100ms.to_f / time_s.to_f
32
+ @avg = NO_VALUE
33
+ @stdev = NO_VALUE
34
+ @dt = NO_VALUE
35
+ end
36
+
37
+ # Average ips
38
+ #
39
+ # @return [Integer]
40
+ #
41
+ # @api public
42
+ def avg
43
+ return @avg unless @avg == NO_VALUE
44
+
45
+ @avg = Stats.average(ips).round
46
+ end
47
+
48
+ # The ips standard deviation
49
+ #
50
+ # @return [Integer]
51
+ #
52
+ # @api public
53
+ def stdev
54
+ return @stdev unless @stdev == NO_VALUE
55
+
56
+ @stdev = Stats.stdev(ips).round
57
+ end
58
+
59
+ # The time elapsed
60
+ #
61
+ # @return [Float]
62
+ #
63
+ # @api public
64
+ def dt
65
+ return @dt unless @dt == NO_VALUE
66
+
67
+ @dt = @measurements.reduce(0, :+)
68
+ end
69
+ alias elapsed_time dt
70
+
71
+ # @api public
72
+ def to_a
73
+ [avg, stdev, iter, dt]
74
+ end
75
+ alias to_ary to_a
76
+
77
+ # A string representation of this instance
78
+ #
79
+ # @api public
80
+ def inspect
81
+ "#<#{self.class.name} @avg=#{avg} @stdev=#{stdev} @iter=#{iter} @dt=#{dt}>"
82
+ end
83
+ end # IPSResult
84
+ end # Perf
85
+ end # Benchmark
data/lib/benchmark/perf/iteration.rb CHANGED
@@ -1,14 +1,15 @@
1
1
  # frozen_string_literal: true
2
2
 
3
+ require_relative "clock"
4
+ require_relative "stats"
5
+ require_relative "ips_result"
6
+
3
7
  module Benchmark
4
8
  module Perf
5
9
  # Measure number of iterations a work could take in a second
6
10
  #
7
11
  # @api private
8
12
  module Iteration
9
- MICROSECONDS_PER_SECOND = 1_000_000
10
- MICROSECONDS_PER_100MS = 100_000
11
-
12
13
  # Call work by given times
13
14
  #
14
15
  # @param [Integer] times
@@ -30,15 +31,17 @@ module Benchmark
30
31
  # Calcualte the number of cycles needed for 100ms
31
32
  #
32
33
  # @param [Integer] iterations
33
- # @param [Float] elapsed_time
34
- # the total time for all iterations
34
+ # @param [Float] time_s
35
+ # the total time for all iterations in seconds
35
36
  #
36
37
  # @return [Integer]
37
38
  # the cycles per 100ms
38
39
  #
39
40
  # @api private
40
- def cycles_per_100ms(iterations, elapsed_time)
41
- cycles = (iterations * (MICROSECONDS_PER_100MS / elapsed_time)).to_i
41
+ def cycles_per_100ms(iterations, time_s)
42
+ cycles = iterations * Clock::MICROSECONDS_PER_100MS
43
+ cycles /= time_s * Clock::MICROSECONDS_PER_SECOND
44
+ cycles = cycles.to_i
42
45
  cycles <= 0 ? 1 : cycles
43
46
  end
44
47
  module_function :cycles_per_100ms
@@ -51,50 +54,47 @@ module Benchmark
51
54
  # @api private
52
55
  def run_warmup(warmup: 1, &work)
53
56
  GC.start
54
- target = Perf.time_now + warmup
57
+
58
+ target = Clock.now + warmup
55
59
  iter = 0
56
60
 
57
- elapsed_time = Perf.clock_time do
58
- while Perf.time_now < target
61
+ time_s = Clock.measure do
62
+ while Clock.now < target
59
63
  call_times(1, &work)
60
64
  iter += 1
61
65
  end
62
66
  end
63
67
 
64
- elapsed_time *= MICROSECONDS_PER_SECOND
65
- cycles_per_100ms(iter, elapsed_time)
68
+ cycles_per_100ms(iter, time_s)
66
69
  end
67
70
  module_function :run_warmup
68
71
 
69
72
  # Run measurements
70
73
  #
71
74
  # @param [Numeric] time
72
- # the time to run measurements for
75
+ # the time to run measurements in seconds
76
+ # @param [Numeric] warmup
77
+ # the warmup time in seconds
73
78
  #
74
79
  # @api public
75
80
  def run(time: 2, warmup: 1, &work)
76
- target = Time.now + time
77
- iter = 0
78
- measurements = []
79
- cycles = run_warmup(warmup: warmup, &work)
81
+ cycles_in_100ms = run_warmup(warmup: warmup, &work)
80
82
 
81
83
  GC.start
82
84
 
83
- while Time.now < target
84
- bench_time = Perf.clock_time { call_times(cycles, &work) }
85
- next if bench_time <= 0.0 # Iteration took no time
86
- iter += cycles
87
- measurements << bench_time * MICROSECONDS_PER_SECOND
88
- end
85
+ result = IPSResult.new
89
86
 
90
- ips = measurements.map do |time_ms|
91
- (cycles / time_ms) * MICROSECONDS_PER_SECOND
92
- end
87
+ target = (before = Clock.now) + time
88
+
89
+ while Clock.now < target
90
+ time_s = Clock.measure { call_times(cycles_in_100ms, &work) }
93
91
 
94
- final_time = Time.now
95
- elapsed_time = (final_time - target).abs
92
+ next if time_s <= 0.0 # Iteration took no time
93
+
94
+ result.add(time_s, cycles_in_100ms)
95
+ end
96
96
 
97
- [Perf.average(ips).round, Perf.std_dev(ips).round, iter, elapsed_time]
97
+ result
98
98
  end
99
99
  module_function :run
100
100
  end # Iteration
data/lib/benchmark/perf/stats.rb ADDED
@@ -0,0 +1,52 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Benchmark
4
+ module Perf
5
+ module Stats
6
+ # Calculate arithmetic average of measurements
7
+ #
8
+ # @param [Array[Float]] measurements
9
+ #
10
+ # @return [Float]
11
+ # the average of given measurements
12
+ #
13
+ # @api public
14
+ def average(measurements)
15
+ return 0 if measurements.empty?
16
+
17
+ measurements.reduce(&:+).to_f / measurements.size
18
+ end
19
+ module_function :average
20
+
21
+ # Calculate variance of measurements
22
+ #
23
+ # @param [Array[Float]] measurements
24
+ #
25
+ # @return [Float]
26
+ #
27
+ # @api public
28
+ def variance(measurements)
29
+ return 0 if measurements.empty?
30
+
31
+ avg = average(measurements)
32
+ total = measurements.reduce(0) do |sum, x|
33
+ sum + (x - avg)**2
34
+ end
35
+ total.to_f / measurements.size
36
+ end
37
+ module_function :variance
38
+
39
+ # Calculate standard deviation
40
+ #
41
+ # @param [Array[Float]] measurements
42
+ #
43
+ # @api public
44
+ def stdev(measurements)
45
+ return 0 if measurements.empty?
46
+
47
+ Math.sqrt(variance(measurements))
48
+ end
49
+ module_function :stdev
50
+ end # Stats
51
+ end # Perf
52
+ end # Benchmark
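
The `Stats` helpers extracted here mirror the arithmetic previously defined on `Perf` itself (removed above); the example values below match the deleted `arithmetic_spec.rb` further down:

```ruby
require "benchmark-perf"

Benchmark::Perf::Stats.average([1, 2, 3])  # => 2.0
Benchmark::Perf::Stats.variance([1, 2, 3]) # => 0.6666666666666666 (2.0 / 3)
Benchmark::Perf::Stats.stdev([1, 2, 3])    # => 0.816496580927726 (Math.sqrt(2.0 / 3))
```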
data/lib/benchmark/perf/version.rb CHANGED
@@ -2,6 +2,6 @@
2
2
 
3
3
  module Benchmark
4
4
  module Perf
5
- VERSION = "0.5.0"
5
+ VERSION = "0.6.0"
6
6
  end # Perf
7
7
  end # Benchmark
metadata CHANGED
@@ -1,29 +1,29 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: benchmark-perf
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.5.0
4
+ version: 0.6.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Piotr Murach
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2019-04-21 00:00:00.000000000 Z
11
+ date: 2020-02-24 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
- name: bundler
14
+ name: rake
15
15
  requirement: !ruby/object:Gem::Requirement
16
16
  requirements:
17
17
  - - ">="
18
18
  - !ruby/object:Gem::Version
19
- version: '1.16'
19
+ version: '0'
20
20
  type: :development
21
21
  prerelease: false
22
22
  version_requirements: !ruby/object:Gem::Requirement
23
23
  requirements:
24
24
  - - ">="
25
25
  - !ruby/object:Gem::Version
26
- version: '1.16'
26
+ version: '0'
27
27
  - !ruby/object:Gem::Dependency
28
28
  name: rspec
29
29
  requirement: !ruby/object:Gem::Requirement
@@ -38,49 +38,38 @@ dependencies:
38
38
  - - "~>"
39
39
  - !ruby/object:Gem::Version
40
40
  version: '3.0'
41
- - !ruby/object:Gem::Dependency
42
- name: rake
43
- requirement: !ruby/object:Gem::Requirement
44
- requirements:
45
- - - ">="
46
- - !ruby/object:Gem::Version
47
- version: '0'
48
- type: :development
49
- prerelease: false
50
- version_requirements: !ruby/object:Gem::Requirement
51
- requirements:
52
- - - ">="
53
- - !ruby/object:Gem::Version
54
- version: '0'
55
41
  description: Execution time and iteration performance benchmarking
56
42
  email:
57
- - me@piotrmurach.com
43
+ - piotr@piotrmurach.com
58
44
  executables: []
59
45
  extensions: []
60
- extra_rdoc_files: []
46
+ extra_rdoc_files:
47
+ - README.md
48
+ - CHANGELOG.md
49
+ - LICENSE.txt
61
50
  files:
62
51
  - CHANGELOG.md
63
52
  - LICENSE.txt
64
53
  - README.md
65
- - Rakefile
66
- - benchmark-perf.gemspec
67
54
  - lib/benchmark-perf.rb
68
55
  - lib/benchmark/perf.rb
69
- - lib/benchmark/perf/execution_time.rb
56
+ - lib/benchmark/perf/clock.rb
57
+ - lib/benchmark/perf/cpu_result.rb
58
+ - lib/benchmark/perf/execution.rb
59
+ - lib/benchmark/perf/ips_result.rb
70
60
  - lib/benchmark/perf/iteration.rb
61
+ - lib/benchmark/perf/stats.rb
71
62
  - lib/benchmark/perf/version.rb
72
- - spec/spec_helper.rb
73
- - spec/unit/arithmetic_spec.rb
74
- - spec/unit/assertions_spec.rb
75
- - spec/unit/execution_time_spec.rb
76
- - spec/unit/iteration_spec.rb
77
- - tasks/console.rake
78
- - tasks/coverage.rake
79
- - tasks/spec.rake
80
- homepage: ''
63
+ homepage: https://github.com/piotrmurach/benchmark-perf
81
64
  licenses:
82
65
  - MIT
83
- metadata: {}
66
+ metadata:
67
+ allowed_push_host: https://rubygems.org
68
+ bug_tracker_uri: https://github.com/piotrmurach/benchmark-perf/issues
69
+ changelog_uri: https://github.com/piotrmurach/benchmark-perf/blob/master/CHANGELOG.md
70
+ documentation_uri: https://www.rubydoc.info/gems/benchmark-perf
71
+ homepage_uri: https://github.com/piotrmurach/benchmark-perf
72
+ source_code_uri: https://github.com/piotrmurach/benchmark-perf
84
73
  post_install_message:
85
74
  rdoc_options: []
86
75
  require_paths:
@@ -96,13 +85,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
96
85
  - !ruby/object:Gem::Version
97
86
  version: '0'
98
87
  requirements: []
99
- rubygems_version: 3.0.3
88
+ rubygems_version: 3.1.2
100
89
  signing_key:
101
90
  specification_version: 4
102
91
  summary: Execution time and iteration performance benchmarking
103
- test_files:
104
- - spec/spec_helper.rb
105
- - spec/unit/arithmetic_spec.rb
106
- - spec/unit/assertions_spec.rb
107
- - spec/unit/execution_time_spec.rb
108
- - spec/unit/iteration_spec.rb
92
+ test_files: []
data/Rakefile DELETED
@@ -1,8 +0,0 @@
1
- require "bundler/gem_tasks"
2
-
3
- FileList['tasks/**/*.rake'].each(&method(:import))
4
-
5
- desc 'Run all specs'
6
- task ci: %w[ spec ]
7
-
8
- task default: :spec
data/benchmark-perf.gemspec DELETED
@@ -1,27 +0,0 @@
1
- lib = File.expand_path('../lib', __FILE__)
2
- $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
3
- require 'benchmark/perf/version'
4
-
5
- Gem::Specification.new do |spec|
6
- spec.name = "benchmark-perf"
7
- spec.version = Benchmark::Perf::VERSION
8
- spec.authors = ["Piotr Murach"]
9
- spec.email = ["me@piotrmurach.com"]
10
- spec.summary = %q{Execution time and iteration performance benchmarking}
11
- spec.description = %q{Execution time and iteration performance benchmarking}
12
- spec.homepage = ""
13
- spec.license = "MIT"
14
-
15
- spec.files = Dir['{lib,spec}/**/*.rb']
16
- spec.files += Dir['tasks/*', 'benchmark-perf.gemspec']
17
- spec.files += Dir['README.md', 'CHANGELOG.md', 'LICENSE.txt', 'Rakefile']
18
- spec.executables = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
19
- spec.test_files = spec.files.grep(%r{^spec/})
20
- spec.require_paths = ["lib"]
21
-
22
- spec.required_ruby_version = '>= 2.0.0'
23
-
24
- spec.add_development_dependency 'bundler', '>= 1.16'
25
- spec.add_development_dependency 'rspec', '~> 3.0'
26
- spec.add_development_dependency 'rake'
27
- end
data/spec/spec_helper.rb DELETED
@@ -1,45 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- if ENV['COVERAGE'] || ENV['TRAVIS']
4
- require 'simplecov'
5
- require 'coveralls'
6
-
7
- SimpleCov.formatter = SimpleCov::Formatter::MultiFormatter[
8
- SimpleCov::Formatter::HTMLFormatter,
9
- Coveralls::SimpleCov::Formatter
10
- ]
11
-
12
- SimpleCov.start do
13
- command_name 'spec'
14
- add_filter 'spec'
15
- end
16
- end
17
-
18
- require "benchmark-perf"
19
-
20
- RSpec.configure do |config|
21
- config.expect_with :rspec do |expectations|
22
- expectations.include_chain_clauses_in_custom_matcher_descriptions = true
23
- end
24
-
25
- config.mock_with :rspec do |mocks|
26
- mocks.verify_partial_doubles = true
27
- end
28
-
29
- config.filter_run :focus
30
- config.run_all_when_everything_filtered = true
31
-
32
- config.disable_monkey_patching!
33
-
34
- config.warnings = true
35
-
36
- if config.files_to_run.one?
37
- config.default_formatter = 'doc'
38
- end
39
-
40
- config.profile_examples = 2
41
-
42
- config.order = :random
43
-
44
- Kernel.srand config.seed
45
- end
data/spec/unit/arithmetic_spec.rb DELETED
@@ -1,33 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- RSpec.describe Benchmark::Perf, 'arithmetic' do
4
- context '#average' do
5
- it "calculates average without measurements" do
6
- expect(Benchmark::Perf.average([])).to eq(0)
7
- end
8
-
9
- it "calculates average with measurements" do
10
- expect(Benchmark::Perf.average([1,2,3])).to eq(2.0)
11
- end
12
- end
13
-
14
- context '#variance' do
15
- it "calculates variance of no measurements" do
16
- expect(Benchmark::Perf.variance([])).to eq(0)
17
- end
18
-
19
- it "calculates variance of measurements" do
20
- expect(Benchmark::Perf.variance([1,2,3])).to eq(2.to_f/3)
21
- end
22
- end
23
-
24
- context '#std_dev' do
25
- it "calculates standard deviation of no measurements" do
26
- expect(Benchmark::Perf.std_dev([])).to eq(0)
27
- end
28
-
29
- it "calculates standard deviation of measurements" do
30
- expect(Benchmark::Perf.std_dev([1,2,3])).to eq(Math.sqrt(2.to_f/3))
31
- end
32
- end
33
- end
data/spec/unit/assertions_spec.rb DELETED
@@ -1,15 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- RSpec.describe Benchmark::Perf, 'assertions' do
4
- it "passes asertion by performing under threshold" do
5
- bench = Benchmark::Perf
6
- assertion = bench.assert_perform_under(0.01, repeat: 2) { 'x' * 1_024 }
7
- expect(assertion).to eq(true)
8
- end
9
-
10
- it "passes asertion by performing 10K ips" do
11
- bench = Benchmark::Perf
12
- assertion = bench.assert_perform_ips(10_000, warmup: 1.3) { 'x' * 1_024 }
13
- expect(assertion).to eq(true)
14
- end
15
- end
data/spec/unit/execution_time_spec.rb DELETED
@@ -1,85 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- RSpec.describe Benchmark::Perf::ExecutionTime do
4
- it "provides default benchmark range" do
5
- allow(described_class).to receive(:run_in_subprocess).and_return(0.1)
6
-
7
- described_class.run(warmup: 0) { 'x' * 1024 }
8
-
9
- expect(described_class).to have_received(:run_in_subprocess).once
10
- end
11
-
12
- it "accepts custom number of samples" do
13
- allow(described_class).to receive(:run_in_subprocess).and_return(0.1)
14
-
15
- described_class.run(repeat: 12, warmup: 0) { 'x' * 1024 }
16
-
17
- expect(described_class).to have_received(:run_in_subprocess).exactly(12).times
18
- end
19
-
20
- it "runs warmup cycles" do
21
- allow(described_class).to receive(:run_in_subprocess).and_return(0.1)
22
-
23
- described_class.run(repeat: 1, warmup: 1) { 'x' }
24
-
25
- expect(described_class).to have_received(:run_in_subprocess).twice
26
- end
27
-
28
- it "doesn't run in subproces when option :run_in_subprocess is set to false",
29
- if: ::Process.respond_to?(:fork) do
30
-
31
- allow(::Process).to receive(:fork)
32
-
33
- described_class.run(subprocess: false) { 'x' * 1024 }
34
-
35
- expect(::Process).to_not have_received(:fork)
36
- end
37
-
38
- it "doesn't run in subprocess when RUN_IN_SUBPROCESS env var is set to false",
39
- if: ::Process.respond_to?(:fork) do
40
-
41
- allow(::Process).to receive(:fork)
42
- allow(ENV).to receive(:[]).with("RUN_IN_SUBPROCESS").and_return('false')
43
-
44
- described_class.run { 'x' * 1024 }
45
-
46
- expect(::Process).to_not have_received(:fork)
47
- end
48
-
49
- it "doesn't accept range smaller than 1" do
50
- expect {
51
- described_class.run(repeat: 0) { 'x' }
52
- }.to raise_error(ArgumentError, 'Repeat value: 0 needs to be greater than 0')
53
- end
54
-
55
- it "provides measurements for 30 samples by default" do
56
- sample = described_class.run { 'x' * 1024 }
57
-
58
- expect(sample).to all(be < 0.01)
59
- end
60
-
61
- it "doesn't benchmark raised exception" do
62
- expect {
63
- described_class.run { raise 'boo' }
64
- }.to raise_error(StandardError)
65
- end
66
-
67
- it "measures complex object" do
68
- sample = described_class.run { {foo: Object.new, bar: :piotr} }
69
-
70
- expect(sample).to all(be < 0.01)
71
- end
72
-
73
- it "executes code to warmup ruby vm" do
74
- sample = described_class.run_warmup { 'x' * 1_000_000 }
75
-
76
- expect(sample).to eq(1)
77
- end
78
-
79
- it "measures work performance for 10 samples" do
80
- sample = described_class.run(repeat: 10) { 'x' * 1_000_000 }
81
-
82
- expect(sample.size).to eq(2)
83
- expect(sample).to all(be < 0.01)
84
- end
85
- end
data/spec/unit/iteration_spec.rb DELETED
@@ -1,23 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- RSpec.describe Benchmark::Perf::Iteration do
4
- it "defines cycles per 100 microseconds" do
5
- sample = described_class.run_warmup { 'x' * 1_000_000 }
6
- expect(sample).to be > 25
7
- end
8
-
9
- it "measures 10K iterations per second" do
10
- sample = described_class.run { 'x' * 1_000_000 }
11
-
12
- expect(sample.size).to eq(4)
13
- expect(sample[0]).to be > 250
14
- expect(sample[1]).to be > 5
15
- expect(sample[2]).to be > 250
16
- end
17
-
18
- it "does't measure broken code" do
19
- expect {
20
- described_class.run { raise 'boo' }
21
- }.to raise_error(StandardError, /boo/)
22
- end
23
- end
data/tasks/console.rake DELETED
@@ -1,11 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- desc 'Load gem inside irb console'
4
- task :console do
5
- require 'irb'
6
- require 'irb/completion'
7
- require_relative '../lib/benchmark-perf'
8
- ARGV.clear
9
- IRB.start
10
- end
11
- task c: %w[ console ]
data/tasks/coverage.rake DELETED
@@ -1,11 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- desc 'Measure code coverage'
4
- task :coverage do
5
- begin
6
- original, ENV['COVERAGE'] = ENV['COVERAGE'], 'true'
7
- Rake::Task['spec'].invoke
8
- ensure
9
- ENV['COVERAGE'] = original
10
- end
11
- end
data/tasks/spec.rake DELETED
@@ -1,34 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- begin
4
- require 'rspec/core/rake_task'
5
-
6
- desc 'Run all specs'
7
- RSpec::Core::RakeTask.new(:spec) do |task|
8
- task.pattern = 'spec/{unit,integration}{,/*/**}/*_spec.rb'
9
- end
10
-
11
- namespace :spec do
12
- desc 'Run unit specs'
13
- RSpec::Core::RakeTask.new(:unit) do |task|
14
- task.pattern = 'spec/unit{,/*/**}/*_spec.rb'
15
- end
16
-
17
- desc 'Run integration specs'
18
- RSpec::Core::RakeTask.new(:integration) do |task|
19
- task.pattern = 'spec/integration{,/*/**}/*_spec.rb'
20
- end
21
-
22
- desc 'Run performance specs'
23
- RSpec::Core::RakeTask.new(:perf) do |task|
24
- task.pattern = 'spec/performance{,/*/**}/*_spec.rb'
25
- end
26
- end
27
-
28
- rescue LoadError
29
- %w[spec spec:unit spec:integration spec:perf].each do |name|
30
- task name do
31
- $stderr.puts "In order to run #{name}, do `gem install rspec`"
32
- end
33
- end
34
- end