benchmark-sweet 0.2.2 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: d244cd69d99d599b4c6b7ae175ebcfa59b1c826ef66f643662bac4424d884f1b
4
- data.tar.gz: 8350da53d68b067996858b52bdfbd5b9e7525d88042d099f0d68d443a00dc3c6
3
+ metadata.gz: 54cd4724f37f8b493ebb7ebcfd949d7a05fec9e09d6fcf8402a01bea4a92e1b5
4
+ data.tar.gz: b1853dfaf463323d2796ef194105a6ff7ac0b01260be29c279ab5a3d6b03e9ba
5
5
  SHA512:
6
- metadata.gz: 129add152f798180f6d7f3d76c294b7a8c78e8aa917a7c838c713a2177d339d3396b9155e0a253b96643207366e20377d9c61c219dc7f9088b0708356fa628c4
7
- data.tar.gz: d618214ba5e14cf06c6c1ae323e7836514af96ebd39602fbf1c682be091ee8e20f1f755bc70a0756adddcddf3e4e5a794c45b14c2ce8a2ce8bdaac4933913de3
6
+ metadata.gz: ae9f7595d95ee066fc849d856f7deda2aa9d5276d47e053a914da4736c1b5e62a52efc45137093704a2d72ed64b9a277837e21055294846ee887122bb57e8e4d
7
+ data.tar.gz: 12549ad3a023a1cf50a03972d315e49e360c8995157e853d50eb56c0f509fd14822a01bf0779f39ee148c90e3cc368981f4cf9a17ce0b6d79063a859d1dcf887
data/CHANGELOG.md CHANGED
@@ -6,12 +6,22 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
6
6
 
7
7
  ## [Unreleased]
8
8
 
9
+ ## [0.3.0] - 2026-02-07
10
+
9
11
  ### Added
12
+ - Tests for Comparison, Job, Item, and table rendering
10
13
 
11
14
  ### Changed
15
+ - Removed activesupport runtime dependency (label values normalized to strings automatically)
16
+ - Loosened dependency pins for benchmark-ips (~> 2.8) and memory_profiler (~> 0.9)
17
+ - Improved README with single-example walkthrough showing how report_with reshapes output
12
18
 
13
19
  ### Fixed
14
- - properly indent tables with iso color codes (thanks @d-m-u)
20
+ - Fixed @instance typo in QueryCounter#callback (was crashing on cached queries)
21
+ - Fixed table rendering crash when header is wider than values
22
+ - Fixed table column alignment with ANSI color codes
23
+ - Properly indent tables with ANSI color codes (thanks @d-m-u)
24
+ - Removed dead code (labels_have_symbols!, unused symbol_value check)
15
25
 
16
26
  ## [0.2.1] - 2020-06-24
17
27
 
@@ -38,7 +48,8 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
38
48
  ### Added
39
49
  - good stuff
40
50
 
41
- [Unreleased]: https://github.com/kbrock/benchmark-sweet/compare/v0.2.2...HEAD
51
+ [Unreleased]: https://github.com/kbrock/benchmark-sweet/compare/v0.3.0...HEAD
52
+ [0.3.0]: https://github.com/kbrock/benchmark-sweet/compare/v0.2.2...v0.3.0
42
53
  [0.2.2]: https://github.com/kbrock/benchmark-sweet/compare/v0.2.1...v0.2.2
43
54
  [0.2.1]: https://github.com/kbrock/benchmark-sweet/compare/v0.2.0...v0.2.1
44
55
  [0.2.0]: https://github.com/kbrock/benchmark-sweet/compare/v0.0.1...v0.2.0
data/README.md CHANGED
@@ -24,17 +24,68 @@ Or install it yourself as:
24
24
 
25
25
  $ gem install benchmark-sweet
26
26
 
27
- ## Example 1
27
+ ## How it works
28
28
 
29
- ### Code
29
+ ### Labels
30
+
31
+ Every benchmark item has a **label** — a Hash of metadata that identifies it. Labels are built
32
+ from `metadata` and the name passed to `report`:
33
+
34
+ ```ruby
35
+ x.metadata version: RUBY_VERSION
36
+ x.metadata data: "nil" do
37
+ x.report("to_s.split") { ... }
38
+ end
39
+ # produces label: {version: "3.4.8", data: "nil", method: "to_s.split"}
40
+ ```
41
+
42
+ These label keys (`version`, `data`, `method`) are not reserved words — they are whatever
43
+ metadata you choose. The same keys are then used with `compare_by` and `report_with` to
44
+ control comparisons and display.
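
The label construction behaves like nested Hash merges. A minimal sketch of the semantics (illustrative, not the gem's internal code):

```ruby
# Outer metadata accumulates; the name passed to report is merged in as :method.
meta = {}
meta = meta.merge(version: "3.4.8")       # x.metadata version: RUBY_VERSION
meta = meta.merge(data: "nil")            # x.metadata data: "nil" do ... end
label = meta.merge(method: "to_s.split")  # x.report("to_s.split") { ... }

label # => {:version=>"3.4.8", :data=>"nil", :method=>"to_s.split"}
```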
45
+
46
+ ### `compare_by` — which items to compare
47
+
48
+ `compare_by` partitions results into comparison groups. Within each group, items are ranked
49
+ best to worst and slowdown factors are calculated. Items in different groups are never
50
+ compared against each other.
51
+
52
+ Method names and metrics are always part of the partition implicitly.
53
+
54
+ ```ruby
55
+ x.compare_by :data
56
+ ```
57
+
58
+ **Why this matters:** splitting a nil is much faster than splitting a string. Without
59
+ `compare_by :data`, the string case would show as "slower" — but it's a different workload,
60
+ not a worse implementation.
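
Conceptually, `compare_by :data` partitions results the way `Enumerable#group_by` would partition the labels. A rough sketch with hypothetical labels:

```ruby
# Labels as benchmark-sweet might build them for the example above
labels = [
  { data: "nil", method: "to_s.split" },
  { data: "nil", method: "?split:[]" },
  { data: "str", method: "to_s.split" },
  { data: "str", method: "?split:[]" },
]

# compare_by :data keeps nil and str results in separate groups;
# rankings and slowdown factors are computed only within a group
groups = labels.group_by { |label| label[:data] }

groups.keys # => ["nil", "str"]
```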
61
+
62
+ ### `report_with` — how to display the results
63
+
64
+ `report_with` controls the table layout using four parameters:
65
+
66
+ - **`grouping`** — a separate table is generated for each unique value (default: one table for everything)
67
+ - **`row`** — what goes on the left side of each row (default: the full label)
68
+ - **`column`** — what goes across the top as column headers (default: `:metric`)
69
+ - **`value`** — what appears in each cell (default: `:comp_short` — value with slowdown)
70
+
71
+ Each accepts a Symbol (label key), an Array of Symbols (joined to make compound headers),
72
+ or a lambda for custom formatting.
73
+
74
+ The rest of this README shows one example data set displayed multiple ways by changing
75
+ only `report_with`, so you can see how each parameter affects the output.
76
+
77
+ ## Example: same data, different views
78
+
79
+ ### The benchmark
80
+
81
+ Two split implementations, tested on both nil and string data, measuring ips and memsize:
30
82
 
31
83
  ```ruby
32
84
  require "benchmark/sweet"
33
- require "active_support/all"
34
85
 
35
86
  NSTRING = nil
36
- DELIMITER='/'.freeze
37
- STRING="ab/cd/ef/gh".freeze
87
+ DELIMITER = '/'.freeze
88
+ STRING = "ab/cd/ef/gh".freeze
38
89
 
39
90
  Benchmark.items(metrics: %w(ips memsize)) do |x|
40
91
  x.metadata data: "nil" do
@@ -46,72 +97,123 @@ Benchmark.items(metrics: %w(ips memsize)) do |x|
46
97
  x.report("?split:[]") { STRING ? STRING.split(DELIMITER) : [] }
47
98
  end
48
99
 
49
- # partition the data by data value (nil vs string)
50
- # that way we're not comparing a split on a nil vs a split on a populated string
51
- compare_by :data
52
-
53
- # each row is a different method (via `row: :method`)
54
- # each column is by data type (via `column: :data` - specified via `metadata data: "nil"`)
55
- x.report_with grouping: :metric, sort: true, row: :method, column: :data
100
+ x.compare_by :data
101
+ x.report_with ... # <-- we'll vary this part below
56
102
  end
57
103
  ```
58
104
 
59
- #### `compare_by`
105
+ This produces 8 data points (2 methods x 2 data types x 2 metrics). The `compare_by :data`
106
+ ensures nil-vs-nil and str-vs-str comparisons — never nil-vs-str.
60
107
 
61
- The code takes a different amount of time to process a `nil` vs a string. So the values
62
- are given metadata with the appropriate data used (i.e.: `metadata data: "nil"`).
63
- The benchmark is then told that the results need to be partitioned by `data` (i.e.: `compare_by :data`).
108
+ Now let's see how `report_with` changes the presentation of these same 8 data points.
64
109
 
65
- The multipliers are only comparing values with the same data value and do not compare `"string"` values with `"nil"` values.
110
+ ---
66
111
 
67
- Values for method labels (e.g.: `"to_s.split"`) and metrics (e.g.: `ips`) are already part of the partition.
112
+ ### View 1: a table per metric
68
113
 
69
- #### `grouping`
114
+ ```ruby
115
+ x.report_with grouping: :metric, row: :method, column: :data, sort: true
116
+ ```
70
117
 
71
- In this example, each metric is considered distinct so each metric is given a
72
- unique table (i.e.: `grouping: :metric`)
118
+ - **`grouping: :metric`** — one table for `ips`, another for `memsize`
119
+ - **`row: :method`** — each method gets its own row
120
+ - **`column: :data`** — nil and str become column headers
73
121
 
74
- Metrics of `ips` and `memsize` are calculated (i.e.: `metrics: %w(ips memsize)`)
122
+ #### metric ips
75
123
 
76
- #### `row`
124
+ ```
125
+ method      | nil                    | str
126
+ ------------|------------------------|---------------
127
+ ?split:[]   | 7090539.2 i/s          | 1322010.9 i/s
128
+ to_s.split  | 3703981.6 i/s - 1.91x  | 1311153.9 i/s
129
+ ```
77
130
 
78
- A different method is given per row (i.e. `row: :method`)
131
+ #### metric memsize
79
132
 
80
- #### `column`
133
+ ```
134
+ method      | nil                | str
135
+ ------------|--------------------|--------------
136
+ ?split:[]   | 40.0 bytes         | 360.0 bytes
137
+ to_s.split  | 80.0 bytes - 2.00x | 360.0 bytes
138
+ ```
81
139
 
82
- The other axis for the columns is the type of data passed (i.e.: `column: :data`)
83
- This is not a native value, it is specified when the items are specified (e.g.:`metadata data: "nil"`)
140
+ Each table is small and focused. The slowdown `1.91x` only compares methods within the same
141
+ data type (nil column), because `compare_by :data` keeps them separate.
84
142
 
85
- ### example output
143
+ ---
86
144
 
87
- #### metric ips
145
+ ### View 2: everything in one table
88
146
 
89
- method | nil | str
90
- ------------|-----------------------|---------------
91
- `?split:[]` | 7090539.2 i/s | 1322010.9 i/s
92
- `to_s.split`| 3703981.6 i/s - 1.91x | 1311153.9 i/s
147
+ ```ruby
148
+ x.report_with row: :method, column: [:data, :metric], sort: true
149
+ ```
93
150
 
94
- #### metric memsize
151
+ - **no `grouping`** — all data in one table
152
+ - **`column: [:data, :metric]`** — combines data and metric into compound column headers like `nil_ips`, `str_memsize`
95
153
 
96
- method | nil | str
97
- ------------|--------------------|-------------
98
- `?split:[]` | 40.0 bytes | 360.0 bytes
99
- `to_s.split`| 80.0 bytes - 2.00x | 360.0 bytes
154
+ ```
155
+ method      | nil_ips                | str_ips       | nil_memsize        | str_memsize
156
+ ------------|------------------------|---------------|--------------------|----------------
157
+ ?split:[]   | 7090539.2 i/s          | 1322010.9 i/s | 40.0 bytes         | 360.0 bytes
158
+ to_s.split  | 3703981.6 i/s - 1.91x  | 1311153.9 i/s | 80.0 bytes - 2.00x | 360.0 bytes
159
+ ```
100
160
 
161
+ One wide table — useful for comparing all dimensions at a glance.
101
162
 
102
- ## Example 2
163
+ ---
103
164
 
104
- ### Code
165
+ ### View 3: a table per data type
105
166
 
106
167
  ```ruby
107
- require "benchmark/sweet"
108
- require "active_support/all"
168
+ x.report_with grouping: :data, row: :method, column: :metric, sort: true
169
+ ```
109
170
 
110
- NSTRING = nil
111
- DELIMITER='/'.freeze
112
- STRING="ab/cd/ef/gh".freeze
171
+ - **`grouping: :data`** — one table for nil, another for str
172
+ - **`column: :metric`** — ips and memsize become columns
113
173
 
114
- Benchmark.items(metrics: %w(ips memsize), memory: 3, warmup: 1, time: 1, quiet: false, force: ENV["FORCE"] == "true") do |x|
174
+ #### data nil
175
+
176
+ ```
177
+ method | ips | memsize
178
+ ------------|-----------------------|-------------------
179
+ ?split:[] | 7090539.2 i/s | 40.0 bytes
180
+ to_s.split | 3703981.6 i/s - 1.91x | 80.0 bytes - 2.00x
181
+ ```
182
+
183
+ #### data str
184
+
185
+ ```
186
+ method | ips | memsize
187
+ ------------|---------------|------------
188
+ ?split:[] | 1322010.9 i/s | 360.0 bytes
189
+ to_s.split | 1311153.9 i/s | 360.0 bytes
190
+ ```
191
+
192
+ Now each table shows "how do these methods compare for this particular input?" —
193
+ all metrics side by side for the same workload.
194
+
195
+ ---
196
+
197
+ ### Summary
198
+
199
+ The same data, three different views — controlled entirely by `report_with`:
200
+
201
+ | View | `grouping` | `row` | `column` | Result |
202
+ |------|-----------|-------|----------|--------|
203
+ | 1 | `:metric` | `:method` | `:data` | table per metric, data across top |
204
+ | 2 | *(none)* | `:method` | `[:data, :metric]` | one wide table |
205
+ | 3 | `:data` | `:method` | `:metric` | table per data type, metrics across top |
206
+
207
+ The rule: every label key must appear in exactly one of `grouping`, `row`, or `column`
208
+ (`:metric` is included automatically if not specified). Changing which key goes where
209
+ reshapes the table.
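
The reshaping described by this rule is essentially a pivot over the label keys. A simplified sketch of View 1's layout, using hypothetical flattened results rather than the gem's renderer:

```ruby
# Four of the eight data points from the example, ips only
results = [
  { method: "?split:[]",  data: "nil", metric: "ips", value: 7090539.2 },
  { method: "to_s.split", data: "nil", metric: "ips", value: 3703981.6 },
  { method: "?split:[]",  data: "str", metric: "ips", value: 1322010.9 },
  { method: "to_s.split", data: "str", metric: "ips", value: 1311153.9 },
]

# grouping: :metric -> one table per metric
# row: :method, column: :data -> one hash of column => value per row
tables = results.group_by { |r| r[:metric] }.transform_values do |rows|
  rows.group_by { |r| r[:method] }.transform_values do |cells|
    cells.map { |c| [c[:data], c[:value]] }.to_h
  end
end

tables["ips"]["?split:[]"] # => {"nil"=>7090539.2, "str"=>1322010.9}
```

Moving a key between `grouping`, `row`, and `column` just changes which `group_by` it feeds.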
210
+
211
+ ## Cross-version comparisons with `save_file`
212
+
213
+ Add `version` metadata and use `save_file` to accumulate results across multiple runs:
214
+
215
+ ```ruby
216
+ Benchmark.items(metrics: %w(ips memsize)) do |x|
115
217
  x.metadata version: RUBY_VERSION
116
218
  x.metadata data: "nil" do
117
219
  x.report("to_s.split") { NSTRING.to_s.split(DELIMITER) }
@@ -122,112 +224,98 @@ Benchmark.items(metrics: %w(ips memsize), memory: 3, warmup: 1, time: 1, quiet:
122
224
  x.report("?split:[]") { STRING ? STRING.split(DELIMITER) : [] }
123
225
  end
124
226
 
125
- # partition the data by ruby version and data present
126
- # that way we're not comparing a split on a nil vs a split on a populated string
127
227
  x.compare_by :version, :data
128
- if ENV["CONDENSED"].to_s == "true"
129
- x.report_with grouping: :version, sort: true, row: :method, column: [:data, :metric]
130
- else
131
228
  x.report_with grouping: [:version, :metric], sort: true, row: :method, column: :data
132
- end
133
-
134
- x.save_file $PROGRAM_NAME.sub(/\.rb$/, '.json')
229
+ x.save_file "split_results.json"
135
230
  end
136
231
  ```
137
232
 
138
233
  #### `save_file`
139
234
 
140
- Creates a json save file which saves the timings across multiple runs.
141
- This is used along with the `version` metadata to record different results per ruby version.
142
- Another common use is to record ActiveRecord version or a gem's version.
143
-
144
- This is run with two different versions of ruby.
145
- Interim values are stored in the save_file.
235
+ Creates a JSON file that persists results across runs. Run once with Ruby 3.3, then again
236
+ with Ruby 3.4, and the file accumulates both. Another common use is recording
237
+ `ActiveRecord.version` to compare results across gem versions.
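
The accumulate-across-runs behavior can be sketched as a JSON round-trip. This is an illustrative simplification, not the gem's exact code, though the `name`/`metric`/`samples` entry keys match the format the gem saves:

```ruby
require "json"
require "tempfile"

file = Tempfile.new(["split_results", ".json"])

# First run (say, Ruby 3.3) writes its samples
run1 = [{ "name" => { "version" => "3.3.0" }, "metric" => "ips", "samples" => [100] }]
File.write(file.path, JSON.pretty_generate(run1))

# A later run (Ruby 3.4) loads the previous entries and appends its own,
# so the file ends up holding results from both versions
previous = JSON.parse(File.read(file.path))
combined = previous + [{ "name" => { "version" => "3.4.0" }, "metric" => "ips", "samples" => [120] }]
File.write(file.path, JSON.pretty_generate(combined))

JSON.parse(File.read(file.path)).map { |e| e["name"]["version"] } # => ["3.3.0", "3.4.0"]
```

This is also why the gem normalizes label values to strings: labels must compare equal after the JSON round-trip.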
146
238
 
147
- Running with environment variable `FORCE` will force running this again. (i.e.: `force: ENV["force"] == true`)
148
-
149
- Depending upon the environment variable `CONDENSED`, there are two types of output.
239
+ Running with `force: true` will re-run and overwrite previously saved data for the same metadata.
150
240
 
151
241
  #### `compare_by`
152
242
 
153
- We introduce `version` as metadata for the tests. Adding `version` to the comparison
154
- says that we should only compare values for the same version of ruby (along with the same data).
155
-
156
- If you note `version` and `data` are not reserved words, instead, they are just what metadata we
157
- decided to pass in.
243
+ Adding `:version` to `compare_by` means Ruby 3.3 results are only compared against other 3.3
244
+ results, and 3.4 against 3.4. Note that `version` and `data` are not reserved words; they are
245
+ just the metadata keys we chose.
158
246
 
159
- ### Example 2 output
247
+ #### Output: one table per version and metric
160
248
 
161
- #### [:version, :metric] 2.3.7_ips
249
+ ##### [:version, :metric] 3.3.0_ips
162
250
 
163
- method | nil | str
164
- ------------|-----------------------|---------------
165
- `?split:[]`| 10146134.2 i/s | 1284159.2 i/s
166
- `to_s.split`| 4232772.3 i/s - 2.40x | 1258665.8 i/s
167
-
168
- #### [:version, :metric] 2.3.7_memsize
169
-
170
- method | nil | str
171
- ------------|--------------------|-------------
172
- `?split:[]`| 40.0 bytes | 360.0 bytes
173
- `to_s.split`| 80.0 bytes - 2.00x | 360.0 bytes
174
-
175
- #### [:version, :metric] 2.4.6_ips
176
-
177
- method | nil | str
251
+ ```
252
+ method | nil | str
178
253
  ------------|-----------------------|---------------
179
- `?split:[]`| 10012873.4 i/s | 1377320.5 i/s
180
- `to_s.split`| 4557456.3 i/s - 2.20x | 1350562.6 i/s
254
+ ?split:[] | 10146134.2 i/s | 1284159.2 i/s
255
+ to_s.split | 4232772.3 i/s - 2.40x | 1258665.8 i/s
256
+ ```
181
257
 
182
- #### [:version, :metric] 2.4.6_memsize
258
+ ##### [:version, :metric] 3.3.0_memsize
183
259
 
184
- method | nil | str
185
- ------------|--------------------|-------------
186
- `?split:[]`| 40.0 bytes | 360.0 bytes
187
- `to_s.split`| 80.0 bytes - 2.00x | 360.0 bytes
260
+ ```
261
+ method | nil | str
262
+ ------------|--------------------|--------------
263
+ ?split:[] | 40.0 bytes | 360.0 bytes
264
+ to_s.split | 80.0 bytes - 2.00x | 360.0 bytes
265
+ ```
188
266
 
189
- #### [:version, :metric] 2.5.5_ips
267
+ ##### [:version, :metric] 3.4.0_ips
190
268
 
191
- method | nil | str
269
+ ```
270
+ method      | nil                   | str
192
271
  ------------|-----------------------|---------------
193
- `?split:[]`| 7168109.1 i/s | 1357046.0 i/s
194
- `to_s.split`| 3779969.3 i/s - 1.90x | 1328072.4 i/s
195
-
196
- #### [:version, :metric] 2.5.5_memsize
272
+ ?split:[]   | 10012873.4 i/s        | 1377320.5 i/s
273
+ to_s.split  | 4557456.3 i/s - 2.20x | 1350562.6 i/s
274
+ ```
197
275
 
198
- method | nil | str
199
- ------------|--------------------|-------------
200
- `?split:[]`| 40.0 bytes | 360.0 bytes
201
- `to_s.split`| 80.0 bytes - 2.00x | 360.0 bytes
276
+ #### Condensed alternative
202
277
 
278
+ To see everything in fewer tables, combine data and metric into columns and group by version only:
203
279
 
204
- ### Example 2 CONDENSED output
280
+ ```ruby
281
+ x.report_with grouping: :version, sort: true, row: :method, column: [:data, :metric]
282
+ ```
205
283
 
206
- running with `CONDENSED=true` calls with a different `report_with`
284
+ ##### version 3.3.0
207
285
 
208
- As you might notice, the number of objects created for the runs are the same.
209
- But for some reason, the `nil` split case is slower for ruby 2.5.5.
286
+ ```
287
+ method      | nil_ips                | str_ips       | nil_memsize        | str_memsize
288
+ ------------|------------------------|---------------|--------------------|----------------
289
+ ?split:[]   | 10146134.2 i/s         | 1284159.2 i/s | 40.0 bytes         | 360.0 bytes
290
+ to_s.split  | 4232772.3 i/s - 2.40x  | 1258665.8 i/s | 80.0 bytes - 2.00x | 360.0 bytes
291
+ ```
210
292
 
211
- #### version 2.3.7
293
+ As you might notice, the number of objects created is the same across versions,
294
+ but the ips numbers may vary — letting you spot performance regressions.
212
295
 
213
- method | nil_ips | str_ips | nil_memsize | str_memsize
214
- ------------|-----------------------|---------------|--------------------|-------------
215
- `?split:[]`| 10146134.2 i/s | 1284159.2 i/s | 40.0 bytes | 360.0 bytes
216
- `to_s.split`| 4232772.3 i/s - 2.40x | 1258665.8 i/s | 80.0 bytes - 2.00x | 360.0 bytes
296
+ ## Custom value formatting
217
297
 
218
- #### version 2.4.6
298
+ Use the `value` parameter to customize cell display. This adds ANSI colors (green for best, red for worst):
219
299
 
220
- method | nil_ips | str_ips | nil_memsize | str_memsize
221
- ------------|-----------------------|---------------|--------------------|-------------
222
- `?split:[]`| 10012873.4 i/s | 1377320.5 i/s | 40.0 bytes | 360.0 bytes
223
- `to_s.split`| 4557456.3 i/s - 2.20x | 1350562.6 i/s | 80.0 bytes - 2.00x | 360.0 bytes
300
+ ```ruby
301
+ VALUE_TO_S = ->(m) { m.comp_short("\e[#{m.color}m#{m.central_tendency.round(1)} #{m.units}\e[0m") }
302
+ x.report_with row: :method, column: :metric, value: VALUE_TO_S
303
+ ```
224
304
 
225
- #### version 2.5.5
305
+ ## Options
226
306
 
227
- method | nil_ips | str_ips | nil_memsize | str_memsize
228
- ------------|-----------------------|---------------|--------------------|-------------
229
- `?split:[]`| 7168109.1 i/s | 1357046.0 i/s | 40.0 bytes | 360.0 bytes
230
- `to_s.split`| 3779969.3 i/s - 1.90x | 1328072.4 i/s | 80.0 bytes - 2.00x | 360.0 bytes
307
+ ```ruby
308
+ Benchmark.items(
309
+ metrics: %w(ips memsize), # which metrics to measure (default: %w(ips))
310
+ memory: 3, # number of memory profiling runs (default: 1)
311
+ warmup: 1, # IPS warmup seconds
312
+ time: 3, # IPS measurement seconds
313
+ quiet: false, # suppress interim output
314
+ force: true, # re-run even if saved data exists
315
+ ) do |x|
316
+ # ...
317
+ end
318
+ ```
231
319
 
232
320
  ## Development
233
321
 
@@ -11,11 +11,10 @@ Gem::Specification.new do |spec|
11
11
 
12
12
  spec.summary = %q{Suite to run multiple benchmarks}
13
13
  spec.description = <<-EOF
14
- Benchmark Sweet is a suite to run multiple kinds of metrics.
15
- It can be configured to run memory, sql query, and ips benchmarks on a common set of code.
16
- This data can be collected across multiple runs of the code, to support multiple ruby or
17
- gem versions.
18
- This also generates more complex comparisons
14
+ Benchmark Sweet is a suite to run multiple kinds of metrics that allows for the generation
15
+ of complex comparisons. It can be configured to run memory, sql query, and ips benchmarks
16
+ on a common set of code. This data can be collected across multiple runs to compare
17
+ multiple ruby or gem versions.
19
18
  EOF
20
19
 
21
20
  spec.homepage = "https://github.com/kbrock/benchmark-sweet"
@@ -25,8 +24,8 @@ EOF
25
24
  # to allow pushing to a single host or delete this section to allow pushing to any host.
26
25
  if spec.respond_to?(:metadata)
27
26
  spec.metadata["homepage_uri"] = spec.homepage
28
- spec.metadata["source_code_uri"] = "http://github.com/kbrock/benchmark-sweet"
29
- spec.metadata["changelog_uri"] = "http://github.com/kbrock/benchmark-sweet/CHANGELOG.md"
27
+ spec.metadata["source_code_uri"] = "https://github.com/kbrock/benchmark-sweet"
28
+ spec.metadata["changelog_uri"] = "https://github.com/kbrock/benchmark-sweet/blob/master/CHANGELOG.md"
30
29
  end
31
30
 
32
31
  # Specify which files should be added to the gem when it is released.
@@ -36,12 +35,11 @@ EOF
36
35
  end
37
36
  spec.require_paths = ["lib"]
38
37
 
39
- spec.add_development_dependency "bundler", "~> 2.1.4"
40
38
  spec.add_development_dependency "rake", ">= 12.3.3"
41
39
  spec.add_development_dependency "rspec", "~> 3.0"
42
40
  spec.add_development_dependency "activerecord"
41
+ spec.add_development_dependency "sqlite3"
43
42
 
44
- spec.add_runtime_dependency "benchmark-ips", "~> 2.8.2"
45
- spec.add_runtime_dependency "memory_profiler", "~> 0.9.0"
46
- spec.add_runtime_dependency "activesupport" # for {}.symbolize_keys! - want to remove
43
+ spec.add_runtime_dependency "benchmark-ips", "~> 2.14"
44
+ spec.add_runtime_dependency "memory_profiler", "~> 1.1"
47
45
  end
@@ -7,7 +7,8 @@ require "active_record"
7
7
  # User.all.to_a.first | 977.1 i/s - 3.26x | 1.0 objs | 100.0 objs - 100.00x
8
8
 
9
9
 
10
- ActiveRecord::Base.establish_connection(ENV.fetch('DATABASE_URL') { "postgres://localhost/user_benchmark" })
10
+ #ActiveRecord::Base.establish_connection(ENV.fetch('DATABASE_URL') { "postgres://localhost/user_benchmark" })
11
+ ActiveRecord::Base.establish_connection(ENV.fetch('DATABASE_URL') { "sqlite3::memory:" })
11
12
  ActiveRecord::Migration.verbose = false
12
13
 
13
14
  ActiveRecord::Schema.define do
@@ -15,19 +16,34 @@ ActiveRecord::Schema.define do
15
16
  t.string :name
16
17
  end
17
18
  #add_index :users, :name
19
+
20
+ create_table :dogs, force: true do |t|
21
+ t.string :name
22
+ end
18
23
  end
19
24
 
20
25
  class User < ActiveRecord::Base ;end
26
+ class Dog < ActiveRecord::Base ;end
21
27
 
22
28
  if User.count == 0
23
29
  puts "Creating 100 users"
24
30
  100.times { |i| User.create name: "user #{i}" }
31
+ puts "Creating 1 dog"
32
+ Dog.create name: "the dog"
25
33
  end
26
34
 
27
35
  Benchmark.items(metrics: %w(ips queries rows)) do |x|
28
- x.report("User.all.first")
29
- x.report("User.all.to_a.first")
36
+ x.metadata version: ActiveRecord.version.to_s
37
+
38
+ x.report(model: "User", method: "all.first") { User.all.first }
39
+ x.report(model: "User", method: "all.to_a.first") { User.all.to_a.first }
40
+
41
+ x.report(model: "Dog", method: "all.first") { Dog.all.first }
42
+ x.report(model: "Dog", method: "all.to_a.first") { Dog.all.to_a.first }
30
43
 
31
- x.report_with row: :method, column: :metric
44
+ # metric names across the top (column)
45
+ # method names down the side (rows)
46
+ x.report_with row: [:method], column: [:metric], grouping: :model, sort: true
47
+ # pass SAVE_FILE=true to save results to a file (true chooses the name; or pass an actual file name)
32
48
  x.save_file (ENV["SAVE_FILE"] == "true") ? $0.sub(/\.rb$/, '.json') : ENV["SAVE_FILE"] if ENV["SAVE_FILE"]
33
49
  end
@@ -46,7 +46,7 @@ module Benchmark
46
46
  stats.overlaps?(@worst)
47
47
  else
48
48
  slowdown == Float::INFINITY || (total.to_i - 1 == offset.to_i && slowdown.to_i > 1)
49
- end
49
+ end
50
50
  end
51
51
 
52
52
  def slowdown
@@ -37,7 +37,7 @@ module Benchmark
37
37
 
38
38
  def initialize(options = {})
39
39
  @options = options
40
- @options[:metrics] ||= IPS_METRICS + %w()
40
+ @options[:metrics] ||= IPS_METRICS.dup
41
41
  validate_metrics(@options[:metrics])
42
42
  @items = []
43
43
  @entries = {}
@@ -57,26 +57,26 @@ module Benchmark
57
57
  end
58
58
 
59
59
  # @returns [Boolean] true to run iterations per second tests
60
- def ips? ; !(relevant_metric_names & IPS_METRICS).empty? ; end
60
+ def ips? ; (relevant_metric_names & IPS_METRICS).any?; end
61
61
  # @returns [Boolean] true to run memory tests
62
- def memory? ; !(relevant_metric_names & MEMORY_METRICS).empty? ; end
62
+ def memory? ; (relevant_metric_names & MEMORY_METRICS).any?; end
63
63
  # @returns [Boolean] true to run database queries tests
64
- def database? ; !(relevant_metric_names & DATABASE_METRICS).empty? ; end
64
+ def database? ; (relevant_metric_names & DATABASE_METRICS).any?; end
65
65
 
66
66
  # @returns [Boolean] true to suppress the display of interim test calculations
67
- def quiet? ; options[:quiet] ; end
67
+ def quiet?; options[:quiet]; end
68
68
 
69
69
  # @returns [Boolean] true to run tests for data that has already been processed
70
- def force? ; options[:force] ; end
70
+ def force?; options[:force]; end
71
71
 
72
72
  # @returns [Array<String>] List of metrics to compare
73
- def relevant_metric_names ; options[:metrics] ; end
73
+ def relevant_metric_names; options[:metrics]; end
74
74
 
75
75
  # items to run (typical benchmark/benchmark-ips use case)
76
76
  def item(label, action = nil, &block)
77
77
  # could use Benchmark::IPS::Job::Entry
78
78
  current_meta = label.kind_of?(Hash) ? @meta.merge(label) : @meta.merge(method: label)
79
- @items << Item.new(current_meta, action || block)
79
+ @items << Item.new(normalize_label(current_meta), action || block)
80
80
  end
81
81
  alias report item
82
82
 
@@ -102,7 +102,7 @@ module Benchmark
102
102
  # x.compare_by :data
103
103
  #
104
104
  def compare_by(*symbol, &block)
105
- @grouping = symbol.empty? ? block : Proc.new { |label, value| symbol.map { |s| label[s] } }
105
+ @grouping = symbol.empty? ? block : Proc.new { |label, _value| symbol.map { |s| label[s] } }
106
106
  end
107
107
 
108
108
  # Setup the testing framework
@@ -123,10 +123,6 @@ module Benchmark
123
123
  end
124
124
  end
125
125
 
126
- # if we are using symbols as keys for our labels
127
- def labels_have_symbols!
128
- end
129
-
130
126
  # report results
131
127
  def add_entry(label, metric, stat)
132
128
  (@entries[metric] ||= {})[label] = stat.respond_to?(:central_tendency) ? stat : create_stats(stat)
@@ -147,8 +143,7 @@ module Benchmark
147
143
  require "json"
148
144
 
149
145
  JSON.load(IO.read(filename)).each do |v|
150
- n = v["name"]
151
- n.symbolize_keys!
146
+ n = normalize_label(v["name"].transform_keys(&:to_sym))
152
147
  add_entry n, v["metric"], v["samples"]
153
148
  end
154
149
 
@@ -158,23 +153,16 @@ module Benchmark
158
153
  return unless filename
159
154
  require "json"
160
155
 
161
- # sanity checking
162
- symbol_value = false
163
-
164
156
  data = @entries.flat_map do |metric_name, metric_values|
165
157
  metric_values.map do |label, stat|
166
- # warnings
167
- symbol_values ||= label.kind_of?(Hash) && label.values.detect { |v| v.nil? || v.kind_of?(Symbol) }
168
158
  {
169
159
  'name' => label,
170
160
  'metric' => metric_name,
171
161
  'samples' => stat.samples,
172
- # extra data like measured_us, iter, and others?
173
162
  }
174
163
  end
175
164
  end
176
165
 
177
- puts "", "Warning: Please use strings or numbers for label hash values (not nils or symbols). Symbols are not JSON friendly." if symbol_value
178
166
  IO.write(filename, JSON.pretty_generate(data) << "\n")
179
167
  end
180
168
 
@@ -213,8 +201,8 @@ module Benchmark
213
201
  relevant_entries.flat_map do |metric_name, metric_entries|
214
202
  # TODO: map these to Comparison(metric_name, label, stats) So we only have 1 type of lambda
215
203
  partitioned_metrics = grouping ? metric_entries.group_by(&grouping) : {nil => metric_entries}
216
- partitioned_metrics.flat_map do |grouping_name, grouped_metrics|
217
- sorted = grouped_metrics.sort_by { |n, e| e.central_tendency }
204
+ partitioned_metrics.flat_map do |_grouping_name, grouped_metrics|
205
+ sorted = grouped_metrics.sort_by { |_n, e| e.central_tendency }
218
206
  sorted.reverse! if HIGHER_BETTER.include?(metric_name)
219
207
 
220
208
  _best_label, best_stats = sorted.first
@@ -236,6 +224,11 @@ module Benchmark
236
224
  end
237
225
  end
238
226
 
227
+ # Normalize label hash values to strings so labels match after JSON round-trip
228
+ def normalize_label(label)
229
+ label.transform_values(&:to_s)
230
+ end
231
+
239
232
  def create_stats(samples)
240
233
  Benchmark::IPS::Stats::SD.new(Array(samples))
241
234
  end
@@ -17,7 +17,7 @@ module Benchmark
17
17
  # add_entry e.label, "string_retained", e.measurement.string.retained
18
18
  # end
19
19
  require 'memory_profiler'
20
- puts "Memory Profiling----------" unless quiet?
20
+ $stdout.puts "Memory Profiling----------" unless quiet?
21
21
 
22
22
  items.each do |entry|
23
23
  name = entry.label
@@ -26,8 +26,8 @@ module Benchmark
26
26
  rpts = (options[:memory] || 1).times.map { MemoryProfiler.report(&entry.block) }
27
27
  tot_stat = add_entry(name, "memsize", rpts.map(&:total_allocated_memsize))
28
28
  totr_stat = add_entry name, "memsize_retained", rpts.map(&:total_retained_memsize)
29
- obj_stat = add_entry name, "objects", rpts.map(&:total_allocated) ## ? size
30
- objr_stat = add_entry name, "objects_retained", rpts.map(&:total_retained)
29
+ add_entry name, "objects", rpts.map(&:total_allocated)
30
+ add_entry name, "objects_retained", rpts.map(&:total_retained)
31
31
  str_stat = add_entry(name, "strings", rpts.map { |rpt| rpt.strings_allocated.size })
32
32
  strr_stat = add_entry(name, "strings_retained", rpts.map { |rpt| rpt.strings_retained.size })
33
33
 
@@ -2,16 +2,20 @@ module Benchmark
2
2
  module Sweet
3
3
  module Queries
4
4
  def run_queries
5
- items.each do |entry|
6
- values = ::Benchmark::Sweet::Queries::QueryCounter.count(&entry.block) # { entry.call_times(1) }
7
- add_entry entry.label, "rows", values[:instance_count]
8
- add_entry entry.label, "queries", values[:sql_count]
9
- add_entry entry.label, "ignored", values[:ignored_count]
10
- add_entry entry.label, "cached", values[:cache_count]
11
- unless options[:quiet]
12
- printf "%20s: %3d queries %5d ar_objects", entry.label, values[:sql_count], values[:instance_count]
13
- printf " (%d ignored)", values[:ignored_count] if values[:ignored_count] > 0
14
- puts
5
+ cntr = ::Benchmark::Sweet::Queries::QueryCounter.new
6
+ cntr.sub do
7
+ items.each do |entry|
8
+ entry.block.call
9
+ values = cntr.get_clear
10
+ add_entry entry.label, "rows", values[:instance_count]
11
+ add_entry entry.label, "queries", values[:sql_count]
12
+ add_entry entry.label, "ignored", values[:ignored_count]
13
+ add_entry entry.label, "cached", values[:cache_count]
14
+ unless options[:quiet]
15
+ printf "%20s: %3d queries %5d ar_objects", entry.label, values[:sql_count], values[:instance_count]
16
+ printf " (%d ignored)", values[:ignored_count] if values[:ignored_count] > 0
17
+ puts
18
+ end
15
19
  end
16
20
  end
17
21
  end
@@ -30,10 +34,14 @@ module Benchmark
        IGNORED_STATEMENTS = %w(CACHE SCHEMA).freeze
        IGNORED_QUERIES = /^(?:ROLLBACK|BEGIN|COMMIT|SAVEPOINT|RELEASE)/.freeze

+       def initialize
+         clear
+       end
+
        def callback(_name, _start, _finish, _id, payload)
          if payload[:sql]
            if payload[:name] == CACHE_STATEMENT
-             @instance[:cache_count] += 1
+             @instances[:cache_count] += 1
            elsif IGNORED_STATEMENTS.include?(payload[:name]) || IGNORED_QUERIES.match(payload[:sql])
              @instances[:ignored_count] += 1
            else
@@ -48,12 +56,25 @@ module Benchmark
          lambda(&method(:callback))
        end

-       # TODO: possibly setup a single subscribe and use a context/thread local to properly count metrics
-       def count(&block)
+       def clear
          @instances = {cache_count: 0, ignored_count: 0, sql_count: 0, instance_count: 0}
-         ActiveSupport::Notifications.subscribed(callback_proc, /active_record/, &block)
+       end
+
+       def get_clear; @instances.tap { clear }; end
+       def get; @instances; end
+
+       # either use 10.times { value = count(&block) }
+       # or use
+       #   sub { 10.times { block.call; value = get_clear } }
+       def count(&block)
+         clear
+         sub(&block)
          @instances
        end
+
+       def sub(&block)
+         ActiveSupport::Notifications.subscribed(callback_proc, /active_record/, &block)
+       end
      end
    end
  end
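The hunk above replaces the per-item `count` (which re-subscribed to `ActiveSupport::Notifications` on every call) with a subscribe-once `sub` block plus `get_clear` snapshots between items. A minimal sketch of that snapshot-and-reset pattern, using a hypothetical `Counter` stand-in so it runs without ActiveSupport — `record` here simulates what the notification callback does:

```ruby
# Hypothetical stand-in for QueryCounter: same clear/get_clear shape,
# but counts explicit record calls instead of SQL notifications.
class Counter
  def initialize
    clear
  end

  def clear
    @counts = Hash.new(0)
  end

  # Return the current counts and reset for the next item,
  # mirroring QueryCounter#get_clear: tap returns the old hash
  # while clear swaps in a fresh one.
  def get_clear
    @counts.tap { clear }
  end

  def record(kind)
    @counts[kind] += 1
  end
end

cntr = Counter.new
# one "subscription" for the whole run: each item just snapshots and resets
results = [2, 3].map do |queries_for_item|
  queries_for_item.times { cntr.record(:sql_count) }
  cntr.get_clear[:sql_count]
end
# results == [2, 3]: each item sees only its own counts
```

The design point is that subscribing once avoids the per-item subscribe/unsubscribe overhead the old TODO comment called out.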
@@ -1,5 +1,5 @@
  module Benchmark
    module Sweet
-     VERSION = "0.2.2"
+     VERSION = "0.3.0"
    end
  end
@@ -40,7 +40,7 @@ module Benchmark

        grouping = symbol_to_proc(grouping)

-       label_records = base.group_by(&grouping).select { |value, comparisons| !value.nil? }
+       label_records = base.group_by(&grouping).select { |value, _comparisons| !value.nil? }
        label_records = label_records.sort_by(&:first) if sort

        label_records.each(&block)
@@ -89,22 +89,30 @@ module Benchmark
      end

      def self.to_table(arr)
-       field_sizes = Hash.new
-       arr.each { |row| field_sizes.merge!(row => row.map { |iterand| iterand[1].to_s.gsub(/\e\[[^m]+m/, '').length } ) }
+       return if arr.empty?

-       column_sizes = arr.reduce([]) do |lengths, row|
-         row.each_with_index.map { |iterand, index| [lengths[index] || 0, field_sizes[row][index]].max }
-       end
+       strip_ansi = ->(s) { s.to_s.gsub(/\e\[[^m]*m/, '') }
+
+       # collect headers from the first row's keys
+       headers = arr[0].keys

-       format = column_sizes.collect {|n| "%#{n}s" }.join(" | ")
-       format += "\n"
+       # compute visible width for each column (max of header and all values)
+       column_sizes = headers.each_with_index.map do |header, i|
+         values_max = arr.map { |row| strip_ansi.call(row.values[i]).length }.max
+         [strip_ansi.call(header).length, values_max].max
+       end

-       printf format, *arr[0].each_with_index.map { |el, i| " "*(column_sizes[i] - field_sizes[arr[0]][i] ) + el[0].to_s }
+       # right-align with ANSI-aware padding
+       pad = ->(str, width) {
+         visible = strip_ansi.call(str).length
+         " " * [width - visible, 0].max + str.to_s
+       }

-       printf format, *column_sizes.collect { |w| "-" * w }
+       puts headers.each_with_index.map { |h, i| pad.call(h.to_s, column_sizes[i]) }.join(" | ")
+       puts column_sizes.map { |w| "-" * w }.join("-|-")

-       arr[0..arr.count].each do |line|
-         printf format, *line.each_with_index.map { |el, i| " "*(column_sizes[i] - field_sizes[line][i] ) + el[1].to_s }
+       arr.each do |row|
+         puts row.values.each_with_index.map { |v, i| pad.call(v.to_s, column_sizes[i]) }.join(" | ")
        end
      end
    end
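The rewritten `to_table` sizes columns by *visible* width, stripping ANSI color escapes before measuring so colored cells align with plain ones (this is the indentation fix credited to @d-m-u in the changelog). A small self-contained sketch of that padding logic — the sample values are made up for illustration:

```ruby
# ANSI-aware right alignment: measure cell width with escape codes
# stripped, so a colored cell pads the same as an uncolored one.
strip_ansi = ->(s) { s.to_s.gsub(/\e\[[^m]*m/, '') }
pad = ->(str, width) { " " * [width - strip_ansi.call(str).length, 0].max + str.to_s }

plain   = "12.3"
colored = "\e[32m9.87\e[0m" # green; raw length 13, visible length 4

# column width from visible lengths, as the new column_sizes computation does
width  = [plain, colored].map { |v| strip_ansi.call(v).length }.max
padded = [plain, colored].map { |v| pad.call(v, width) }
# both padded cells now have the same visible width
```

Padding by raw `String#length` instead would have pushed every colored cell nine characters to the right, which is exactly the misalignment the old `field_sizes` bookkeeping tried (buggily) to avoid.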
metadata CHANGED
@@ -1,29 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: benchmark-sweet
  version: !ruby/object:Gem::Version
-   version: 0.2.2
+   version: 0.3.0
  platform: ruby
  authors:
  - Keenan Brock
- autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-07-15 00:00:00.000000000 Z
+ date: 1980-01-02 00:00:00.000000000 Z
  dependencies:
- - !ruby/object:Gem::Dependency
-   name: bundler
-   requirement: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: 2.1.4
-   type: :development
-   prerelease: false
-   version_requirements: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: 2.1.4
  - !ruby/object:Gem::Dependency
    name: rake
    requirement: !ruby/object:Gem::Requirement
@@ -67,53 +52,52 @@ dependencies:
      - !ruby/object:Gem::Version
        version: '0'
  - !ruby/object:Gem::Dependency
-   name: benchmark-ips
+   name: sqlite3
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
-         version: 2.8.2
-   type: :runtime
+         version: '0'
+   type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
-         version: 2.8.2
+         version: '0'
  - !ruby/object:Gem::Dependency
-   name: memory_profiler
+   name: benchmark-ips
    requirement: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: 0.9.0
+         version: '2.14'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: 0.9.0
+         version: '2.14'
  - !ruby/object:Gem::Dependency
-   name: activesupport
+   name: memory_profiler
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: '0'
+         version: '1.1'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - ">="
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: '0'
+         version: '1.1'
  description: |2
-   Benchmark Sweet is a suite to run multiple kinds of metrics.
-   It can be configured to run memory, sql query, and ips benchmarks on a common set of code.
-   This data can be collected across multiple runs of the code, to support multiple ruby or
-   gem versions.
-   This also generates more complex comparisons
+   Benchmark Sweet is a suite to run multiple kinds of metrics that allows for the generation
+   of complex comparisons. It can be configured to run memory, sql query, and ips benchmarks
+   on a common set of code. These data can be collected across multiple runs with numerous
+   supported ruby or gem versions.
  email:
  - keenan@thebrocks.net
  executables: []
@@ -122,7 +106,6 @@ extra_rdoc_files: []
  files:
  - ".gitignore"
  - ".rspec"
- - ".travis.yml"
  - CHANGELOG.md
  - Gemfile
  - LICENSE.txt
@@ -150,9 +133,8 @@ licenses:
  - MIT
  metadata:
    homepage_uri: https://github.com/kbrock/benchmark-sweet
-   source_code_uri: http://github.com/kbrock/benchmark-sweet
-   changelog_uri: http://github.com/kbrock/benchmark-sweet/CHANGELOG.md
- post_install_message:
+   source_code_uri: https://github.com/kbrock/benchmark-sweet
+   changelog_uri: https://github.com/kbrock/benchmark-sweet/blob/master/CHANGELOG.md
  rdoc_options: []
  require_paths:
  - lib
@@ -167,9 +149,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      - !ruby/object:Gem::Version
        version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.7.6.2
- signing_key:
+ rubygems_version: 4.0.6
  specification_version: 4
  summary: Suite to run multiple benchmarks
  test_files: []
data/.travis.yml DELETED
@@ -1,7 +0,0 @@
- ---
- sudo: false
- language: ruby
- cache: bundler
- rvm:
- - 2.4.4
- before_install: gem install bundler -v 1.16.6