minitest-heat 0.0.11 → 1.0.0
- checksums.yaml +4 -4
- data/Gemfile.lock +1 -1
- data/README.md +49 -14
- data/examples/exceptions.png +0 -0
- data/examples/failures.png +0 -0
- data/examples/map.png +0 -0
- data/examples/markers.png +0 -0
- data/examples/skips.png +0 -0
- data/examples/slows.png +0 -0
- data/lib/minitest/heat/backtrace.rb +5 -5
- data/lib/minitest/heat/issue.rb +24 -6
- data/lib/minitest/heat/output/backtrace.rb +44 -19
- data/lib/minitest/heat/output/issue.rb +34 -49
- data/lib/minitest/heat/output/map.rb +77 -36
- data/lib/minitest/heat/output.rb +21 -7
- data/lib/minitest/heat/results.rb +14 -2
- data/lib/minitest/heat/version.rb +1 -1
- data/lib/minitest/heat_reporter.rb +1 -1
- metadata +8 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 887dc01f3a08341f1e88cb24b539d0843cdce81fdb49246fbce8af875ff00d3f
+  data.tar.gz: 875d197ab5c65c2cb85b192201375be653ba30da2fc91896b4a3e61702cacefd
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 6ef73c9df5b91b5950cc444f6ea7a18ccd867bda4fbdb015b8d713eaebe91b925a2cfb48b596a43a0d36af023dcb04f5c8ec0803602ab27bf709372fe6f9b0f3
+  data.tar.gz: d338944e9d32e8ec326cce9dbe861c5b8bec408bb455e4e984d45567ef9d9c5af9154f4475c2862ddc43778c95d8e58f12beef8757aa37f6b76d91da1b17a4c5
data/Gemfile.lock
CHANGED
data/README.md
CHANGED
@@ -1,12 +1,41 @@
-# Minitest::Heat
-
+# 🔥 Minitest::Heat 🔥
+Minitest::Heat helps you identify problems faster so you can more efficiently resolve test failures. It does this through a few different methods.

-
+It collects failures and inspects backtraces to identify patterns and provide a heat map summary of the files and line numbers that most frequently appear to be the causes of issues.

-
+![Example Heat Map Displayed by Minitest Heat](https://raw.githubusercontent.com/garrettdimon/minitest-heat/main/examples/map.png)

-
+It suppresses less critical issues like skips or slows when there are legitimate failures. It won't display information about slow tests unless all tests are passing (meaning no errors, failures, or skips).
+
+It presents failures differently depending on the context of the failure. For instance, it treats exceptions differently based on whether they arose directly from a test or from source code. It also treats extremely slow tests differently from moderately slow tests.
+
+Markers get some nuance so that slow tests receive different markers than standard passing tests, and exception-triggered failures get different markers for source-code-triggered exceptions ('E') and test-triggered exceptions ('B' for 'Broken Test').
+
+![Example Markers Displayed by Minitest Heat](https://raw.githubusercontent.com/garrettdimon/minitest-heat/main/examples/markers.png)
+
+It also formats the failure details and backtraces to make them more scannable by emphasizing the project-related lines from the backtrace.
+
+It intelligently recognizes when an exception was raised from a test definition vs. when an exception is genuinely triggered from the source code in order to help focus on fixing deeper exceptions first.
+
+![Example Exceptions Displayed by Minitest Heat](https://raw.githubusercontent.com/garrettdimon/minitest-heat/main/examples/exceptions.png)
+
+Failures are displayed in a fairly predictable manner but are formatted to show the source code from the test so you can see the assertion that failed in addition to the summary of values that didn't satisfy the assertion.
+
+![Example Failures Displayed by Minitest Heat](https://raw.githubusercontent.com/garrettdimon/minitest-heat/main/examples/failures.png)
+
+Skipped tests are displayed in a simple manner as well so that it's easy to see the source of the skipped test as well as the reason it was skipped.
+
+![Example Skips Displayed by Minitest Heat](https://raw.githubusercontent.com/garrettdimon/minitest-heat/main/examples/skips.png)
+
+Slow tests get slightly more informative labels to indicate that they did pass but could use performance improvements. Tests that are particularly slow are called out with a little more emphasis so it's easier to focus on the slowest tests first, as they frequently represent the most potential for performance gains.

+![Example Slows Displayed by Minitest Heat](https://raw.githubusercontent.com/garrettdimon/minitest-heat/main/examples/slows.png)
+
+It also always displays the most significant issues at the bottom of the list in order to reduce the need to scroll up through the test failures. As you fix issues, the list becomes shorter, and the less significant issues will make their way to the bottom and be visible without scrolling.
+
+For some additional insight about priorities and how it works, this [Twitter thread](https://twitter.com/garrettdimon/status/1432703746526560266) is currently the best place to start.
+
+## Installation
 Add this line to your application's Gemfile:

 ```ruby
@@ -27,27 +56,33 @@ And depending on your usage, you may need to require Minitest Heat in your test
 require 'minitest/heat'
 ```

-##
-
-**Important:** In its current state, `Minitest::Heat` replaces any other reporter plugins you may have. Long-term, it should play nicer with other reporters, but during the initial heavy development cycle, it's been easier to have a high confidence that other reporters aren't the source of unexpected behavior.
-
-Otherwise, once it's bundled and added to your `test_helper`, it shold "just work" whenever you run your test suite.
+## Configuration
+Minitest Heat doesn't currently offer a significant set of configuration options, but it will eventually support customizing the thresholds for "Slow" and "Painfully Slow". By default, it considers anything over 1.0s to be 'slow' and anything over 3.0s to be 'painfully slow'.

 ## Development
-
 After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

 To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).

-
+### Forcing Test Failures
+In order to easily see how Minitest Heat handles different combinations of failure types, the following environment variables can be used to force failures.
+
+```bash
+IMPLODE=true          # Every possible type of failure, skip, and slow is generated
+FORCE_EXCEPTIONS=true # Only exception-triggered failures
+FORCE_FAILURES=true   # Only standard assertion failures
+FORCE_SKIPS=true      # No errors, just the skipped tests
+FORCE_SLOWS=true      # No errors or skipped tests, just slow tests
+```

-
+So to see the full context of a test suite, `IMPLODE=true bundle exec rake` will work its magic.

+## Contributing
+Bug reports and pull requests are welcome on GitHub at https://github.com/garrettdimon/minitest-heat. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/[USERNAME]/minitest-heat/blob/master/CODE_OF_CONDUCT.md).

 ## License

 The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).

 ## Code of Conduct
-
 Everyone interacting in the Minitest::Heat project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/[USERNAME]/minitest-heat/blob/master/CODE_OF_CONDUCT.md).
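For reference, the installation steps above boil down to a one-line require in the test helper. Here is a minimal sketch; the `test/test_helper.rb` path and the `minitest/autorun` require are assumptions about a typical setup, and the only line the README itself calls for is `require 'minitest/heat'`.

```ruby
# test/test_helper.rb -- minimal sketch of wiring up Minitest Heat per the README above.
require 'minitest/autorun'
require 'minitest/heat' # per the README, this is all that's needed for it to "just work"
```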
data/examples/exceptions.png
ADDED
Binary file
data/examples/failures.png
ADDED
Binary file
data/examples/map.png
ADDED
Binary file
data/examples/markers.png
ADDED
Binary file
data/examples/skips.png
ADDED
Binary file
data/examples/slows.png
ADDED
Binary file
data/lib/minitest/heat/backtrace.rb
CHANGED
@@ -26,7 +26,7 @@ module Minitest

 # All lines of the backtrace converted to Backtrace::LineParser's
 #
-# @return [
+# @return [Array<Location>] the full set of backtrace lines parsed as Location instances
 def locations
   return [] if raw_backtrace.nil?

@@ -36,28 +36,28 @@ module Minitest
 # All entries from the backtrace within the project and sorted with the most recently modified
 # files at the beginning
 #
-# @return [
+# @return [Array<Location>] the sorted backtrace lines from the project
 def recently_modified_locations
   @recently_modified_locations ||= project_locations.sort_by(&:mtime).reverse
 end

 # All entries from the backtrace that are files within the project
 #
-# @return [
+# @return [Array<Location>] the backtrace lines from within the project
 def project_locations
   @project_locations ||= locations.select(&:project_file?)
 end

 # All entries from the backtrace within the project tests
 #
-# @return [
+# @return [Array<Location>] the backtrace lines from within the tests
 def test_locations
   @test_locations ||= project_locations.select(&:test_file?)
 end

 # All source code entries from the backtrace (i.e. excluding tests)
 #
-# @return [
+# @return [Array<Location>] the backtrace lines from within the source code
 def source_code_locations
   @source_code_locations ||= project_locations.select(&:source_code_file?)
 end
data/lib/minitest/heat/issue.rb
CHANGED
@@ -99,9 +99,9 @@ module Minitest
   :skipped
 elsif !passed?
   :failure
-elsif painful?
+elsif passed? && painful?
   :painful
-elsif slow?
+elsif passed? && slow?
   :slow
 else
   :success
@@ -116,20 +116,38 @@ module Minitest
   !passed? || slow? || painful?
 end

+# The number, in seconds, for a test to be considered "slow"
+#
+# @return [Float] number of seconds after which a test is considered slow
+def slow_threshold
+  # Using a method here so that this can eventually be configurable such that the constant is
+  # only a fallback value if it's not specified anywhere else
+  SLOW_THRESHOLDS[:slow]
+end
+
+# The number, in seconds, for a test to be considered "painfully slow"
+#
+# @return [Float] number of seconds after which a test is considered painfully slow
+def painfully_slow_threshold
+  # Using a method here so that this can eventually be configurable such that the constant is
+  # only a fallback value if it's not specified anywhere else
+  SLOW_THRESHOLDS[:painful]
+end
+
 # Determines if a test should be considered slow by comparing it to the low end definition of
 # what is considered slow.
 #
-# @return [Boolean] true if the test took longer to run than `
+# @return [Boolean] true if the test took longer to run than `slow_threshold`
 def slow?
-  execution_time >=
+  execution_time >= slow_threshold && execution_time < painfully_slow_threshold
 end

 # Determines if a test should be considered painfully slow by comparing it to the high end
 # definition of what is considered slow.
 #
-# @return [Boolean] true if the test took longer to run than `
+# @return [Boolean] true if the test took longer to run than `painfully_slow_threshold`
 def painful?
-  execution_time >=
+  execution_time >= painfully_slow_threshold
 end

 # Determines if the issue is an exception that was raised from directly within a test
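The `slow_threshold` and `painfully_slow_threshold` methods above simply wrap the `SLOW_THRESHOLDS` constant so the values can later become configurable. As a standalone sketch (not the gem's actual class), this is how those thresholds classify an execution time, using the 1.0s and 3.0s defaults described in the README:

```ruby
# Illustrative only: mirrors the classification logic from the diff above, with
# the default thresholds the README mentions (1.0s slow, 3.0s painfully slow).
SLOW_THRESHOLDS = { slow: 1.0, painful: 3.0 }.freeze

def classify(execution_time, passed: true)
  return :failure unless passed

  if execution_time >= SLOW_THRESHOLDS[:painful]
    :painful
  elsif execution_time >= SLOW_THRESHOLDS[:slow]
    :slow
  else
    :success
  end
end

classify(0.2) # => :success
classify(1.5) # => :slow
classify(4.0) # => :painful
```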
data/lib/minitest/heat/output/backtrace.rb
CHANGED
@@ -3,7 +3,7 @@
 module Minitest
   module Heat
     class Output
-      # Builds the collection of tokens for a backtrace when an exception occurs
+      # Builds the collection of tokens for displaying a backtrace when an exception occurs
       class Backtrace
         DEFAULT_LINE_COUNT = 10
         DEFAULT_INDENTATION_SPACES = 2
@@ -17,31 +17,47 @@ module Minitest
 end

 def tokens
-  # There could be option to expand and display more than one line of source code for the
-  # final backtrace line if it might be relevant/helpful?
-
   # Iterate over the selected lines from the backtrace
-  backtrace_locations.
-    @tokens << backtrace_location_tokens(location)
-  end
-
-  @tokens
+  @tokens = backtrace_locations.map { |location| backtrace_location_tokens(location) }
 end

+# Determines the number of lines to display from the backtrace.
+#
+# @return [Integer] the number of lines to limit the backtrace to
 def line_count
+  # Defined as a method instead of using the constant directly in order to easily support
+  # adding options for controlling how many lines are displayed from a backtrace.
+  #
+  # For example, instead of a fixed number, the backtrace could dynamically calculate how
+  # many lines it should display in order to get to the origination point. Or it could have
+  # a default, but intelligently go back further if the backtrace meets some criteria for
+  # displaying more lines.
   DEFAULT_LINE_COUNT
 end

-#
-#
-#
-#
-# ...it behaves a little different when it's a broken test vs. a true exception?
-# ...it could be smart about subtly flagging the lines that show up in the heat map frequently?
-# ...it could be influenced by a "compact" or "robust" reporter super-style?
-# ...it's smart about exceptions that were raised outside of the project?
-# ...it's smart about highlighting lines of code differently based on whether it's source code, test code, or external code?
+# A subset of parsed lines from the backtrace.
+#
+# @return [Array<Location>] the backtrace locations determined to be most relevant to the
+#   context of the underlying issue
 def backtrace_locations
+  # This could eventually have additional intelligence to determine what lines are most
+  # relevant for a given type of issue. For now, it simply takes the line numbers, but the
+  # idea is that long-term, it could adjust that on the fly to keep the line count as low
+  # as possible but expand it if necessary to ensure enough context is displayed.
+  #
+  # - If there are no clear-cut details about the source of the error from within the project,
+  #   it could display the entire backtrace without filtering anything.
+  # - It could scan the backtrace to the first appearance of project files and then display
+  #   all of the lines that occurred after that instance
+  # - It could filter the lines differently depending on whether the issue originated from a test
+  #   or from the source code.
+  # - It could allow supporting a "compact" or "robust" reporter style so that someone on
+  #   a smaller screen could easily reduce the information shown so that the results could
+  #   be higher density even if it means truncating some occasionally useful details
+  # - It could be smarter about displaying context/guidance when the full backtrace is from
+  #   outside the project's code
+  #
+  # But for now, it just grabs some lines.
   backtrace.locations.take(line_count)
 end

@@ -65,8 +81,17 @@ module Minitest
   backtrace_locations.all?(&:project_file?)
 end

+# Determines if the file referenced by a backtrace line is the most recently modified file
+# of all the files referenced in the visible backtrace locations.
+#
+# @param [Location] location the location to examine
+#
+# @return [Boolean] true if the location's file is the most recently modified
+#
 def most_recently_modified?(location)
-  # If there's more than one line being displayed,
+  # If there's more than one line being displayed (otherwise, with one line, of course it's
+  # the most recently modified because there aren't any others) and the current line is the
+  # same as the freshest location in the backtrace
   backtrace_locations.size > 1 && location == locations.freshest
 end
data/lib/minitest/heat/output/issue.rb
CHANGED
@@ -14,28 +14,16 @@ module Minitest

 def tokens
   case issue.type
-  when :error then
-  when :
-  when :
-  when :
-  when :painful then painful_tokens
-  when :slow then slow_tokens
+  when :error, :broken then exception_tokens
+  when :failure then failure_tokens
+  when :skipped then skipped_tokens
+  when :painful, :slow then slow_tokens
   end
 end

 private

-def
-  [
-    headline_tokens,
-    test_location_tokens,
-    summary_tokens,
-    *backtrace_tokens,
-    newline_tokens
-  ]
-end
-
-def broken_tokens
+def exception_tokens
   [
     headline_tokens,
     test_location_tokens,
@@ -62,14 +50,6 @@ module Minitest
   ]
 end

-def painful_tokens
-  [
-    headline_tokens,
-    slowness_summary_tokens,
-    newline_tokens
-  ]
-end
-
 def slow_tokens
   [
     headline_tokens,
@@ -79,9 +59,14 @@ module Minitest
 end

 def headline_tokens
-  [
+  [label_token(issue), spacer_token, [:default, test_name(issue)]]
 end

+# Creates a display-friendly version of the test name with underscores removed and the
+# first letter capitalized regardless of the format used for the test definition
+# @param issue [Issue] the issue to use to generate the test name
+#
+# @return [String] the cleaned up version of the test name
 def test_name(issue)
   test_prefix = 'test_'
   identifier = issue.test_identifier
@@ -93,35 +78,14 @@ module Minitest
   end
 end

-def
-
-  # When the exception came out of the test itself, that's a different kind of exception
-  # that really only indicates there's a problem with the code in the test. It's kind of
-  # between an error and a test.
-  'Broken Test'
-  elsif issue.error?
-    'Error'
-  elsif issue.skipped?
-    'Skipped'
-  elsif issue.painful?
-    'Passed but Very Slow'
-  elsif issue.slow?
-    'Passed but Slow'
-  elsif !issue.passed?
-    'Failure'
-  else
-    'Success'
-  end
+def label_token(issue)
+  [issue.type, issue_label(issue.type)]
 end

 def test_name_and_class_tokens
   [[:default, issue.test_class], *test_location_tokens]
 end

-def backtrace_tokens
-  @backtrace_tokens ||= ::Minitest::Heat::Output::Backtrace.new(locations).tokens
-end
-
 def test_location_tokens
   [
     [:default, locations.test_definition.relative_filename],
@@ -180,6 +144,27 @@ module Minitest
 def arrow_token
   Output::TOKENS[:muted_arrow]
 end
+
+def backtrace_tokens
+  @backtrace_tokens ||= ::Minitest::Heat::Output::Backtrace.new(locations).tokens
+end
+
+# The string to use to describe the failure type when displaying results.
+# @param issue_type [Symbol] the symbol representing the issue's failure type
+#
+# @return [String] the display-friendly string describing the failure reason
+def issue_label(issue_type)
+  case issue_type
+  when :error then 'Error'
+  when :broken then 'Broken Test'
+  when :failure then 'Failure'
+  when :skipped then 'Skipped'
+  when :slow then 'Passed but Slow'
+  when :painful then 'Passed but Very Slow'
+  when :passed then 'Success'
+  else 'Unknown'
+  end
+end
 end
 end
 end
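Throughout these output classes, a "token" is a `[style, text]` pair (for example `[:muted, ':']` or `[:default, issue.test_class]`), and a line of output is an array of such pairs. As a rough sketch of the idea, a reporter could flatten a token line into an ANSI-colored string like this; the style-to-color mapping is an assumption for illustration, not the gem's actual palette:

```ruby
# Hypothetical renderer for [style, text] token pairs like the ones built above.
# The ANSI codes per style are illustrative assumptions only.
ANSI_CODES = {
  default: 39, muted: 90, error: 31, broken: 31,
  failure: 31, skipped: 33, slow: 36, painful: 35
}.freeze

def render_token_line(tokens)
  tokens.map do |style, text|
    "\e[#{ANSI_CODES.fetch(style, 39)}m#{text}\e[0m"
  end.join
end

puts render_token_line([[:failure, 'Failure'], [:muted, ' '], [:default, 'Test name goes here']])
```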
data/lib/minitest/heat/output/map.rb
CHANGED
@@ -14,24 +14,34 @@ module Minitest

 def tokens
   results.heat_map.file_hits.each do |hit|
-    #
+    # Focus on the relevant issues based on the most significant problems. i.e. If there are
+    # legitimate failures or errors, skips and slows aren't relevant
     next unless relevant_issue_types?(hit)

+    # Add a new line
     @tokens << [[:muted, ""]]
+
+    # Build the summary line for the file
     @tokens << file_summary_tokens(hit)

-
-
+    # Get the set of line numbers that appear more than once
+    repeated_line_numbers = find_repeated_line_numbers_in(hit)

-
-
+    # Only display more details if the same line number shows up more than once
+    next unless repeated_line_numbers.any?

+    repeated_line_numbers.each do |line_number|
+      # Get the backtraces for the given line numbers
       traces = hit.lines[line_number.to_s]
-      sorted_traces = traces.sort_by { |trace| trace.locations.last.line_number }

-
-
-
+      # If there aren't any traces there's no way to provide additional details
+      break unless traces.any?
+
+      # A short summary explaining the details that will follow
+      @tokens << [[:default, "  Line #{line_number}"], [:muted, ' issues triggered from:']]
+
+      # The last relevant location for each error's backtrace
+      @tokens += origination_sources(traces)
     end
   end

@@ -40,16 +50,28 @@ module Minitest

 private

+def origination_sources(traces)
+  # 1. Only pull the traces that have proper locations
+  # 2. Sort the traces by the most recent line number so they're displayed in numeric order
+  # 3. Get the final relevant location from the trace
+  traces.
+    select { |trace| trace.locations.any? }.
+    sort_by { |trace| trace.locations.last.line_number }.
+    map { |trace| origination_location_token(trace) }
+end
+
 def file_summary_tokens(hit)
   pathname_tokens = pathname(hit)
-  line_number_list_tokens =
+  line_number_list_tokens = line_number_tokens_for_hit(hit)

   [*pathname_tokens, *line_number_list_tokens]
 end

 def origination_location_token(trace)
   # The earliest project line from the backtrace—this is probabyl wholly incorrect in terms
-  # of what would be the most helpful line to display, but it's a start.
+  # of what would be the most helpful line to display, but it's a start. Otherwise, the
+  # logic will need to compare all traces for the issue and find the unique origination
+  # lines
   location = trace.locations.last

   [
@@ -58,7 +80,7 @@ module Minitest
     [:muted, ':'],
     [:default, location.line_number],
     [:muted, " in #{location.container}"],
-    [:muted, " #{Output::SYMBOLS[:arrow]}
+    [:muted, " #{Output::SYMBOLS[:arrow]} `#{location.source_code.line.strip}`"],
   ]
 end

@@ -75,16 +97,18 @@ module Minitest
 end

 def relevant_issue_types?(hit)
+  # The intersection of which issue types are relevant based on the context and which issues
+  # match those issue types
   intersection_issue_types = relevant_issue_types & hit.issues.keys

   intersection_issue_types.any?
 end

-def
+def find_repeated_line_numbers_in(hit)
   repeated_line_numbers = []

   hit.lines.each_pair do |line_number, traces|
-    # If there aren't multiple traces for a line number, it's not a repeat
+    # If there aren't multiple traces for a line number, it's not a repeat
     next unless traces.size > 1

     repeated_line_numbers << Integer(line_number)
@@ -93,10 +117,6 @@ module Minitest
   repeated_line_numbers.sort
 end

-def repeated_line_numbers?(hit)
-  repeated_line_numbers(hit).any?
-end
-
 def pathname(hit)
   directory = hit.pathname.dirname.to_s.delete_prefix("#{Dir.pwd}/")
   filename = hit.pathname.basename.to_s
@@ -108,6 +128,11 @@ module Minitest
   ]
 end

+# Gets the list of line numbers for a given hit location (i.e. file) so they can be
+# displayed after the file name to show which lines were problematic
+# @param hit [Hit] the instance to extract line numbers from
+#
+# @return [Array<Symbol,String>] [description]
 def line_number_tokens_for_hit(hit)
   line_number_tokens = []

@@ -116,34 +141,50 @@ module Minitest
     line_numbers_for_issue_type = hit.issues.fetch(issue_type) { [] }

     # Build the list of tokens representing styled line numbers
-    line_numbers_for_issue_type.each do |line_number|
-
+    line_numbers_for_issue_type.uniq.sort.each do |line_number|
+      frequency = line_numbers_for_issue_type.count(line_number)
+
+      line_number_tokens += line_number_token(issue_type, line_number, frequency)
     end
   end

   line_number_tokens.compact
 end

-
-  [style, "#{line_number} "]
-end
-
-# Generates the line number tokens styled based on their error type
+# Builds a token representing a styled line number
 #
-# @param [
+# @param style [Symbol] the relevant display style for the issue
+# @param line_number [Integer] the affected line number
 #
-# @return [Array]
-
-
-
-
-
-
-
-
-  first_line_number <=> second_line_number
+# @return [Array<Symbol,Integer>] array token representing the line number and issue type
+def line_number_token(style, line_number, frequency)
+  if frequency > 1
+    [
+      [style, "#{line_number}"],
+      [:muted, "✕#{frequency} "]
+    ]
+  else
+    [[style, "#{line_number} "]]
   end
 end
+
+# # Sorts line number tokens so that line numbers are displayed in order regardless of their
+# # underlying issue type
+# #
+# # @param hit [Hit] the instance of the hit file details to build the heat map entry
+# #
+# # @return [Array] the arrays representing the line number tokens to display next to a file
+# #   name in the heat map. ex [[:error, 12], [:failure, 13]]
+# def sorted_line_number_list(hit)
+#   # Sort the collected group of line number hits so they're in order
+#   line_number_tokens_for_hit(hit).sort do |a, b|
+#     # Ensure the line numbers are integers for sorting (otherwise '100' comes before '12')
+#     first_line_number = Integer(a[1].strip)
+#     second_line_number = Integer(b[1].strip)
+
+#     first_line_number <=> second_line_number
+#   end
+# end
 end
 end
 end
data/lib/minitest/heat/output.rb
CHANGED
@@ -48,13 +48,20 @@ module Minitest
 newline

 # Issues start with the least critical and go up to the most critical so that the most
-#
-#
-#
+# pressing issues are displayed at the bottom of the report in order to reduce scrolling.
+#
+# This way, as you fix issues, the list gets shorter, and eventually the least critical
+# issues will be displayed without scrolling once more problematic issues are resolved.
 %i[slows painfuls skips failures brokens errors].each do |issue_category|
+  # Only show categories for the most pressing issues after the suite runs, otherwise,
+  # suppress them until the more critical issues are resolved.
   next unless show?(issue_category, results)

-  results.send(issue_category)
+  issues = results.send(issue_category)
+
+  issues
+    .sort_by { |issue| issue.locations.most_relevant.to_a }
+    .each { |issue| issue_details(issue) }
 end
 rescue StandardError => e
 message = "Sorry, but Minitest Heat couldn't display the details of any failures."
@@ -64,7 +71,7 @@ module Minitest
 def issue_details(issue)
   print_tokens Minitest::Heat::Output::Issue.new(issue).tokens
 rescue StandardError => e
-  message = "Sorry, but Minitest Heat couldn't display output for a failure."
+  message = "Sorry, but Minitest Heat couldn't display output for a specific failure."
   exception_guidance(message, e)
 end

@@ -89,6 +96,15 @@ module Minitest
 exception_guidance(message, e)
 end

+private
+
+# Displays some guidance related to exceptions generated by Minitest Heat in order to help
+# people get back on track (and ideally submit issues)
+# @param message [String] a slightly more specific explanation of which part of minitest-heat
+#   caused the failure
+# @param exception [Exception] the exception that caused the problem
+#
+# @return [void] displays the guidance to the console
 def exception_guidance(message, exception)
   newline
   puts "#{message} Disabling Minitest Heat can get you back on track until the problem can be fixed."
@@ -100,8 +116,6 @@ module Minitest
 newline
 end

-private
-
 def no_problems?(results)
   !results.problems?
 end
data/lib/minitest/heat/results.rb
CHANGED
@@ -30,8 +30,20 @@ module Minitest
 # For heat map purposes, only the project backtrace lines are interesting
 pathname, line_number = issue.locations.project.to_a

-#
-
+# A backtrace is only relevant for exception-generating issues (i.e. errors), not slows or skips.
+# However, while assertion failures won't have a backtrace, there can still be repeated line
+# numbers if the tests reference a shared method with an assertion in it. So in those cases,
+# the backtrace is simply the test definition
+backtrace = if issue.error?
+              # With errors, we have a backtrace
+              issue.locations.backtrace.project_locations
+            else
+              # With failures, the test definition is the most granular backtrace equivalent
+              location = issue.locations.test_definition
+              location.raw_container = issue.test_identifier
+
+              [location]
+            end

 @heat_map.add(pathname, line_number, issue.type, backtrace: backtrace)
 end
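The `@heat_map.add(pathname, line_number, issue.type, backtrace: backtrace)` call above is what feeds the per-file, per-line hit counts that the map output iterates over. A toy sketch of that accumulation idea (the class and method names here are illustrative assumptions, not the gem's actual `HeatMap` implementation):

```ruby
# Toy illustration of accumulating issues per file and line number, the way the
# heat map summary is described in the README above. Names are assumptions.
class ToyHeatMap
  def initialize
    # { "lib/example.rb" => { "12" => [:error, :error], "30" => [:failure] } }
    @hits = Hash.new { |files, path| files[path] = Hash.new { |lines, number| lines[number] = [] } }
  end

  def add(pathname, line_number, issue_type)
    @hits[pathname.to_s][line_number.to_s] << issue_type
  end

  # Line numbers that appear more than once are the "hot" spots worth expanding
  def repeated_line_numbers(pathname)
    @hits[pathname.to_s].select { |_line, issues| issues.size > 1 }.keys
  end
end

map = ToyHeatMap.new
map.add('lib/example.rb', 12, :error)
map.add('lib/example.rb', 12, :error)
map.add('lib/example.rb', 30, :failure)
map.repeated_line_numbers('lib/example.rb') # => ["12"]
```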
data/lib/minitest/heat_reporter.rb
CHANGED
@@ -96,7 +96,7 @@ module Minitest
 # The list of individual issues and their associated details
 output.issues_list(results)

-# Display a short summary of the total issue counts
+# Display a short summary of the total issue counts for each category as well as performance
 # details for the test suite as a whole
 output.compact_summary(results, timer)

metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: minitest-heat
 version: !ruby/object:Gem::Version
-  version: 0.0.11
+  version: 1.0.0
 platform: ruby
 authors:
 - Garrett Dimon
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2021-
+date: 2021-12-01 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: minitest
@@ -141,6 +141,12 @@ files:
 - Rakefile
 - bin/console
 - bin/setup
+- examples/exceptions.png
+- examples/failures.png
+- examples/map.png
+- examples/markers.png
+- examples/skips.png
+- examples/slows.png
 - lib/minitest/heat.rb
 - lib/minitest/heat/backtrace.rb
 - lib/minitest/heat/backtrace/line_parser.rb