minitest-bisect 1.4.1 → 1.5.0
- checksums.yaml +4 -4
- checksums.yaml.gz.sig +0 -0
- data.tar.gz.sig +0 -0
- data/History.rdoc +11 -0
- data/README.rdoc +54 -38
- data/example-many/helper.rb +1 -1
- data/lib/minitest/bisect.rb +5 -3
- data/test/minitest/test_bisect.rb +2 -2
- metadata +5 -5
- metadata.gz.sig +0 -0
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 895bdb11c49734d1219e519a88532a03221bd0b2ae25431cc3e35d552716f841
+  data.tar.gz: 661d13d6f64de3a029fec5b68d7b89a3843e5e628fe9f53191e20981ed5bd806
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a2d962388ce5dafc12591a3e5362d3e0be98b2e46f1bb6ce46a0b7897836ec517c4ad9f4ed07f159311cbfbd792f148fd6e8acfb88b8fec95ac770b47cce0fd5
+  data.tar.gz: 5b896b2f11f3a01e99106abde712ad9784d55e4b974a66d453331acfb09f0e68d2c41973dd643d3827866d5a649d756c2d7388226b5f669164a544ca7b52d74c

checksums.yaml.gz.sig
CHANGED
Binary file

data.tar.gz.sig
CHANGED
Binary file

data/History.rdoc
CHANGED
@@ -1,3 +1,14 @@
+=== 1.5.0 / 2019-06-06
+
+* 2 minor enhancements:
+
+  * Print out the culprit methods once done.
+  * Default to using -Itest:lib like minitest-sprint.
+
+* 1 bug fix:
+
+  * Remove the server arguments from the final repro output.
+
 === 1.4.1 / 2019-05-26

 * 2 bug fixes:

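For context on the "-Itest:lib" default noted in the 1.5.0 changelog: the child ruby process that re-runs your tests gets both test/ and lib/ on its load path, the same layout minitest-sprint assumes. The snippet below is only an illustrative sketch, not code from the gem, showing how such a flag maps onto load-path entries:

  # Illustrative sketch only; not taken from minitest-bisect itself.
  # Shows how a "-Itest:lib" flag corresponds to load-path entries.
  rb_flags = %w[-Itest:lib]

  load_paths = rb_flags
    .grep(/\A-I/)                                   # keep only -I flags
    .flat_map { |f| f.sub(/\A-I/, "").split(":") }  # "-Itest:lib" => ["test", "lib"]

  p load_paths # => ["test", "lib"]

  # Roughly the same effect as running:  ruby -Itest:lib some_test.rb
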
data/README.rdoc
CHANGED
@@ -30,13 +30,37 @@ Let's say you have a bunch of test files and they fail sometimes, but
 not consistently. You get a run that fails, so you record the
 randomization seed that minitest displays at the top of every run.

+=== Normally, it passes:
+
+% minitest example
+Run options: --seed 42
+
+# Running:
+
+..............................................................................
+..............................................................................
+..............................................................................
+..............................................................................
+..............................................................................
+..............................................................................
+..............................................................................
+..............................................................................
+..............................................................................
+..............................................................................
+....................
+
+Finished in 200.341744s, 3.6371 runs/s, 3.4230 assertions/s.
+
+800 runs, 798 assertions, 0 failures, 0 errors, 0 skips
+
 === Original Failure:

-
+But someone sees the failure either locally or on the CI. They record
+the output and get the randomization seed that causes the test
+ordering bug:

-$
-
-Run options: --seed 3911
+$ minitest example --seed 314
+Run options: --seed 314

 # Running:

@@ -73,58 +97,50 @@ same order and reproduce every time.
 minitest_bisect will first minimize the number of files, then it will
 turn around and minimize the number of methods.

-
-reproducing...
-
-# of culprit
-# of culprit
-# of culprit
-# of culprit
-# of culprit
+% minitest_bisect example --seed 314
+reproducing... in 203.83 sec
+verifying... in 0.37 sec
+# of culprit methods: 64 in 16.65 sec
+# of culprit methods: 32 in 8.53 sec
+# of culprit methods: 32 in 8.52 sec
+# of culprit methods: 16 in 4.46 sec
+# of culprit methods: 16 in 4.44 sec
+# of culprit methods: 8 in 2.41 sec
+# of culprit methods: 4 in 1.40 sec
+# of culprit methods: 2 in 0.89 sec
+# of culprit methods: 2 in 0.89 sec
+# of culprit methods: 1 in 0.62 sec
+# of culprit methods: 1 in 0.63 sec

-Minimal
+Minimal methods found in 11 steps:

-
+Culprit methods: ["TestBad1#test_bad1_1"]

-
-reproduced
-# of culprit methods: 64
-# of culprit methods: 64
-# of culprit methods: 32
-# of culprit methods: 16
-# of culprit methods: 8
-# of culprit methods: 8
-# of culprit methods: 4
-# of culprit methods: 2
-# of culprit methods: 2
-# of culprit methods: 1
-
-Minimal methods found in 10 steps:
-
-ruby -Ilib -e 'require "./example/test_bad1.rb" ; require "./example/test_bad4.rb"' -- --seed 3911 -s 48222 -n '/^(?:TestBad1\#test_bad1_1|TestBad4\#test_bad4_4)$/'
+ruby -Itest:lib -e 'require "./example/test_bad1.rb" ; require "./example/test_bad2.rb" ; require "./example/test_bad3.rb" ; require "./example/test_bad4.rb" ; require "./example/test_bad5.rb" ; require "./example/test_bad6.rb" ; require "./example/test_bad7.rb" ; require "./example/test_bad8.rb"' -- --seed 314 -n "/^(?:TestBad1#(?:test_bad1_1)|TestBad4#(?:test_bad4_4))$/"

 Final reproduction:

-Run options: --seed
+Run options: --seed 314 -n "/^(?:TestBad1#(?:test_bad1_1)|TestBad4#(?:test_bad4_4))$/"

 # Running:

 .F

-Finished in 0.
+Finished in 0.512999s, 3.8986 runs/s, 1.9493 assertions/s.

 1) Failure:
 TestBad4#test_bad4_4 [/Users/ryan/Work/p4/zss/src/minitest-bisect/dev/example/helper.rb:16]:
 muahahaha order dependency bug!

-2 runs, 1 assertions, 1 failures, 0 errors, 0 skips
-
 Voila! This reduced it from 800 tests across 8 files down to 2 tests
-across 2 files. Note how we went from a 200 second test run to a 0.5
-second test run. Debugging that will be much
+across 2 files. Note how we went from a ~200 second test run to a ~0.5
+second test run. Debugging that will be 400x faster and that much
+easier.
+
+=== What Now?

-
-determine what side-effects (or lack thereof) are causing your test
+Now, it is now up to you to look at the source of both of those tests
+to determine what side-effects (or lack thereof) are causing your test
 failure when run in this specific order.

 This happens in a single run. Depending on how many files / tests you

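The repro commands in the README diff above lean on minitest's -n/--name option, which accepts a /regexp/ matched against Class#method names. As a standalone illustration only (not part of the gem; the extra test names are assumed for the example), you can preview what such a filter would select:

  # Standalone illustration of the method filter used in the repro command.
  filter = /^(?:TestBad1#(?:test_bad1_1)|TestBad4#(?:test_bad4_4))$/

  names = %w[
    TestBad1#test_bad1_1
    TestBad1#test_bad1_2
    TestBad4#test_bad4_3
    TestBad4#test_bad4_4
  ]

  p names.grep(filter) # => ["TestBad1#test_bad1_1", "TestBad4#test_bad4_4"]
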
data/example-many/helper.rb
CHANGED
data/lib/minitest/bisect.rb
CHANGED
@@ -5,7 +5,7 @@ require "rbconfig"
 require "path_expander"

 class Minitest::Bisect
-  VERSION = "1.4.1"
+  VERSION = "1.5.0"

   class PathExpander < ::PathExpander
     TEST_GLOB = "**/{test_*,*_test,spec_*,*_spec}.rb" # :nodoc:

@@ -14,7 +14,7 @@ class Minitest::Bisect

     def initialize args = ARGV # :nodoc:
       super args, TEST_GLOB
-      self.rb_flags = []
+      self.rb_flags = %w[-Itest:lib]
     end

     ##

@@ -162,8 +162,10 @@ class Minitest::Bisect
     puts
     puts "Minimal methods found in #{count} steps:"
     puts
+    puts "Culprit methods: %p" % [found]
+    puts
     cmd = build_methods_cmd cmd, found, bad
-    puts cmd
+    puts cmd.sub(/--server \d+/, "")
     puts
     cmd
   end

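The cmd.sub(/--server \d+/, "") line above is what implements the changelog entry about removing the server arguments from the final repro output: the printed command no longer carries the internal --server argument that only the bisection run itself needs. A minimal sketch of the same substitution, using a made-up command string rather than the one build_methods_cmd actually produces:

  # Made-up command string for illustration; the real one comes from
  # build_methods_cmd inside Minitest::Bisect.
  cmd = "ruby -Itest:lib -e 'require \"./example/test_bad1.rb\"' -- --server 48222 --seed 314"

  # Same substitution as in the diff: drop the internal "--server NNNN"
  # argument so the printed repro can be run directly.
  puts cmd.sub(/--server \d+/, "")
  # => ruby -Itest:lib -e 'require "./example/test_bad1.rb"' --  --seed 314
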
data/test/minitest/test_bisect.rb
CHANGED
@@ -220,7 +220,7 @@ class TestMinitest::TestBisect::TestPathExpander < Minitest::Test
     expander = mtbpe.new args

     assert_kind_of PathExpander, expander
-    assert_equal [], expander.rb_flags
+    assert_equal %w[-Itest:lib], expander.rb_flags
     assert_same mtbpe::TEST_GLOB, expander.glob
   end

@@ -230,7 +230,7 @@ class TestMinitest::TestBisect::TestPathExpander < Minitest::Test
     expander = Minitest::Bisect::PathExpander.new args

     exp_files = %w[1 2 3 4 5 6]
-    exp_flags = %w[-Iblah -d -w]
+    exp_flags = %w[-Itest:lib -Iblah -d -w]

     files = expander.process_flags(args)

metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: minitest-bisect
 version: !ruby/object:Gem::Version
-  version: 1.4.1
+  version: 1.5.0
 platform: ruby
 authors:
 - Ryan Davis

@@ -29,7 +29,7 @@ cert_chain:
   Em82dBUFsipwMLCYj39kcyHWAxyl6Ae1Cn9r/ItVBCxoeFdrHjfavnrIEoXUt4bU
   UfBugfLD19bu3nvL+zTAGx/U
   -----END CERTIFICATE-----
-date: 2019-
+date: 2019-06-06 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: minitest-server

@@ -99,14 +99,14 @@ dependencies:
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '3.
+        version: '3.18'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '3.
+        version: '3.18'
 description: |-
   Hunting down random test failures can be very very difficult,
   sometimes impossible, but minitest-bisect makes it easy.

@@ -178,7 +178,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.0.
+rubygems_version: 3.0.3
 signing_key:
 specification_version: 4
 summary: Hunting down random test failures can be very very difficult, sometimes impossible,

metadata.gz.sig
CHANGED
Binary file