Saikuro 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/README +142 -0
- data/bin/saikuro +94 -0
- data/lib/saikuro.rb +1178 -0
- data/lib/saikuro/usage.rb +39 -0
- data/tests/large_example.rb +70 -0
- data/tests/samples.rb +315 -0
- metadata +52 -0
data/README
ADDED
@@ -0,0 +1,142 @@
Version 0.2

Saikuro:
Saikuro is a Ruby cyclomatic complexity analyzer. When given Ruby
source code Saikuro will generate a report listing the cyclomatic
complexity of each method found. In addition, Saikuro counts the
number of lines per method and can generate a listing of the number of
tokens on each line of code.

License:
Saikuro uses the BSD license.

Installation:
Option 1: Using setup.rb
 * login as root
 * run "ruby setup.rb all"

Option 2: The manual way
Saikuro is a single Ruby file that is executable. You can run it where
you unpacked it or you can move it to your preferred location such as
"/usr/local/bin" or "~/bin".

Note:
Ruby 1.8.5 has a bug in ri_options that will prevent Saikuro from
running. If you are using 1.8.5 please apply this patch:
http://www.ruby-lang.org/cgi-bin/cvsweb.cgi/ruby/lib/rdoc/ri/ri_options.rb.diff?r1=1.2.2.13;r2=1.2.2.14


Usage:
Saikuro is a command line program.
Running "saikuro -h" will output a usage statement describing all
the various arguments you can pass to it.

"saikuro -c -p tests/samples.rb"

The above command is a simple example that generates a cyclomatic
complexity report on the samples.rb file, using the default filter,
warning and error settings. The report is saved in the current
directory.


A more detailed example is
"saikuro -c -t -i tests -y 0 -w 11 -e 16 -o out/"

This will analyze all Ruby files found in the "tests/" directory.
Saikuro will generate a token count report and a cyclomatic complexity
report in the "out" directory. The "-y 0" option turns off filtering
and thus shows the complexity of all methods. The "-w 11" option marks
all methods with a complexity of 11 or higher with a warning. Finally,
"-e 16" flags all methods with a complexity of 16 or higher with an
error.


About Cyclomatic Complexity:

The following document, written by the author of the cyclomatic
complexity metric, provides a very good and detailed description:

NIST Special Publication 500-235
Structured Testing: A Testing Methodology Using the Cyclomatic
Complexity Metric

By Arthur H. Watson and Thomas J. McCabe
HTML
http://hissa.nist.gov/HHRFdata/Artifacts/ITLdoc/235/title.htm
PDF
http://www.mccabe.com/iq_research_nist.htm


How and what Saikuro counts to calculate the cyclomatic complexity:

Saikuro uses the Simplified Complexity Calculation, which is just
adding up the number of branch points in a method.

Each method starts with a complexity of 1, because there is at least
one path through the code. Then each conditional or looping operator
(if, unless, while, until, for, elsif, when) adds one point to the
complexity. Each "when" in a case statement adds one point. Also, each
"rescue" statement adds one.

Saikuro also regards blocks as an addition to a method's complexity,
because in many cases a block does add a path that may be traversed.
For example, invoking the "each" method of an array with a block will
only traverse the given block if the array is not empty. Thus, if you
want to find the basis set to get 100% coverage of your code, a block
should add one point to the method's complexity. It is not yet clear,
however, how much accuracy is lost through this measurement, as normal
Ruby code uses blocks quite heavily and new paths are not necessarily
introduced by every block.
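For illustration only (this snippet is not part of Saikuro's sources;
the method and the predicates it calls are invented), the Simplified
Complexity Calculation described above would report a complexity of 5
for the following method:

  def categorize(items)          # a method starts at 1
    items.each do |item|         # block passed to each: +1
      if item.valid?             # if: +1
        puts "ok"
      elsif item.retryable?      # elsif: +1
        puts "retry"
      else                       # else adds nothing
        puts "bad"
      end
    end
  rescue StandardError           # rescue: +1
    puts "failed"
  end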

In addition, the short-circuiting "and" operators (&& and "and")
currently do not contribute to a method's complexity, although
McCabe's paper listed above suggests doing so.


# Example for "and" operator handling:

# Starting values for case 1 and 2
x = false
y = 15
r, q = nil

# case 1
puts "W" if ((r = x) && (q = y))
puts r # => false
puts q # => nil

# case 2
puts "W" if ((q = y) && (r = x))
puts r # => false
puts q # => 15

Case 1 illustrates why "and" operators should add to a method's
complexity: because the result of (r = x) is false, the if statement
stops and returns false without evaluating the (q = y) branch. Thus,
if total coverage of the source code is desired, one point should be
added to the method's complexity.

So why is it not added?
Mainly because we have not gotten around to it. We are wondering if
this would increase the noise more than it should.


Tests:
In the tests directory is a sample file that has examples of the
various possible cases that we examined, with the expected cyclomatic
complexity result documented. If you find mistakes or missing tests
please report them.

Contact:
Saikuro is written by
Zev Blut (zb at ubit dot com)

Acknowledgments:
Thanks to Elbert Corpuz for writing the CSS for the HTML output!

Other metric tools for Ruby:
Ryan Davis has an abc metric program as an example in his ParseTree
product: http://www.zenspider.com/ZSS/Products/ParseTree/

The PMD project has a tool called CPD that can scan Ruby source code
looking for source duplication: http://pmd.sourceforge.net/
data/bin/saikuro
ADDED
@@ -0,0 +1,94 @@
#!/usr/bin/env ruby
# $Id: saikuro 39 2008-06-21 05:35:07Z zev $
# Version 0.2
# == Usage
#
# saikuro [ -h ] [-o output_directory] [-f type] [ -c, -t ]
#         [ -y, -w, -e, -k, -s, -d number ] ( -p file | -i directory )
#
# == Help
#
# -o, --output_directory (directory) : A directory to output the results in.
# The current directory is used if this option is not passed.
#
# -h, --help : This help message.
#
# -f, --formater (html | text) : The format to output the results in.
# The default is html.
#
# -c, --cyclo : Compute the cyclomatic complexity of the input.
#
# -t, --token : Count the number of tokens per line of the input.
#
# -y, --filter_cyclo (number) : Filter the output to only include methods
# whose cyclomatic complexity is greater than the passed number.
#
# -w, --warn_cyclo (number) : Highlight with a warning methods whose
# cyclomatic complexity is greater than or equal to the passed number.
#
# -e, --error_cyclo (number) : Highlight with an error methods whose
# cyclomatic complexity is greater than or equal to the passed number.
#
# -k, --filter_token (number) : Filter the output to only include lines
# whose token count is greater than the passed number.
#
# -s, --warn_token (number) : Highlight with a warning lines whose
# token count is greater than or equal to the passed number.
#
# -d, --error_token (number) : Highlight with an error lines whose
# token count is greater than or equal to the passed number.
#
# -p, --parse_file (file) : A file to use as input.
#
# -i, --input_directory (directory) : All Ruby files found recursively
# inside the directory are passed as input.

# == License
# Saikuro uses the BSD license.
#
# Copyright (c) 2005, Ubiquitous Business Technology (http://ubit.com)
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
#   notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above
#   copyright notice, this list of conditions and the following
#   disclaimer in the documentation and/or other materials provided
#   with the distribution.
#
# * Neither the name of Ubiquitous Business Technology nor the names
#   of its contributors may be used to endorse or promote products
#   derived from this software without specific prior written
#   permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# == Author
# Zev Blut (zb@ubit.com)

require 'saikuro'

SaikuroCMDLineRunner.new.run
data/lib/saikuro.rb
ADDED
@@ -0,0 +1,1178 @@
# $Id: saikuro.rb 39 2008-06-21 05:35:07Z zev $

# Saikuro uses the BSD license.
#
# Copyright (c) 2005, Ubiquitous Business Technology (http://ubit.com)
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
#   notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above
#   copyright notice, this list of conditions and the following
#   disclaimer in the documentation and/or other materials provided
#   with the distribution.
#
# * Neither the name of Ubiquitous Business Technology nor the names
#   of its contributors may be used to endorse or promote products
#   derived from this software without specific prior written
#   permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# == Author
# Zev Blut (zb@ubit.com)

require 'irb/ruby-lex'
require 'yaml'

# States to watch for:
# once in a def, get the token after the space, because it may also
# be something like + or << for operator overloading.

# Counts the number of tokens in each line.
class TokenCounter
  include RubyToken

  attr_reader :current_file

  def initialize
    @files = Hash.new
    @tokens_per_line = Hash.new(0)
    @current_file = ""
  end

  # Mark the file to associate with the token count.
  def set_current_file(file)
    @current_file = file
    @tokens_per_line = Hash.new(0)
    @files[@current_file] = @tokens_per_line
  end

  # Iterate through all tracked files, passing the token counts
  # to the provided formater.
  def list_tokens_per_line(formater)
    formater.start_count(@files.size)
    @files.each do |fname, tok_per_line|
      formater.start_file(fname)
      tok_per_line.sort.each do |line, num|
        formater.line_token_count(line, num)
      end
      formater.end_file
    end
  end

  # Count the token for the passed line.
  def count_token(line_no, token)
    case token
    when TkSPACE, TkNL, TkRD_COMMENT
      # Do not count these as tokens.
    when TkCOMMENT
      # Ignore this only for comments in a statement?
      # Ignore TkCOLON, TkCOLON2 and operators like "." etc.?
    when TkRBRACK, TkRPAREN, TkRBRACE
      # Ignore the closing of an array/index/hash/paren.
      # The opening is counted, but no more.
      # Thus [], () and {} are each counted as 1 token, not 2.
    else
      # May want to filter out comments...
      @tokens_per_line[line_no] += 1
    end
  end

end
# Main class and structure used to compute the
# cyclomatic complexity of Ruby programs.
class ParseState
  include RubyToken
  attr_accessor :name, :children, :complexity, :parent, :lines

  @@top_state = nil
  def ParseState.make_top_state()
    @@top_state = ParseState.new(nil)
    @@top_state.name = "__top__"
    @@top_state
  end

  @@token_counter = TokenCounter.new
  def ParseState.set_token_counter(counter)
    @@token_counter = counter
  end
  def ParseState.get_token_counter
    @@token_counter
  end

  def initialize(lexer, parent = nil)
    @name = ""
    @children = Array.new
    @complexity = 0
    @parent = parent
    @lexer = lexer
    @run = true
    # To catch one-line def statements, we always have one line.
    @lines = 0
    @last_token_line_and_char = Array.new
  end

  def top_state?
    self == @@top_state
  end

  def lexer=(lexer)
    @run = true
    @lexer = lexer
  end

  def make_state(type, parent = nil)
    cstate = type.new(@lexer, self)
    parent.children << cstate
    cstate
  end

  def calc_complexity
    complexity = @complexity
    children.each do |child|
      complexity += child.calc_complexity
    end
    complexity
  end

  def calc_lines
    lines = @lines
    children.each do |child|
      lines += child.calc_lines
    end
    lines
  end

  def compute_state(formater)
    if top_state?
      compute_state_for_global(formater)
    end

    @children.each do |s|
      s.compute_state(formater)
    end
  end

  def compute_state_for_global(formater)
    global_def, @children = @children.partition do |s|
      !s.kind_of?(ParseClass)
    end
    return if global_def.empty?
    gx = global_def.inject(0) { |c,s| s.calc_complexity }
    gl = global_def.inject(0) { |c,s| s.calc_lines }
    formater.start_class_compute_state("Global", "", gx, gl)
    global_def.each do |s|
      s.compute_state(formater)
    end
    formater.end_class_compute_state("")
  end

  # Count the tokens parsed if true, else ignore them.
  def count_tokens?
    true
  end

  def parse
    while @run do
      tok = @lexer.token
      @run = false if tok.nil?
      if lexer_loop?(tok)
        STDERR.puts "Lexer loop at line : #{@lexer.line_no} char #{@lexer.char_no}."
        @run = false
      end
      @last_token_line_and_char << [@lexer.line_no.to_i, @lexer.char_no.to_i, tok]
      if $VERBOSE
        puts "DEBUG: #{@lexer.line_no} #{tok.class}:#{tok.name if tok.respond_to?(:name)}"
      end
      @@token_counter.count_token(@lexer.line_no, tok) if count_tokens?
      parse_token(tok)
    end
  end

  # Ruby-Lexer can go into a loop if the file does not end with a newline.
  def lexer_loop?(token)
    return false if @last_token_line_and_char.empty?
    loop_flag = false
    last = @last_token_line_and_char.last
    line = last[0]
    char = last[1]
    ltok = last[2]

    if ( (line == @lexer.line_no.to_i) &&
         (char == @lexer.char_no.to_i) &&
         (ltok.class == token.class) )
      # We are potentially in a loop
      if @last_token_line_and_char.size >= 3
        loop_flag = true
      end
    else
      # Not in a loop, so clear the stack
      @last_token_line_and_char = Array.new
    end

    loop_flag
  end

  def do_begin_token(token)
    make_state(EndableParseState, self)
  end

  def do_class_token(token)
    make_state(ParseClass, self)
  end

  def do_module_token(token)
    make_state(ParseModule, self)
  end

  def do_def_token(token)
    make_state(ParseDef, self)
  end

  def do_constant_token(token)
    nil
  end

  def do_identifier_token(token)
    if (token.name == "__END__" && token.char_no.to_i == 0)
      # The Ruby code has stopped and the rest is data, so cease parsing.
      @run = false
    end
    nil
  end

  def do_right_brace_token(token)
    nil
  end

  def do_end_token(token)
    end_debug
    nil
  end

  def do_block_token(token)
    make_state(ParseBlock, self)
  end

  def do_conditional_token(token)
    make_state(ParseCond, self)
  end

  def do_conditional_do_control_token(token)
    make_state(ParseDoCond, self)
  end

  def do_case_token(token)
    make_state(EndableParseState, self)
  end

  def do_one_line_conditional_token(token)
    # This is an if with no end
    @complexity += 1
    #STDOUT.puts "got IF_MOD: #{self.to_yaml}" if $VERBOSE
    #if state.type != "class" && state.type != "def" && state.type != "cond"
    #STDOUT.puts "Changing IF_MOD Parent" if $VERBOSE
    #state = state.parent
    #@run = false
    nil
  end

  def do_else_token(token)
    STDOUT.puts "Ignored/Unknown Token:#{token.class}" if $VERBOSE
    nil
  end

  def do_comment_token(token)
    make_state(ParseComment, self)
  end

  def do_symbol_token(token)
    make_state(ParseSymbol, self)
  end

  def parse_token(token)
    state = nil
    case token
    when TkCLASS
      state = do_class_token(token)
    when TkMODULE
      state = do_module_token(token)
    when TkDEF
      state = do_def_token(token)
    when TkCONSTANT
      # Nothing to do with a constant at top level?
      state = do_constant_token(token)
    when TkIDENTIFIER, TkFID
      # Nothing to do at top level?
      state = do_identifier_token(token)
    when TkRBRACE
      # Nothing to do at top level
      state = do_right_brace_token(token)
    when TkEND
      state = do_end_token(token)
      # At top level this might be an error...
    when TkDO, TkfLBRACE
      state = do_block_token(token)
    when TkIF, TkUNLESS
      state = do_conditional_token(token)
    when TkWHILE, TkUNTIL, TkFOR
      state = do_conditional_do_control_token(token)
    when TkELSIF #,TkELSE
      @complexity += 1
    when TkELSE
      # Else does not increase complexity
    when TkCASE
      state = do_case_token(token)
    when TkWHEN
      @complexity += 1
    when TkBEGIN
      state = do_begin_token(token)
    when TkRESCUE
      # Maybe this should add complexity and not begin
      @complexity += 1
    when TkIF_MOD, TkUNLESS_MOD, TkUNTIL_MOD, TkWHILE_MOD, TkQUESTION
      state = do_one_line_conditional_token(token)
    when TkNL
      @lines += 1
    when TkRETURN
      # Early returns do not increase complexity, as the condition that
      # calls the return is the one that increases it.
    when TkCOMMENT
      state = do_comment_token(token)
    when TkSYMBEG
      state = do_symbol_token(token)
    when TkError
      STDOUT.puts "Lexer received an error for line #{@lexer.line_no} char #{@lexer.char_no}"
    else
      state = do_else_token(token)
    end
    state.parse if state
  end

  def end_debug
    STDOUT.puts "got an end: #{@name} in #{self.class.name}" if $VERBOSE
    if @parent.nil?
      STDOUT.puts "DEBUG: Line #{@lexer.line_no}"
      STDOUT.puts "DEBUG: #{@name}; #{self.class}"
      # to_yaml can cause an infinite loop?
      #STDOUT.puts "TOP: #{@@top_state.to_yaml}"
      #STDOUT.puts "TOP: #{@@top_state.inspect}"

      # This may not be an error?
      #exit 1
    end
  end

end

# Read and consume tokens in comments until a new line.
class ParseComment < ParseState

  # While in a comment state do not count the tokens.
  def count_tokens?
    false
  end

  def parse_token(token)
    if token.is_a?(TkNL)
      @lines += 1
      @run = false
    end
  end
end

class ParseSymbol < ParseState
  def initialize(lexer, parent = nil)
    super
    STDOUT.puts "STARTING SYMBOL" if $VERBOSE
  end

  def parse_token(token)
    STDOUT.puts "Symbol's token is #{token.class}" if $VERBOSE
    # Consume the next token and stop
    @run = false
    nil
  end
end

class EndableParseState < ParseState
  def initialize(lexer, parent = nil)
    super(lexer, parent)
    STDOUT.puts "Starting #{self.class}" if $VERBOSE
  end

  def do_end_token(token)
    end_debug
    @run = false
    nil
  end
end

class ParseClass < EndableParseState
  def initialize(lexer, parent = nil)
    super(lexer, parent)
    @type_name = "Class"
  end

  def do_constant_token(token)
    @name = token.name if @name.empty?
    nil
  end

  def compute_state(formater)
    # Separate the Module and Class children out
    cnm_children, @children = @children.partition do |child|
      child.kind_of?(ParseClass)
    end

    formater.start_class_compute_state(@type_name, @name, self.calc_complexity, self.calc_lines)
    super(formater)
    formater.end_class_compute_state(@name)

    cnm_children.each do |child|
      child.name = @name + "::" + child.name
      child.compute_state(formater)
    end
  end
end

class ParseModule < ParseClass
  def initialize(lexer, parent = nil)
    super(lexer, parent)
    @type_name = "Module"
  end
end

class ParseDef < EndableParseState

  def initialize(lexer, parent = nil)
    super(lexer, parent)
    @complexity = 1
    @looking_for_name = true
    @first_space = true
  end

  # This way I don't need to list all possible overload
  # tokens.
  def create_def_name(token)
    case token
    when TkSPACE
      # Mark the first space so we can stop at the next space
      if @first_space
        @first_space = false
      else
        @looking_for_name = false
      end
    when TkNL, TkLPAREN, TkfLPAREN, TkSEMICOLON
      # We can also stop at a new line or left parenthesis
      @looking_for_name = false
    when TkDOT
      @name << "."
    when TkCOLON2
      @name << "::"
    when TkASSIGN
      @name << "="
    when TkfLBRACK
      @name << "["
    when TkRBRACK
      @name << "]"
    else
      begin
        @name << token.name.to_s
      rescue Exception => err
        # What is this?
        STDOUT.puts @@token_counter.current_file
        STDOUT.puts @name
        STDOUT.puts token.inspect
        STDOUT.puts err.message
        exit 1
      end
    end
  end

  def parse_token(token)
    if @looking_for_name
      create_def_name(token)
    end
    super(token)
  end

  def compute_state(formater)
    formater.def_compute_state(@name, self.calc_complexity, self.calc_lines)
    super(formater)
  end
end

class ParseCond < EndableParseState
  def initialize(lexer, parent = nil)
    super(lexer, parent)
    @complexity = 1
  end
end

class ParseDoCond < ParseCond
  def initialize(lexer, parent = nil)
    super(lexer, parent)
    @looking_for_new_line = true
  end

  # Need to consume the "do" that can appear at the
  # end of these control structures.
  def parse_token(token)
    if @looking_for_new_line
      if token.is_a?(TkDO)
        nil
      else
        if token.is_a?(TkNL)
          @looking_for_new_line = false
        end
        super(token)
      end
    else
      super(token)
    end
  end

end

class ParseBlock < EndableParseState

  def initialize(lexer, parent = nil)
    super(lexer, parent)
    @complexity = 1
    @lbraces = Array.new
  end

  # Because the token for a block and hash right brace is the same,
  # we need to track the hash left braces to determine when an end is
  # encountered.
  def parse_token(token)
    if token.is_a?(TkLBRACE)
      @lbraces.push(true)
    elsif token.is_a?(TkRBRACE)
      if @lbraces.empty?
        do_right_brace_token(token)
        #do_end_token(token)
      else
        @lbraces.pop
      end
    else
      super(token)
    end
  end

  def do_right_brace_token(token)
    # We are done? What about a hash in a block :-/
    @run = false
    nil
  end

end

# ------------ END Analyzer logic ------------------------------------
class Filter
  attr_accessor :limit, :error, :warn

  def initialize(limit = -1, error = 11, warn = 8)
    @limit = limit
    @error = error
    @warn = warn
  end

  def ignore?(count)
    count < @limit
  end

  def warn?(count)
    count >= @warn
  end

  def error?(count)
    count >= @error
  end

end


class BaseFormater
  attr_accessor :warnings, :errors, :current

  def initialize(out, filter = nil)
    @out = out
    @filter = filter
    reset_data
  end

  def warn_error?(num, marker)
    klass = ""

    if @filter.error?(num)
      klass = ' class="error"'
      @errors << [@current, marker, num]
    elsif @filter.warn?(num)
      klass = ' class="warning"'
      @warnings << [@current, marker, num]
    end

    klass
  end

  def reset_data
    @warnings = Array.new
    @errors = Array.new
    @current = ""
  end

end

class TokenCounterFormater < BaseFormater

  def start(new_out = nil)
    reset_data
    @out = new_out if new_out
    @out.puts "Token Count"
  end

  def start_count(number_of_files)
    @out.puts "Counting tokens for #{number_of_files} files."
  end

  def start_file(file_name)
    @current = file_name
    @out.puts "File:#{file_name}"
  end

  def line_token_count(line_number, number_of_tokens)
    return if @filter.ignore?(number_of_tokens)
    warn_error?(number_of_tokens, line_number)
    @out.puts "Line:#{line_number} ; Tokens : #{number_of_tokens}"
  end

  def end_file
    @out.puts ""
  end

  def end_count
  end

  def end
  end

end

module HTMLStyleSheet
  def HTMLStyleSheet.style_sheet
    out = StringIO.new

    out.puts "<style>"
    out.puts 'body {'
    out.puts ' margin: 20px;'
    out.puts ' padding: 0;'
    out.puts ' font-size: 12px;'
    out.puts ' font-family: bitstream vera sans, verdana, arial, sans serif;'
    out.puts ' background-color: #efefef;'
    out.puts '}'
    out.puts ''
    out.puts 'table { '
    out.puts ' border-collapse: collapse;'
    out.puts ' /*border-spacing: 0;*/'
    out.puts ' border: 1px solid #666;'
    out.puts ' background-color: #fff;'
    out.puts ' margin-bottom: 20px;'
    out.puts '}'
    out.puts ''
    out.puts 'table, th, th+th, td, td+td {'
    out.puts ' border: 1px solid #ccc;'
    out.puts '}'
    out.puts ''
    out.puts 'table th {'
    out.puts ' font-size: 12px;'
    out.puts ' color: #fc0;'
    out.puts ' padding: 4px 0;'
    out.puts ' background-color: #336;'
    out.puts '}'
    out.puts ''
    out.puts 'th, td {'
    out.puts ' padding: 4px 10px;'
    out.puts '}'
    out.puts ''
    out.puts 'td { '
    out.puts ' font-size: 13px;'
    out.puts '}'
    out.puts ''
    out.puts '.class_name {'
    out.puts ' font-size: 17px;'
    out.puts ' margin: 20px 0 0;'
    out.puts '}'
    out.puts ''
    out.puts '.class_complexity {'
    out.puts 'margin: 0 auto;'
    out.puts '}'
    out.puts ''
    out.puts '.class_complexity>.class_complexity {'
    out.puts ' margin: 0;'
    out.puts '}'
    out.puts ''
    out.puts '.class_total_complexity, .class_total_lines, .start_token_count, .file_count {'
    out.puts ' font-size: 13px;'
    out.puts ' font-weight: bold;'
    out.puts '}'
    out.puts ''
    out.puts '.class_total_complexity, .class_total_lines {'
    out.puts ' color: #c00;'
    out.puts '}'
    out.puts ''
    out.puts '.start_token_count, .file_count {'
    out.puts ' color: #333;'
    out.puts '}'
    out.puts ''
    out.puts '.warning {'
    out.puts ' background-color: yellow;'
    out.puts '}'
    out.puts ''
    out.puts '.error {'
    out.puts ' background-color: #f00;'
    out.puts '}'
    out.puts "</style>"

    out.string
  end

  def style_sheet
    HTMLStyleSheet.style_sheet
  end
end


class HTMLTokenCounterFormater < TokenCounterFormater
  include HTMLStyleSheet

  def start(new_out = nil)
    reset_data
    @out = new_out if new_out
    @out.puts "<html>"
    @out.puts style_sheet
    @out.puts "<body>"
  end

  def start_count(number_of_files)
    @out.puts "<div class=\"start_token_count\">"
    @out.puts "Number of files: #{number_of_files}"
    @out.puts "</div>"
  end

  def start_file(file_name)
    @current = file_name
    @out.puts "<div class=\"file_count\">"
    @out.puts "<p class=\"file_name\">"
    @out.puts "File: #{file_name}"
    @out.puts "</p>"
    @out.puts "<table width=\"100%\" border=\"1\">"
    @out.puts "<tr><th>Line</th><th>Tokens</th></tr>"
  end

  def line_token_count(line_number, number_of_tokens)
    return if @filter.ignore?(number_of_tokens)
    klass = warn_error?(number_of_tokens, line_number)
    @out.puts "<tr><td>#{line_number}</td><td#{klass}>#{number_of_tokens}</td></tr>"
  end

  def end_file
    @out.puts "</table>"
  end

  def end_count
  end

  def end
    @out.puts "</body>"
    @out.puts "</html>"
  end
end

class ParseStateFormater < BaseFormater

  def start(new_out = nil)
    reset_data
    @out = new_out if new_out
  end

  def end
  end

  def start_class_compute_state(type_name, name, complexity, lines)
    @current = name
    @out.puts "-- START #{name} --"
    @out.puts "Type:#{type_name} Name:#{name} Complexity:#{complexity} Lines:#{lines}"
  end

  def end_class_compute_state(name)
    @out.puts "-- END #{name} --"
  end

  def def_compute_state(name, complexity, lines)
    return if @filter.ignore?(complexity)
    warn_error?(complexity, name)
    @out.puts "Type:Def Name:#{name} Complexity:#{complexity} Lines:#{lines}"
  end

end


class StateHTMLComplexityFormater < ParseStateFormater
  include HTMLStyleSheet

  def start(new_out = nil)
    reset_data
    @out = new_out if new_out
    @out.puts "<html><head><title>Cyclometric Complexity</title></head>"
    @out.puts style_sheet
    @out.puts "<body>"
  end

  def end
    @out.puts "</body>"
    @out.puts "</html>"
  end

  def start_class_compute_state(type_name, name, complexity, lines)
    @current = name
    @out.puts "<div class=\"class_complexity\">"
    @out.puts "<h2 class=\"class_name\">#{type_name} : #{name}</h2>"
    @out.puts "<div class=\"class_total_complexity\">Total Complexity: #{complexity}</div>"
    @out.puts "<div class=\"class_total_lines\">Total Lines: #{lines}</div>"
    @out.puts "<table width=\"100%\" border=\"1\">"
    @out.puts "<tr><th>Method</th><th>Complexity</th><th># Lines</th></tr>"
  end

  def end_class_compute_state(name)
    @out.puts "</table>"
    @out.puts "</div>"
  end

  def def_compute_state(name, complexity, lines)
    return if @filter.ignore?(complexity)
    klass = warn_error?(complexity, name)
    @out.puts "<tr><td>#{name}</td><td#{klass}>#{complexity}</td><td>#{lines}</td></tr>"
  end

end


module ResultIndexGenerator
  def summarize_errors_and_warnings(enw, header)
    return "" if enw.empty?
    f = StringIO.new
    erval = Hash.new { |h,k| h[k] = Array.new }
    wval = Hash.new { |h,k| h[k] = Array.new }

    enw.each do |fname, warnings, errors|
      errors.each do |c,m,v|
        erval[v] << [fname, c, m]
      end
      warnings.each do |c,m,v|
        wval[v] << [fname, c, m]
      end
    end

    f.puts "<h2 class=\"class_name\">Errors and Warnings</h2>"
    f.puts "<table width=\"100%\" border=\"1\">"
    f.puts header

    f.puts print_summary_table_rows(erval, "error")
    f.puts print_summary_table_rows(wval, "warning")
    f.puts "</table>"

    f.string
  end

  def print_summary_table_rows(ewvals, klass_type)
    f = StringIO.new
    ewvals.sort { |a,b| b <=> a }.each do |v, vals|
      vals.sort.each do |fname, c, m|
        f.puts "<tr><td><a href=\"./#{fname}\">#{c}</a></td><td>#{m}</td>"
        f.puts "<td class=\"#{klass_type}\">#{v}</td></tr>"
      end
    end
    f.string
  end

  def list_analyzed_files(files)
    f = StringIO.new
    f.puts "<h2 class=\"class_name\">Analyzed Files</h2>"
    f.puts "<ul>"
    files.each do |fname, warnings, errors|
      readname = fname.split("_")[0...-1].join("_")
      f.puts "<li>"
      f.puts "<p class=\"file_name\"><a href=\"./#{fname}\">#{readname}</a>"
      f.puts "</li>"
    end
    f.puts "</ul>"
    f.string
  end

  def write_index(files, filename, title, header)
    return if files.empty?

    File.open(filename, "w") do |f|
      f.puts "<html><head><title>#{title}</title></head>"
      f.puts "#{HTMLStyleSheet.style_sheet}\n<body>"
      f.puts "<h1>#{title}</h1>"

      enw = files.find_all { |fn,w,e| (!w.empty? || !e.empty?) }

      f.puts summarize_errors_and_warnings(enw, header)

      f.puts "<hr/>"
      f.puts list_analyzed_files(files)
      f.puts "</body></html>"
    end
  end

  def write_cyclo_index(files, output_dir)
    header = "<tr><th>Class</th><th>Method</th><th>Complexity</th></tr>"
    write_index(files,
                "#{output_dir}/index_cyclo.html",
                "Index for cyclomatic complexity",
                header)
  end

  def write_token_index(files, output_dir)
    header = "<tr><th>File</th><th>Line #</th><th>Tokens</th></tr>"
    write_index(files,
                "#{output_dir}/index_token.html",
                "Index for tokens per line",
                header)
  end

end
module Saikuro

  # Returns the path without the file
  def Saikuro.seperate_file_from_path(path)
    res = path.split("/")
    if res.size == 1
      ""
    else
      res[0..res.size - 2].join("/")
    end
  end

  def Saikuro.analyze(files, state_formater, token_count_formater, output_dir)

    idx_states = Array.new
    idx_tokens = Array.new

    # parse each file
    files.each do |file|
      begin
        STDOUT.puts "Parsing #{file}"
        # create top state
        top = ParseState.make_top_state
        STDOUT.puts "TOP State made" if $VERBOSE
        token_counter = TokenCounter.new
        ParseState.set_token_counter(token_counter)
        token_counter.set_current_file(file)

        STDOUT.puts "Setting up Lexer" if $VERBOSE
        lexer = RubyLex.new
        # Turn this off, because it aborts when a syntax error is found...
        lexer.exception_on_syntax_error = false
        lexer.set_input(File.new(file, "r"))
        top.lexer = lexer
        STDOUT.puts "Parsing" if $VERBOSE
        top.parse

        fdir_path = seperate_file_from_path(file)
        FileUtils.makedirs("#{output_dir}/#{fdir_path}")

        if state_formater
          # output results
          state_io = StringIO.new
          state_formater.start(state_io)
          top.compute_state(state_formater)
          state_formater.end

          fname = "#{file}_cyclo.html"
          puts "writing cyclomatic #{file}" if $VERBOSE
          File.open("#{output_dir}/#{fname}", "w") do |f|
            f.write state_io.string
          end
          idx_states << [
            fname,
            state_formater.warnings.dup,
            state_formater.errors.dup,
          ]
        end

        if token_count_formater
          token_io = StringIO.new
          token_count_formater.start(token_io)
          token_counter.list_tokens_per_line(token_count_formater)
          token_count_formater.end

          fname = "#{file}_token.html"
          puts "writing token #{file}" if $VERBOSE
          File.open("#{output_dir}/#{fname}", "w") do |f|
            f.write token_io.string
          end
          idx_tokens << [
            fname,
            token_count_formater.warnings.dup,
            token_count_formater.errors.dup,
          ]
        end

      rescue RubyLex::SyntaxError => synerr
        STDOUT.puts "Lexer error for file #{file} on line #{lexer.line_no}"
        STDOUT.puts "#{synerr.class.name} : #{synerr.message}"
      rescue StandardError => err
        STDOUT.puts "Error while parsing file : #{file}"
        STDOUT.puts err.class, err.message, err.backtrace.join("\n")
      rescue Exception => ex
        STDOUT.puts "Error while parsing file : #{file}"
        STDOUT.puts ex.class, ex.message, ex.backtrace.join("\n")
      end
    end

    [idx_states, idx_tokens]
  end
end


# Really ugly command line runner stuff here for now

class SaikuroCMDLineRunner
  require 'stringio'
  require 'getoptlong'
  require 'fileutils'
  require 'find'

  # Modification to RDoc.usage that allows main_program_file to be set
  # for RDoc.usage
  require 'saikuro/usage'
  RDoc::main_program_file = __FILE__

  include ResultIndexGenerator

  def get_ruby_files(path)
    files = Array.new
    Find.find(path) do |f|
      if !FileTest.directory?(f)
        if f =~ /rb$/
          files << f
        end
      end
    end
    files
  end

  def run
    files = Array.new
    output_dir = "./"
    formater = "html"
    state_filter = Filter.new(5)
    token_filter = Filter.new(10, 25, 50)
    comp_state = comp_token = false
    begin
      opt = GetoptLong.new(
        ["-o", "--output_directory", GetoptLong::REQUIRED_ARGUMENT],
        ["-h", "--help", GetoptLong::NO_ARGUMENT],
        ["-f", "--formater", GetoptLong::REQUIRED_ARGUMENT],
        ["-c", "--cyclo", GetoptLong::NO_ARGUMENT],
        ["-t", "--token", GetoptLong::NO_ARGUMENT],
        ["-y", "--filter_cyclo", GetoptLong::REQUIRED_ARGUMENT],
        ["-k", "--filter_token", GetoptLong::REQUIRED_ARGUMENT],
        ["-w", "--warn_cyclo", GetoptLong::REQUIRED_ARGUMENT],
        ["-s", "--warn_token", GetoptLong::REQUIRED_ARGUMENT],
        ["-e", "--error_cyclo", GetoptLong::REQUIRED_ARGUMENT],
        ["-d", "--error_token", GetoptLong::REQUIRED_ARGUMENT],
        ["-p", "--parse_file", GetoptLong::REQUIRED_ARGUMENT],
        ["-i", "--input_directory", GetoptLong::REQUIRED_ARGUMENT],
        ["-v", "--verbose", GetoptLong::NO_ARGUMENT]
      )

      opt.each do |arg, val|
        case arg
        when "-o"
          output_dir = val
        when "-h"
          RDoc.usage('help')
        when "-f"
          formater = val
        when "-c"
          comp_state = true
        when "-t"
          comp_token = true
        when "-k"
          token_filter.limit = val.to_i
        when "-s"
          token_filter.warn = val.to_i
        when "-d"
          token_filter.error = val.to_i
        when "-y"
          state_filter.limit = val.to_i
        when "-w"
          state_filter.warn = val.to_i
        when "-e"
          state_filter.error = val.to_i
        when "-p"
          files << val
        when "-i"
          files.concat(get_ruby_files(val))
        when "-v"
          STDOUT.puts "Verbose mode on"
          $VERBOSE = true
        end
      end
      RDoc.usage if !comp_state && !comp_token
    rescue => err
      RDoc.usage
    end

    if formater =~ /html/i
      state_formater = StateHTMLComplexityFormater.new(STDOUT, state_filter)
      token_count_formater = HTMLTokenCounterFormater.new(STDOUT, token_filter)
    else
      state_formater = ParseStateFormater.new(STDOUT, state_filter)
      token_count_formater = TokenCounterFormater.new(STDOUT, token_filter)
    end

    state_formater = nil if !comp_state
    token_count_formater = nil if !comp_token

    idx_states, idx_tokens = Saikuro.analyze(files,
                                             state_formater,
                                             token_count_formater,
                                             output_dir)

    write_cyclo_index(idx_states, output_dir)
    write_token_index(idx_tokens, output_dir)
  end

end
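
For completeness, a minimal sketch of driving the library directly instead
of through bin/saikuro, based on the classes defined above (the input file
list and output directory are placeholders; this snippet is not part of the
package):

  require 'saikuro'

  files      = ['lib/foo.rb']   # hypothetical input files
  output_dir = 'reports'

  # Same defaults SaikuroCMDLineRunner#run uses for the cyclomatic report.
  state_filter   = Filter.new(5)
  state_formater = StateHTMLComplexityFormater.new(STDOUT, state_filter)

  # Passing nil as the token formater skips the token count report.
  idx_states, idx_tokens = Saikuro.analyze(files, state_formater, nil, output_dir)

This mirrors what SaikuroCMDLineRunner#run does after option parsing; the
runner additionally writes index_cyclo.html and index_token.html through the
ResultIndexGenerator module.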