tokn 0.0.6 → 0.0.8

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 75d7206c10817a05dbbd9e5ff36b25f20ef5ad18
- data.tar.gz: d1d84da2be85c05b567b476b53471ac2a7c6a04f
+ metadata.gz: a3803d745ad9198112d78c381a11c18bd7367789
+ data.tar.gz: 8f6ad5c1ce11a76adb2f677b84dc730581c5d82f
  SHA512:
- metadata.gz: 47a078062419175ef5fd1d66a46bae95b423c1b69cd8d27a28875589b13abdcdd5474c81e70a9b09f11a27b302f9f9038302fd1ad9121bf81d7db42cf20cea96
- data.tar.gz: 2bd489cb22b1b68d0a365fd70c337e7cc3bfd1872d501a88e735fe23fe3780a77415262be265263cd906eb7cf32afea7f5e1c8cb849c6feef44567ef80b4717b
+ metadata.gz: a9c7a28d748333fd518476dfa3f8f8b859c425b796d76a7df9e4a09a29b784eaab677da7fbcf220bd3d2c16ccc1451c44f86613a97a78225611628eaf4f4d5d3
+ data.tar.gz: 0fbfeb04dc8ff896533193225b4c8715a5abb1f3ab5e2d691721fd17083bc99357500de2b86fc3e1170711d105ad1ab3c184e7b537521b4aabcd4d36ed4005fc
data/CHANGELOG.txt ADDED
@@ -0,0 +1,10 @@
+ 2013-03-07
+ * Version 0.0.6 released
+
+ 2013-04-03
+ * Version 0.0.7 released
+ * Added CHANGELOG.txt
+ * Added link to sample_dfa.pdf
+
+ * Version 0.0.8
+ * Fixed problem with link URLs
data/README.txt CHANGED
@@ -1,46 +1,47 @@
- 'tokn' : A ruby gem for constructing DFAs and using them to tokenize text files.
+ # @markup markdown
+
+ tokn
+ =======
+ A ruby gem for constructing DFAs and using them to tokenize text files.

  Written and (c) by Jeff Sember, March 2013.
- ================================================================================
+


  Description of the problem
- ================================================================================
+ ------

  For a simple example, suppose a particular text file is designed to have
  tokens of the following three types:

- 1) 'a' followed by any number of 'a' or 'b'
- 2) 'b' followed by either 'aa' or zero or more 'b'
- 3) 'bbb'
-
- We will also allow an additional token, one or more spaces, to separate them.
- These four token types can be written using regular expressions as:
+ 'a' followed by any number of 'a' or 'b'
+ 'b' followed by either 'aa' or zero or more 'b'
+ 'bbb'
+
+ We will also allow an additional token, one or more spaces, to separate them.
+ These four token types can be written using regular expressions as:

- sep: \s
- tku: a(a|b)*
- tkv: b(aa|b*)
- tkw: bbb
+ sep: \s
+ tku: a(a|b)*
+ tkv: b(aa|b*)
+ tkw: bbb

  We've given each token definition a name (to the left of the colon).

  Now suppose your program needs to read a text file and interpret the tokens it
- finds there. This can be done using the DFA (deterministic finite state automaton)
- shown in figures/sample_dfa.pdf. The token extraction algorithm is as follows:
+ finds there.
+
+ This can be done using the DFA (deterministic finite state automaton) found at: <http://www.cs.ubc.ca/~jpsember/sample_dfa.pdf>
+
+
+ The token extraction algorithm has these steps:

- 1) Begin at the start state, S0.
- 2) Look at the next character in the source (text) file. If there is an arrow (edge)
- labelled with that character, follow it to another state (it may lead to the
- same state; that's okay), and advance the cursor to the next character in
- the source file.
- 3) If there's an arrow labelled with a negative number N, don't follow the edge,
- but instead remember the lowest (i.e., most negative) such N found.
- 4) Continue steps 2 and 3 until no further progress is possible.
- 5) At this point, N indicates the name of the token found. The cursor should be
- restored to the point it was at when that N was recorded. The token's text
- consists of the characters from the starting cursor position to that point.
- 6) If no N value was recorded, then the source text doesn't match any of the tokens,
- which is considered an error.
+ 1. Begin at the start state, S0.
+ 1. Look at the next character in the source (text) file. If there is an arrow (edge) labelled with that character, follow it to another state (it may lead to the same state; that's okay), and advance the cursor to the next character in the source file.
+ 1. If there's an arrow labelled with a negative number N, don't follow the edge, but instead remember the lowest (i.e., most negative) such N found.
+ 1. Continue steps 2 and 3 until no further progress is possible.
+ 1. At this point, N indicates the name of the token found. The cursor should be restored to the point it was at when that N was recorded. The token's text consists of the characters from the starting cursor position to that point.
+ 1. If no N value was recorded, then the source text doesn't match any of the tokens, which is considered an error.


  The tokn module provides a simple and efficient way to perform this tokenization process.
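As an illustration of the six steps above, here is a minimal longest-match scanner in plain Ruby. This is a hypothetical sketch, not tokn's actual data structures: the DFA for the four token types is hand-built, state names are invented, and nonnegative token ids stand in for the negative N edge labels described in the README.

```ruby
# Longest-match ("maximal munch") scan over a hand-built DFA, sketching the
# extraction steps from the README. Illustrative only; tokn's real DFA
# representation differs. Token ids replace the negative N labels: where
# "bbb" is accepted by both tkv and tkw, tkw wins, mirroring the priority rule.

TOKEN_NAMES = %w[sep tku tkv tkw]

# DFA for:  sep: \s   tku: a(a|b)*   tkv: b(aa|b*)   tkw: bbb
EDGES = {
  s0:  { " " => :sp, "a" => :au, "b" => :b1 },
  sp:  {},                          # one space               -> sep
  au:  { "a" => :au, "b" => :au },  # 'a' then any 'a'/'b'    -> tku
  b1:  { "a" => :ba, "b" => :b2 },  # 'b'                     -> tkv
  ba:  { "a" => :baa },             # 'ba' (not accepting)
  baa: {},                          # 'baa'                   -> tkv
  b2:  { "b" => :b3 },              # 'bb'                    -> tkv
  b3:  { "b" => :b4 },              # 'bbb'                   -> tkw
  b4:  { "b" => :b4 },              # 'bbbb...'               -> tkv
}
ACCEPT = { sp: 0, au: 1, b1: 2, baa: 2, b2: 2, b3: 3, b4: 2 }

def tokenize(text)
  tokens = []
  pos = 0
  while pos < text.length
    state = :s0
    i = pos
    best = nil  # [token id, cursor] at the last accepting state seen
    loop do
      best = [ACCEPT[state], i] if ACCEPT.key?(state)   # steps 3 and 4
      break if i >= text.length
      nxt = EDGES[state][text[i]]
      break unless nxt                                  # no further progress
      state = nxt                                       # step 2: follow edge
      i += 1
    end
    raise "no token matches at offset #{pos}" unless best  # step 6
    id, stop = best
    tokens << [TOKEN_NAMES[id], text[pos...stop]]          # step 5
    pos = stop
  end
  tokens
end
```

For example, `tokenize("bbb ab baa")` yields tkw, sep, tku, sep, tkv: "bbb" resolves to tkw even though tkv also matches it, while "bbbb" falls back to tkv because the longer match wins.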
@@ -50,7 +51,7 @@ Such DFAs are very useful, and can be used by non-Ruby programs as well.


  Using the tokn module in a Ruby program
- ===================================================================================
+ ------

  There are three object classes of interest: DFA, Tokenizer, and Token. A DFA is
  compiled once from a script containing token definitions (e.g, "tku: b(aa|b*) ..."),
@@ -64,31 +65,26 @@ Here's some example Ruby code showing how a text file "source.txt" can be split
  tokens. We'll assume there's a text file "tokendefs.txt" that contains the
  definitions shown earlier.

- require "Tokenizer"
-
- include Tokn # Avoids having to prefix things with 'Tokn::'
-
- dfa = DFA.from_script(readTextFile("tokendefs.txt"))
-
- t = Tokenizer.new(dfa, readTextFile("source.txt"))
-
- while t.hasNext
-
- k = t.read # read token
-
- if t.typeOf(k) == "sep" # skip 'whitespace'
- next
- end
-
- ...do something with the token ...
- end
-
+ require "Tokenizer"
+ include Tokn
+
+ dfa = DFA.from_script(readTextFile("tokendefs.txt"))
+ t = Tokenizer.new(dfa, readTextFile("source.txt"))
+
+ while t.hasNext
+ k = t.read # read token
+ next if t.typeOf(k) == "sep" # skip 'whitespace'
+
+ ...do something with the token ...
+
+ end
+
  If later, another file needs to be tokenized, a new Tokenizer object can be
  constructed and given the same dfa object as earlier.


  Using the tokn command line utilities
- ===================================================================================
+ ------

  The module has two utility scripts: tokncompile, and toknprocess. These can be
  found in the bin/ directory.
@@ -110,62 +106,62 @@ of tokens from the source file to the standard output:

  This will produce the following output:

- WS 1 1 // Example source file that can be tokenized
-
- WS 2 1
-
- ID 3 1 speed
- WS 3 6
- ASSIGN 3 7 =
- WS 3 8
- INT 3 9 42
- WS 3 11
- WS 3 14 // speed of object
-
- WS 4 1
-
- ID 5 1 gravity
- WS 5 8
- ASSIGN 5 9 =
- WS 5 10
- DBL 5 11 -9.80
- WS 5 16
-
-
- ID 7 1 title
- WS 7 6
- ASSIGN 7 7 =
- WS 7 8
- LBL 7 9 'This is a string with \' an escaped delimiter'
- WS 7 56
-
-
- IF 9 1 if
- WS 9 3
- ID 9 4 gravity
- WS 9 11
- EQUIV 9 12 ==
- WS 9 14
- INT 9 15 12
- WS 9 17
- BROP 9 18 {
- WS 9 19
-
- DO 10 3 do
- WS 10 5
- ID 10 6 something
- WS 10 15
-
- BRCL 11 1 }
- WS 11 2
-
+ WS 1 1 // Example source file that can be tokenized
+
+ WS 2 1
+
+ ID 3 1 speed
+ WS 3 6
+ ASSIGN 3 7 =
+ WS 3 8
+ INT 3 9 42
+ WS 3 11
+ WS 3 14 // speed of object
+
+ WS 4 1
+
+ ID 5 1 gravity
+ WS 5 8
+ ASSIGN 5 9 =
+ WS 5 10
+ DBL 5 11 -9.80
+ WS 5 16
+
+
+ ID 7 1 title
+ WS 7 6
+ ASSIGN 7 7 =
+ WS 7 8
+ LBL 7 9 'This is a string with \' an escaped delimiter'
+ WS 7 56
+
+
+ IF 9 1 if
+ WS 9 3
+ ID 9 4 gravity
+ WS 9 11
+ EQUIV 9 12 ==
+ WS 9 14
+ INT 9 15 12
+ WS 9 17
+ BROP 9 18 {
+ WS 9 19
+
+ DO 10 3 do
+ WS 10 5
+ ID 10 6 something
+ WS 10 15
+
+ BRCL 11 1 }
+ WS 11 2
+
  The extra linefeeds are the result of a token containing a linefeed.


  FAQ
- ===================================================================================
+ --------

- 1) Why can't I just use Ruby's regular expressions for tokenizing text?
+ * Why can't I just use Ruby's regular expressions for tokenizing text?

  You could construct a regular expression describing each possible token, and use that
  to extract a token from the start of a string; you could then remove that token from the
@@ -178,7 +174,7 @@ implementations actually 'recognize' a richer class of tokens than the ones desc
  here. This extra power can come at a cost; in some pathological cases, the running time
  can become exponential.

- 2) Is tokn compatible with Unicode?
+ * Is tokn compatible with Unicode?

  The tokn tool is capable of extracting tokens made up of characters that have
  codes in the entire Unicode range: 0 through 0x10ffff (hex). In fact, the labels
@@ -186,8 +182,3 @@ on the DFA edges can be viewed as sets of any nonnegative integers (negative
  values are reserved for the token identifiers). Note however that the current implementation
  only reads Ruby characters from the input, which I believe are only 8 bits wide.

- 3) What do I do if I have some ideas for enhancing tokn, or want to point out some
- problems with it?
-
- Well, I can be reached as jpsember at gmail dot com.
-
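To make the FAQ's first answer concrete, here is a sketch of the regex-based alternative it argues against: one Ruby regexp with a named capture per token type. The names and helper below are illustrative assumptions, not tokn code.

```ruby
# The approach the FAQ discusses: a single alternation with one named group
# per token type. \G anchors each match attempt at the current scan position.
# Recovering the token type requires probing every group, and ordered
# alternation (first alternative wins) need not agree with a DFA's own
# priority rule.
TOKEN_RE = /\G(?:(?<sep>\s)|(?<tku>a(?:a|b)*)|(?<tkv>b(?:aa|b*))|(?<tkw>bbb))/

def regex_tokenize(text)
  tokens = []
  pos = 0
  while pos < text.length
    m = TOKEN_RE.match(text, pos) or raise "no token at offset #{pos}"
    name = m.names.find { |n| m[n] }  # scan the groups to identify the type
    tokens << [name, m[0]]
    pos = m.end(0)
  end
  tokens
end
```

Note how `regex_tokenize("bbb")` reports tkv rather than tkw: tkv's `b*` already consumes "bbb" before the tkw alternative is ever tried, which is one way this approach diverges from a DFA with explicit token priorities.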
data/README2.txt ADDED
@@ -0,0 +1,219 @@
+ # @markup rdoc
+
+ == 'tokn' : A ruby gem for constructing DFAs and using them to tokenize text files.
+
+ Written and (c) by Jeff Sember, March 2013.
+ ---
+
+
+ = Description of the problem
+
+ For a simple example, suppose a particular text file is designed to have
+ tokens of the following three types:
+
+ [] 'a' followed by any number of 'a' or 'b'
+ [] 'b' followed by either 'aa' or zero or more 'b'
+ [] 'bbb'
+
+ We will also allow an additional token, one or more spaces, to separate them.
+ These four token types can be written using regular expressions as:
+
+ sep: \s
+ tku: a(a|b)*
+ tkv: b(aa|b*)
+ tkw: bbb
+
+ We've given each token definition a name (to the left of the colon).
+
+ Now suppose your program needs to read a text file and interpret the tokens it
+ finds there. {This can be done using the DFA (deterministic finite state automaton)
+ shown here.}[link:sample_dfa.pdf]
+
+ The token extraction algorithm has these steps:
+
+ 1. Begin at the start state, S0.
+ 1. Look at the next character in the source (text) file. If there is an arrow (edge) labelled with that character, follow it to another state (it may lead to the same state; that's okay), and advance the cursor to the next character in the source file.
+ 1. If there's an arrow labelled with a negative number N, don't follow the edge, but instead remember the lowest (i.e., most negative) such N found.
+ 1. Continue steps 2 and 3 until no further progress is possible.
+ 1. At this point, N indicates the name of the token found. The cursor should be restored to the point it was at when that N was recorded. The token's text consists of the characters from the starting cursor position to that point.
+ 1. If no N value was recorded, then the source text doesn't match any of the tokens, which is considered an error.
+
+
+ The tokn module provides a simple and efficient way to perform this tokenization process.
+ Its major accomplishment is not just performing the above six steps, but rather that
+ it also can construct, from a set of token definitions, the DFA to be used in these steps.
+ Such DFAs are very useful, and can be used by non-Ruby programs as well.
+
+
+ = Using the tokn module in a Ruby program
+
+ There are three object classes of interest: DFA, Tokenizer, and Token. A DFA is
+ compiled once from a script containing token definitions (e.g, "tku: b(aa|b*) ..."),
+ and can then be stored (either in memory, or on disk as a JSON string) for later use.
+
+ When tokens need to be extracted from a source file (or simple string), a Tokenizer is
+ constructed. It requires both the DFA and the source file as input. Once this is done,
+ individual Token objects can be read from the Tokenizer.
+
+ Here's some example Ruby code showing how a text file "source.txt" can be split into
+ tokens. We'll assume there's a text file "tokendefs.txt" that contains the
+ definitions shown earlier.
+
+
+
+
+
+
+
+ require "Tokenizer"
+
+ include Tokn
+
+ dfa = DFA.from_script(readTextFile("tokendefs.txt"))
+
+ t = Tokenizer.new(dfa, readTextFile("source.txt"))
+
+ while t.hasNext
+
+ k = t.read
+
+ if t.typeOf(k) == "sep"
+ next
+ end
+
+ ...do something with the token ...
+ end
+
+
+
+ Here is the same example again, this time with explanatory comments:
+
+
+
+
+
+
+ require "Tokenizer"
+
+ include Tokn # Avoids having to prefix things with 'Tokn::'
+
+ dfa = DFA.from_script(readTextFile("tokendefs.txt"))
+
+ t = Tokenizer.new(dfa, readTextFile("source.txt"))
+
+ while t.hasNext
+
+ k = t.read # read token
+
+ if t.typeOf(k) == "sep" # skip 'whitespace'
+ next
+ end
+
+ ...do something with the token ...
+ end
+
+
+
+
+
+ If later, another file needs to be tokenized, a new Tokenizer object can be
+ constructed and given the same dfa object as earlier.
+
+
+ = Using the tokn command line utilities
+
+ The module has two utility scripts: tokncompile, and toknprocess. These can be
+ found in the bin/ directory.
+
+ The tokncompile script reads a token definition script from standard input, and
+ compiles it to a DFA. For example, if you are in the tokn/test/data directory, you can
+ type:
+
+ tokncompile < sampletokens.txt > compileddfa.txt
+
+ It will produce the JSON encoding of the appropriate DFA. For a description of how
+ this JSON string represents the DFA, see Dfa.rb.
+
+ The toknprocess script takes two arguments: the name of a file containing a
+ previously compiled DFA, and the name of a source file. It extracts the sequence
+ of tokens from the source file to the standard output:
+
+ toknprocess compileddfa.txt sampletext.txt
+
+ This will produce the following output:
+
+ WS 1 1 // Example source file that can be tokenized
+
+ WS 2 1
+
+ ID 3 1 speed
+ WS 3 6
+ ASSIGN 3 7 =
+ WS 3 8
+ INT 3 9 42
+ WS 3 11
+ WS 3 14 // speed of object
+
+ WS 4 1
+
+ ID 5 1 gravity
+ WS 5 8
+ ASSIGN 5 9 =
+ WS 5 10
+ DBL 5 11 -9.80
+ WS 5 16
+
+
+ ID 7 1 title
+ WS 7 6
+ ASSIGN 7 7 =
+ WS 7 8
+ LBL 7 9 'This is a string with \' an escaped delimiter'
+ WS 7 56
+
+
+ IF 9 1 if
+ WS 9 3
+ ID 9 4 gravity
+ WS 9 11
+ EQUIV 9 12 ==
+ WS 9 14
+ INT 9 15 12
+ WS 9 17
+ BROP 9 18 {
+ WS 9 19
+
+ DO 10 3 do
+ WS 10 5
+ ID 10 6 something
+ WS 10 15
+
+ BRCL 11 1 }
+ WS 11 2
+
+ The extra linefeeds are the result of a token containing a linefeed.
+
+
+ = FAQ
+
+ 1. Why can't I just use Ruby's regular expressions for tokenizing text?
+
+ You could construct a regular expression describing each possible token, and use that
+ to extract a token from the start of a string; you could then remove that token from the
+ string, and repeat. The trouble is that the regular expression has no easy way to indicate
+ which individual token's expression was matched. You would then (presumably) have to match
+ the returned token with each individual regular expression to identify the token type.
+
+ Another reason why standard regular expressions can be troublesome is that their
+ implementations actually 'recognize' a richer class of tokens than the ones described
+ here. This extra power can come at a cost; in some pathological cases, the running time
+ can become exponential.
+
+ 1. Is tokn compatible with Unicode?
+
+ The tokn tool is capable of extracting tokens made up of characters that have
+ codes in the entire Unicode range: 0 through 0x10ffff (hex). In fact, the labels
+ on the DFA edges can be viewed as sets of any nonnegative integers (negative
+ values are reserved for the token identifiers). Note however that the current implementation
+ only reads Ruby characters from the input, which I believe are only 8 bits wide.
+
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: tokn
  version: !ruby/object:Gem::Version
- version: 0.0.6
+ version: 0.0.8
  platform: ruby
  authors:
  - Jeff Sember
@@ -10,9 +10,9 @@ bindir: bin
  cert_chain: []
  date: 2013-03-07 00:00:00.000000000 Z
  dependencies: []
- description: Given a script containing token descriptions (each a regular expression),
- tokn compiles an automaton which it can then use to efficiently convert a text file
- to a sequence of those tokens.
+ description: "Given a script containing token descriptions (each a regular expression),
+ \ntokn compiles an automaton which it can then use to efficiently convert a \ntext
+ file to a sequence of those tokens.\n"
  email: jpsember@gmail.com
  executables:
  - tokncompile
@@ -33,16 +33,18 @@ files:
  - lib/tokn/tools.rb
  - bin/tokncompile
  - bin/toknprocess
+ - CHANGELOG.txt
  - README.txt
+ - README2.txt
  - test/Example1.rb
  - test/data/compileddfa.txt
  - test/data/sampletext.txt
  - test/data/sampletokens.txt
  - test/test.rb
  - test/testcmds
- - figures/sample_dfa.pdf
  homepage: http://www.cs.ubc.ca/~jpsember/
- licenses: []
+ licenses:
+ - mit
  metadata: {}
  post_install_message:
  rdoc_options: []
figures/sample_dfa.pdf DELETED
Binary file