sax-machine 0.1.0 → 0.2.0.rc1

@@ -0,0 +1,165 @@
1
+ <?xml version="1.0" encoding="UTF-8"?>
2
+ <?xml-stylesheet href="http://feeds.feedburner.com/~d/styles/atom10full.xsl" type="text/xsl" media="screen"?><?xml-stylesheet href="http://feeds.feedburner.com/~d/styles/itemcontent.css" type="text/css" media="screen"?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:thr="http://purl.org/syndication/thread/1.0" xmlns:feedburner="http://rssnamespace.org/feedburner/ext/1.0">
3
+ <title>Paul Dix Explains Nothing</title>
4
+
5
+ <link rel="alternate" type="text/html" href="http://www.pauldix.net/" />
6
+ <id>tag:typepad.com,2003:weblog-108605</id>
7
+ <updated>2008-09-04T16:07:19-04:00</updated>
8
+ <subtitle>Entrepreneurship, programming, software development, politics, NYC, and random thoughts.</subtitle>
9
+ <generator uri="http://www.typepad.com/">TypePad</generator>
10
+ <link rel="self" href="http://feeds.feedburner.com/PaulDixExplainsNothing" type="application/atom+xml" /><entry>
11
+ <title>Marshal data too short error with ActiveRecord</title>
12
+ <link rel="alternate" type="text/html" href="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~3/383536354/marshal-data-to.html" />
13
+ <link rel="replies" type="text/html" href="http://www.pauldix.net/2008/09/marshal-data-to.html" thr:count="2" thr:updated="2008-11-17T14:40:06-05:00" />
14
+ <id>tag:typepad.com,2003:post-55147740</id>
15
+ <published>2008-09-04T16:07:19-04:00</published>
16
+ <updated>2008-11-17T14:40:06-05:00</updated>
17
+ <summary>In my previous post about the speed of serializing data, I concluded that Marshal was the quickest way to get things done. So I set about using Marshal to store some data in an ActiveRecord object. Things worked great at...</summary>
18
+ <author>
19
+ <name>Paul Dix</name>
20
+ </author>
21
+ <category scheme="http://www.sixapart.com/ns/types#category" term="Tahiti" />
22
+
23
+
24
+ <content type="html" xml:lang="en-US" xml:base="http://www.pauldix.net/">
25
+ &lt;div xmlns="http://www.w3.org/1999/xhtml"&gt;&lt;p&gt;In my previous &lt;a href="http://www.pauldix.net/2008/08/serializing-dat.html"&gt;post about the speed of serializing data&lt;/a&gt;, I concluded that Marshal was the quickest way to get things done. So I set about using Marshal to store some data in an ActiveRecord object. Things worked great at first, but on some test data I got this error: marshal data too short. Luckily, &lt;a href="http://www.brynary.com/"&gt;Bryan Helmkamp&lt;/a&gt; had helpfully pointed out that there were sometimes problems with storing marshaled data in the database. He said it was best to base64 encode the marshal dump before storing.&lt;/p&gt;
26
+
27
+ &lt;p&gt;I was curious why it was working on some things and not others. It turns out that some types of data being marshaled were causing the error to pop up. Here's the test data I used in my specs:&lt;/p&gt;
28
+ &lt;pre&gt;{ :foo =&amp;gt; 3, :bar =&amp;gt; 2 } # hash with symbols for keys and integer values&lt;br /&gt;[3, 2.1, 4, 8]&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; # array with integer and float values&lt;/pre&gt;
29
+ &lt;p&gt;Everything worked when I switched the array values to all integers so it seems that floats were causing the problem. However, in the interest of keeping everything working regardless of data types, I base64 encoded before going into the database and decoded on the way out.&lt;/p&gt;
30
+
31
+ &lt;p&gt;I also ran the benchmarks again to determine what impact this would have on speed. Here are the results for 100 iterations on a 10k element array and a 10k element hash with and without base64 encode/decode:&lt;/p&gt;
32
+ &lt;pre&gt;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp; user&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp; system&amp;nbsp; &amp;nbsp;&amp;nbsp; total&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp; real&lt;br /&gt;array marshal&amp;nbsp; 0.200000&amp;nbsp; &amp;nbsp;0.010000&amp;nbsp; &amp;nbsp;0.210000 (&amp;nbsp; 0.214018) (without Base64)&lt;br /&gt;array marshal&amp;nbsp; 0.220000&amp;nbsp; &amp;nbsp;0.010000&amp;nbsp; &amp;nbsp;0.230000 (&amp;nbsp; 0.250260)&lt;br /&gt;&lt;br /&gt;hash marshal&amp;nbsp; &amp;nbsp;1.830000&amp;nbsp; &amp;nbsp;0.040000&amp;nbsp; &amp;nbsp;1.870000 (&amp;nbsp; 1.892874) (without Base64)&lt;br /&gt;hash marshal&amp;nbsp; &amp;nbsp;2.040000&amp;nbsp; &amp;nbsp;0.100000&amp;nbsp; &amp;nbsp;2.140000 (&amp;nbsp; 2.170405)&lt;/pre&gt;
33
+ &lt;p&gt;As you can see the difference in speed is pretty negligible. I assume that the error has to do with AR cleaning the stuff that gets inserted into the database, but I'm not really sure. In the end it's just easier to use Base64.encode64 when serializing data into a text field in ActiveRecord using Marshal.&lt;/p&gt;
34
+
35
+ &lt;p&gt;I've also read people posting about this error when using the database session store. I can only assume that it's because they were trying to store either way too much data in their session (too much for a regular text field) or they were storing float values or some other data type that would cause this to pop up. Hopefully this helps.&lt;/p&gt;&lt;/div&gt;
36
+ &lt;div class="feedflare"&gt;
37
+ &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=rWfWO"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=rWfWO" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=RaCqo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=RaCqo" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=1CBLo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=1CBLo" border="0"&gt;&lt;/img&gt;&lt;/a&gt;
38
+ &lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~4/383536354" height="1" width="1"/&gt;</content>
39
+
40
+
41
+ <feedburner:origLink>http://www.pauldix.net/2008/09/marshal-data-to.html</feedburner:origLink></entry>
42
+ <entry>
43
+ <title>Serializing data speed comparison: Marshal vs. JSON vs. Eval vs. YAML</title>
44
+ <link rel="alternate" type="text/html" href="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~3/376401099/serializing-dat.html" />
45
+ <link rel="replies" type="text/html" href="http://www.pauldix.net/2008/08/serializing-dat.html" thr:count="5" thr:updated="2008-10-14T01:26:31-04:00" />
46
+ <id>tag:typepad.com,2003:post-54766774</id>
47
+ <published>2008-08-27T14:31:41-04:00</published>
48
+ <updated>2008-10-14T01:26:31-04:00</updated>
49
+ <summary>Last night at the NYC Ruby hackfest, I got into a discussion about serializing data. Brian mentioned the Marshal library to me, which for some reason had completely escaped my attention until last night. He said it was wicked fast...</summary>
50
+ <author>
51
+ <name>Paul Dix</name>
52
+ </author>
53
+ <category scheme="http://www.sixapart.com/ns/types#category" term="Tahiti" />
54
+
55
+
56
+ <content type="html" xml:lang="en-US" xml:base="http://www.pauldix.net/">
57
+ &lt;div xmlns="http://www.w3.org/1999/xhtml"&gt;&lt;p&gt;Last night at the &lt;a href="http://nycruby.org"&gt;NYC Ruby hackfest&lt;/a&gt;, I got into a discussion about serializing data. Brian mentioned the Marshal library to me, which for some reason had completely escaped my attention until last night. He said it was wicked fast so we decided to run a quick benchmark comparison.&lt;/p&gt;
58
+ &lt;p&gt;The test data is designed to roughly approximate what my &lt;a href="http://www.pauldix.net/2008/08/storing-many-cl.html"&gt;stored classifier data&lt;/a&gt; will look like. The different methods we decided to benchmark were Marshal, json, eval, and yaml. With each one we took the in-memory object and serialized it and then read it back in. With eval we had to convert the object to ruby code to serialize it then run eval against that. Here are the results for 100 iterations on a 10k element array and a hash with 10k key/value pairs run on my Macbook Pro 2.4 GHz Core 2 Duo:&lt;/p&gt;
59
+ &lt;pre&gt;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; user&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;system&amp;nbsp; &amp;nbsp;&amp;nbsp; total&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp; real&lt;br /&gt;array marshal&amp;nbsp; 0.210000&amp;nbsp; &amp;nbsp;0.010000&amp;nbsp; &amp;nbsp;0.220000 (&amp;nbsp; 0.220701)&lt;br /&gt;array json&amp;nbsp; &amp;nbsp;&amp;nbsp; 2.180000&amp;nbsp; &amp;nbsp;0.050000&amp;nbsp; &amp;nbsp;2.230000 (&amp;nbsp; 2.288489)&lt;br /&gt;array eval&amp;nbsp; &amp;nbsp;&amp;nbsp; 2.090000&amp;nbsp; &amp;nbsp;0.060000&amp;nbsp; &amp;nbsp;2.150000 (&amp;nbsp; 2.240443)&lt;br /&gt;array yaml&amp;nbsp; &amp;nbsp; 26.650000&amp;nbsp; &amp;nbsp;0.350000&amp;nbsp; 27.000000 ( 27.810609)&lt;br /&gt;&lt;br /&gt;hash marshal&amp;nbsp; &amp;nbsp;2.000000&amp;nbsp; &amp;nbsp;0.050000&amp;nbsp; &amp;nbsp;2.050000 (&amp;nbsp; 2.114950)&lt;br /&gt;hash json&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;3.700000&amp;nbsp; &amp;nbsp;0.060000&amp;nbsp; &amp;nbsp;3.760000 (&amp;nbsp; 3.881716)&lt;br /&gt;hash eval&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;5.370000&amp;nbsp; &amp;nbsp;0.140000&amp;nbsp; &amp;nbsp;5.510000 (&amp;nbsp; 6.117947)&lt;br /&gt;hash yaml&amp;nbsp; &amp;nbsp;&amp;nbsp; 68.220000&amp;nbsp; &amp;nbsp;0.870000&amp;nbsp; 69.090000 ( 72.370784)&lt;/pre&gt;
60
+ &lt;p&gt;The order in which I tested them is pretty much the order in which they ranked for speed. Marshal was amazingly fast. JSON and eval came out roughly equal on the array with eval trailing quite a bit for the hash. Yaml was just slow as all hell. A note on the json: I used the 1.1.3 library which uses c to parse. I assume it would be quite a bit slower if I used the pure ruby implementation. Here's &lt;a href="http://gist.github.com/7549"&gt;a gist of the benchmark code&lt;/a&gt; if you're curious and want to run it yourself.&lt;/p&gt;
61
+
62
+
63
+
64
+ &lt;p&gt;If you're serializing user data, be super careful about using eval. It's probably best to avoid it completely. Finally, just for fun I took yaml out (it was too slow) and ran the benchmark again with 1k iterations:&lt;/p&gt;
65
+ &lt;pre&gt;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp; user&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp;system&amp;nbsp; &amp;nbsp;&amp;nbsp; total&amp;nbsp; &amp;nbsp;&amp;nbsp; &amp;nbsp; real&lt;br /&gt;array marshal&amp;nbsp; 2.080000&amp;nbsp; &amp;nbsp;0.110000&amp;nbsp; &amp;nbsp;2.190000 (&amp;nbsp; 2.242235)&lt;br /&gt;array json&amp;nbsp; &amp;nbsp; 21.860000&amp;nbsp; &amp;nbsp;0.500000&amp;nbsp; 22.360000 ( 23.052403)&lt;br /&gt;array eval&amp;nbsp; &amp;nbsp; 20.730000&amp;nbsp; &amp;nbsp;0.570000&amp;nbsp; 21.300000 ( 21.992454)&lt;br /&gt;&lt;br /&gt;hash marshal&amp;nbsp; 19.510000&amp;nbsp; &amp;nbsp;0.500000&amp;nbsp; 20.010000 ( 20.794111)&lt;br /&gt;hash json&amp;nbsp; &amp;nbsp;&amp;nbsp; 39.770000&amp;nbsp; &amp;nbsp;0.670000&amp;nbsp; 40.440000 ( 41.689297)&lt;br /&gt;hash eval&amp;nbsp; &amp;nbsp;&amp;nbsp; 51.410000&amp;nbsp; &amp;nbsp;1.290000&amp;nbsp; 52.700000 ( 54.155711)&lt;/pre&gt;&lt;/div&gt;
66
+ &lt;div class="feedflare"&gt;
67
+ &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=zombO"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=zombO" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=T3kqo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=T3kqo" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=aI6Oo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=aI6Oo" border="0"&gt;&lt;/img&gt;&lt;/a&gt;
68
+ &lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~4/376401099" height="1" width="1"/&gt;</content>
69
+
70
+
71
+ <feedburner:origLink>http://www.pauldix.net/2008/08/serializing-dat.html</feedburner:origLink></entry>
72
+ <entry>
73
+ <title>Gotcha with cache_fu and permalinks</title>
74
+ <link rel="alternate" type="text/html" href="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~3/369250462/gotcha-with-cac.html" />
75
+ <link rel="replies" type="text/html" href="http://www.pauldix.net/2008/08/gotcha-with-cac.html" thr:count="2" thr:updated="2008-11-20T13:58:38-05:00" />
76
+ <id>tag:typepad.com,2003:post-54411628</id>
77
+ <published>2008-08-19T14:26:24-04:00</published>
78
+ <updated>2008-11-20T13:58:38-05:00</updated>
79
+ <summary>This is an issue I had recently in a project with cache_fu. Models that I found and cached based on permalinks weren't expiring the cache correctly when getting updated. Here's an example scenario. Say you have a blog with posts....</summary>
80
+ <author>
81
+ <name>Paul Dix</name>
82
+ </author>
83
+ <category scheme="http://www.sixapart.com/ns/types#category" term="Ruby on Rails" />
84
+
85
+
86
+ <content type="html" xml:lang="en-US" xml:base="http://www.pauldix.net/">
87
+ &lt;div xmlns="http://www.w3.org/1999/xhtml"&gt;&lt;p&gt;This is an issue I had recently in a project with &lt;a href="http://errtheblog.com/posts/57-kickin-ass-w-cachefu"&gt;cache_fu&lt;/a&gt;. Models that I found and cached based on permalinks weren't expiring the cache correctly when getting updated. Here's an example scenario.&lt;/p&gt;
88
+
89
+ &lt;p&gt;Say you have a blog with posts. However, instead of using a url like http://paulscoolblog.com/posts/23 you want something that's more search engine friendly and readable for the user. So you use a permalink (maybe using the &lt;a href="http://github.com/github/permalink_fu/tree/master"&gt;permalink_fu plugin&lt;/a&gt;) that's auto-generated based on the title of the post. This post would have a url that looks something like http://paulscoolblog.com/posts/gotcha-with-cache_fu-and-permalinks.&lt;/p&gt;
90
+
91
+ &lt;p&gt;In your controller's show method you'd probably find the post like this:&lt;/p&gt;
92
+ &lt;pre&gt;@post = Post.find_by_permalink(params[:permalink])&lt;/pre&gt;
93
+ &lt;p&gt;However, you'd want to do the caching thing so you'd actually do this:&lt;/p&gt;
94
+ &lt;pre&gt;@post = Post.cached(:find_by_permalink, :with =&amp;gt; params[:permalink])&lt;/pre&gt;
95
+ &lt;p&gt;The problem that I ran into, which is probably obvious to anyone familiar with cache_fu, was that when updating the post, it wouldn't expire the cache. That part of the post model looks like this:&lt;/p&gt;
96
+ &lt;pre&gt;class Post &amp;lt; ActiveRecord::Base&lt;br /&gt;&amp;nbsp; before_save :expire_cache&lt;br /&gt;&amp;nbsp; ...&lt;br /&gt;end&lt;/pre&gt;
97
+ &lt;p&gt;Do you see it? The issue is that when expire_cache gets called on the object, it expires the key &lt;strong&gt;Post:23&lt;/strong&gt; from the cache (assuming 23 was the id of the post). However, when the post was cached using the cached(:find_by_permalink ...) method, it put the post object into the cache with a key of &lt;strong&gt;Post:find_by_permalink:gotcha-with-cache_fu-and-permalinks&lt;/strong&gt;.&lt;/p&gt;
98
+ &lt;p&gt;Luckily, it's a fairly simple fix. If you have a model that is commonly accessed through permalinks, just write your own cache expiry method that looks for both keys and expires them.&lt;/p&gt;&lt;/div&gt;
99
+ &lt;div class="feedflare"&gt;
100
+ &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=V1ojO"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=V1ojO" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=eu6Zo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=eu6Zo" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=ddUho"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=ddUho" border="0"&gt;&lt;/img&gt;&lt;/a&gt;
101
+ &lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~4/369250462" height="1" width="1"/&gt;</content>
102
+
103
+
104
+ <feedburner:origLink>http://www.pauldix.net/2008/08/gotcha-with-cac.html</feedburner:origLink></entry>
105
+ <entry>
106
+ <title>Non-greedy mode in regex</title>
107
+ <link rel="alternate" type="text/html" href="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~3/365673983/non-greedy-mode.html" />
108
+ <link rel="replies" type="text/html" href="http://www.pauldix.net/2008/08/non-greedy-mode.html" thr:count="0" />
109
+ <id>tag:typepad.com,2003:post-54227244</id>
110
+ <published>2008-08-15T09:32:11-04:00</published>
111
+ <updated>2008-08-27T09:33:15-04:00</updated>
112
+ <summary>I was writing a regular expression yesterday and this popped up. It's just a quick note about greedy vs. non-greedy mode in regular expression matching. Say I have a regular expression that looks something like this: /(\[.*\])/ In English that...</summary>
113
+ <author>
114
+ <name>Paul Dix</name>
115
+ </author>
116
+ <category scheme="http://www.sixapart.com/ns/types#category" term="Ruby" />
117
+
118
+
119
+ <content type="html" xml:lang="en-US" xml:base="http://www.pauldix.net/">&lt;p&gt;I was writing a regular expression yesterday and this popped up. It's just a quick note about greedy vs. non-greedy mode in regular expression matching. Say I have a regular expression that looks something like this:&lt;/p&gt;&#xD;
120
+ &lt;pre&gt;/(\[.*\])/&lt;/pre&gt;&#xD;
121
+ &lt;p&gt;In English that says something roughly like: find an opening bracket [ with 0 or more of any character followed by a closing bracket. The backslashes are to escape the brackets and the parenthesis specify grouping so we can later access that matched text.&lt;/p&gt;&#xD;
122
+ &#xD;
123
+ &lt;p&gt;The greedy mode comes up with the 0 or more characters part of the match (the .* part of the expression). The default mode of greedy means that the parser will gobble up as many characters as it can and match the very last closing bracket. So if you have text like this:&lt;/p&gt;&#xD;
124
+ &#xD;
125
+ &lt;pre&gt;a = [:foo, :bar]&lt;br&gt;b = [:hello, :world]&lt;/pre&gt;&#xD;
126
+ &lt;p&gt;The resulting grouped match would be this:&lt;/p&gt;&#xD;
127
+ &lt;pre&gt;[:foo, :bar]&lt;br&gt;b = [:hello, :world]&lt;/pre&gt;&#xD;
128
+ &lt;p&gt;If you just wanted the [:foo, :bar] part, the solution is to parse in non-greedy mode. This means that it will match on the first closing bracket it sees. The modified regular expression looks like this:&lt;/p&gt;&#xD;
129
+ &lt;pre&gt;/(\[.*?\])/&lt;/pre&gt;&#xD;
130
+ &lt;p&gt;I love the regular expression engine in Ruby. It's one of the best things it ripped off from Perl. The one thing I don't like is the magic global variable that it places matched groups into. You can access that first match through the $1 variable. If you're unfamiliar with regular expressions, a good place to start is the &lt;a href="http://www.amazon.com/Programming-Perl-3rd-Larry-Wall/dp/0596000278/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1218806755&amp;amp;sr=8-1"&gt;Camel book&lt;/a&gt;. It's about Perl, but the way they work is very similar. I actually haven't seen good coverage of regexes in a Ruby book.&lt;/p&gt;&lt;div class="feedflare"&gt;
131
+ &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=OkVmO"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=OkVmO" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=iRpWo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=iRpWo" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=pjRCo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=pjRCo" border="0"&gt;&lt;/img&gt;&lt;/a&gt;
132
+ &lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~4/365673983" height="1" width="1"/&gt;</content>
133
+
134
+
135
+ <feedburner:origLink>http://www.pauldix.net/2008/08/non-greedy-mode.html</feedburner:origLink></entry>
136
+ <entry>
137
+ <title>Storing many classification models</title>
138
+ <link rel="alternate" type="text/html" href="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~3/358530158/storing-many-cl.html" />
139
+ <link rel="replies" type="text/html" href="http://www.pauldix.net/2008/08/storing-many-cl.html" thr:count="3" thr:updated="2008-08-08T11:40:28-04:00" />
140
+ <id>tag:typepad.com,2003:post-53888232</id>
141
+ <published>2008-08-07T12:01:38-04:00</published>
142
+ <updated>2008-08-27T16:58:18-04:00</updated>
143
+ <summary>One of the things I need to do in Filterly is keep many trained classifiers. These are the machine learning models that determine if a blog post is on topic (Filterly separates information by topic). At the very least I...</summary>
144
+ <author>
145
+ <name>Paul Dix</name>
146
+ </author>
147
+ <category scheme="http://www.sixapart.com/ns/types#category" term="Tahiti" />
148
+
149
+
150
+ <content type="html" xml:lang="en-US" xml:base="http://www.pauldix.net/">&lt;p&gt;One of the things I need to do in &lt;a href="http://filterly.com/"&gt;Filterly&lt;/a&gt; is keep many trained &lt;a href="http://en.wikipedia.org/wiki/Statistical_classification"&gt;classifiers&lt;/a&gt;. These are the machine learning models that determine if a blog post is on topic (Filterly separates information by topic). At the very least I need one per topic in the system. If I want to do something like &lt;a href="http://en.wikipedia.org/wiki/Boosting"&gt;boosting&lt;/a&gt; then I need even more. The issue I'm wrestling with is how to store this data. I'll outline a specific approach and what the storage needs are.&lt;/p&gt;&#xD;
151
+ &#xD;
152
+ &lt;p&gt;Let's say I go with boosting and 10 &lt;a href="http://en.wikipedia.org/wiki/Perceptron"&gt;perceptrons&lt;/a&gt;. I'll also limit my feature space to the 10,000 most statistically significant features. So the storage for each perceptron is a 10k element array. However, I'll also have to keep another data structure to store what the 10k features are and their position in the array. In code I use a hash for this where the feature name is the key and the value is its position. I just need to store one of these hashes per topic.&lt;/p&gt;&#xD;
153
+ &#xD;
154
+ &lt;p&gt;That's not really a huge amount of data. I'm more concerned about the best way to store it. I don't think this kind of thing maps well to a relational database. I don't need to store the features individually. Generally when I'm running the thing I'll want the whole perceptron and feature set in memory for quick access. For now I'm just using a big text field and serializing each using JSON.&lt;/p&gt;&#xD;
155
+ &#xD;
156
+ &lt;p&gt;I don't really like this approach. The whole serializing into the database seems really inelegant. Combined with the time that it takes to parse these things. Each time I want to see if a new post is on topic I'd need to load up the classifier and parse the 10 10k arrays and the 10k key hash. I could keep each classifier running as a service, but then I've got a pretty heavy process running for each topic.&lt;/p&gt;&#xD;
157
+ &#xD;
158
+ &lt;p&gt;I guess I'll just use the stupid easy solution for the time being and worry about performance later. Anyone have thoughts on the best approach?&lt;/p&gt;&lt;div class="feedflare"&gt;
159
+ &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=DUT8O"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=DUT8O" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=ZGjFo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=ZGjFo" border="0"&gt;&lt;/img&gt;&lt;/a&gt; &lt;a href="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?a=pH3Vo"&gt;&lt;img src="http://feeds.feedburner.com/~f/PaulDixExplainsNothing?i=pH3Vo" border="0"&gt;&lt;/img&gt;&lt;/a&gt;
160
+ &lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/PaulDixExplainsNothing/~4/358530158" height="1" width="1"/&gt;</content>
161
+
162
+
163
+ <feedburner:origLink>http://www.pauldix.net/2008/08/storing-many-cl.html</feedburner:origLink></entry>
164
+
165
+ </feed>
@@ -1,4 +1,4 @@
1
- require 'spec_helper'
1
+ require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')
2
2
 
3
3
  class A
4
4
 
@@ -1,4 +1,4 @@
1
- require 'spec_helper'
1
+ require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')
2
2
 
3
3
  class A
4
4
  include SAXMachine
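The hunks above replace `require 'spec_helper'` with an absolute path built from the spec file's own location, so the requires no longer depend on `$LOAD_PATH` containing the spec directory. A minimal sketch of the resolution (the paths below are hypothetical examples, not from the repo):

```ruby
# Old style: `require 'spec_helper'` resolves against $LOAD_PATH, which
# breaks when spec/ is not on the load path (e.g. plain `rspec` runs).
# New style: build an absolute path relative to the spec file itself.
spec_file = "/home/user/sax-machine/spec/sax-machine/sax_document_spec.rb"

helper = File.expand_path(File.dirname(spec_file) + "/../spec_helper")
puts helper  # => "/home/user/sax-machine/spec/spec_helper"
```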
@@ -1,4 +1,4 @@
1
- require File.dirname(__FILE__) + '/../spec_helper'
1
+ require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')
2
2
 
3
3
  describe "SAXMachine" do
4
4
  describe "element" do
@@ -59,6 +59,43 @@ describe "SAXMachine" do
59
59
  it "should be available" do
60
60
  @klass.data_class(:date).should == DateTime
61
61
  end
62
+
63
+ it "should handle an integer class" do
64
+ @klass = Class.new do
65
+ include SAXMachine
66
+ element :number, :class => Integer
67
+ end
68
+ document = @klass.parse("<number>5</number>")
69
+ document.number.should == 5
70
+ end
71
+
72
+ it "should handle an float class" do
73
+ @klass = Class.new do
74
+ include SAXMachine
75
+ element :number, :class => Float
76
+ end
77
+ document = @klass.parse("<number>5.5</number>")
78
+ document.number.should == 5.5
79
+ end
80
+
81
+ it "should handle an string class" do
82
+ @klass = Class.new do
83
+ include SAXMachine
84
+ element :number, :class => String
85
+ end
86
+ document = @klass.parse("<number>5.5</number>")
87
+ document.number.should == "5.5"
88
+ end
89
+
90
+ it "should handle a time class" do
91
+ @klass = Class.new do
92
+ include SAXMachine
93
+ element :time, :class => Time
94
+ end
95
+ document = @klass.parse("<time>1994-02-04T06:20:00Z</time>")
96
+ document.time.should == Time.utc(1994, 2, 4, 6, 20, 0, 0)
97
+ end
98
+
62
99
  end
63
100
  describe "the required attribute" do
64
101
  it "should be available" do
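The new specs above exercise casting element text into Integer, Float, String, and Time. A plain-Ruby sketch of those coercions, using stdlib stand-ins rather than SAXMachine's internal implementation:

```ruby
require "time"  # for Time.parse

# The same conversions the :class => Integer / Float / String / Time
# specs expect, done with plain stdlib coercion:
puts Integer("5")    # => 5
puts Float("5.5")    # => 5.5
puts String("5.5")   # => "5.5"
puts Time.parse("1994-02-04T06:20:00Z") == Time.utc(1994, 2, 4, 6, 20, 0)  # => true
```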
@@ -89,6 +126,16 @@ describe "SAXMachine" do
89
126
  document.title.should == "My Title"
90
127
  end
91
128
 
129
+ if RUBY_VERSION >= "1.9.0"
130
+ it "should keep the document encoding for elements" do
131
+ data = "<title>My Title</title>"
132
+ data.encode!("utf-8")
133
+
134
+ document = @klass.parse(data)
135
+ document.title.encoding.should == data.encoding
136
+ end
137
+ end
138
+
92
139
  it "should save cdata into an accessor" do
93
140
  document = @klass.parse("<title><![CDATA[A Title]]></title>")
94
141
  document.title.should == "A Title"
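The Ruby 1.9-only spec above asserts that parsed element text keeps the encoding of the source document. A stdlib-only sketch of the same property (hand-rolled regex extraction here, not SAXMachine's parser; modern `String#encode` stands in for the spec's in-place `encode!`):

```ruby
# Encode the document, extract an element's text, and check that the
# extracted string carries the document's encoding.
data  = "<title>My Title</title>".encode("utf-8")
title = data[/<title>(.*?)<\/title>/, 1]

puts title                             # => My Title
puts title.encoding == data.encoding   # => true
```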
@@ -385,7 +432,7 @@ describe "SAXMachine" do
385
432
  elements :item, :as => :items, :with => {:type => 'Foo'}, :class => Foo
386
433
  end
387
434
  end
388
-
435
+
389
436
  it "should cast into the correct class" do
390
437
  document = @klass.parse("<items><item type=\"Bar\"><title>Bar title</title></item><item type=\"Foo\"><title>Foo title</title></item></items>")
391
438
  document.items.size.should == 2
@@ -455,6 +502,18 @@ describe "SAXMachine" do
455
502
  end
456
503
  end
457
504
  end
505
+
506
+ describe "when dealing with element names containing dashes" do
507
+ it 'should automatically convert dashes to underscores' do
508
+ class Dashes
509
+ include SAXMachine
510
+ element :dashed_element
511
+ end
512
+
513
+ parsed = Dashes.parse('<dashed-element>Text</dashed-element>')
514
+ parsed.dashed_element.should eq "Text"
515
+ end
516
+ end
458
517
 
459
518
  describe "full example" do
460
519
  before :each do
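The dashed-element spec above relies on dashes in XML element names being mapped to underscores so they form valid Ruby accessor names. The bare string transformation looks like this (plain Ruby, not SAXMachine's own code):

```ruby
# <dashed-element> becomes the accessor #dashed_element:
tag      = "dashed-element"
accessor = tag.tr("-", "_")
puts accessor  # => "dashed_element"
```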
@@ -1,11 +1,15 @@
1
- require 'date'
2
-
3
- # gem install redgreen for colored test output
4
- begin require "redgreen" unless ENV['TM_CURRENT_LINE']
5
- rescue LoadError
1
+ begin
2
+ require 'simplecov'
3
+ SimpleCov.start do
4
+ add_filter "/spec/"
5
+ end
6
+ rescue LoadError
6
7
  end
7
8
 
8
- path = File.expand_path(File.dirname(__FILE__) + "/../lib/")
9
- $LOAD_PATH.unshift(path) unless $LOAD_PATH.include?(path)
9
+ require File.expand_path(File.dirname(__FILE__) + '/../lib/sax-machine')
10
10
 
11
- require "sax-machine"
11
+ RSpec.configure do |config|
12
+ config.treat_symbols_as_metadata_keys_with_true_values = true
13
+ config.run_all_when_everything_filtered = true
14
+ config.filter_run :focus
15
+ end
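The rewritten spec_helper wraps SimpleCov in a `begin`/`rescue LoadError` guard, so the suite still runs when the coverage gem is absent. The same optional-require pattern in isolation (the library name below is deliberately fictitious):

```ruby
# Try to load an optional tool; fall back gracefully if it is missing.
loaded = begin
  require "some_optional_tool_xyz"  # hypothetical gem name
  true
rescue LoadError
  false
end
puts loaded  # => false (the gem does not exist)
```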
metadata CHANGED
@@ -1,97 +1,117 @@
1
- --- !ruby/object:Gem::Specification
1
+ --- !ruby/object:Gem::Specification
2
2
  name: sax-machine
3
- version: !ruby/object:Gem::Version
4
- hash: 27
5
- prerelease:
6
- segments:
7
- - 0
8
- - 1
9
- - 0
10
- version: 0.1.0
3
+ version: !ruby/object:Gem::Version
4
+ version: 0.2.0.rc1
5
+ prerelease: 6
11
6
  platform: ruby
12
- authors:
7
+ authors:
13
8
  - Paul Dix
14
9
  - Julien Kirch
10
+ - Ezekiel Templin
15
11
  autorequire:
16
12
  bindir: bin
17
13
  cert_chain: []
18
-
19
- date: 2011-09-30 00:00:00 Z
20
- dependencies:
21
- - !ruby/object:Gem::Dependency
14
+ date: 2012-06-04 00:00:00.000000000 Z
15
+ dependencies:
16
+ - !ruby/object:Gem::Dependency
22
17
  name: nokogiri
23
- prerelease: false
24
- requirement: &id001 !ruby/object:Gem::Requirement
18
+ requirement: !ruby/object:Gem::Requirement
25
19
  none: false
26
- requirements:
27
- - - ">"
28
- - !ruby/object:Gem::Version
29
- hash: 31
30
- segments:
31
- - 0
32
- - 0
33
- - 0
34
- version: 0.0.0
20
+ requirements:
21
+ - - ~>
22
+ - !ruby/object:Gem::Version
23
+ version: 1.5.2
35
24
  type: :runtime
36
- version_requirements: *id001
25
+ prerelease: false
26
+ version_requirements: !ruby/object:Gem::Requirement
27
+ none: false
28
+ requirements:
29
+ - - ~>
30
+ - !ruby/object:Gem::Version
31
+ version: 1.5.2
32
+ - !ruby/object:Gem::Dependency
33
+ name: rspec
34
+ requirement: !ruby/object:Gem::Requirement
35
+ none: false
36
+ requirements:
37
+ - - ~>
38
+ - !ruby/object:Gem::Version
39
+ version: 2.10.0
40
+ type: :development
41
+ prerelease: false
42
+ version_requirements: !ruby/object:Gem::Requirement
43
+ none: false
44
+ requirements:
45
+ - - ~>
46
+ - !ruby/object:Gem::Version
47
+ version: 2.10.0
37
48
  description:
38
49
  email: paul@pauldix.net
39
50
  executables: []
40
-
41
51
  extensions: []
42
-
43
52
  extra_rdoc_files: []
44
-
45
- files:
53
+ files:
54
+ - .gitignore
55
+ - .rspec
56
+ - .travis.yml
57
+ - Gemfile
58
+ - Guardfile
59
+ - HISTORY.md
60
+ - README.md
61
+ - Rakefile
46
62
  - lib/sax-machine.rb
47
63
  - lib/sax-machine/sax_ancestor_config.rb
48
64
  - lib/sax-machine/sax_attribute_config.rb
65
+ - lib/sax-machine/sax_collection_config.rb
49
66
  - lib/sax-machine/sax_config.rb
50
67
  - lib/sax-machine/sax_configure.rb
51
68
  - lib/sax-machine/sax_document.rb
52
- - lib/sax-machine/sax_collection_config.rb
53
69
  - lib/sax-machine/sax_element_config.rb
54
70
  - lib/sax-machine/sax_element_value_config.rb
55
71
  - lib/sax-machine/sax_handler.rb
56
- - README.textile
57
- - Rakefile
58
- - Gemfile
59
- - spec/spec_helper.rb
72
+ - lib/sax-machine/version.rb
73
+ - sax-machine.gemspec
74
+ - spec/benchmarks/amazon.xml
75
+ - spec/benchmarks/benchmark.rb
76
+ - spec/benchmarks/public_timeline.xml
77
+ - spec/sax-machine/atom.xml
60
78
  - spec/sax-machine/configure_sax_machine_spec.rb
61
79
  - spec/sax-machine/include_sax_machine_spec.rb
62
80
  - spec/sax-machine/sax_document_spec.rb
81
+ - spec/spec_helper.rb
63
82
  homepage: http://github.com/pauldix/sax-machine
64
83
  licenses: []
65
-
66
84
  post_install_message:
67
85
  rdoc_options: []
68
-
69
- require_paths:
86
+ require_paths:
70
87
  - lib
71
- required_ruby_version: !ruby/object:Gem::Requirement
88
+ required_ruby_version: !ruby/object:Gem::Requirement
72
89
  none: false
73
- requirements:
74
- - - ">="
75
- - !ruby/object:Gem::Version
76
- hash: 3
77
- segments:
90
+ requirements:
91
+ - - ! '>='
92
+ - !ruby/object:Gem::Version
93
+ version: '0'
94
+ segments:
78
95
  - 0
79
- version: "0"
80
- required_rubygems_version: !ruby/object:Gem::Requirement
96
+ hash: 3574605312337356673
97
+ required_rubygems_version: !ruby/object:Gem::Requirement
81
98
  none: false
82
- requirements:
83
- - - ">="
84
- - !ruby/object:Gem::Version
85
- hash: 3
86
- segments:
87
- - 0
88
- version: "0"
99
+ requirements:
100
+ - - ! '>'
101
+ - !ruby/object:Gem::Version
102
+ version: 1.3.1
89
103
  requirements: []
90
-
91
104
  rubyforge_project:
92
- rubygems_version: 1.8.5
105
+ rubygems_version: 1.8.24
93
106
  signing_key:
94
- specification_version: 2
107
+ specification_version: 3
95
108
  summary: Declarative SAX Parsing with Nokogiri
96
- test_files: []
97
-
109
+ test_files:
110
+ - spec/benchmarks/amazon.xml
111
+ - spec/benchmarks/benchmark.rb
112
+ - spec/benchmarks/public_timeline.xml
113
+ - spec/sax-machine/atom.xml
114
+ - spec/sax-machine/configure_sax_machine_spec.rb
115
+ - spec/sax-machine/include_sax_machine_spec.rb
116
+ - spec/sax-machine/sax_document_spec.rb
117
+ - spec/spec_helper.rb
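The metadata now pins nokogiri with the pessimistic operator (`~> 1.5.2`), meaning any 1.5.x at or above 1.5.2 satisfies the constraint while 1.6.0 does not. A sketch using RubyGems' own requirement classes:

```ruby
require "rubygems"  # Gem::Requirement / Gem::Version

req = Gem::Requirement.new("~> 1.5.2")
puts req.satisfied_by?(Gem::Version.new("1.5.2"))  # => true
puts req.satisfied_by?(Gem::Version.new("1.5.9"))  # => true
puts req.satisfied_by?(Gem::Version.new("1.6.0"))  # => false
```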