agiley-feedzirra 0.0.24

@@ -0,0 +1,208 @@
+ h1. Feedzirra
+
+ "http://github.com/pauldix/feedzirra/tree/master":http://github.com/pauldix/feedzirra/tree/master
+
+ I'd like feedback on the API and any bugs encountered on feeds in the wild. I've set up a "google group here":http://groups.google.com/group/feedzirra.
+
+ h2. Summary
+
+ A feed fetching and parsing library that treats the internet like Godzilla treats Japan: it dominates and eats all.
+
+ h2. Description
+
+ Feedzirra is a feed library designed to fetch and update many feeds as quickly as possible. This includes using libcurl-multi through the "taf2-curb":http://github.com/taf2/curb/tree/master gem for faster HTTP gets, and libxml through "nokogiri":http://github.com/tenderlove/nokogiri/tree/master and "sax-machine":http://github.com/pauldix/sax-machine/tree/master for faster parsing.
+
+ Once you have fetched feeds using Feedzirra, they can be updated using the feed objects. Feedzirra automatically stores the etag and last-modified information from the HTTP response headers to lower bandwidth usage, eliminate unnecessary parsing, and make things speedier in general.
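+
+ For example, a fetched feed keeps its caching headers, and @Feedzirra::Feed.update@ sends them back as @If-None-Match@ and @If-Modified-Since@, so an unchanged feed comes back as a cheap 304 with nothing to re-parse. A minimal sketch (the feed URL is illustrative):
+
+ <pre>
+ feed = Feedzirra::Feed.fetch_and_parse("http://example.com/feed.xml")
+ feed.etag          # stored from the ETag response header
+ feed.last_modified # stored from the Last-Modified response header
+
+ # update sends the stored values back as conditional request headers
+ feed = Feedzirra::Feed.update(feed)
+ </pre>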
+
+ Another feature present in Feedzirra is the ability to create callback functions that get called "on success" and "on failure" when getting a feed. This makes it easy to do things like log errors or update data stores.
+
+ The fetching and parsing logic have been decoupled so that either can be used in isolation if you'd prefer not to use everything Feedzirra offers. However, the code examples below use helper methods in the Feed class that put everything together to make things as simple as possible.
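+
+ For example, you can do the HTTP get with @fetch_raw@ and hand the resulting XML to @parse@ yourself. A minimal sketch (the feed URL is illustrative):
+
+ <pre>
+ xml  = Feedzirra::Feed.fetch_raw("http://example.com/feed.xml") # just the fetch; returns a String of XML
+ feed = Feedzirra::Feed.parse(xml)                               # just the parse; no network involved
+ </pre>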
+
+ The final feature of Feedzirra is the ability to define custom parsing classes. In truth, Feedzirra could be used to parse much more than feeds. Microformats, page scraping, and almost anything else are fair game.
+
+ h2. Installation
+
+ For now Feedzirra exists only on github. It also has a few gem requirements that are only on github. Before you start you need to have "libcurl":http://curl.haxx.se/ and "libxml":http://xmlsoft.org/ installed. If you're on Leopard you have both. Otherwise, you'll need to grab them. Once you've got those libraries, these are the gems that get used: nokogiri, pauldix-sax-machine, taf2-curb (note that this is a fork that lives on github and not the Ruby Forge version of curb), and pauldix-feedzirra. The feedzirra gemspec has all the dependencies, so you should be able to get up and running with the standard github gem install routine:
+
+ <pre>
+ gem sources -a http://gems.github.com # if you haven't already
+ gem install pauldix-feedzirra
+ </pre>
+
+ <b>NOTE:</b> Some people have been reporting a few issues related to installation. First, the Ruby Forge version of curb is not what you want. It will not work. Nor will the curl-multi gem that lives on Ruby Forge. You have to get the "taf2-curb":http://github.com/taf2/curb/tree/master fork installed.
+
+ If you see this error when doing a require:
+
+ <pre>
+ /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- curb_core (LoadError)
+ </pre>
+
+ It means that the taf2-curb gem didn't build correctly. To resolve this, build and install the gem from a fresh clone:
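+
+ <pre>
+ git clone git://github.com/taf2/curb.git
+ cd curb
+ rake gem
+ sudo gem install pkg/curb-0.2.4.0.gem
+ </pre>
+
+ After that you should be good.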
+
+ If you see something like this when trying to run it:
+
+ <pre>
+ NoMethodError: undefined method `on_success' for #<Curl::Easy:0x1182724>
+ from ./lib/feedzirra/feed.rb:88:in `add_url_to_multi'
+ </pre>
+
+ This means that you are requiring curl-multi or the Ruby Forge version of Curb somewhere. You can't use those and need to get the taf2 version up and running.
+
+ If you're on Debian or Ubuntu and getting errors while trying to install the taf2-curb gem, it could be because you don't have the latest version of libcurl installed. Do this to fix it:
+
+ <pre>
+ sudo apt-get install libcurl4-gnutls-dev
+ </pre>
+
+ Another problem could be if you are running Mac Ports and you have libcurl installed through there. You need to uninstall it for curb to work! The version in Mac Ports is old and doesn't play nice with curb. If you're running Leopard, you can just uninstall it and you should be golden. If you're on an older version of OS X, you'll then need to "download curl":http://curl.haxx.se/download.html and build it from source. Then you'll have to install the taf2-curb gem again. You might have to repeat the build-from-clone step above.
+
+ If you're still having issues, please let me know on the mailing list. Also, "Todd Fisher (taf2)":http://github.com/taf2 is working on fixing the gem install. Please send him a full error report.
+
+ h2. Usage
+
+ "A gist of the following code":http://gist.github.com/57285
+ "A gist of how to do updates on feeds":http://gist.github.com/132671
+
+ <pre>
+ require 'feedzirra'
+
+ # fetching a single feed
+ feed = Feedzirra::Feed.fetch_and_parse("http://feeds.feedburner.com/PaulDixExplainsNothing")
+
+ # feed and entries accessors
+ feed.title # => "Paul Dix Explains Nothing"
+ feed.url # => "http://www.pauldix.net"
+ feed.feed_url # => "http://feeds.feedburner.com/PaulDixExplainsNothing"
+ feed.etag # => "GunxqnEP4NeYhrqq9TyVKTuDnh0"
+ feed.last_modified # => Sat Jan 31 17:58:16 -0500 2009 # it's a Time object
+
+ entry = feed.entries.first
+ entry.title # => "Ruby Http Client Library Performance"
+ entry.url # => "http://www.pauldix.net/2009/01/ruby-http-client-library-performance.html"
+ entry.author # => "Paul Dix"
+ entry.summary # => "..."
+ entry.content # => "..."
+ entry.published # => Thu Jan 29 17:00:19 UTC 2009 # it's a Time object
+ entry.categories # => ["...", "..."]
+
+ # sanitizing an entry's content
+ entry.title.sanitize # => returns the title with harmful stuff removed
+ entry.author.sanitize # => returns the author with harmful stuff removed
+ entry.content.sanitize # => returns the content with harmful stuff removed
+ entry.content.sanitize! # => removes harmful stuff and replaces the original (also exists for author and title)
+ entry.sanitize! # => sanitizes the entry's title, author, and content in place (as in, it changes the values to clean versions)
+ feed.sanitize_entries! # => sanitizes all entries in place
+
+ # updating a single feed
+ updated_feed = Feedzirra::Feed.update(feed)
+
+ # an updated feed has the following extra accessors
+ updated_feed.updated? # returns true if any of the feed attributes have been modified. will return false if there are only new entries
+ updated_feed.new_entries # a collection of the entry objects that are newer than the latest in the feed before the update
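+
+ # typical use: walk just the new entries after an update
+ updated_feed.new_entries.each { |entry| puts entry.title }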
+
+ # fetching multiple feeds
+ feed_urls = ["http://feeds.feedburner.com/PaulDixExplainsNothing", "http://feeds.feedburner.com/trottercashion"]
+ feeds = Feedzirra::Feed.fetch_and_parse(feed_urls)
+
+ # feeds is now a hash with the feed_urls as keys and the parsed feed objects as values. If a request failed,
+ # the value will be a Fixnum of the http response code instead of a feed object
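+ # for example, you can split out failures from parsed feeds:
+ feeds.each do |url, feed|
+   feed.is_a?(Fixnum) ? puts("#{url} failed with status #{feed}") : puts(feed.title)
+ end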
+
+ # updating multiple feeds. It expects a collection of feed objects
+ updated_feeds = Feedzirra::Feed.update(feeds.values)
+
+ # defining custom behavior on failure or success. note that a return status of 304 (not modified) will call the on_success handler
+ feed = Feedzirra::Feed.fetch_and_parse("http://feeds.feedburner.com/PaulDixExplainsNothing",
+   :on_success => lambda {|feed| puts feed.title },
+   :on_failure => lambda {|url, response_code, response_header, response_body| puts response_body })
+ # if a collection was passed into fetch_and_parse, the handlers will be called for each one
+
+ # the behavior for the handlers when using Feedzirra::Feed.update is slightly different. The feed passed into on_success will be
+ # the updated feed with the standard updated accessors. On failure it will be the original feed object passed into update
+
+ # You can add custom parsing to the feed entry classes. Say you want the wfw:commentRss field in an entry
+ Feedzirra::Feed.add_common_feed_entry_element("wfw:commentRss", :as => :comment_rss)
+ # The arguments are the same as the SAXMachine arguments for the element method. For more example usage look at the RSSEntry and
+ # AtomEntry classes. Now you can access those in an atom feed:
+ Feedzirra::Feed.parse(some_atom_xml).entries.first.comment_rss # => wfw:commentRss is now parsed!
+
+ # You can also define your own parsers and add them to the ones Feedzirra knows about. Here's an example that adds
+ # ITunesRSS parsing. It's included in the library, but not part of Feedzirra by default because some of the field names
+ # differ from other classes, thus breaking normalization.
+ Feedzirra::Feed.add_feed_class(ITunesRSS) # now all feeds will be checked to see if they match ITunesRSS before others
+
+ # You can also access http basic auth feeds. Unfortunately, you can't get to these inside of a bulk get of a bunch of feeds.
+ # You'll have to do it on its own like so:
+ Feedzirra::Feed.fetch_and_parse(some_url, :http_authentication => ["myusername", "mypassword"])
+
+ # Defining custom parsers: the functionality is here; see the sketch just after this block
+ </pre>
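+
+ Here's a rough sketch of what a custom parser can look like, modeled on the bundled parsers (@Feedzirra::Parser::RSS@ and friends). All class names, elements, and the marker regex below are illustrative, not part of the library. The real requirements are just that the class responds to @able_to_parse?(xml)@ and @parse(xml)@; including SAXMachine provides the latter:
+
+ <pre>
+ module Feedzirra
+   module Parser
+     # naming the entry class <FeedClass>Entry keeps add_common_feed_entry_element working
+     class MyFormatEntry
+       include SAXMachine
+       include FeedEntryUtilities
+       element :title
+       element :link, :as => :url
+     end
+
+     class MyFormat
+       include SAXMachine
+       include FeedUtilities
+       element :title
+       elements :entry, :as => :entries, :class => MyFormatEntry
+
+       # Feedzirra hands this the start of the document to pick a parser
+       def self.able_to_parse?(xml)
+         xml =~ /<my-format-marker/
+       end
+     end
+   end
+ end
+
+ # register it; added classes are checked before the built-in ones
+ Feedzirra::Feed.add_feed_class(Feedzirra::Parser::MyFormat)
+ </pre>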
+
+ h2. Benchmarks
+
+ One of the goals of Feedzirra is speed. This includes not only parsing, but fetching multiple feeds as quickly as possible. I ran a benchmark getting 20 feeds 10 times using Feedzirra, rFeedParser, and FeedNormalizer. For more details the "benchmark code can be found in the project in spec/benchmarks/feedzirra_benchmarks.rb":http://github.com/pauldix/feedzirra/blob/7fb5634c5c16e9c6ec971767b462c6518cd55f5d/spec/benchmarks/feedzirra_benchmarks.rb
+
+ <pre>
+ feedzirra 5.170000 1.290000 6.460000 ( 18.917796)
+ rfeedparser 104.260000 12.220000 116.480000 (244.799063)
+ feed-normalizer 66.250000 4.010000 70.260000 (191.589862)
+ </pre>
+
+ The result of that benchmark is a bit sketchy because of the network variability. Running 10 times against the same 20 feeds was meant to smooth some of that out. However, there is also a "benchmark comparing parsing speed in spec/benchmarks/parsing_benchmark.rb":http://github.com/pauldix/feedzirra/blob/7fb5634c5c16e9c6ec971767b462c6518cd55f5d/spec/benchmarks/parsing_benchmark.rb on an atom feed.
+
+ <pre>
+ feedzirra 0.500000 0.030000 0.530000 ( 0.658744)
+ rfeedparser 8.400000 1.110000 9.510000 ( 11.839827)
+ feed-normalizer 5.980000 0.160000 6.140000 ( 7.576140)
+ </pre>
+
+ There's also a "benchmark that shows the results of using Feedzirra to perform updates on feeds":http://github.com/pauldix/feedzirra/blob/45d64319544c61a4c9eb9f7f825c73b9f9030cb3/spec/benchmarks/updating_benchmarks.rb you've already pulled in. I tested against 179 feeds. The first is the initial pull and the second is an update 65 seconds later. I'm not sure how many of them support etag and last-modified, so performance may be better or worse depending on what feeds you're requesting.
+
+ <pre>
+ feedzirra fetch and parse 4.010000 0.710000 4.720000 ( 15.110101)
+ feedzirra update 0.660000 0.280000 0.940000 ( 5.152709)
+ </pre>
+
+ h2. Next Steps
+
+ This thing needs to hammer on many different feeds in the wild. I'm sure there will be bugs. I want to find them and crush them. I didn't bother using the test suite from feedparser; I wanted to start fresh.
+
+ Here are some more specific TODOs:
+ * Fix the iTunes parser so things are normalized again
+ * Fix the Zlib deflate error
+ * Fix this error: http://github.com/inbox/70508
+ * Convert to use Typhoeus instead of taf2-curb
+ * Make the entries parse all link fields
+ * Make a feedzirra-rails gem to integrate feedzirra seamlessly with Rails and ActiveRecord
+ * Create a super sweet DSL for defining new parsers
+ * Test against Ruby 1.9.1 and fix any bugs
+ * Clean up the fetching code inside feed.rb so it doesn't suck so hard
+ * Readdress how feeds determine if they can parse a document. Maybe I should use namespaces instead?
+
+ h2. LICENSE
+
+ (The MIT License)
+
+ Copyright (c) 2009:
+
+ "Paul Dix":http://pauldix.net
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ 'Software'), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,56 @@
+ require "spec"
+ require "spec/rake/spectask"
+ require 'rake/rdoctask'
+ require 'lib/feedzirra.rb'
+
+ # Grab recently touched specs
+ def recent_specs(touched_since)
+   recent_specs = FileList['app/**/*'].map do |path|
+     if File.mtime(path) > touched_since
+       spec = File.join('spec', File.dirname(path).split("/")[1..-1].join('/'),
+         "#{File.basename(path, ".*")}_spec.rb")
+       spec if File.exists?(spec)
+     end
+   end.compact
+
+   recent_specs += FileList['spec/**/*_spec.rb'].select do |path|
+     File.mtime(path) > touched_since
+   end
+   recent_specs.uniq
+ end
+
+ desc "Run all the tests"
+ task :default => :spec
+
+ # Tasks
+ Spec::Rake::SpecTask.new do |t|
+   t.spec_opts = ['--options', "\"#{File.dirname(__FILE__)}/spec/spec.opts\""]
+   t.spec_files = FileList['spec/**/*_spec.rb']
+ end
+
+ desc 'Run recent specs'
+ Spec::Rake::SpecTask.new("spec:recent") do |t|
+   t.spec_opts = ["--format","specdoc","--color"]
+   t.spec_files = recent_specs(Time.now - 600) # 10 min.
+ end
+
+ Spec::Rake::SpecTask.new('spec:rcov') do |t|
+   t.spec_opts = ['--options', "\"#{File.dirname(__FILE__)}/spec/spec.opts\""]
+   t.spec_files = FileList['spec/**/*_spec.rb']
+   t.rcov = true
+   t.rcov_opts = ['--exclude', 'spec,/usr/lib/ruby,/usr/local,/var/lib,/Library', '--text-report']
+ end
+
+ Rake::RDocTask.new do |rd|
+   rd.title = 'Feedzirra'
+   rd.rdoc_dir = 'rdoc'
+   rd.rdoc_files.include('README.rdoc', 'lib/feedzirra.rb', 'lib/feedzirra/**/*.rb')
+   rd.options = ["--quiet", "--opname", "index.html", "--line-numbers", "--inline-source", '--main', 'README.rdoc']
+ end
+
+ task :install do
+   rm_rf "*.gem"
+   puts `gem build feedzirra.gemspec`
+   puts `sudo gem install feedzirra-#{Feedzirra::VERSION}.gem`
+ end
@@ -0,0 +1,21 @@
+ # Date code pulled from:
+ # Ruby Cookbook by Lucas Carlson and Leonard Richardson
+ # Published by O'Reilly
+ # ISBN: 0-596-52369-6
+ class Date
+   def feed_utils_to_gm_time
+     feed_utils_to_time(new_offset, :gm)
+   end
+
+   def feed_utils_to_local_time
+     feed_utils_to_time(new_offset(DateTime.now.offset - offset), :local)
+   end
+
+   private
+
+   def feed_utils_to_time(dest, method)
+     # convert a fraction of a day to a number of microseconds
+     usec = (dest.sec_fraction * 60 * 60 * 24 * (10**6)).to_i
+     Time.send(method, dest.year, dest.month, dest.day, dest.hour, dest.min,
+               dest.sec, usec)
+   end
+ end
@@ -0,0 +1,9 @@
+ class String
+   def sanitize!
+     self.replace(sanitize)
+   end
+
+   def sanitize
+     Loofah.scrub_fragment(self, :prune).to_s
+   end
+ end
@@ -0,0 +1,41 @@
+ $LOAD_PATH.unshift(File.dirname(__FILE__)) unless $LOAD_PATH.include?(File.dirname(__FILE__))
+
+ require 'zlib'
+ require 'stringio' # Feed.decode_content wraps gzipped bodies in a StringIO
+ require 'time'     # Feed.last_modified_from_header uses Time.parse
+ require 'curb'
+ require 'sax-machine'
+ require 'loofah'
+ require 'uri'
+
+ require 'active_support/version'
+ require 'active_support/basic_object'
+ require 'active_support/core_ext/module'
+ require 'active_support/core_ext/kernel'
+ require 'active_support/core_ext/object'
+
+ if ActiveSupport::VERSION::MAJOR >= 3
+   require 'active_support/time'
+ else
+   require 'active_support/core_ext/time'
+ end
+
+ require 'core_ext/date'
+ require 'core_ext/string'
+
+ require 'feedzirra/feed_utilities'
+ require 'feedzirra/feed_entry_utilities'
+ require 'feedzirra/feed'
+
+ require 'feedzirra/parser/rss_entry'
+ require 'feedzirra/parser/itunes_rss_owner'
+ require 'feedzirra/parser/itunes_rss_item'
+ require 'feedzirra/parser/atom_entry'
+ require 'feedzirra/parser/atom_feed_burner_entry'
+
+ require 'feedzirra/parser/rss'
+ require 'feedzirra/parser/itunes_rss'
+ require 'feedzirra/parser/atom'
+ require 'feedzirra/parser/atom_feed_burner'
+
+ module Feedzirra
+   VERSION = "0.0.24"
+ end
@@ -0,0 +1,325 @@
+ module Feedzirra
+   class NoParserAvailable < StandardError; end
+
+   class Feed
+     USER_AGENT = "feedzirra http://github.com/pauldix/feedzirra/tree/master"
+
+     # Takes a raw XML feed and attempts to parse it. If no parser is available a Feedzirra::NoParserAvailable exception is raised.
+     #
+     # === Parameters
+     # [xml<String>] The XML that you would like parsed.
+     # === Returns
+     # An instance of the determined feed type. By default a Feedzirra::Parser::RSS, Feedzirra::Parser::AtomFeedBurner, or Feedzirra::Parser::Atom object.
+     # === Raises
+     # Feedzirra::NoParserAvailable : If no valid parser classes could be found for the feed.
+     def self.parse(xml)
+       if parser = determine_feed_parser_for_xml(xml)
+         parser.parse(xml)
+       else
+         raise NoParserAvailable.new("No valid parser for XML.")
+       end
+     end
+
+     # Determines the correct parser class to use for parsing the feed.
+     #
+     # === Parameters
+     # [xml<String>] The XML that you would like to determine the parser for.
+     # === Returns
+     # The class name of the parser that can handle the XML.
+     def self.determine_feed_parser_for_xml(xml)
+       start_of_doc = xml.slice(0, 2000)
+       feed_classes.detect {|klass| klass.able_to_parse?(start_of_doc)}
+     end
+
+     # Adds a new feed parsing class that will be used for parsing.
+     #
+     # === Parameters
+     # [klass<Constant>] The class/constant that you want to register.
+     # === Returns
+     # An updated array of feed parser class names.
+     def self.add_feed_class(klass)
+       feed_classes.unshift klass
+     end
+
+     # Provides a list of registered feed parsing classes.
+     #
+     # === Returns
+     # An array of class names.
+     def self.feed_classes
+       @feed_classes ||= [Feedzirra::Parser::RSS, Feedzirra::Parser::AtomFeedBurner, Feedzirra::Parser::Atom]
+     end
+
+     # Makes all entry types look for the passed in element to parse. This is actually just a call to
+     # element (a SAXMachine method) in each entry class.
+     #
+     # === Parameters
+     # [element_tag<String>] The element tag that you want parsed.
+     # [options<Hash>] Valid keys are the same as with SAXMachine.
+     def self.add_common_feed_entry_element(element_tag, options = {})
+       # need to think of a better way to do this. will break for people who want this behavior
+       # across their added classes
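+       # eval appends "Entry" to each feed class name, e.g. Feedzirra::Parser::RSS => Feedzirra::Parser::RSSEntry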
+       feed_classes.map{|k| eval("#{k}Entry") }.each do |klass|
+         klass.send(:element, element_tag, options)
+       end
+     end
+
+     # Fetches and returns the raw XML for each URL provided.
+     #
+     # === Parameters
+     # [urls<String> or <Array>] A single feed URL, or an array of feed URLs.
+     # [options<Hash>] Valid keys for this argument are as follows:
+     # * :user_agent - String that overrides the default user agent.
+     # * :if_modified_since - Time object representing when the feed was last updated.
+     # * :if_none_match - String that's normally an etag for the request that was stored previously.
+     # * :on_success - Block that gets executed after a successful request.
+     # * :on_failure - Block that gets executed after a failed request.
+     # === Returns
+     # A String of XML if a single URL is passed.
+     #
+     # A Hash if multiple URLs are passed. The key will be the URL, and the value the XML.
+     def self.fetch_raw(urls, options = {})
+       url_queue = [*urls]
+       multi = Curl::Multi.new
+       responses = {}
+       url_queue.each do |url|
+         easy = Curl::Easy.new(url) do |curl|
+           curl.headers["User-Agent"] = (options[:user_agent] || USER_AGENT)
+           curl.headers["If-Modified-Since"] = options[:if_modified_since].httpdate if options.has_key?(:if_modified_since)
+           curl.headers["If-None-Match"] = options[:if_none_match] if options.has_key?(:if_none_match)
+           curl.headers["Accept-encoding"] = 'gzip, deflate' if options.has_key?(:compress)
+           curl.follow_location = true
+           curl.userpwd = options[:http_authentication].join(':') if options.has_key?(:http_authentication)
+
+           curl.max_redirects = options[:max_redirects] if options[:max_redirects]
+           curl.timeout = options[:timeout] if options[:timeout]
+
+           curl.on_success do |c|
+             responses[url] = decode_content(c)
+           end
+           curl.on_failure do |c, err|
+             responses[url] = c.response_code
+           end
+         end
+         multi.add(easy)
+       end
+
+       multi.perform
+       urls.is_a?(String) ? responses.values.first : responses
+     end
+
+     # Fetches and returns the parsed XML for each URL provided.
+     #
+     # === Parameters
+     # [urls<String> or <Array>] A single feed URL, or an array of feed URLs.
+     # [options<Hash>] Valid keys for this argument are as follows:
+     # * :user_agent - String that overrides the default user agent.
+     # * :if_modified_since - Time object representing when the feed was last updated.
+     # * :if_none_match - String, an etag for the request that was stored previously.
+     # * :on_success - Block that gets executed after a successful request.
+     # * :on_failure - Block that gets executed after a failed request.
+     # === Returns
+     # A Feed object if a single URL is passed.
+     #
+     # A Hash if multiple URLs are passed. The key will be the URL, and the value the Feed object.
+     def self.fetch_and_parse(urls, options = {})
+       url_queue = [*urls]
+       multi = Curl::Multi.new
+       responses = {}
+
+       # I broke these down so I would only try to do 30 simultaneously because
+       # I was getting weird errors when doing a lot. As one finishes it pops another off the queue.
+       url_queue.slice!(0, 30).each do |url|
+         add_url_to_multi(multi, url, url_queue, responses, options)
+       end
+
+       multi.perform
+       return urls.is_a?(String) ? responses.values.first : responses
+     end
+
+     # Decodes the XML document if it was compressed.
+     #
+     # === Parameters
+     # [curl_request<Curl::Easy>] The Curl::Easy response object from the request.
+     # === Returns
+     # A decoded string of XML.
+     def self.decode_content(c)
+       if c.header_str.match(/Content-Encoding: gzip/)
+         begin
+           gz = Zlib::GzipReader.new(StringIO.new(c.body_str))
+           xml = gz.read
+           gz.close
+         rescue Zlib::GzipFile::Error
+           # maybe this is not gzipped?
+           xml = c.body_str
+         end
+       elsif c.header_str.match(/Content-Encoding: deflate/)
+         xml = Zlib::Inflate.inflate(c.body_str)
+       else
+         xml = c.body_str
+       end
+
+       xml
+     end
+
+     # Updates each feed for each Feed object provided.
+     #
+     # === Parameters
+     # [feeds<Feed> or <Array>] A single feed object, or an array of feed objects.
+     # [options<Hash>] Valid keys for this argument are as follows:
+     # * :user_agent - String that overrides the default user agent.
+     # * :on_success - Block that gets executed after a successful request.
+     # * :on_failure - Block that gets executed after a failed request.
+     # === Returns
+     # An updated Feed object if a single feed is passed.
+     #
+     # An Array of updated Feed objects if multiple feeds are passed.
+     def self.update(feeds, options = {})
+       feed_queue = [*feeds]
+       multi = Curl::Multi.new
+       responses = {}
+
+       feed_queue.slice!(0, 30).each do |feed|
+         add_feed_to_multi(multi, feed, feed_queue, responses, options)
+       end
+
+       multi.perform
+       responses.size == 1 ? responses.values.first : responses.values
+     end
+
+     # An abstraction for adding a feed by URL to the passed Curl::Multi stack.
+     #
+     # === Parameters
+     # [multi<Curl::Multi>] The Curl::Multi object that the request should be added to.
+     # [url<String>] The URL of the feed that you would like to be fetched.
+     # [url_queue<Array>] An array of URLs that are queued for request.
+     # [responses<Hash>] Existing responses that you want the response from the request added to.
+     # [options<Hash>] Valid keys for this argument are as follows:
+     # * :user_agent - String that overrides the default user agent.
+     # * :on_success - Block that gets executed after a successful request.
+     # * :on_failure - Block that gets executed after a failed request.
+     # === Returns
+     # The updated Curl::Multi object with the request details added to its stack.
+     def self.add_url_to_multi(multi, url, url_queue, responses, options)
+       easy = Curl::Easy.new(url) do |curl|
+         curl.headers["User-Agent"] = (options[:user_agent] || USER_AGENT)
+         curl.headers["If-Modified-Since"] = options[:if_modified_since].httpdate if options.has_key?(:if_modified_since)
+         curl.headers["If-None-Match"] = options[:if_none_match] if options.has_key?(:if_none_match)
+         curl.headers["Accept-encoding"] = 'gzip, deflate' if options.has_key?(:compress)
+         curl.follow_location = true
+         curl.userpwd = options[:http_authentication].join(':') if options.has_key?(:http_authentication)
+         curl.proxy_url = options[:proxy_url] if options.has_key?(:proxy_url)
+         curl.proxy_port = options[:proxy_port] if options.has_key?(:proxy_port)
+         curl.max_redirects = options[:max_redirects] if options[:max_redirects]
+         curl.timeout = options[:timeout] if options[:timeout]
+
+         curl.on_success do |c|
+           add_url_to_multi(multi, url_queue.shift, url_queue, responses, options) unless url_queue.empty?
+           xml = decode_content(c)
+           klass = determine_feed_parser_for_xml(xml)
+
+           if klass
+             begin
+               feed = klass.parse(xml)
+               feed.feed_url = c.last_effective_url
+               feed.etag = etag_from_header(c.header_str)
+               feed.last_modified = last_modified_from_header(c.header_str)
+               responses[url] = feed
+               options[:on_success].call(url, feed) if options.has_key?(:on_success)
+             rescue Exception => e
+               options[:on_failure].call(url, c.response_code, c.header_str, c.body_str) if options.has_key?(:on_failure)
+             end
+           else
+             # puts "Error determining parser for #{url} - #{c.last_effective_url}"
+             # raise NoParserAvailable.new("no valid parser for content.") (this would unfortunately fail the whole 'multi', so it's not really usable)
+             options[:on_failure].call(url, c.response_code, c.header_str, c.body_str) if options.has_key?(:on_failure)
+           end
+         end
+
+         curl.on_failure do |c, err|
+           add_url_to_multi(multi, url_queue.shift, url_queue, responses, options) unless url_queue.empty?
+           responses[url] = c.response_code
+           options[:on_failure].call(url, c.response_code, c.header_str, c.body_str) if options.has_key?(:on_failure)
+         end
+       end
+       multi.add(easy)
+     end
+
+     # An abstraction for adding a feed by a Feed object to the passed Curl::Multi stack.
+     #
+     # === Parameters
+     # [multi<Curl::Multi>] The Curl::Multi object that the request should be added to.
+     # [feed<Feed>] A feed object that you would like to be fetched.
+     # [feed_queue<Array>] An array of feed objects that are queued for request.
+     # [responses<Hash>] Existing responses that you want the response from the request added to.
+     # [options<Hash>] Valid keys for this argument are as follows:
+     # * :user_agent - String that overrides the default user agent.
+     # * :on_success - Block that gets executed after a successful request.
+     # * :on_failure - Block that gets executed after a failed request.
+     # === Returns
+     # The updated Curl::Multi object with the request details added to its stack.
+     def self.add_feed_to_multi(multi, feed, feed_queue, responses, options)
+       easy = Curl::Easy.new(feed.feed_url) do |curl|
+         curl.headers["User-Agent"] = (options[:user_agent] || USER_AGENT)
+         curl.headers["If-Modified-Since"] = feed.last_modified.httpdate if feed.last_modified
+         curl.headers["If-None-Match"] = feed.etag if feed.etag
+         curl.userpwd = options[:http_authentication].join(':') if options.has_key?(:http_authentication)
+         curl.follow_location = true
+
+         curl.max_redirects = options[:max_redirects] if options[:max_redirects]
+         curl.timeout = options[:timeout] if options[:timeout]
+
+         curl.on_success do |c|
+           begin
+             add_feed_to_multi(multi, feed_queue.shift, feed_queue, responses, options) unless feed_queue.empty?
+             updated_feed = Feed.parse(c.body_str)
+             updated_feed.feed_url = c.last_effective_url
+             updated_feed.etag = etag_from_header(c.header_str)
+             updated_feed.last_modified = last_modified_from_header(c.header_str)
+             feed.update_from_feed(updated_feed)
+             responses[feed.feed_url] = feed
+             options[:on_success].call(feed) if options.has_key?(:on_success)
+           rescue Exception => e
+             options[:on_failure].call(feed, c.response_code, c.header_str, c.body_str) if options.has_key?(:on_failure)
+           end
+         end
+
+         curl.on_failure do |c, err|
+           add_feed_to_multi(multi, feed_queue.shift, feed_queue, responses, options) unless feed_queue.empty?
+           response_code = c.response_code
+           if response_code == 304 # it's not modified. this isn't an error condition
+             responses[feed.feed_url] = feed
+             options[:on_success].call(feed) if options.has_key?(:on_success)
+           else
+             responses[feed.feed_url] = c.response_code # keyed consistently with the success cases
+             options[:on_failure].call(feed, c.response_code, c.header_str, c.body_str) if options.has_key?(:on_failure)
+           end
+         end
+       end
+       multi.add(easy)
+     end
+
+     # Determines the etag from the response headers.
+     #
+     # === Parameters
+     # [header<String>] Raw header string returned from the request
+     # === Returns
+     # A string of the etag or nil if it cannot be found in the headers.
+     def self.etag_from_header(header)
+       header =~ /.*ETag:\s(.*)\r/
+       $1
+     end
+
+     # Determines the last modified date from the response headers.
+     #
+     # === Parameters
+     # [header<String>] Raw header string returned from the request
+     # === Returns
+     # A Time object of the last modified date or nil if it cannot be found in the headers.
+     def self.last_modified_from_header(header)
+       header =~ /.*Last-Modified:\s(.*)\r/
+       Time.parse($1) if $1
+     end
+   end
+ end