spidey 0.0.2 → 0.0.3
- data/LICENSE.txt +20 -0
- data/README.md +34 -9
- data/lib/spidey/version.rb +1 -1
- data/spidey.gemspec +1 -0
- metadata +13 -11
data/LICENSE.txt
ADDED
@@ -0,0 +1,20 @@
+Copyright (c) 2012 Joey Aghion, Art.sy Inc.
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md
CHANGED
@@ -1,27 +1,52 @@
 Spidey
 ======
 
+Spidey provides a bare-bones framework for crawling and scraping web sites.
 
 
 Example
 -------
 
-
-
+This [non-working] example _spider_ crawls the ebay.com home page, follows links to auction pages, and finally records a few scraped item details as a _result_.
+
+    class EbaySpider < Spidey::AbstractSpider
+      handle "http://www.ebay.com", :process_home
+
+      def process_home(page, default_data = {})
+        page.links_with(href: /auction\.aspx/).each do |link|
+          handle resolve_url(link.href, page), :process_auction, auction_title: link.text
+        end
+      end
+
+      def process_auction(page, default_data = {})
+        record default_data.merge(sale_price: page.search('.sale_price').text)
+      end
+    end
+
+    spider = EbaySpider.new verbose: true
+    spider.crawl max_urls: 100
+
+Implement a _spider_ class extending `Spidey::AbstractSpider` for each target site. The class can declare starting URLs with class-level calls to `handle`. Spidey invokes each of the methods specified in those calls, passing in the resulting `page` (a [Mechanize](http://mechanize.rubyforge.org/) [Page](http://mechanize.rubyforge.org/Mechanize/Page.html) object) and, optionally, some scraped data. The methods can do whatever processing of the page is necessary, calling `handle` with additional URLs to crawl and/or `record` with scraped results.
+
+
+Storage Strategies
 ----------
 
-
+By default, the lists of URLs being crawled, results scraped, and errors encountered are stored as simple arrays in the spider (i.e., in memory):
 
-
+    spider.urls     # => ["http://www.ebay.com", "http://www.ebay.com/...", ...]
+    spider.results  # => [{auction_title: "...", sale_price: "..."}, ...]
+    spider.errors   # => [{url: "...", handler: :process_home, error: FooException}, ...]
 
+Add the [spidey-mongo](https://github.com/joeyAghion/spidey-mongo) gem and include `Spidey::Strategies::Mongo` in your spider to instead use MongoDB to persist these data. [See the docs](https://github.com/joeyAghion/spidey-mongo) for more information.
 
-Contributing
-------------
 
 To Do
 -----
-* Add examples
+* Add working examples
+* Spidey works well for crawling public web pages, but since little effort is undertaken to preserve the crawler's state across requests, it works less well when particular cookies or sequences of form submissions are required. [Mechanize](http://mechanize.rubyforge.org/) supports this quite well, though, so Spidey could grow in that direction.
 
 
-
-
+Copyright
+---------
+Copyright (c) 2012 Joey Aghion, Art.sy Inc. See [LICENSE.txt](LICENSE.txt) for further details.
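The Storage Strategies paragraph above points to the spidey-mongo gem for MongoDB persistence but does not show the wiring. A minimal sketch of what that could look like, assuming `Spidey::Strategies::Mongo` is a mixin that accepts Mongo collections at construction; the require paths, option names, and driver calls below are illustrative and should be checked against the spidey-mongo docs:

    require 'mongo'
    require 'spidey'
    require 'spidey-mongo'   # assumed entry point for the spidey-mongo gem

    # Old (circa 2012) mongo driver style: connection['db_name'] returns a DB,
    # db['collection_name'] returns a Collection. Names here are hypothetical.
    db = Mongo::Connection.new['ebay_crawler']

    class EbaySpider < Spidey::AbstractSpider
      include Spidey::Strategies::Mongo  # swap the in-memory arrays for Mongo-backed storage

      handle "http://www.ebay.com", :process_home

      def process_home(page, default_data = {})
        # ... same handlers as the in-memory example above ...
      end
    end

    spider = EbaySpider.new(
      url_collection:    db['urls'],     # crawl frontier
      result_collection: db['results'],  # scraped records
      error_collection:  db['errors']    # failures with handler context
    )
    spider.crawl max_urls: 100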
data/lib/spidey/version.rb
CHANGED
data/spidey.gemspec
CHANGED
@@ -10,6 +10,7 @@ Gem::Specification.new do |s|
   s.homepage = "https://github.com/joeyAghion/spidey"
   s.summary = %q{A loose framework for crawling and scraping web sites.}
   s.description = %q{A loose framework for crawling and scraping web sites.}
+  s.license = 'MIT'
 
   s.rubyforge_project = "spidey"
 
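The new `s.license = 'MIT'` declaration becomes part of the installed gem's specification, which is what produces the `licenses: - MIT` entry in the metadata diff below. A small illustrative check using the standard RubyGems API (the output comment assumes spidey 0.0.3 is installed):

    require 'rubygems'

    # Look up the installed spidey gem and print its declared licenses.
    spec = Gem::Specification.find_by_name('spidey')
    puts spec.licenses.inspect   # => ["MIT"] for 0.0.3; empty for 0.0.2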
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: spidey
 version: !ruby/object:Gem::Version
-  version: 0.0.2
+  version: 0.0.3
 prerelease:
 platform: ruby
 authors:
@@ -9,11 +9,11 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2012-06-
+date: 2012-06-27 00:00:00.000000000Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: rake
-  requirement: &
+  requirement: &70359848145740 !ruby/object:Gem::Requirement
     none: false
     requirements:
     - - ! '>='
@@ -21,10 +21,10 @@ dependencies:
         version: '0'
   type: :development
   prerelease: false
-  version_requirements: *
+  version_requirements: *70359848145740
 - !ruby/object:Gem::Dependency
   name: rspec
-  requirement: &
+  requirement: &70359848144760 !ruby/object:Gem::Requirement
     none: false
     requirements:
     - - ! '>='
@@ -32,10 +32,10 @@ dependencies:
         version: '0'
   type: :development
   prerelease: false
-  version_requirements: *
+  version_requirements: *70359848144760
 - !ruby/object:Gem::Dependency
   name: mechanize
-  requirement: &
+  requirement: &70359848143880 !ruby/object:Gem::Requirement
     none: false
     requirements:
     - - ! '>='
@@ -43,7 +43,7 @@ dependencies:
         version: '0'
   type: :runtime
   prerelease: false
-  version_requirements: *
+  version_requirements: *70359848143880
 description: A loose framework for crawling and scraping web sites.
 email:
 - joey@aghion.com
@@ -53,6 +53,7 @@ extra_rdoc_files: []
 files:
 - .gitignore
 - Gemfile
+- LICENSE.txt
 - README.md
 - Rakefile
 - lib/spidey.rb
@@ -62,7 +63,8 @@ files:
 - spec/spidey/abstract_spider_spec.rb
 - spidey.gemspec
 homepage: https://github.com/joeyAghion/spidey
-licenses: []
+licenses:
+- MIT
 post_install_message:
 rdoc_options: []
 require_paths:
@@ -75,7 +77,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
       version: '0'
       segments:
       - 0
-      hash:
+      hash: -1311415898264218872
 required_rubygems_version: !ruby/object:Gem::Requirement
   none: false
   requirements:
@@ -84,7 +86,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
       version: '0'
       segments:
       - 0
-      hash:
+      hash: -1311415898264218872
 requirements: []
 rubyforge_project: spidey
 rubygems_version: 1.8.10