libcraigscrape 0.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/COPYING.LESSER ADDED
@@ -0,0 +1,165 @@
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+ This version of the GNU Lesser General Public License incorporates
+ the terms and conditions of version 3 of the GNU General Public
+ License, supplemented by the additional permissions listed below.
+
+ 0. Additional Definitions.
+
+ As used herein, "this License" refers to version 3 of the GNU Lesser
+ General Public License, and the "GNU GPL" refers to version 3 of the GNU
+ General Public License.
+
+ "The Library" refers to a covered work governed by this License,
+ other than an Application or a Combined Work as defined below.
+
+ An "Application" is any work that makes use of an interface provided
+ by the Library, but which is not otherwise based on the Library.
+ Defining a subclass of a class defined by the Library is deemed a mode
+ of using an interface provided by the Library.
+
+ A "Combined Work" is a work produced by combining or linking an
+ Application with the Library. The particular version of the Library
+ with which the Combined Work was made is also called the "Linked
+ Version".
+
+ The "Minimal Corresponding Source" for a Combined Work means the
+ Corresponding Source for the Combined Work, excluding any source code
+ for portions of the Combined Work that, considered in isolation, are
+ based on the Application, and not on the Linked Version.
+
+ The "Corresponding Application Code" for a Combined Work means the
+ object code and/or source code for the Application, including any data
+ and utility programs needed for reproducing the Combined Work from the
+ Application, but excluding the System Libraries of the Combined Work.
+
+ 1. Exception to Section 3 of the GNU GPL.
+
+ You may convey a covered work under sections 3 and 4 of this License
+ without being bound by section 3 of the GNU GPL.
+
+ 2. Conveying Modified Versions.
+
+ If you modify a copy of the Library, and, in your modifications, a
+ facility refers to a function or data to be supplied by an Application
+ that uses the facility (other than as an argument passed when the
+ facility is invoked), then you may convey a copy of the modified
+ version:
+
+ a) under this License, provided that you make a good faith effort to
+ ensure that, in the event an Application does not supply the
+ function or data, the facility still operates, and performs
+ whatever part of its purpose remains meaningful, or
+
+ b) under the GNU GPL, with none of the additional permissions of
+ this License applicable to that copy.
+
+ 3. Object Code Incorporating Material from Library Header Files.
+
+ The object code form of an Application may incorporate material from
+ a header file that is part of the Library. You may convey such object
+ code under terms of your choice, provided that, if the incorporated
+ material is not limited to numerical parameters, data structure
+ layouts and accessors, or small macros, inline functions and templates
+ (ten or fewer lines in length), you do both of the following:
+
+ a) Give prominent notice with each copy of the object code that the
+ Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the object code with a copy of the GNU GPL and this license
+ document.
+
+ 4. Combined Works.
+
+ You may convey a Combined Work under terms of your choice that,
+ taken together, effectively do not restrict modification of the
+ portions of the Library contained in the Combined Work and reverse
+ engineering for debugging such modifications, if you also do each of
+ the following:
+
+ a) Give prominent notice with each copy of the Combined Work that
+ the Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the Combined Work with a copy of the GNU GPL and this license
+ document.
+
+ c) For a Combined Work that displays copyright notices during
+ execution, include the copyright notice for the Library among
+ these notices, as well as a reference directing the user to the
+ copies of the GNU GPL and this license document.
+
+ d) Do one of the following:
+
+ 0) Convey the Minimal Corresponding Source under the terms of this
+ License, and the Corresponding Application Code in a form
+ suitable for, and under terms that permit, the user to
+ recombine or relink the Application with a modified version of
+ the Linked Version to produce a modified Combined Work, in the
+ manner specified by section 6 of the GNU GPL for conveying
+ Corresponding Source.
+
+ 1) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (a) uses at run time
+ a copy of the Library already present on the user's computer
+ system, and (b) will operate properly with a modified version
+ of the Library that is interface-compatible with the Linked
+ Version.
+
+ e) Provide Installation Information, but only if you would otherwise
+ be required to provide such information under section 6 of the
+ GNU GPL, and only to the extent that such information is
+ necessary to install and execute a modified version of the
+ Combined Work produced by recombining or relinking the
+ Application with a modified version of the Linked Version. (If
+ you use option 4d0, the Installation Information must accompany
+ the Minimal Corresponding Source and Corresponding Application
+ Code. If you use option 4d1, you must provide the Installation
+ Information in the manner specified by section 6 of the GNU GPL
+ for conveying Corresponding Source.)
+
+ 5. Combined Libraries.
+
+ You may place library facilities that are a work based on the
+ Library side by side in a single library together with other library
+ facilities that are not Applications and are not covered by this
+ License, and convey such a combined library under terms of your
+ choice, if you do both of the following:
+
+ a) Accompany the combined library with a copy of the same work based
+ on the Library, uncombined with any other library facilities,
+ conveyed under the terms of this License.
+
+ b) Give prominent notice with the combined library that part of it
+ is a work based on the Library, and explaining where to find the
+ accompanying uncombined form of the same work.
+
+ 6. Revised Versions of the GNU Lesser General Public License.
+
+ The Free Software Foundation may publish revised and/or new versions
+ of the GNU Lesser General Public License from time to time. Such new
+ versions will be similar in spirit to the present version, but may
+ differ in detail to address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+ Library as you received it specifies that a certain numbered version
+ of the GNU Lesser General Public License "or any later version"
+ applies to it, you have the option of following the terms and
+ conditions either of that published version or of any later version
+ published by the Free Software Foundation. If the Library as you
+ received it does not specify a version number of the GNU Lesser
+ General Public License, you may choose any version of the GNU Lesser
+ General Public License ever published by the Free Software Foundation.
+
+ If the Library as you received it specifies that a proxy can decide
+ whether future versions of the GNU Lesser General Public License shall
+ apply, that proxy's public statement of acceptance of any version is
+ permanent authorization for you to choose that version for the
+ Library.
data/README ADDED
@@ -0,0 +1,69 @@
+ = libcraigscrape - A craigslist URL-scraping support library
+
+ An easy library to do the heavy lifting between you and Craigslist's posting database. Given a URL, libcraigscrape will
+ follow links, scrape fields, and make ruby-sense out of the raw html from craigslist's servers.
+
+ For more information, head to the {craigslist monitoring}[http://www.derosetechnologies.com/community/libcraigscrape] help section of our website.
+
+ == craigwatch
+ libcraigscrape was primarily developed to support the included craigwatch[link:files/bin/craigwatch.html] script. See the included craigwatch script for
+ examples of libcraigscrape in action, and (hopefully) to serve an immediate craigscraping need.
+
+ == Installation
+
+ Install via RubyGems:
+
+   sudo gem install libcraigscrape
+
+ == Usage
+
+ === Scrape Craigslist Listings since Apr 26
+
+ Using the search url http://miami.craigslist.org/search/sss?query=apple
+
+   require 'libcraigscrape'
+   require 'date'
+   require 'pp'
+
+   posts = CraigScrape.scrape_posts_since 'http://miami.craigslist.org/search/sss?query=apple', Time.parse('Apr 25')
+   posts.each do |post|
+     pp post
+   end
+
+ === Scrape Last 225 Craigslist Listings
+
+ Under the category url http://miami.craigslist.org/apa/
+
+   require 'libcraigscrape'
+   require 'pp'
+
+   posts = CraigScrape.scrape_posts 'http://miami.craigslist.org/apa/', 225
+   posts.each do |post|
+     pp post
+   end
+
+ === Scrape Single Craigslist Posting
+
+ This grabs the full details of the specific post http://miami.craigslist.org/mdc/sys/1140808860.html
+
+   require 'libcraigscrape'
+
+   post = CraigScrape.scrape_full_post 'http://miami.craigslist.org/mdc/sys/1140808860.html'
+   puts "(%s) %s:\n %s" % [ post.post_time.strftime('%b %d'), post.title, post.contents_as_plain ]
+
+ === Scrape Single Craigslist Listing
+
+ This grabs the post summaries of the single listing page at http://miami.craigslist.org/search/sss?query=laptop
+
+   require 'libcraigscrape'
+
+   listing = CraigScrape.scrape_listing 'http://miami.craigslist.org/search/sss?query=laptop'
+   puts 'Found %d posts for the search "laptop" on this page' % listing.posts.length
+
+ == Author
+ - Chris DeRose (cderose@derosetechnologies.com)
+ - DeRose Technologies, Inc. http://www.derosetechnologies.com
+
+ == License
+
+ See COPYING[link:files/COPYING.html]
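The post objects returned by the calls above expose the same accessors the bundled craigwatch script filters on: price and label on the summary posts, plus full_url and, on a full post, contents_as_plain. A minimal sketch of a client-side price filter built from those accessors; the Miami search URL and the $200 cutoff are placeholders, not part of the gem's own examples:

  require 'libcraigscrape'
  require 'time'

  # Pull everything posted since Apr 25, then keep only the posts that list a price under $200.
  posts = CraigScrape.scrape_posts_since 'http://miami.craigslist.org/search/sss?query=apple', Time.parse('Apr 25')
  bargains = posts.reject { |post| post.price.nil? or post.price >= 200 }

  bargains.each do |post|
    puts '%s - $%d (%s)' % [ post.label, post.price, post.full_url ]
  end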
data/Rakefile ADDED
@@ -0,0 +1,72 @@
+ require 'rake'
+ require 'rake/clean'
+ require 'rake/gempackagetask'
+ require 'rake/rdoctask'
+ require 'rake/testtask'
+ require 'fileutils'
+ include FileUtils
+
+ RbConfig = Config unless defined? RbConfig
+
+ NAME = "libcraigscrape"
+ VERS = ENV['VERSION'] || "0.5"
+ PKG = "#{NAME}-#{VERS}"
+
+ RDOC_OPTS = ['--quiet', '--title', 'The libcraigscrape Reference', '--main', 'README', '--inline-source']
+ PKG_FILES = %w(CHANGELOG COPYING COPYING.LESSER Rakefile) + Dir.glob("{bin,test,lib}/**/*")
+
+ SPEC =
+   Gem::Specification.new do |s|
+     s.name = NAME
+     s.version = VERS
+     s.platform = Gem::Platform::RUBY
+     s.has_rdoc = true
+     s.bindir = 'bin'
+     s.executables = 'craigwatch'
+     s.rdoc_options += RDOC_OPTS
+     s.extra_rdoc_files = ['README', 'CHANGELOG', 'COPYING', 'COPYING.LESSER', 'bin/craigwatch']
+     s.summary = "quick, easy, craigslist parsing library that takes the monotony out of working with craigslist posts and listings"
+     s.description = s.summary
+     s.author = "Chris DeRose, DeRose Technologies, Inc."
+     s.email = 'cderose@derosetechnologies.com'
+     s.homepage = 'http://www.derosetechnologies.com/community/libcraigscrape'
+     s.rubyforge_project = 'libcraigwatch'
+     s.files = PKG_FILES
+     s.require_paths = ["lib"]
+     s.test_files = FileList['test/test_*.rb']
+     s.add_dependency 'hpricot'
+     s.add_dependency 'htmlentities'
+     s.add_dependency 'activesupport'
+   end
+
+ desc "Run all the tests"
+ Rake::TestTask.new do |t|
+   t.libs << "test"
+   t.test_files = FileList['test/test_*.rb']
+   t.verbose = true
+ end
+
+ Rake::RDocTask.new do |rdoc|
+   rdoc.rdoc_dir = 'doc/rdoc'
+   rdoc.options += RDOC_OPTS
+   rdoc.main = "README"
+   rdoc.rdoc_files.add ['README', 'CHANGELOG', 'COPYING', 'COPYING.LESSER', 'bin/craigwatch', 'lib/**/*.rb']
+ end
+
+ Rake::GemPackageTask.new(SPEC) do |p|
+   p.need_tar = true
+   p.gem_spec = SPEC
+ end
+
+ task "lib" do
+   directory "lib"
+ end
+
+ task :install do
+   sh %{rake package}
+   sh %{sudo gem install pkg/#{NAME}-#{VERS}}
+ end
+
+ task :uninstall => [:clean] do
+   sh %{sudo gem uninstall #{NAME}}
+ end
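Since the gemspec above is wired into Rake::GemPackageTask, building and installing a local copy comes down to the tasks this Rakefile already defines; a rough sketch of the expected invocations, with the package filename following the NAME and VERS constants above:

  rake package      # writes pkg/libcraigscrape-0.5.gem (and a .tgz, since need_tar is set)
  rake install      # runs `rake package`, then `sudo gem install pkg/libcraigscrape-0.5`
  rake uninstall    # cleans the workspace, then runs `sudo gem uninstall libcraigscrape`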
data/bin/craig_report_schema.yml ADDED
@@ -0,0 +1,57 @@
+ type: map
+ class: CraigReportDefinition
+ mapping:
+   "debug_database": { type: bool, required: no }
+   "debug_mailer": { type: bool, required: no }
+   "debug_craigscrape": { type: bool, required: no }
+
+   "report_name": { type: str, required: yes }
+   "email_to": { type: str, required: yes }
+   "email_from": { type: str, required: no }
+   "smtp_settings":
+     type: map
+     required: no
+     mapping:
+       "address": { type: str, required: yes }
+       "port": { type: int, required: no, default: 25 }
+       "user_name": { type: str, required: no }
+       "domain": { type: str, required: no }
+       "password": { type: str, required: no }
+       "authentication": { type: str, required: no }
+   "tracking_database":
+     type: map
+     mapping:
+       "adapter": { type: str, required: yes }
+       "dbfile": { type: str, required: no }
+       "host": { type: str, required: no }
+       "username": { type: str, required: no }
+       "password": { type: str, required: no }
+       "socket": { type: str, required: no }
+       "database": { type: str, required: no }
+   "searches":
+     type: seq
+     sequence:
+       - type: map
+         class: CraigReportDefinition::SearchDefinition
+         mapping:
+           "name": {type: str, required: yes, unique: yes}
+           "has_image": {type: bool, required: no}
+           "newest_first": {type: bool, required: no, default: no}
+           "price_required": {type: bool, required: no, default: no}
+           "price_greater_than": {type: int, required: no}
+           "price_less_than": {type: int, required: no}
+           "full_post_has": {type: seq, required: no, sequence: [ {type: str, unique: yes} ]}
+           "full_post_has_no": {type: seq, required: no, sequence: [ {type: str, unique: yes} ]}
+           "summary_post_has": {type: seq, required: no, sequence: [ {type: str, unique: yes} ]}
+           "summary_post_has_no": {type: seq, required: no, sequence: [ {type: str, unique: yes} ]}
+           "listing":
+             type: seq
+             required: yes
+             sequence:
+               - type: str
+                 pattern: /^http[s]?\:\/\//
+                 unique: yes
+           "starting":
+             type: str
+             required: no
+             pattern: /^[\d]{1,2}\/[\d]{1,2}\/(?:[\d]{2}|[\d]{4})$/
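Read together with the craigwatch documentation below, the schema boils down to a few required keys: report_name, email_to, and, for each entry under searches, a unique name plus at least one http(s) listing URL; everything else is optional as far as validation is concerned. A minimal definition that should pass this schema, with placeholder addresses and paths:

  report_name: Minimal Craig Watch on %D
  email_to: you@example.local
  tracking_database: { adapter: sqlite3, dbfile: /tmp/craigwatch_minimal.db }
  searches:
    - name: Laptops in Miami
      listing: [ http://miami.craigslist.org/sss/ ]
      starting: 5/2/2009
      summary_post_has: [ /laptop/i ]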
data/bin/craigwatch ADDED
@@ -0,0 +1,374 @@
+ #!/usr/bin/ruby
+ #
+ # =craigwatch - An email-based "post monitoring" solution
+ #
+ # Created alongside the libcraigscrape library, craigwatch was designed to take the monotony out of regular
+ # craigslist monitoring. craigwatch is designed to be run at periodic intervals (hourly/daily/etc) through crontab
+ # and report all new postings within a listing or search url, since its last run, by email.
+ #
+ # For more information, head to the {craigslist monitoring}[http://www.derosetechnologies.com/community/libcraigscrape] help section of our website.
+ #
+ # == Features
+ # In addition to its report tracking, craigwatch offers many post search and filtering options that provide much improved
+ # and more accurate results than craigslist's own search functions. Post filtering options include:
+ # - has_image - yes/no
+ # - price_required - yes/no
+ # - price_greater_than - (int)
+ # - price_less_than - (int)
+ # - full_post_has - (string or regexp) Only posts whose full-post contents contain/match
+ # - full_post_has_no - (string or regexp) Only posts whose full-post contents don't contain/match
+ # - summary_post_has - (string or regexp) Only posts whose listing label contains/matches
+ # - summary_post_has_no - (string or regexp) Only posts whose listing label doesn't contain/match
+ #
+ # Multiple searches can be combined into a single report, and results can be sorted by newest-first or oldest-first (default)
+ #
+ # Reporting output is easily customized html, handled by ActionMailer, and emails can be delivered via smtp or sendmail.
+ # Database tracking of already-delivered posts is handled by ActiveRecord, and its driver-agnostic SQL supports all the
+ # major backends (sqlite/mysql/postgres/probably-all-others). Database sizes are contained by automatically pruning old results
+ # that are no longer required at the end of each run.
+ #
+ # Pretty useful, no?
+ #
+ # == Installation
+ # craigwatch is coupled with libcraigscrape, and is installed via rubygems. However, since we focused on keeping the
+ # libcraigscrape download 'lightweight', some additional gems need to be installed in addition to the initial libcraigscrape
+ # gem itself.
+ #
+ # This should take care of the craigwatch install on all systems:
+ #   sudo gem install libcraigscrape kwalify activerecord actionmailer
+ # Alternatively, if you've already installed libcraigscrape and want to start working with craigwatch:
+ #   sudo gem install kwalify activerecord actionmailer
+ #
+ # This script was initially developed with activerecord 2.3, actionmailer 2.3 and kwalify 0.7, but will likely work with most
+ # prior and future versions of these libraries.
+ #
+ # == Usage
+ # When craigwatch is invoked, it is designed to run a single report and then terminate. There is only one parameter to craigwatch, and
+ # this parameter is the path to a valid report-definition yml file, i.e.:
+ #   craigwatch johns_daily_watch.yml
+ #
+ # There is an included kwalify schema which can validate your definition files, but craigwatch will automatically do so at startup.
+ # Probably the best way to understand the report definition files is to look at the annotated sample file below, and use it as a
+ # starting point for your own.
+ #
+ # By default there is no program output; however, setting any of the following parameters to 'yes' in your definition file will turn on
+ # useful debugging/logging output:
+ # - debug_database
+ # - debug_mailer
+ # - debug_craigscrape
+ #
+ # == Definition File Sample
+ #
+ # Here's a simple annotated report which uses most of the available craigwatch features:
+ #
+ #   # The report_name is fed into Time.now.strftime, hence the formatting characters
+ #   report_name: Craig Watch For Johnathan on %D at %I:%M %p
+ #
+ #   email_to: Johnathan Peabody <john@example.local>
+ #
+ #   # This is sent straight into ActiveRecord, so there's plenty of options available here. The following is an easy
+ #   # default sqlite store that should work on most any system with minimal overhead
+ #   tracking_database: { adapter: sqlite3, dbfile: /home/john/john_cwatch_report.db }
+ #
+ #   searches:
+ #     # Search #1:
+ #     - name: Schwinn Bikes For Sale in/near New York
+ #
+ #       # Scrape the following listings pages:
+ #       listing:
+ #         - http://newyork.craigslist.org/bik/
+ #         - http://newyork.craigslist.org/jsy/bik/
+ #       # This starting date is mostly for the first run, and gives us a reasonable cut-off point from which to build
+ #       starting: 5/2/2009
+ #
+ #       # We want listings with Schwinn in the summary
+ #       summary_post_has: [ /schwinn/i ]
+ #
+ #       # We're only interested in adult bikes, so scrap any results that mention children or kids
+ #       full_post_has_no: [ /(children|kids)/i ]
+ #
+ #       # Oh, and we're on a budget:
+ #       price_less_than: 120
+ #
+ #     # Search #2
+ #     - name: Large apartment rentals in San Francisco
+ #       # We're going to rely on craigslist's built-in search for this one since there's a lot of listings, and we
+ #       # want to conserve some bandwidth
+ #       listing: [ http://sfbay.craigslist.org/search/apa?query=pool&minAsk=min&maxAsk=max&bedrooms=5 ]
+ #       starting: 5/2/2009
+ #       # We'll require a price to be listed, 'cause it keeps out some of the unwanted fluff
+ #       price_required: yes
+ #
+ #       # Hopefully this will keep us away from a bad part of town:
+ #       price_greater_than: 1000
+ #
+ #       # Since we don't have time to drive to each location, we'll require only listings with pictures
+ #       has_image: yes
+ #
+ # == Author
+ # - Chris DeRose (cderose@derosetechnologies.com)
+ # - DeRose Technologies, Inc. http://www.derosetechnologies.com
+ #
+ # == License
+ #
+ # See COPYING[link:files/COPYING.html]
+ #
+ $: << File.dirname(__FILE__) + '/../lib'
+
+ require 'rubygems'
+ require 'kwalify'
+ require 'kwalify/util/hashlike'
+ require 'active_record'
+ require 'action_mailer'
+ require 'libcraigscrape'
+ require "socket"
+
+ class String #:nodoc:
+   RE = /^\/(.*)\/([ixm]*)$/
+
+   def is_re?
+     (RE.match self) ? true : false
+   end
+
+   def to_re
+     source, options = ( RE.match(self) )? [$1, $2] : [self,nil]
+     mods = 0
+
+     options.each_char do |c|
+       mods |= case c
+         when 'i': Regexp::IGNORECASE
+         when 'x': Regexp::EXTENDED
+         when 'm': Regexp::MULTILINE
+       end
+     end unless options.nil? or options.empty?
+
+     Regexp.new source, mods
+   end
+ end
+
+ class CraigReportDefinition #:nodoc:
+   include Kwalify::Util::HashLike
+
+   attr_reader :report_name, :email_to, :email_from, :tracking_database, :searches, :smtp_settings
+
+   def debug_database?; @debug_database; end
+   def debug_mailer?; @debug_mailer; end
+   def debug_craigscrape?; @debug_craigscrape; end
+
+   def each_search(&block); searches.each &block; end
+
+   def email_from
+     (@email_from) ? @email_from : ('%s@%s' % [ENV['USER'], Socket.gethostname])
+   end
+
+   class SearchDefinition #:nodoc:
+     include Kwalify::Util::HashLike
+
+     attr_reader :name, :listing
+     attr_reader :full_post_has, :full_post_has_no
+     attr_reader :summary_post_has, :summary_post_has_no
+
+     attr_reader :price_greater_than, :price_less_than
+
+     def has_image?; @has_image; end
+     def newest_first?; @newest_first; end
+     def price_required?; @price_required; end
+
+     def starting_at
+       (@starting) ?
+         Time.parse(@starting) :
+         Time.now.yesterday.beginning_of_day
+     end
+
+     def passes_filter?(post)
+       if post.price.nil?
+         return false if price_required?
+       else
+         return false if @price_greater_than and post.price <= @price_greater_than
+         return false if @price_less_than and post.price >= @price_less_than
+       end
+
+       return false unless matches_all? summary_post_has, post.label
+       return false unless doesnt_match_any? summary_post_has_no, post.label
+
+       if full_post_has or full_post_has_no
+         # We're going to download the page, so let's make sure we didn't hit a "This posting has been flagged for removal"
+         return false if post.full_post.title.nil?
+
+         return false unless matches_all? full_post_has, post.full_post.contents_as_plain
+         return false unless doesnt_match_any? full_post_has_no, post.full_post.contents_as_plain
+       end
+
+       true
+     end
+
+     private
+
+     def matches_all?(conditions, against)
+       (conditions.nil? or conditions.all?{|c| match_against c, against}) ? true : false
+     end
+
+     def doesnt_match_any?(conditions, against)
+       (conditions.nil? or conditions.all?{|c| !match_against c, against}) ? true : false
+     end
+
+     def match_against(condition, against)
+       (against.scan( condition.is_re? ? condition.to_re : condition).length > 0) ? true : false
+     end
+   end
+ end
+
+ class TrackedSearch < ActiveRecord::Base #:nodoc:
+   has_many :tracked_posts, :dependent => :destroy
+   validates_uniqueness_of :search_name
+   validates_presence_of :search_name
+
+   def already_tracked?(url)
+     ( self.tracked_posts.find :first, :conditions => ['url = ?', url]) ? true : false
+   end
+
+   def last_tracked_at
+     self.tracked_posts.maximum 'created_at'
+   end
+ end
+
+ class TrackedPost < ActiveRecord::Base #:nodoc:
+   belongs_to :tracked_search
+   validates_presence_of :url, :tracked_search_id
+   validates_uniqueness_of :url, :scope => :tracked_search_id
+ end
+
+ class ReportMailer < ActionMailer::Base #:nodoc:
+   def report(to, sender, subject_template, report_tmpl)
+
+     formatted_subject = Time.now.strftime(subject_template)
+
+     recipients to
+     from sender
+     subject formatted_subject
+
+     generate_view_parts 'craigslist_report', report_tmpl.merge({:subject => formatted_subject})
+   end
+
+   def generate_view_parts(view_name, tmpl)
+     part( :content_type => "multipart/alternative" ) do |p|
+       [
+         { :content_type => "text/plain", :body => render_message("#{view_name.to_s}.plain.erb", tmpl) },
+         { :content_type => "text/html", :body => render_message("#{view_name.to_s}.html.erb", tmpl.merge({:part_container => p})) }
+       ].each { |parms| p.part parms.merge( { :charset => "UTF-8", :transfer_encoding => "7bit" } ) }
+     end
+   end
+ end
+
+ #############
+
+ # Let's start our program now:
+ report_definition_file = ARGV[0] if ARGV[0] and File.readable?(ARGV[0])
+
+ unless report_definition_file
+   puts <<EOD
+ Usage:
+   #{File.basename($0)} [report_definition_file]
+
+ Run 'gem server' and browse the libcraigscrape rdoc for 'bin/craigwatch' for specific usage details.
+ EOD
+   exit
+ end
+
+ # Validate/Parse our input file:
+ parser = Kwalify::Yaml::Parser.new(
+   Kwalify::Validator.new(
+     Kwalify::Yaml.load_file(File.dirname(__FILE__)+'/craig_report_schema.yml')
+   )
+ )
+ parser.data_binding = true
+
+ craig_report = parser.parse_file report_definition_file
+
+ parser.errors.each do |e|
+   puts "Definition Validation Error (line #{e.linenum}, char #{e.column}): #{e.message}"
+ end and exit if parser.errors.length > 0
+
+ # Initialize Action Mailer:
+ ActionMailer::Base.logger = Logger.new STDERR if craig_report.debug_mailer?
+ if craig_report.smtp_settings
+   ReportMailer.smtp_settings = craig_report.smtp_settings
+ else
+   ReportMailer.delivery_method = :sendmail
+ end
+ ReportMailer.template_root = File.dirname __FILE__
+
+ # Initialize the database:
+ ActiveRecord::Base.logger = Logger.new STDERR if craig_report.debug_database?
+ ActiveRecord::Base.establish_connection craig_report.tracking_database
+
+ # Initialize CraigScrape (sorta)
+ CraigScrape.logger = Logger.new STDERR if craig_report.debug_craigscrape?
+
+ # Perform migrations if needed:
+ ActiveRecord::Schema.define do
+   suppress_messages do
+     create_table :tracked_searches do |t|
+       t.column :search_name, :string
+     end unless table_exists? :tracked_searches
+
+     create_table :tracked_posts do |t|
+       t.column :url, :string
+       t.column :tracked_search_id, :integer
+       t.column :created_at, :date
+     end unless table_exists? :tracked_posts
+   end
+ end
+
+ # Now let's run a report:
+ report_summaries = []
+
+ craig_report.each_search do |search|
+
+   # Load our tracking info
+   search_track = TrackedSearch.find :first, :conditions => ['search_name = ?', search.name]
+
+   # No tracking found - let's set one up:
+   search_track = TrackedSearch.create! :search_name => search.name unless search_track
+
+   last_tracked_at = (search_track.last_tracked_at) ? search_track.last_tracked_at : search.starting_at
+
+   # Let's collect all the summaries that could apply:
+   new_summaries = {}
+   search.listing.each do |listing|
+     CraigScrape.scrape_posts_since(listing, last_tracked_at).each do |p_s|
+       new_summaries[p_s.full_url] = p_s unless new_summaries.has_key? p_s.full_url or search_track.already_tracked? p_s.full_url
+     end
+   end
+
+   # Let's flatten the de-duplicated hash into a more usable array:
+   new_summaries = new_summaries.values.sort{|a,b| a.date <=> b.date} # oldest first, newest at the bottom
+
+   # Reject anything which doesn't match the has/has_no filters:
+   new_summaries.reject!{|s| !search.passes_filter? s }
+
+   # Now let's manage the tracking database:
+   if new_summaries.length > 0
+     # Insert all the relevant, newest tracked posts:
+     # NOTE: Since all the dates are at beginning_of_day, we'll effectively have a chunk of dates tied for latest
+     new_summaries.reject{|p| p.date < new_summaries.last.date}.each do |p_s|
+       search_track.tracked_posts.create! :url => p_s.full_url, :created_at => p_s.date
+     end
+
+     # Remove all tracked posts older than we could need:
+     TrackedPost.delete_all ['tracked_search_id = ? AND created_at < ?', search_track.id, new_summaries.last.date ]
+
+     new_summaries.reverse! if search.newest_first?
+
+     # We'll want to email these...
+     report_summaries << { :postings => new_summaries, :search => search }
+   end
+ end
+
+ # Time to send the email:
+ ReportMailer.deliver_report(
+   craig_report.email_to,
+   craig_report.email_from,
+   craig_report.report_name,
+   {:summaries => report_summaries, :definition => craig_report}
+ ) if report_summaries.length > 0
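Per the usage notes above, craigwatch is meant to be fired from cron and left alone between runs; a sketch of an hourly crontab entry, assuming the gem's craigwatch executable is on the PATH and that the definition file path is just a placeholder:

  # m h dom mon dow  command
  0 * * * *  craigwatch /home/john/johns_daily_watch.yml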