scraper_utils 0.2.0 → 0.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: b4291b6994419c04851935fe4aa4e047eb4069cab3fecf451bf65f8e91acb48d
-   data.tar.gz: 2e3a657ce230f9c6bc9defe042cf7babb9e52e2130d32f0ec8312571f5dcb26a
+   metadata.gz: 9443f89de2518d12830ddcd3d4d5e246c77a35b34e6549031a2f61a454c9c96c
+   data.tar.gz: 693aeaeb896c9779135f3c703f86aa436a3d9b10715448cc18c96c590ddf3723
  SHA512:
-   metadata.gz: 51c29aea77f43a8c7de8e874a2601b4e0b9e9c36ae512180f10dcd182b2ecc899cc08944face686e0f993b02338975672d9eaf06d8a0185ee222cfc263993244
-   data.tar.gz: ec63009a4f10677a8e9500b5ca15c68b432081fa995d5bee2eda2b2cff88cdb7090595be7ad64a0dfeacf09decbc292313b47f9f8af795a3b95c718c77f59339
+   metadata.gz: 38d23ede40ee6313c0c8098b34d1258829c2842e2b43fe39fd156503d0c387fc59929fec740205ffc232cafdfe095932a62f937b3aded2ab7b5990dc889615d7
+   data.tar.gz: 7e9b7da5194265fdd77a8136202bf11a3293a2dda0b2f5d00ffaf7503872079233889b80d81ab2267eccbd9c570b56e8778b9e4fce28cee9e3a2657788f02334
data/.gitignore CHANGED
@@ -9,6 +9,9 @@
  /test/tmp/
  /test/version_tmp/

+ # Ignore log files
+ /log/
+
  # Temp files
  ,*
  *.bak
data/.rubocop.yml CHANGED
@@ -1,3 +1,7 @@
+ plugins:
+   - rubocop-rake
+   - rubocop-rspec
+
  AllCops:
    NewCops: enable

data/CHANGELOG.md CHANGED
@@ -1,5 +1,26 @@
  # Changelog

+ ## 0.4.0 - 2025-03-04
+
+ * Add Cycle Utils as an alternative to Date range utils
+ * Update README.md with changed defaults
+
+ ## 0.3.0 - 2025-03-04
+
+ * Add date range utils
+ * Flush $stdout and $stderr when logging to sync exception output and logging lines
+ * Break out example code from README.md into docs dir
+
+ ## 0.2.1 - 2025-02-28
+
+ Fixed broken v0.2.0
+
+ ## 0.2.0 - 2025-02-28
+
+ Added FiberScheduler, enabled compliant mode with delays by default, and simplified usage by removing the third retry without a proxy
+
  ## 0.1.0 - 2025-02-23

- `First release for development`
+ First release for development
+
+
data/Gemfile CHANGED
@@ -22,12 +22,15 @@ gem "sqlite3", platform && (platform == :heroku16 ? "~> 1.4.0" : "~> 1.6.3")
  gem "scraperwiki", git: "https://github.com/openaustralia/scraperwiki-ruby.git",
      branch: "morph_defaults"

- # development and test test gems
+ # development and test gems
  gem "rake", platform && (platform == :heroku16 ? "~> 12.3.3" : "~> 13.0")
  gem "rspec", platform && (platform == :heroku16 ? "~> 3.9.0" : "~> 3.12")
- gem "rubocop", platform && (platform == :heroku16 ? "~> 0.80.0" : "~> 1.57")
+ gem "rubocop", platform && (platform == :heroku16 ? "~> 1.28.2" : "~> 1.73")
+ gem "rubocop-rake", platform && (platform == :heroku16 ? "~> 0.6.0" : "~> 0.7")
+ gem "rubocop-rspec", platform && (platform == :heroku16 ? "~> 2.10.0" : "~> 3.5")
  gem "simplecov", platform && (platform == :heroku16 ? "~> 0.18.0" : "~> 0.22.0")
  gem "simplecov-console"
+ gem "terminal-table"
  gem "webmock", platform && (platform == :heroku16 ? "~> 3.14.0" : "~> 3.19.0")

  gemspec
data/README.md CHANGED
@@ -13,11 +13,12 @@ our scraper accessing your systems, here's what you should know:

  ### How to Control Our Behavior

- Our scraper utilities respect the standard server **robots.txt** control mechanisms (by default). To control our access:
+ Our scraper utilities respect the standard server **robots.txt** control mechanisms (by default).
+ To control our access (a combined example follows this list):

  - Add a section for our user agent: `User-agent: ScraperUtils` (default)
- - Set a crawl delay: `Crawl-delay: 5`
- - If needed specify disallowed paths: `Disallow: /private/`
+ - Set a crawl delay, e.g. `Crawl-delay: 20`
+ - If needed specify disallowed paths: `Disallow: /private/`
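Putting the three directives above together, a robots.txt entry aimed at this scraper might look like the following sketch (the delay and path values are just the illustrative ones from the list, not recommendations):

```
User-agent: ScraperUtils
Crawl-delay: 20
Disallow: /private/
```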
 
  ### Built-in Politeness Features

@@ -26,13 +27,24 @@ Even without specific configuration, our scrapers will, by default:
  - **Identify themselves**: Our user agent clearly indicates who we are and provides a link to the project repository:
    `Mozilla/5.0 (compatible; ScraperUtils/0.2.0 2025-02-22; +https://github.com/ianheggie-oaf/scraper_utils)`

- - **Limit server load**: We introduce delays to avoid undue load on your server's by default based on your response
-   time.
-   The slower your server is running, the longer the delay we add between requests to help you.
-   In the default "compliant mode" this defaults to 20% and custom settings are capped at 33% maximum.
+ - **Limit server load**: We slow down our requests so we should never be a significant load on your server, let alone
+   overload it.
+   The slower your server is running, the longer the delay we add between requests to help.
+   In the default "compliant mode" this defaults to a max load of 20% and is capped at 33% (a rough worked example follows this list).

- - **Add randomized delays**: We add random delays between requests to avoid creating regular traffic patterns that might
-   impact server performance (enabled by default).
+ - **Add randomized delays**: We add random delays between requests to further reduce our impact on servers, which should
+   bring us down to the load of a single industrious person.
+
+ Extra utilities provided for scrapers to further reduce your server load:
+
+ - **Interleave requests**: This spreads out the requests to your server rather than focusing on one scraper at a time.
+
+ - **Intelligent Date Range selection**: This reduces server load by over 60% via a smarter choice of date range searches,
+   checking the most recent 4 days each day and reducing to checking every 3 days by the end of the 33-day window. This
+   replaces the simplistic check of the last 30 days each day.
+
+ - Alternative **Cycle Utilities** - a convenience class to cycle through short and longer search ranges to reduce server
+   load.
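As promised above, here is a rough worked example of what the "max load" percentage means in practice. This is illustrative arithmetic only, not the gem's actual code: if a request ties up your server for a given response time, the delay is chosen so our requests use at most that percentage of the server's time.

```ruby
# Illustrative sketch of the "max load" idea - not ScraperUtils' implementation.
def approximate_delay(response_time_seconds, max_load_percent)
  response_time_seconds * (100.0 - max_load_percent) / max_load_percent
end

approximate_delay(1.0, 20) # => 4.0   (1s response => ~4s wait, roughly 20% load)
approximate_delay(1.0, 33) # => ~2.03 (1s response => ~2s wait, roughly 33% load)
```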
 
  Our goal is to access public planning information without negatively impacting your services.

@@ -50,10 +62,6 @@ And then execute:

      $ bundle

- Or install it yourself for testing:
-
-     $ gem install scraper_utils
-
  Usage
  -----

@@ -101,12 +109,12 @@ export DEBUG=1
  Add `client_options` to your AUTHORITIES configuration and move any of the following settings into it:

  * `timeout: Integer` - Timeout for agent connections in case the server is slower than normal
- * `australian_proxy: true` - Use the MORPH_AUSTRALIAN_PROXY as proxy url if the site is geo-locked
+ * `australian_proxy: true` - Use the proxy URL in the `MORPH_AUSTRALIAN_PROXY` env variable if the site is geo-locked
  * `disable_ssl_certificate_check: true` - Disables SSL verification for old / incorrect certificates

  See the documentation on `ScraperUtils::MechanizeUtils::AgentConfig` for more options

- Then adjust your code to accept client_options and pass then through to:
+ Then adjust your code to accept `client_options` and pass them through to:
  `ScraperUtils::MechanizeUtils.mechanize_agent(client_options || {})`
  to receive a `Mechanize::Agent` configured accordingly.
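As a sketch of that change (the authority key, URL and `scrape_period` signature below are illustrative placeholders, not something the gem prescribes):

```ruby
AUTHORITIES = {
  # hypothetical authority entry - agent settings now live under :client_options
  some_council: {
    url: "https://example.com/planning",
    client_options: {
      timeout: 90,
      australian_proxy: true
    }
  }
}.freeze

# Accept client_options and pass them through, as described above
def scrape_period(url:, client_options: {}, **_rest)
  agent = ScraperUtils::MechanizeUtils.mechanize_agent(client_options || {})
  page = agent.get(url)
  # ... parse `page` and yield records as before ...
end
```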
 
@@ -115,130 +123,47 @@ The agent returned is configured using Mechanize hooks to implement the desired
  ### Default Configuration

  By default, the Mechanize agent is configured with the following settings.
+ As you can see, the defaults can be changed using env variables.
+
+ Note - compliant mode forces max_load to be set to a value no greater than 50.

  ```ruby
  ScraperUtils::MechanizeUtils::AgentConfig.configure do |config|
-   config.default_timeout = 60
-   config.default_compliant_mode = true
-   config.default_random_delay = 3
-   config.default_max_load = 20 # percentage
-   config.default_disable_ssl_certificate_check = false
-   config.default_australian_proxy = false
+   config.default_timeout = ENV.fetch('MORPH_TIMEOUT', 60).to_i # 60
+   config.default_compliant_mode = ENV.fetch('MORPH_NOT_COMPLIANT', nil).to_s.empty? # true
+   config.default_random_delay = ENV.fetch('MORPH_RANDOM_DELAY', 5).to_i # 5
+   config.default_max_load = ENV.fetch('MORPH_MAX_LOAD', 33.3).to_f # 33.3
+   config.default_disable_ssl_certificate_check = !ENV.fetch('MORPH_DISABLE_SSL_CHECK', nil).to_s.empty? # false
+   config.default_australian_proxy = !ENV.fetch('MORPH_USE_PROXY', nil).to_s.empty? # false
+   config.default_user_agent = ENV.fetch('MORPH_USER_AGENT', nil) # Uses Mechanize user agent
  end
  ```

  You can modify these global defaults before creating any Mechanize agents. These settings will be used for all Mechanize
  agents created by `ScraperUtils::MechanizeUtils.mechanize_agent` unless overridden by passing parameters to that method.
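For instance, a sketch of tweaking one global default and overriding another for a single agent; the per-agent keyword arguments are assumed to mirror the config names above, so check the `AgentConfig` documentation for the exact options:

```ruby
# Tighten one global default before any agents are created
ScraperUtils::MechanizeUtils::AgentConfig.configure do |config|
  config.default_timeout = 30
end

# Assumed per-agent override using the same option names (see AgentConfig docs)
agent = ScraperUtils::MechanizeUtils.mechanize_agent(timeout: 90, australian_proxy: true)
```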
 
- ### Example updated `scraper.rb` file
-
- Update your `scraper.rb` as per the following example for basic utilities:
+ To speed up testing, set the following in `spec_helper.rb`:

  ```ruby
- #!/usr/bin/env ruby
- # frozen_string_literal: true
-
- $LOAD_PATH << "./lib"
-
- require "scraper_utils"
- require "technology_one_scraper"
-
- # Main Scraper class
- class Scraper
-   AUTHORITIES = YourScraper::AUTHORITIES
-
-   # ADD: attempt argument
-   def scrape(authorities, attempt)
-     exceptions = {}
-     # ADD: Report attempt number
-     authorities.each do |authority_label|
-       puts "\nCollecting feed data for #{authority_label}, attempt: #{attempt}..."
-
-       begin
-         # REPLACE:
-         # YourScraper.scrape(authority_label) do |record|
-         #   record["authority_label"] = authority_label.to_s
-         #   YourScraper.log(record)
-         #   ScraperWiki.save_sqlite(%w[authority_label council_reference], record)
-         # end
-         # WITH:
-         ScraperUtils::DataQualityMonitor.start_authority(authority_label)
-         YourScraper.scrape(authority_label) do |record|
-           begin
-             record["authority_label"] = authority_label.to_s
-             ScraperUtils::DbUtils.save_record(record)
-           rescue ScraperUtils::UnprocessableRecord => e
-             ScraperUtils::DataQualityMonitor.log_unprocessable_record(e, record)
-             exceptions[authority_label] = e
-           end
-         end
-         # END OF REPLACE
-       end
-     rescue StandardError => e
-       warn "#{authority_label}: ERROR: #{e}"
-       warn e.backtrace
-       exceptions[authority_label] = e
-     end
-
-     exceptions
-   end
-
-   def self.selected_authorities
-     ScraperUtils::AuthorityUtils.selected_authorities(AUTHORITIES.keys)
-   end
-
-   def self.run(authorities)
-     puts "Scraping authorities: #{authorities.join(', ')}"
-     start_time = Time.now
-     exceptions = scrape(authorities, 1)
-     # Set start_time and attempt to the call above and log run below
-     ScraperUtils::LogUtils.log_scraping_run(
-       start_time,
-       1,
-       authorities,
-       exceptions
-     )
-
-     unless exceptions.empty?
-       puts "\n***************************************************"
-       puts "Now retrying authorities which earlier had failures"
-       puts exceptions.keys.join(", ").to_s
-       puts "***************************************************"
-
-       start_time = Time.now
-       exceptions = scrape(exceptions.keys, 2)
-       # Set start_time and attempt to the call above and log run below
-       ScraperUtils::LogUtils.log_scraping_run(
-         start_time,
-         2,
-         authorities,
-         exceptions
-       )
-     end
-
-     # Report on results, raising errors for unexpected conditions
-     ScraperUtils::LogUtils.report_on_results(authorities, exceptions)
-   end
+ ScraperUtils::MechanizeUtils::AgentConfig.configure do |config|
+   config.default_random_delay = nil
+   config.default_max_load = 33
  end
+ ```

- if __FILE__ == $PROGRAM_NAME
-   # Default to list of authorities we can't or won't fix in code, explain why
-   # wagga: url redirects and then reports Application error
+ ### Example updated `scraper.rb` file

-   ENV["MORPH_EXPECT_BAD"] ||= "wagga"
-   Scraper.run(Scraper.selected_authorities)
- end
- ```
+ Update your `scraper.rb` as per the [example scraper](docs/example_scraper.rb).

  Your code should raise ScraperUtils::UnprocessableRecord when there is a problem with the data presented on a page for a
  record.
  Then just before you would normally yield a record for saving, rescue that exception and:

- * Call ScraperUtils::DataQualityMonitor.log_unprocessable_record(e, record)
+ * Call `ScraperUtils::DataQualityMonitor.log_unprocessable_record(e, record)`
  * NOT yield the record for saving

  In your code, update where you create a Mechanize agent (often `YourScraper.scrape_period`) and the `AUTHORITIES` hash
- to move mechanize_agent options (like `australian_proxy` and `timeout`) to a hash under a new key: `client_options`.
+ to move Mechanize agent options (like `australian_proxy` and `timeout`) to a hash under a new key: `client_options`.
  For example:

  ```ruby
@@ -297,44 +222,97 @@ The `ScraperUtils::FiberScheduler` provides a lightweight utility that:
  * thus optimizing the total scraper run time
  * allows you to increase the random delay for authorities without undue effect on total run time
  * For the curious, it uses [ruby fibers](https://ruby-doc.org/core-2.5.8/Fiber.html) rather than threads as that is
-   simpler to get right and debug!
+   a simpler system and thus easier to get right, understand and debug!
+ * Cycles around the authorities when compliant_mode, max_load and random_delay are disabled

- To enable change the scrape method in the example above to;
+ To enable, change the scrape method to be like the [example scrape method using fibers](docs/example_scrape_with_fibers.rb).

- ```ruby
+ And use `ScraperUtils::FiberScheduler.log` instead of `puts` when logging within the authority processing code.
+ This will prefix the output lines with the authority name, which is needed since the system will interleave the work and
+ thus the output.
+
+ This uses `ScraperUtils::RandomizeUtils` as described below. Remember to add the recommended line to
+ `spec/spec_helper.rb`.
+
+ Intelligent Date Range Selection
+ --------------------------------
+
+ To further reduce server load and speed up scrapers, we provide an intelligent date range selection mechanism
+ that can reduce server requests by 60% without significantly delaying the pickup of changes.

- def scrape(authorities, attempt)
-   ScraperUtils::FiberScheduler.reset!
-   exceptions = {}
-   authorities.each do |authority_label|
-     ScraperUtils::FiberScheduler.register_operation(authority_label) do
-       ScraperUtils::FiberScheduler.log "Collecting feed data for #{authority_label}, attempt: #{attempt}..."
-       begin
-         ScraperUtils::DataQualityMonitor.start_authority(authority_label)
-         YourScraper.scrape(authority_label) do |record|
-           begin
-             record["authority_label"] = authority_label.to_s
-             ScraperUtils::DbUtils.save_record(record)
-           rescue ScraperUtils::UnprocessableRecord => e
-             ScraperUtils::DataQualityMonitor.log_unprocessable_record(e, record)
-             exceptions[authority_label] = e
-           end
-         end
-       rescue StandardError => e
-         warn "#{authority_label}: ERROR: #{e}"
-         warn e.backtrace
-         exceptions[authority_label] = e
-       end
-     end # end of register_operation block
+ The `ScraperUtils::DateRangeUtils#calculate_date_ranges` method provides a smart approach to searching historical
+ records:
+
+ - Always checks the most recent 4 days daily (configurable)
+ - Progressively reduces search frequency for older records
+ - Uses a Fibonacci-like progression to create natural, efficient search intervals
+ - Configurable `max_period` (default is 3 days)
+ - Merges adjacent search ranges and handles the changeover in search frequency by extending some searches
+
+ Example usage in your scraper:
+
+ ```ruby
+ date_ranges = ScraperUtils::DateRangeUtils.new.calculate_date_ranges
+ date_ranges.each do |from_date, to_date, _debugging_comment|
+   # Adjust your normal search code to use for this date range
+   your_search_records(from_date: from_date, to_date: to_date) do |record|
+     # process as normal
    end
-   ScraperUtils::FiberScheduler.run_all
-   exceptions
  end
  ```

- And use `ScraperUtils::FiberScheduler.log` instead of `puts` when logging within the authority processing code.
- This will prefix the output lines with the authority name, which is needed since the system will interleave the work and
- thus the output.
+ Typical server load reductions:
+
+ * Max period 2 days : ~42% of the 33 days selected
+ * Max period 3 days : ~37% of the 33 days selected (default)
+ * Max period 5 days : ~35% (or ~31% when days = 45)
+
+ See the class documentation for customizing defaults and passing options.
+
+ ### Other possibilities
+
+ If the site uses tags like 'L28', 'L14' and 'L7' for the last 28, 14 and 7 days, an alternative solution
+ is to cycle through ['L28', 'L7', 'L14', 'L7'], which would drop the load by 50% and be less bot-like.
+
+ Cycle Utils
+ -----------
+ Simple utility for cycling through options based on the Julian day number:
+
+ ```ruby
+ # Toggle between main and alternate behaviour
+ alternate = ScraperUtils::CycleUtils.position(2).even?
+
+ # Use with any cycle size
+ pos = ScraperUtils::CycleUtils.position(7) # 0-6 cycle
+
+ # Test with specific date
+ pos = ScraperUtils::CycleUtils.position(3, date: Date.new(2024, 1, 5))
+
+ # Override for testing
+ # CYCLE_POSITION=2 bundle exec ruby scraper.rb
+ ```
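Tying this back to the "Other possibilities" note above, here is a sketch of cycling through a site's last-N-days search tags with `position`. The tag values are the hypothetical ones mentioned earlier, and `your_search_records` stands in for your existing search call:

```ruby
# Hypothetical "last N days" tags, as discussed under "Other possibilities"
SEARCH_PERIODS = ["L28", "L7", "L14", "L7"].freeze

# position(4) returns 0..3 based on the Julian day number,
# so roughly half of all runs use the cheap 7-day search
period_tag = SEARCH_PERIODS[ScraperUtils::CycleUtils.position(SEARCH_PERIODS.size)]

your_search_records(period: period_tag) do |record|
  # process as normal
end
```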
+
+ Randomizing Requests
+ --------------------
+
+ Pass a `Collection` or `Array` to `ScraperUtils::RandomizeUtils.randomize_order` to randomize it in production, but
+ receive it as is when testing.
+
+ Use this with the list of records scraped from an index to randomise any requests for further information, making them
+ less bot-like.
+
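A minimal sketch of that pattern (the index and detail helpers are placeholders for your own scraping code):

```ruby
# Collect the index entries first, then visit detail pages in a random order
index_records = fetch_index_entries # placeholder for your index scraping
ScraperUtils::RandomizeUtils.randomize_order(index_records).each do |entry|
  fetch_and_save_detail(entry) # placeholder for your detail-page request and save
end
```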
+ ### Spec setup
+
+ You should enforce sequential mode when testing by adding the following code to `spec/spec_helper.rb`:
+
+ ```ruby
+ ScraperUtils::RandomizeUtils.sequential = true
+ ```
+
+ Note:
+
+ * You can also force sequential mode by setting the env variable `MORPH_PROCESS_SEQUENTIALLY` to `1` (any non-blank value)
+ * Testing with VCR requires sequential mode

  Development
  -----------
@@ -356,7 +334,7 @@ NOTE: You need to use ruby 3.2.2 instead of 2.5.8 to release to OTP protected ac
  Contributing
  ------------

- Bug reports and pull requests are welcome on GitHub at https://github.com/ianheggie-oaf/scraper_utils
+ Bug reports and pull requests with working tests are welcome on [GitHub](https://github.com/ianheggie-oaf/scraper_utils)

  CHANGELOG.md is maintained by the author aiming to follow https://github.com/vweevers/common-changelog

@@ -364,3 +342,4 @@ License
  -------

  The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
data/docs/example_scrape_with_fibers.rb ADDED
@@ -0,0 +1,31 @@
+ # frozen_string_literal: true
+
+ # Example scrape method updated to use ScraperUtils::FiberScheduler
+
+ def scrape(authorities, attempt)
+   ScraperUtils::FiberScheduler.reset!
+   exceptions = {}
+   authorities.each do |authority_label|
+     ScraperUtils::FiberScheduler.register_operation(authority_label) do
+       ScraperUtils::FiberScheduler.log(
+         "Collecting feed data for #{authority_label}, attempt: #{attempt}..."
+       )
+       ScraperUtils::DataQualityMonitor.start_authority(authority_label)
+       YourScraper.scrape(authority_label) do |record|
+         record["authority_label"] = authority_label.to_s
+         ScraperUtils::DbUtils.save_record(record)
+       rescue ScraperUtils::UnprocessableRecord => e
+         ScraperUtils::DataQualityMonitor.log_unprocessable_record(e, record)
+         exceptions[authority_label] = e
+         # Continues processing other records
+       end
+     rescue StandardError => e
+       warn "#{authority_label}: ERROR: #{e}"
+       warn e.backtrace || "No backtrace available"
+       exceptions[authority_label] = e
+     end
+     # end of register_operation block
+   end
+   ScraperUtils::FiberScheduler.run_all
+   exceptions
+ end
data/docs/example_scraper.rb ADDED
@@ -0,0 +1,93 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ $LOAD_PATH << "./lib"
+
+ require "scraper_utils"
+ require "technology_one_scraper"
+
+ # Main Scraper class
+ class Scraper
+   AUTHORITIES = YourScraper::AUTHORITIES
+
+   # ADD: attempt argument
+   def scrape(authorities, attempt)
+     exceptions = {}
+     # ADD: Report attempt number
+     authorities.each do |authority_label|
+       puts "\nCollecting feed data for #{authority_label}, attempt: #{attempt}..."
+
+       begin
+         # REPLACE:
+         # YourScraper.scrape(authority_label) do |record|
+         #   record["authority_label"] = authority_label.to_s
+         #   YourScraper.log(record)
+         #   ScraperWiki.save_sqlite(%w[authority_label council_reference], record)
+         # end
+         # WITH:
+         ScraperUtils::DataQualityMonitor.start_authority(authority_label)
+         YourScraper.scrape(authority_label) do |record|
+           begin
+             record["authority_label"] = authority_label.to_s
+             ScraperUtils::DbUtils.save_record(record)
+           rescue ScraperUtils::UnprocessableRecord => e
+             ScraperUtils::DataQualityMonitor.log_unprocessable_record(e, record)
+             exceptions[authority_label] = e
+           end
+         end
+         # END OF REPLACE
+       end
+     rescue StandardError => e
+       warn "#{authority_label}: ERROR: #{e}"
+       warn e.backtrace
+       exceptions[authority_label] = e
+     end
+
+     exceptions
+   end
+
+   def self.selected_authorities
+     ScraperUtils::AuthorityUtils.selected_authorities(AUTHORITIES.keys)
+   end
+
+   def self.run(authorities)
+     puts "Scraping authorities: #{authorities.join(', ')}"
+     start_time = Time.now
+     exceptions = scrape(authorities, 1)
+     # Set start_time and attempt to the call above and log run below
+     ScraperUtils::LogUtils.log_scraping_run(
+       start_time,
+       1,
+       authorities,
+       exceptions
+     )
+
+     unless exceptions.empty?
+       puts "\n***************************************************"
+       puts "Now retrying authorities which earlier had failures"
+       puts exceptions.keys.join(", ").to_s
+       puts "***************************************************"
+
+       start_time = Time.now
+       exceptions = scrape(exceptions.keys, 2)
+       # Set start_time and attempt to the call above and log run below
+       ScraperUtils::LogUtils.log_scraping_run(
+         start_time,
+         2,
+         authorities,
+         exceptions
+       )
+     end
+
+     # Report on results, raising errors for unexpected conditions
+     ScraperUtils::LogUtils.report_on_results(authorities, exceptions)
+   end
+ end
+
+ if __FILE__ == $PROGRAM_NAME
+   # Default to list of authorities we can't or won't fix in code, explain why
+   # wagga: url redirects and then reports Application error
+
+   ENV["MORPH_EXPECT_BAD"] ||= "wagga"
+   Scraper.run(Scraper.selected_authorities)
+ end