staticizer 0.0.2

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 60dbe34b0e69038956585a2c25804422be99bf3e
+   data.tar.gz: 539fb871a2cbbb99d81b5365f942f8a04c2232bf
+ SHA512:
+   metadata.gz: cbb0db4aa4864d060534f971a1c16d8d66d26967ae9bc04c7fe6d155339ba80b9c4fe00c30d11fef4c7a696e8f9a8441428b0fee9ae3d865671690257480141d
+   data.tar.gz: 613cb6ac0a9109443f199fa294da8c0a0106b543555402738cabe3a996b12c033d27a8414c52f79f5daa92ecd3a0a9bb0c816bf3558638e712d64909147b1474
data/.gitignore ADDED
@@ -0,0 +1,17 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in staticizer.gemspec
+ gemspec
data/LICENSE.txt ADDED
@@ -0,0 +1,22 @@
+ Copyright (c) 2014 Conor Hunt
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,110 @@
+ # Staticizer
+
+ A tool to create a static version of a website for hosting on S3.
+
+ ## Rationale
+
+ One of our clients needed a reliable emergency backup for a
+ website. If the website goes down, this backup would be available
+ with reduced functionality.
+
+ S3 and Route 53 provide a great way to host a static emergency backup for a website.
+ See this article: http://aws.typepad.com/aws/2013/02/create-a-backup-website-using-route-53-dns-failover-and-s3-website-hosting.html
+ In our experience it works very well and is incredibly cheap at less than US$1 a month (depending on the size of the website).
+
+ We tried using existing tools (httrack, wget) to crawl and create a static version
+ of the site to upload to S3, but we found that they did not work well with S3 hosting.
+ We wanted the site uploaded to S3 to respond to the *exact* same URLs (where possible) as
+ the existing site. This way, when the site goes down, incoming links from Google search
+ results etc. will still work.
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+     gem 'staticizer'
+
+ And then execute:
+
+     $ bundle
+
+ Or install it yourself as:
+
+     $ gem install staticizer
+
+ ## Command line usage
+
+ The tool can either be used via the `staticizer` command line tool or by requiring the library.
+
+ ### Crawl a website and write to disk
+
+     staticizer http://squaremill.com --output-dir=/tmp/crawl
+
+ ### Crawl a website and upload to AWS
+
+     staticizer http://squaremill.com --aws-s3-bucket=squaremill.com --aws-access-key=HJFJS5gSJHMDZDFFSSDQQ --aws-secret-key=HIA7T189234aADfFAdf322Vs12duRhOHy+23mc1+s
+
+ ### Crawl a website and allow several domains to be crawled
+
+     staticizer http://squaremill.com --valid-domains=squaremill.com,www.squaremill.com,img.squaremill.com
+
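Flags can be combined; for example, to write to disk, restrict the crawl to a single domain, and log verbosely (all flags as defined in lib/staticizer/command.rb; paths are illustrative):

    staticizer http://squaremill.com --output-dir=/tmp/crawl --valid-domains=squaremill.com --log-level=0
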
+ ## Code Usage
+
+ For all these examples you must first:
+
+     require 'staticizer'
+
+ ### Crawl a website and upload to AWS
+
+ This will only crawl URLs in the domain squaremill.com.
+
+     s = Staticizer::Crawler.new("http://squaremill.com",
+       :aws => {
+         :bucket_name => "www.squaremill.com",
+         :secret_access_key => "HIA7T189234aADfFAdf322Vs12duRhOHy+23mc1+s",
+         :access_key_id => "HJFJS5gSJHMDZDFFSSDQQ"
+       }
+     )
+     s.crawl
+
+ ### Crawl a website and write to disk
+
+     s = Staticizer::Crawler.new("http://squaremill.com", :output_dir => "/tmp/crawl")
+     s.crawl
+
+ ### Crawl a website and rewrite all non-www urls to www
+
+     s = Staticizer::Crawler.new("http://squaremill.com",
+       :aws => {
+         :bucket_name => "www.squaremill.com",
+         :secret_access_key => "HIA7T189234aADfFAdf322Vs12duRhOHy+23mc1+s",
+         :access_key_id => "HJFJS5gSJHMDZDFFSSDQQ"
+       },
+       :filter_url => lambda do |url, info|
+         # Only crawl the URL if it matches squaremill.com or www.squaremill.com
+         if url =~ %r{https?://(www\.)?squaremill\.com}
+           # Rewrite non-www urls to www
+           return url.gsub(%r{https?://(www\.)?squaremill\.com}, "http://www.squaremill.com")
+         end
+         # Returning nil here prevents the url from being crawled
+       end
+     )
+     s.crawl
+
+ ## Crawler Options
+
+ * :aws - Hash of connection options passed to the aws-sdk gem
+ * :filter_url - proc called to decide whether a discovered URL should be crawled; return nil to skip the URL, or return the (possibly modified) URL to crawl it
+ * :output_dir - when writing a site to disk, the directory to write to; will be created if it does not exist
+ * :logger - a logger object responding to the usual Ruby Logger methods
+ * :log_level - log level, defaults to Logger::INFO
+ * :valid_domains - array of domains that should be crawled; domains not in this list will be ignored
+
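A minimal sketch combining several of the options above (values are illustrative):

    require 'staticizer'
    require 'logger'

    s = Staticizer::Crawler.new("http://squaremill.com",
      :output_dir => "/tmp/crawl",
      :valid_domains => ["squaremill.com", "www.squaremill.com"],
      :log_level => Logger::DEBUG
    )
    s.crawl
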
+ ## Contributing
+
+ 1. Fork it
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
+ 4. Push to the branch (`git push origin my-new-feature`)
+ 5. Create a new Pull Request
data/Rakefile ADDED
@@ -0,0 +1 @@
+ require "bundler/gem_tasks"
data/bin/staticizer ADDED
@@ -0,0 +1,11 @@
+ #!/usr/bin/env ruby
+
+ lib = File.expand_path(File.dirname(__FILE__) + '/../lib')
+ $LOAD_PATH.unshift(lib) if File.directory?(lib) && !$LOAD_PATH.include?(lib)
+
+ require 'staticizer'
+ require 'staticizer/command'
+
+ options, initial_page = Staticizer::Command.parse(ARGV)
+ s = Staticizer::Crawler.new(initial_page, options)
+ s.crawl
data/lib/staticizer/command.rb ADDED
@@ -0,0 +1,76 @@
+ require 'optparse'
+ require 'logger'
+
+ module Staticizer
+   class Command
+     # Parse command line arguments and print out any errors
+     def Command.parse(args)
+       options = {}
+       initial_page = nil
+
+       parser = OptionParser.new do |opts|
+         opts.banner = "Usage: staticizer initial_url [options]\nExample: staticizer http://squaremill.com --output-dir=/tmp/crawl"
+
+         opts.separator ""
+         opts.separator "Specific options:"
+
+         opts.on("--aws-s3-bucket [STRING]", "Name of S3 bucket to write to") do |v|
+           options[:aws] ||= {}
+           options[:aws][:bucket_name] = v
+         end
+
+         opts.on("--aws-access-key [STRING]", "AWS Access Key ID") do |v|
+           options[:aws] ||= {}
+           options[:aws][:access_key_id] = v
+         end
+
+         opts.on("--aws-secret-key [STRING]", "AWS Secret Access Key") do |v|
+           options[:aws] ||= {}
+           options[:aws][:secret_access_key] = v
+         end
+
+         opts.on("-d", "--output-dir [DIRECTORY]", "Write crawl to disk in this directory, will be created if it does not exist") do |v|
+           options[:output_dir] = v
+         end
+
+         opts.on("-v", "--verbose", "Run verbosely (sets log level to Logger::DEBUG)") do |v|
+           options[:log_level] = Logger::DEBUG
+         end
+
+         opts.on("--log-level [NUMBER]", "Set log level, 0 = most verbose to 4 = least verbose") do |v|
+           options[:log_level] = v.to_i
+         end
+
+         opts.on("--log-file [PATH]", "Log file to write to") do |v|
+           options[:logger] = Logger.new(v)
+         end
+
+         opts.on("--valid-domains x,y,z", Array, "Comma separated list of domains that should be crawled, other domains will be ignored") do |v|
+           options[:valid_domains] = v
+         end
+
+         opts.on_tail("-h", "--help", "Show this message") do
+           puts opts
+           exit
+         end
+       end
+
+       begin
+         parser.parse!(args)
+         # parse! removes recognized options from args, leaving the initial URL
+         initial_page = args.pop
+         raise ArgumentError, "Need to specify an initial URL to start the crawl" unless initial_page
+       rescue StandardError => e
+         puts e
+         puts parser
+         exit(1)
+       end
+
+       return options, initial_page
+     end
+   end
+ end
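A quick sketch of driving the parser programmatically, following the same call path as bin/staticizer (argument values are illustrative):

    require 'staticizer'
    require 'staticizer/command'

    options, url = Staticizer::Command.parse(["http://squaremill.com", "--output-dir=/tmp/crawl"])
    Staticizer::Crawler.new(url, options).crawl
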
data/lib/staticizer/crawler.rb ADDED
@@ -0,0 +1,213 @@
+ require 'net/http'
+ require 'fileutils'
+ require 'nokogiri'
+ require 'aws-sdk'
+ require 'logger'
+
+ module Staticizer
+   class Crawler
+     def initialize(initial_page, opts = {})
+       if initial_page.nil?
+         raise ArgumentError, "Initial page required"
+       end
+
+       @opts = opts.dup
+       @url_queue = []
+       @processed_urls = []
+       @opts[:output_dir] ||= File.expand_path("crawl/")
+       @log = @opts[:logger] || Logger.new(STDOUT)
+       @log.level = @opts[:log_level] || Logger::INFO
+
+       if @opts[:aws]
+         bucket_name = @opts[:aws].delete(:bucket_name)
+         AWS.config(@opts[:aws])
+         @s3_bucket = AWS::S3.new.buckets[bucket_name]
+         @s3_bucket.acl = :public_read
+       end
+
+       if @opts[:valid_domains].nil?
+         uri = URI.parse(initial_page)
+         @opts[:valid_domains] = [uri.host]
+       end
+       add_url(initial_page)
+     end
+
+     def crawl
+       @log.info("Starting crawl")
+       while @url_queue.length > 0
+         url, info = @url_queue.shift
+         @processed_urls << url
+         process_url(url, info)
+       end
+       @log.info("Finished crawl")
+     end
+
+     def extract_hrefs(doc, base_uri)
+       doc.xpath("//a/@href").map {|href| make_absolute(base_uri, href) }
+     end
+
+     def extract_images(doc, base_uri)
+       doc.xpath("//img/@src").map {|src| make_absolute(base_uri, src) }
+     end
+
+     def extract_links(doc, base_uri)
+       doc.xpath("//link/@href").map {|href| make_absolute(base_uri, href) }
+     end
+
+     def extract_scripts(doc, base_uri)
+       doc.xpath("//script/@src").map {|src| make_absolute(base_uri, src) }
+     end
+
+     def extract_css_urls(css, base_uri)
+       css.scan(/url\(([^)]+)\)/).map {|src| make_absolute(base_uri, src[0]) }
+     end
+
+     def add_urls(urls, info = {})
+       urls.compact.uniq.each {|url| add_url(url, info.dup) }
+     end
+
+     def make_absolute(base_uri, href)
+       URI::join(base_uri, href).to_s
+     rescue StandardError => e
+       @log.error "Could not make absolute #{base_uri} - #{href}"
+       nil # return nil so add_urls can compact this entry away
+     end
+
+     def add_url(url, info = {})
+       if @opts[:filter_url]
+         url = @opts[:filter_url].call(url, info)
+         return if url.nil?
+       else
+         regex = "(#{@opts[:valid_domains].join(")|(")})"
+         return if url !~ %r{^https?://#{regex}}
+       end
+
+       url = url.sub(/#.*$/,'') # strip off any fragments
+       return if @url_queue.index {|u| u[0] == url } || @processed_urls.include?(url)
+       @url_queue << [url, info]
+     end
+
+     def save_page(response, uri)
+       if @opts[:aws]
+         save_page_to_aws(response, uri)
+       else
+         save_page_to_disk(response, uri)
+       end
+     end
+
+     def save_page_to_disk(response, uri)
+       path = uri.path
+       path += "?#{uri.query}" if uri.query
+
+       path_segments = path.scan(%r{[^/]*/})
+       filename = path.include?("/") ? path[path.rindex("/")+1..-1] : path
+
+       current = @opts[:output_dir]
+       FileUtils.mkdir_p(current) unless File.exist?(current)
+
+       # Create all the directories necessary for this file
+       path_segments.each do |segment|
+         current = File.join(current, "#{segment}").sub(%r{/$},'')
+         if File.file?(current)
+           # If we are trying to create a directory and there already is a file
+           # with the same name, add a .d to the file since we can't create
+           # a directory and a file with the same name in the file system
+           dirfile = current + ".d"
+           FileUtils.mv(current, dirfile)
+           FileUtils.mkdir(current)
+           FileUtils.cp(dirfile, File.join(current, "/index.html"))
+         elsif !File.exist?(current)
+           FileUtils.mkdir(current)
+         end
+       end
+
+       body = response.respond_to?(:read_body) ? response.read_body : response
+       outfile = File.join(current, "/#{filename}")
+       if filename == ""
+         indexfile = File.join(outfile, "/index.html")
+         @log.info "Saving #{indexfile}"
+         File.open(indexfile, "wb") {|f| f << body }
+       elsif File.directory?(outfile)
+         dirfile = outfile + ".d"
+         @log.info "Saving #{dirfile}"
+         File.open(dirfile, "wb") {|f| f << body }
+         FileUtils.cp(dirfile, File.join(outfile, "/index.html"))
+       else
+         @log.info "Saving #{outfile}"
+         File.open(outfile, "wb") {|f| f << body }
+       end
+     end
+
+     def save_page_to_aws(response, uri)
+       key = uri.path
+       key += "?#{uri.query}" if uri.query
+       key = key.gsub(%r{^/},"")
+       key = "index.html" if key == ""
+       # Upload this file directly to AWS::S3
+       opts = {:acl => :public_read}
+       opts[:content_type] = (response['content-type'] rescue nil) || "text/html"
+       @log.info "Uploading #{key} to s3 with content type #{opts[:content_type]}"
+       if response.respond_to?(:read_body)
+         @s3_bucket.objects[key].write(response.read_body, opts)
+       else
+         @s3_bucket.objects[key].write(response, opts)
+       end
+     end
+
+     def process_success(response, parsed_uri)
+       url = parsed_uri.to_s
+       case response['content-type']
+       when /css/
+         save_page(response, parsed_uri)
+         add_urls(extract_css_urls(response.body, url), {:type_hint => "css_url"})
+       when /html/
+         save_page(response, parsed_uri)
+         doc = Nokogiri::HTML(response.body)
+         add_urls(extract_links(doc, url), {:type_hint => "link"})
+         add_urls(extract_scripts(doc, url), {:type_hint => "script"})
+         add_urls(extract_images(doc, url), {:type_hint => "image"})
+         add_urls(extract_hrefs(doc, url), {:type_hint => "href"})
+       else
+         save_page(response, parsed_uri)
+       end
+     end
+
+     # If we hit a redirect we save the redirect as a meta refresh page
+     # TODO: for AWS S3 hosting we could instead create a redirect?
+     def process_redirect(url, destination_url)
+       body = "<html><head><META http-equiv='refresh' content='0;URL=\"#{destination_url}\"'></head><body>You are being redirected to <a href='#{destination_url}'>#{destination_url}</a>.</body></html>"
+       # Dispatch through save_page so the redirect stub also works for disk crawls
+       save_page(body, url)
+     end
+
+     # Fetch a URI and save it to disk or S3
+     def process_url(url, info)
+       @http_connections ||= {}
+       parsed_uri = URI(url)
+
+       @log.debug "Fetching #{parsed_uri}"
+
+       # Attempt to reuse an already open Net::HTTP connection
+       key = parsed_uri.host + parsed_uri.port.to_s
+       connection = @http_connections[key]
+       if connection.nil?
+         connection = Net::HTTP.new(parsed_uri.host, parsed_uri.port)
+         @http_connections[key] = connection
+       end
+
+       request = Net::HTTP::Get.new(parsed_uri.request_uri)
+       connection.request(request) do |response|
+         case response
+         when Net::HTTPSuccess
+           process_success(response, parsed_uri)
+         when Net::HTTPRedirection
+           redirect_url = response['location']
+           @log.debug "Processing redirect to #{redirect_url}"
+           process_redirect(parsed_uri, redirect_url)
+           add_url(redirect_url)
+         else
+           @log.error "Error #{response.code}:#{response.message} fetching url #{url}"
+         end
+       end
+     end
+
+   end
+ end
data/lib/staticizer/version.rb ADDED
@@ -0,0 +1,3 @@
+ module Staticizer
+   VERSION = "0.0.2"
+ end
data/lib/staticizer.rb ADDED
@@ -0,0 +1,9 @@
+ require_relative "staticizer/version"
+ require_relative 'staticizer/crawler'
+
+ module Staticizer
+   # Convenience method to crawl a site in one call
+   def Staticizer.crawl(url, options = {}, &block)
+     crawler = Staticizer::Crawler.new(url, options)
+     crawler.crawl
+   end
+ end
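A one-call crawl via this module-level helper (output path is illustrative):

    require 'staticizer'

    Staticizer.crawl("http://squaremill.com", :output_dir => "/tmp/crawl")
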
data/staticizer.gemspec ADDED
@@ -0,0 +1,26 @@
+ # coding: utf-8
+ lib = File.expand_path('../lib', __FILE__)
+ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+ require 'staticizer/version'
+
+ Gem::Specification.new do |spec|
+   spec.name          = "staticizer"
+   spec.version       = Staticizer::VERSION
+   spec.authors       = ["Conor Hunt"]
+   spec.email         = ["conor.hunt+git@gmail.com"]
+   spec.description   = %q{A tool to create a static version of a website for hosting on S3. Can be used to create a cheap emergency backup version of a dynamic website.}
+   spec.summary       = %q{A tool to create a static version of a website for hosting on S3.}
+   spec.homepage      = "https://github.com/SquareMill/staticizer"
+   spec.license       = "MIT"
+
+   spec.files         = `git ls-files`.split($/)
+   spec.executables   = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
+   spec.test_files    = spec.files.grep(%r{^(test|spec|features)/})
+   spec.require_paths = ["lib"]
+
+   spec.add_development_dependency "bundler", "~> 1.3"
+   spec.add_development_dependency "rake"
+
+   spec.add_runtime_dependency 'nokogiri'
+   spec.add_runtime_dependency 'aws-sdk'
+ end
data/tests/crawler_test.rb ADDED
@@ -0,0 +1,15 @@
+ require 'minitest/autorun'
+
+ # TODO!
+
+ class TestFilePaths < MiniTest::Unit::TestCase
+   # Expected mapping of request paths to saved file paths.
+   # A trailing-slash path like "/asdfdf/dfdf/" should produce both
+   # "/asdfdf/dfdf" (moved aside as .d) and "/asdfdf/dfdf/index.html".
+   TESTS = {
+     "" => "index.html",
+     "/" => "index.html",
+     "/asdfdf/dfdf" => "/asdfdf/dfdf",
+     "/asdfdf/dfdf/" => "/asdfdf/dfdf/index.html",
+     "/asdfad/asdffd.test" => "/asdfad/asdffd.test",
+     "/?asdfsd=12312" => "/?asdfsd=12312",
+     "/asdfad/asdffd.test?123=sdff" => "/asdfad/asdffd.test?123=sdff"
+   }
+ end
metadata ADDED
@@ -0,0 +1,114 @@
+ --- !ruby/object:Gem::Specification
+ name: staticizer
+ version: !ruby/object:Gem::Version
+   version: 0.0.2
+ platform: ruby
+ authors:
+ - Conor Hunt
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2014-01-14 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: bundler
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: '1.3'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         version: '1.3'
+ - !ruby/object:Gem::Dependency
+   name: rake
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: nokogiri
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: aws-sdk
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - '>='
+       - !ruby/object:Gem::Version
+         version: '0'
+ description: A tool to create a static version of a website for hosting on S3. Can
+   be used to create a cheap emergency backup version of a dynamic website.
+ email:
+ - conor.hunt+git@gmail.com
+ executables:
+ - staticizer
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - .gitignore
+ - Gemfile
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - bin/staticizer
+ - lib/staticizer.rb
+ - lib/staticizer/command.rb
+ - lib/staticizer/crawler.rb
+ - lib/staticizer/version.rb
+ - staticizer.gemspec
+ - tests/crawler_test.rb
+ homepage: https://github.com/SquareMill/staticizer
+ licenses:
+ - MIT
+ metadata: {}
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - '>='
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.1.9
+ signing_key:
+ specification_version: 4
+ summary: A tool to create a static version of a website for hosting on S3.
+ test_files: []