recipe_crawler 2.1.1 → 3.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 89724169688d20b40e62c446b5aabada46cff9d5
- data.tar.gz: 0f04b9336ccbbac34b7a03e984818b3ac83d1ba3
+ metadata.gz: 73c1539f622d74c7bf7baf43143d0d1126dbd25b
+ data.tar.gz: d482a41f7d5afd8e911a63bc22ba18892c7968b9
  SHA512:
- metadata.gz: 637f23b5a4bb49f48f4269009fcaec4155fae268c9cd4dbe65494f6048b745d6e6e99eef075ac232b9688078de169bffb5df17359a0b736682af64ccd4999c61
- data.tar.gz: 5a2e5f744823dc0438be65741985b0b2d2c2466694590c30886f71e4c039eca5d1aa4698c5d38ffbdab7fa260fb0deb95fe22cfcc1efd944c313b37d8ab01928
+ metadata.gz: a52e22347cde681c281ae23eef2d767d85c48c8424a84c168abc0f530bb663e28b561864a6e5a8cdec97c3038e546ddf123e9b016ab59b21177a46a1f7a55d34
+ data.tar.gz: 47fe8d244019888e7302df36cc094d617f2b946942e57a1e94b075b4d1ea22c3088881ac1fe38ca2e9ae4703bad4d1094463b1678ce5f4c666276f8a68bbfe04
data/.gitignore CHANGED
@@ -7,3 +7,4 @@
  /pkg/
  /spec/reports/
  /tmp/
+ *.sqlite3
data/Gemfile CHANGED
@@ -1,4 +1,4 @@
  source 'https://rubygems.org'
 
- # Specify your gem's dependencies in marmiton_crawler.gemspec
+ # Specify your gem's dependencies in recipe_crawler.gemspec
  gemspec
data/README.md CHANGED
@@ -1,8 +1,13 @@
  # RecipeCrawler
 
- A web scrawler to get a [Marmiton](http://www.marmiton.org/) or [750g](http://www.750g.com)*(two famous french cooking recipe websites)*!
+ A **web crawler** to save recipes from [marmiton.org](http://www.marmiton.org/), [750g.com](http://www.750g.com) or [cuisineaz.com](http://www.cuisineaz.com) into an **SQLite3** database.
+
+ > For the moment, it works only with [cuisineaz.com](http://www.cuisineaz.com)
+
+ This *Rubygem* uses my other **powerful** [recipe_scraper](https://github.com/madeindjs/recipe_scraper) gem to scrape data from these websites.
+
+ To experiment with that code, run `bin/console` for an interactive prompt.
 
- You'll find the files you need to be able to package up your Ruby library into a gem. Put your Ruby code in the file `lib/recipe_crawler`. To experiment with that code, run `bin/console` for an interactive prompt.
 
  ## Installation
 
@@ -22,47 +27,46 @@ Or install it yourself as:
 
  ## Usage
 
- 1. import library:
-
- ~~~ruby
- require 'recipe_crawler'
- ~~~
-
- 2. Create a new instance of `RecipeCrawler::Recipe`
-
- ~~~ruby
- marmiton_url = 'http://www.marmiton.org/recettes/recette_burger-d-avocat_345742.aspx'
- recipe = RecipeCrawler::Recipe.new marmiton_url
- ~~~
-
- 3. Export as `json` or as an `Array`
-
- ~~~ruby
- recipe.to_hash
- # will return
- # --------------
- # { :cooktime => 7,
- #   :image => "http://images.marmitoncdn.org/recipephotos/multiphoto/7b/7b4e95f5-37e0-4294-bebe-cde86c30817f_normal.jpg",
- #   :ingredients => ["2 beaux avocat", "2 steaks hachés de boeuf", "2 tranches de cheddar", "quelques feuilles de salade", "1/2 oignon rouge", "1 tomate", "graines de sésame", "1 filet d'huile d'olive", "1 pincée de sel", "1 pincée de poivre"],
- #   :preptime => 20,
- #   :steps => ["Laver et couper la tomate en rondelles", "Cuire les steaks à la poêle avec un filet d'huile d'olive", "Saler et poivrer", "Toaster les graines de sésames", "Ouvrir les avocats en 2, retirer le noyau et les éplucher", "Monter les burger en plaçant un demi-avocat face noyau vers le haut, déposer un steak, une tranche de cheddar sur le steak bien chaud pour qu'elle fonde, une rondelle de tomate, une rondelle d'oignon, quelques feuilles de salade et terminer par la seconde moitié d'avocat", "Parsemer quelques graines de sésames."],
- #   :title => "Burger d'avocat",
- # }
- recipe.to_json
- # will return
- # --------------
- # "{\"title\":\"Burger d'avocat\",\"preptime\":20,\"cooktime\":7,\"ingredients\":[\"2 beaux avocat\",\"2 steaks hachés de boeuf\",\"2 tranches de cheddar\",\"quelques feuilles de salade\",\"1/2 oignon rouge\",\"1 tomate\",\"graines de sésame\",\"1 filet d'huile d'olive\",\"1 pincée de sel\",\"1 pincée de poivre\"],\"steps\":[\"Laver et couper la tomate en rondelles\",\"Cuire les steaks à la poêle avec un filet d'huile d'olive\",\"Saler et poivrer\",\"Toaster les graines de sésames\",\"Ouvrir les avocats en 2, retirer le noyau et les éplucher\",\"Monter les burger en plaçant un demi-avocat face noyau vers le haut, déposer un steak, une tranche de cheddar sur le steak bien chaud pour qu'elle fonde, une rondelle de tomate, une rondelle d'oignon, quelques feuilles de salade et terminer par la seconde moitié d'avocat\",\"Parsemer quelques graines de sésames.\"],\"image\":\"http://images.marmitoncdn.org/recipephotos/multiphoto/7b/7b4e95f5-37e0-4294-bebe-cde86c30817f_normal.jpg\"}"
- ~~~
+ ### Command line
+
+ > TODO
+
+ ### API
+
+ Install & import the library:
+
+     require 'recipe_crawler'
+
+ Then instantiate a `RecipeCrawler::Crawler` with the URL of a CuisineAZ recipe:
+
+     url = 'http://www.cuisineaz.com/recettes/pate-a-pizza-legere-55004.aspx'
+     r = RecipeCrawler::Crawler.new url
+
+ Then run the crawl with a limit on the number of recipes to fetch. All recipes will be saved in a *results.sqlite3* file. You can pass a block to work with the `RecipeSraper::Recipe` objects fetched:
+
+     r.crawl!(10) do |recipe|
+       puts recipe.to_hash
+       # will return
+       # --------------
+       # { :cooktime => 7,
+       #   :image => "http://images.marmitoncdn.org/recipephotos/multiphoto/7b/7b4e95f5-37e0-4294-bebe-cde86c30817f_normal.jpg",
+       #   :ingredients => ["2 beaux avocat", "2 steaks hachés de boeuf", "2 tranches de cheddar", "quelques feuilles de salade", "1/2 oignon rouge", "1 tomate", "graines de sésame", "1 filet d'huile d'olive", "1 pincée de sel", "1 pincée de poivre"],
+       #   :preptime => 20,
+       #   :steps => ["Laver et couper la tomate en rondelles", "Cuire les steaks à la poêle avec un filet d'huile d'olive", "Saler et poivrer", "Toaster les graines de sésames", "Ouvrir les avocats en 2, retirer le noyau et les éplucher", "Monter les burger en plaçant un demi-avocat face noyau vers le haut, déposer un steak, une tranche de cheddar sur le steak bien chaud pour qu'elle fonde, une rondelle de tomate, une rondelle d'oignon, quelques feuilles de salade et terminer par la seconde moitié d'avocat", "Parsemer quelques graines de sésames."],
+       #   :title => "Burger d'avocat",
+       # }
+     end
+
 
  ## Development
 
- After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+ After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment. You can also run `yard` to generate the documentation.
 
  To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
 
  ## Contributing
 
- Bug reports and pull requests are welcome on GitHub at https://github.com/madeindjs/recipe_crawler. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
+ Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/recipe_crawler. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
 
 
  ## License
@@ -72,5 +76,4 @@ The gem is available as open source under the terms of the [MIT License](http://
  Author
  ----------
 
- [Rousseau Alexandre](https://github.com/madeindjs)
-
+ [Rousseau Alexandre](https://github.com/madeindjs)
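As the usage above shows, each crawled recipe ends up as a row in an SQLite file, with `ingredients` and `steps` flattened into single TEXT columns joined by newlines (that is how the `save` method in this release writes them). A minimal pure-Ruby sketch of that round-trip, with an illustrative hash standing in for a `RecipeSraper::Recipe`:

```ruby
# Sketch: the crawler stores :ingredients and :steps as single TEXT
# columns, joined with "\n". Splitting on "\n" recovers the arrays.
# (This assumes no ingredient or step itself contains a newline.)
recipe = {
  title: "Burger d'avocat",
  ingredients: ["2 beaux avocats", "2 steaks hachés de boeuf"],
  steps: ["Laver et couper la tomate", "Cuire les steaks"]
}

# What would be written to the TEXT columns:
ingredients_column = recipe[:ingredients].join("\n")
steps_column = recipe[:steps].join("\n")

# Reading a row back, split to recover the original arrays:
restored = ingredients_column.split("\n")
puts restored.inspect
```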
data/bin/console CHANGED
@@ -1,7 +1,7 @@
  #!/usr/bin/env ruby
 
  require "bundler/setup"
- require "marmiton_crawler"
+ require "recipe_crawler"
 
  # You can add fixtures and/or initialization code here to make experimenting
  # with your gem easier. You can also use a different console, if you like.
data/lib/recipe_crawler/crawler.rb ADDED
@@ -0,0 +1,176 @@
+ require 'recipe_scraper'
+ require 'nokogiri'
+ require 'open-uri'
+ require 'sqlite3'
+
+
+ module RecipeCrawler
+
+   # This is the main class to crawl recipes from a given url
+   # 1. the crawler will crawl the given url to find other recipe urls on the website
+   # 2. it will crawl the urls found to find more urls, again & again
+   # 3. it will scrape the urls found to get their data
+   #
+   # @attr url [String] first url parsed
+   # @attr host [Symbol] url's host
+   # @attr scraped_urls [Array<String>] urls already scraped
+   # @attr crawled_urls [Array<String>] urls already crawled
+   # @attr to_crawl_urls [Array<String>] urls waiting to be crawled
+   # @attr recipes [Array<RecipeSraper::Recipe>] recipes fetched
+   #
+   # @attr db [SQLite3::Database] SQLite database where recipes will be saved
+   class Crawler
+
+     # URLs that the crawler can parse
+     ALLOWED_URLS = {
+       cuisineaz: 'http://www.cuisineaz.com/recettes/',
+       marmiton: 'http://www.marmiton.org/recettes/',
+       g750: 'http://www.750g.com/'
+     }
+
+     attr_reader :url, :host, :scraped_urls, :crawled_urls, :to_crawl_urls, :recipes
+
+
+     #
+     # Create a Crawler
+     # @param url [String] url of a recipe, used to crawl for other ones
+     def initialize url
+       @url = url
+       if url_valid?
+         @recipes = []
+         @crawled_urls = []
+         @scraped_urls = []
+         @to_crawl_urls = []
+         @to_crawl_urls << url
+         @db = SQLite3::Database.new "results.sqlite3"
+         @db.execute "CREATE TABLE IF NOT EXISTS recipes(
+           Id INTEGER PRIMARY KEY,
+           title TEXT,
+           preptime INTEGER,
+           cooktime INTEGER,
+           ingredients TEXT,
+           steps TEXT,
+           image TEXT
+         )"
+       else
+         raise ArgumentError, 'This url cannot be used'
+       end
+     end
+
+
+     #
+     # Check if the url can be parsed and set the host
+     #
+     # @return [Boolean] true if url can be parsed
+     def url_valid?
+       ALLOWED_URLS.each do |host, url_allowed|
+         if url.include? url_allowed
+           @host = host
+           return true
+         end
+       end
+       return false
+     end
+
+
+     #
+     # Start the crawl
+     # @param limit [Integer] maximum number of recipes to fetch
+     #
+     # @yield [RecipeSraper::Recipe] the recipe scraped
+     def crawl! limit=2
+       # find all links on the given url (and on the urls found)
+       if @host == :cuisineaz
+         while !@to_crawl_urls.empty?
+           get_links to_crawl_urls[0]
+           break if @crawled_urls.count > limit
+         end
+
+       else
+         raise NotImplementedError
+       end
+
+       # scrape urls
+       recipes_returned = 0
+       @crawled_urls.each{ |crawled_url|
+         if limit > recipes_returned
+           yield scrape crawled_url
+           recipes_returned += 1
+         else
+           break
+         end
+       } if block_given?
+     end
+
+
+     #
+     # Scrape the given url
+     # @param url [String] url to scrape
+     #
+     # @return [RecipeSraper::Recipe] the recipe scraped
+     def scrape url
+       recipe = RecipeSraper::Recipe.new url
+       @scraped_urls << url
+       @recipes << recipe
+       if save recipe
+         return recipe
+       else
+         raise SQLite3::Exception, 'cannot save recipe'
+       end
+     end
+
+
+     #
+     # Get recipe links from the given url
+     # @param url [String] url to crawl
+     #
+     # @return [void]
+     def get_links url
+       # catch 404 errors from host
+       begin
+         doc = Nokogiri::HTML(open(url))
+         # find internal links on page
+         doc.css('#tagCloud a').each do |link|
+           link = link.attr('href')
+           # if the link corresponds to a recipe not yet crawled, queue it
+           if link.include?(ALLOWED_URLS[@host]) and !@crawled_urls.include?(link)
+             @to_crawl_urls << link
+           end
+         end
+         @to_crawl_urls.delete url
+         @crawled_urls << url
+         @to_crawl_urls.uniq!
+
+       rescue OpenURI::HTTPError
+         @to_crawl_urls.delete url
+         warn "#{url} cannot be reached"
+       end
+     end
+
+
+     #
+     # Save a recipe
+     # @param recipe [RecipeSraper::Recipe] recipe to save
+     #
+     # @return [Boolean] true on success
+     def save recipe
+       begin
+         @db.execute "INSERT INTO recipes (title, preptime, cooktime, ingredients, steps, image)
+           VALUES (:title, :preptime, :cooktime, :ingredients, :steps, :image)",
+           title: recipe.title,
+           preptime: recipe.preptime,
+           cooktime: recipe.cooktime,
+           ingredients: recipe.ingredients.join("\n"),
+           steps: recipe.steps.join("\n"),
+           image: recipe.image
+
+         return true
+
+       rescue SQLite3::Exception => e
+         puts "Exception occurred #{e}"
+         return false
+       end
+     end
+   end
+
+
+ end
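The `url_valid?` method above matches the url against each `ALLOWED_URLS` prefix with `String#include?` and records the matching host. A standalone sketch of that logic (the `detect_host` helper name is mine, for illustration; the real method sets `@host` and `initialize` raises `ArgumentError` when nothing matches):

```ruby
# Standalone sketch of the host-detection logic used by Crawler#url_valid?.
# ALLOWED_URLS mirrors the constant in crawler.rb.
ALLOWED_URLS = {
  cuisineaz: 'http://www.cuisineaz.com/recettes/',
  marmiton:  'http://www.marmiton.org/recettes/',
  g750:      'http://www.750g.com/'
}.freeze

# Return the host symbol whose prefix the url contains, or nil.
def detect_host(url)
  ALLOWED_URLS.each do |host, prefix|
    return host if url.include?(prefix)
  end
  nil # Crawler#initialize raises ArgumentError in this case
end

puts detect_host('http://www.cuisineaz.com/recettes/pate-a-pizza-legere-55004.aspx')
# prints "cuisineaz"
```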
data/lib/recipe_crawler/version.rb CHANGED
@@ -1,3 +1,3 @@
  module RecipeCrawler
-   VERSION = "2.1.1"
+   VERSION = "3.0.0"
  end
data/lib/recipe_crawler.rb CHANGED
@@ -1,163 +1,6 @@
- # encoding: UTF-8
- require 'json'
- require 'open-uri'
- require 'nokogiri'
-
  require "recipe_crawler/version"
-
+ require "recipe_crawler/crawler"
 
  module RecipeCrawler
-
-   # represent a recipe fetched from an Url
-   class Recipe
-
-     attr_reader :title, :preptime, :cooktime , :ingredients, :steps, :image
-
-     MARMITON_HOST = {desktop: 'http://www.marmiton.org/', mobile: 'http://m.marmiton.org/'}
-     G750_HOST = 'http://www.750g.com'
-
-
-
-     # Instanciate a Recipe object with data crawled from an url
-     #
-     # @param url [String] representing an url from Marmiton or 750g website
-     def initialize url
-
-       warn "[DEPRECATION] `RecipeCrawler` is deprecated. Please download & use `RecipeSraper` instead."
-
-       if marmiton_host? url
-         fetch_from_marmiton url
-       elsif g750_host? url
-         fetch_from_g750 url
-       else
-         raise ArgumentError, "Instantiation cancelled (Host not supported)."
-       end
-     end
-
-
-     # export object properties to hash
-     #
-     # @return [Hash] as object's properties
-     def to_hash
-       attrs = Hash.new
-       instance_variables.each do |var|
-         str = var.to_s.gsub /^@/, ''
-         attrs[str.to_sym] = instance_variable_get(var)
-       end
-       attrs
-     end
-
-
-     # convert object properties to json
-     #
-     # @return [String] data formated in JSON
-     def to_json
-       return self.to_hash.to_json
-     end
-
-
-     private
-
-
-     # remove `\r\n` & unwanted espaces
-     #
-     # @param text [String] a text to sanitize
-     # @return [String] as text corrected formated
-     def sanitize text
-       [' ', '\r\n', "\r\n", "\n", "\r"].each { |text_to_remove|
-         text.gsub!(text_to_remove,'')
-       }
-       return text
-     end
-
-     # test if url is from a valid marmiton.org host
-     #
-     # @param url [String] representing an url
-     # @return [Boolean] as true if coresponding to a valid url
-     def marmiton_host? url
-       return url.include?(MARMITON_HOST[:desktop]) || url.include?(MARMITON_HOST[:mobile])
-     end
-
-
-     # test if url is from a valid 750g.com host
-     #
-     # @param url [String] representing an url
-     # @return [Boolean] as true if coresponding to a valid url
-     def g750_host? url
-       return url.include? G750_HOST
-     end
-
-
-     # fill object properties from a Marmiton url
-     #
-     # @param url [String] representing an url
-     def fetch_from_marmiton url
-       if marmiton_host? url
-
-         url.gsub! MARMITON_HOST[:mobile], MARMITON_HOST[:desktop]
-
-         page = Nokogiri::HTML(open(url).read)
-         @title = page.css('h1.m_title span.item span.fn').text
-
-
-         # get times
-         @preptime = page.css('p.m_content_recette_info span.preptime').text.to_i
-         @cooktime = page.css('p.m_content_recette_info span.cooktime').text.to_i
-
-         # get ingredients
-         ingredients_text = page.css('div.m_content_recette_ingredients').text
-         @ingredients = sanitize(ingredients_text).split '- '
-         @ingredients.delete_at(0) # to delete the first `Ingrédients (pour 2 personnes) :`
-
-         # get steps
-         steps_text = page.css('div.m_content_recette_todo').text
-         @steps = sanitize(steps_text).split '. '
-         @steps.delete_at(0) # to delete the first `Ingrédients (pour 2 personnes) :`
-
-         # get image
-         @image = page.css('a.m_content_recette_illu img.m_pinitimage').attr('src').to_s
-
-
-       else
-         raise ArgumentError, "Instantiation cancelled (ulr not from #{MARMITON_HOST})."
-       end
-     end
-
-
-     # fill object properties from a 750g url
-     #
-     # @param url [String] representing an url
-     def fetch_from_g750 url
-       if g750_host? url
-         page = Nokogiri::HTML(open(url).read)
-         @title = page.css('h1.c-article__title').text
-
-         # get times
-         @preptime = page.css('ul.c-recipe-summary li time[itemprop=prepTime]').text.to_i
-         @cooktime = page.css('ul.c-recipe-summary li time[itemprop=cookTime]').text.to_i
-
-         @steps = []
-         css_step = "div[itemprop=recipeInstructions] p"
-         @steps = page.css(css_step).text.split /[( ),(<br>)]/
-
-         @ingredients = []
-         css_ingredient = "div.c-recipe-ingredients ul.c-recipe-ingredients__list li.ingredient"
-         page.css(css_ingredient).each { |ing_node|
-           @ingredients << sanitize(ing_node.text)
-         }
-
-         # get image
-         css_image = 'div.swiper-wrapper img.photo'
-         begin
-           @image = page.css(css_image).attr('src').to_s
-         rescue NoMethodError => e
-         end
-
-       else
-         raise ArgumentError, "Instantiation cancelled (ulr not from #{G750_HOST})."
-       end
-     end
-
-   end
-
+   # Your code goes here...
  end
data/recipe_crawler.gemspec CHANGED
@@ -9,24 +9,22 @@ Gem::Specification.new do |spec|
  spec.authors = ["madeindjs"]
  spec.email = ["madeindjs@gmail.com"]
 
- spec.summary = %q{A web scrawler to get a Marmiton's or 750g recipe}
- # spec.description = %q{TODO: Write a longer description or delete this line.}
- spec.homepage = "https://github.com/madeindjs/recipe_crawler"
+ spec.summary = %q{Get all recipes from famous french cooking websites}
+ spec.description = %q{This crawler will use my personal scraper named 'RecipeScraper' to download recipe data from Marmiton, 750g or cuisineaz}
+ spec.homepage = "https://github.com/madeindjs/recipe_crawler."
  spec.license = "MIT"
 
+
  spec.files = `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
  spec.bindir = "exe"
  spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
  spec.require_paths = ["lib"]
 
+ spec.add_dependency "recipe_scraper", '>= 2.2.0'
+
+
  spec.add_development_dependency "bundler", "~> 1.11"
  spec.add_development_dependency "rake", "~> 10.0"
  spec.add_development_dependency "rspec", "~> 3.0"
- spec.add_development_dependency "nokogiri"
-
- spec.post_install_message = <<-MESSAGE
-   ! The 'recipe_crawler' gem has been deprecated and has been replaced by 'recipe_scraper'.
-   ! See: https://rubygems.org/gems/recipe_scraper
-   ! And: https://github.com/madeindjs/recipe_scraper
- MESSAGE
+ spec.add_development_dependency "yard"
  end
metadata CHANGED
@@ -1,15 +1,29 @@
  --- !ruby/object:Gem::Specification
  name: recipe_crawler
  version: !ruby/object:Gem::Version
-   version: 2.1.1
+   version: 3.0.0
  platform: ruby
  authors:
  - madeindjs
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2016-12-04 00:00:00.000000000 Z
+ date: 2016-12-05 00:00:00.000000000 Z
  dependencies:
+ - !ruby/object:Gem::Dependency
+   name: recipe_scraper
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 2.2.0
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 2.2.0
  - !ruby/object:Gem::Dependency
    name: bundler
    requirement: !ruby/object:Gem::Requirement
@@ -53,7 +67,7 @@ dependencies:
    - !ruby/object:Gem::Version
      version: '3.0'
  - !ruby/object:Gem::Dependency
-   name: nokogiri
+   name: yard
    requirement: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
@@ -66,7 +80,8 @@ dependencies:
      - - ">="
      - !ruby/object:Gem::Version
        version: '0'
- description:
+ description: This crawler will use my personal scraper named 'RecipeScraper' to download
+   recipe data from Marmiton, 750g or cuisineaz
  email:
  - madeindjs@gmail.com
  executables: []
@@ -78,23 +93,20 @@ files:
  - ".travis.yml"
  - CODE_OF_CONDUCT.md
  - Gemfile
- - LICENSE
  - LICENSE.txt
  - README.md
  - Rakefile
  - bin/console
  - bin/setup
  - lib/recipe_crawler.rb
+ - lib/recipe_crawler/crawler.rb
  - lib/recipe_crawler/version.rb
  - recipe_crawler.gemspec
- homepage: https://github.com/madeindjs/recipe_crawler
+ homepage: https://github.com/madeindjs/recipe_crawler.
  licenses:
  - MIT
  metadata: {}
- post_install_message: |
-   ! The 'recipe_crawler' gem has been deprecated and has been replaced by 'recipe_scraper'.
-   ! See: https://rubygems.org/gems/recipe_scraper
-   ! And: https://github.com/madeindjs/recipe_scraper
+ post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -110,8 +122,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.4.8
+ rubygems_version: 2.5.1
  signing_key:
  specification_version: 4
- summary: A web scrawler to get a Marmiton's or 750g recipe
+ summary: Get all recipes from famous french cooking websites
  test_files: []
data/LICENSE DELETED
@@ -1,21 +0,0 @@
- MIT License
-
- Copyright (c) 2016 Rousseau Alexandre
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.