proxycrawl 0.1.1

checksums.yaml.gz ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 1c4d6754bc9eeafeb6b1d6834111df156ee342a6
+   data.tar.gz: e7f673ed4d6347fd1fbb73e8d60f8cf65b2ad046
+ SHA512:
+   metadata.gz: 4ddee197bd9310b8b22f866a52980ca294c8bfd06513310d93affe9a41241b7cbace0ad63353f04dc9646723303e2f56075aa5746efe908bcd64b2759d135012
+   data.tar.gz: 7e81282efbbf76d9b8d50e919de861baf09ca8e06af7a3266f33614bfdf8ee332a1b2c0e4b12f19fe1462d00966bd4563664a056e4494c166084f916d9bfeecd
data/.gitignore ADDED
@@ -0,0 +1,9 @@
+ /.bundle/
+ /.yardoc
+ /Gemfile.lock
+ /_yardoc/
+ /coverage/
+ /doc/
+ /pkg/
+ /spec/reports/
+ /tmp/
data/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,74 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ In the interest of fostering an open and welcoming environment, we as
+ contributors and maintainers pledge to making participation in our project and
+ our community a harassment-free experience for everyone, regardless of age, body
+ size, disability, ethnicity, gender identity and expression, level of experience,
+ nationality, personal appearance, race, religion, or sexual identity and
+ orientation.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to creating a positive environment
+ include:
+
+ * Using welcoming and inclusive language
+ * Being respectful of differing viewpoints and experiences
+ * Gracefully accepting constructive criticism
+ * Focusing on what is best for the community
+ * Showing empathy towards other community members
+
+ Examples of unacceptable behavior by participants include:
+
+ * The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+ * Trolling, insulting/derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or electronic
+ address, without explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+ ## Our Responsibilities
+
+ Project maintainers are responsible for clarifying the standards of acceptable
+ behavior and are expected to take appropriate and fair corrective action in
+ response to any instances of unacceptable behavior.
+
+ Project maintainers have the right and responsibility to remove, edit, or
+ reject comments, commits, code, wiki edits, issues, and other contributions
+ that are not aligned to this Code of Conduct, or to ban temporarily or
+ permanently any contributor for other behaviors that they deem inappropriate,
+ threatening, offensive, or harmful.
+
+ ## Scope
+
+ This Code of Conduct applies both within project spaces and in public spaces
+ when an individual is representing the project or its community. Examples of
+ representing a project or community include using an official project e-mail
+ address, posting via an official social media account, or acting as an appointed
+ representative at an online or offline event. Representation of a project may be
+ further defined and clarified by project maintainers.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported by contacting the project team at info@proxycrawl.com. All
+ complaints will be reviewed and investigated and will result in a response that
+ is deemed necessary and appropriate to the circumstances. The project team is
+ obligated to maintain confidentiality with regard to the reporter of an incident.
+ Further details of specific enforcement policies may be posted separately.
+
+ Project maintainers who do not follow or enforce the Code of Conduct in good
+ faith may face temporary or permanent repercussions as determined by other
+ members of the project's leadership.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+ available at [http://contributor-covenant.org/version/1/4][version]
+
+ [homepage]: http://contributor-covenant.org
+ [version]: http://contributor-covenant.org/version/1/4/
data/Gemfile ADDED
@@ -0,0 +1,3 @@
+ source "https://rubygems.org"
+
+ gemspec
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2018 ProxyCrawl
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,147 @@
+ # Proxycrawl
+
+ Dependency-free gem for scraping and crawling websites using the ProxyCrawl API.
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'proxycrawl'
+ ```
+
+ And then execute:
+
+     $ bundle
+
+ Or install it yourself as:
+
+     $ gem install proxycrawl
+
+ ## Usage
+
+ Initialize the API with one of your account tokens, either your normal token or your JavaScript token, then make GET or POST requests accordingly.
+
+ You can get a token for free by creating a ProxyCrawl account, which comes with 1,000 free testing requests. You can use them for TCP calls, JavaScript calls, or both.
+
+ ```ruby
+ api = ProxyCrawl::API.new(token: 'YOUR_TOKEN')
+ ```
+
+ ### GET requests
+
+ Pass the URL that you want to scrape, plus any of the options available in the [API documentation](https://proxycrawl.com/dashboard/docs).
+
+ ```ruby
+ api.get(url, options)
+ ```
+
+ Example:
+
+ ```ruby
+ begin
+   response = api.get('https://www.facebook.com/britneyspears')
+   puts response.status_code
+   puts response.original_status
+   puts response.pc_status
+   puts response.body
+ rescue => exception
+   puts exception.backtrace
+ end
+ ```
+
+ You can pass any option that the ProxyCrawl API supports, using the exact parameter name.
+
+ Example:
+
+ ```ruby
+ options = {
+   user_agent: 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
+   format: 'json'
+ }
+
+ response = api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', options)
+
+ puts response.body
+ puts response.status_code
+ puts response.original_status
+ puts response.pc_status
+ ```
+
+ ### POST requests
+
+ Pass the URL that you want to scrape and the data that you want to send, which can be either a hash or a string, plus any of the options available in the [API documentation](https://proxycrawl.com/dashboard/docs).
+
+ ```ruby
+ api.post(url, data, options)
+ ```
+
+ Example:
+
+ ```ruby
+ api.post('https://producthunt.com/search', { text: 'example search' })
+ ```
+
+ You can send the data as application/json instead of x-www-form-urlencoded by setting the `post_content_type` option to `json`.
+
+ ```ruby
+ response = api.post('https://httpbin.org/post', { some_json: 'with some value' }, { post_content_type: 'json' })
+
+ puts response.status_code
+ puts response.body
+ ```
+
+ ### JavaScript requests
+
+ If you need to scrape websites built with JavaScript (React, Angular, Vue, etc.), just pass your JavaScript token and use the same calls. Note that only `.get` is available for JavaScript, not `.post`.
+
+ ```ruby
+ api = ProxyCrawl::API.new(token: 'YOUR_JAVASCRIPT_TOKEN')
+ ```
+
+ ```ruby
+ response = api.get('https://www.nfl.com')
+ puts response.status_code
+ puts response.body
+ ```
+
+ In the same way, you can pass additional JavaScript options:
+
+ ```ruby
+ response = api.get('https://www.freelancer.com', page_wait: 5000)
+ puts response.status_code
+ ```
+
+ ## Original status
+
+ You can always get the original status and the ProxyCrawl status from the response. Read the [ProxyCrawl documentation](https://proxycrawl.com/dashboard/docs) to learn more about those statuses.
+
+ ```ruby
+ response = api.get('https://sfbay.craigslist.org/')
+
+ puts response.original_status
+ puts response.pc_status
+ ```
+
+ If you have questions or need help using the library, please open an issue or [contact us](https://proxycrawl.com/contact).
+
+ ## Development
+
+ After checking out the repo, run `bin/setup` to install dependencies. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+
+ To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/proxycrawl/proxycrawl-ruby. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
+
+ ## Code of Conduct
+
+ Everyone interacting in the Proxycrawl project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/proxycrawl/proxycrawl-ruby/blob/master/CODE_OF_CONDUCT.md).
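
Every call documented in the README above maps to a single request against the ProxyCrawl API endpoint, with the token, the target URL, and all options passed as query-string parameters. A minimal sketch of that URL construction, assuming the endpoint and parameter names shown in the README (the `proxycrawl_url` helper is hypothetical; the gem does this internally in `ProxyCrawl::API#prepare_uri`):

```ruby
require 'uri'

# Hypothetical helper mirroring how the client assembles its request URL:
# token and url are mandatory parameters, and every API option is merged
# into the same query string under its exact name.
def proxycrawl_url(token, target_url, options = {})
  uri = URI('https://api.proxycrawl.com')
  uri.query = URI.encode_www_form({ token: token, url: target_url }.merge(options))
  uri.to_s
end

puts proxycrawl_url('YOUR_TOKEN', 'https://example.com', format: 'json')
```

This is why options are passed "in exact param format": they are forwarded verbatim as query parameters, so the option names must match the API documentation.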
data/Rakefile ADDED
@@ -0,0 +1,2 @@
+ require "bundler/gem_tasks"
+ task :default => :spec
data/bin/console ADDED
@@ -0,0 +1,14 @@
+ #!/usr/bin/env ruby
+
+ require "bundler/setup"
+ require "proxycrawl"
+
+ # You can add fixtures and/or initialization code here to make experimenting
+ # with your gem easier. You can also use a different console, if you like.
+
+ # (If you use this, don't forget to add pry to your Gemfile!)
+ # require "pry"
+ # Pry.start
+
+ require "irb"
+ IRB.start(__FILE__)
data/bin/setup ADDED
@@ -0,0 +1,8 @@
+ #!/usr/bin/env bash
+ set -euo pipefail
+ IFS=$'\n\t'
+ set -vx
+
+ bundle install
+
+ # Do any other automated setup that you need to do here
data/lib/proxycrawl.rb ADDED
@@ -0,0 +1,5 @@
+ require 'proxycrawl/version'
+ require 'proxycrawl/api'
+
+ module ProxyCrawl
+ end
data/lib/proxycrawl/api.rb ADDED
@@ -0,0 +1,85 @@
+ # frozen_string_literal: true
+
+ require 'net/http'
+ require 'json'
+ require 'uri'
+
+ module ProxyCrawl
+   class API
+     attr_reader :token, :body, :status_code, :original_status, :pc_status, :url
+
+     BASE_URL = 'https://api.proxycrawl.com'
+
+     INVALID_TOKEN = 'Token is required'
+     INVALID_URL = 'URL is required'
+
+     def initialize(options = {})
+       raise INVALID_TOKEN if options[:token].nil?
+
+       @token = options[:token]
+     end
+
+     def get(url, options = {})
+       raise INVALID_URL if url.empty?
+
+       uri = prepare_uri(url, options)
+       response = Net::HTTP.get_response(uri)
+
+       prepare_response(response, options[:format])
+
+       self
+     end
+
+     def post(url, data, options = {})
+       raise INVALID_URL if url.empty?
+
+       uri = prepare_uri(url, options)
+
+       http = Net::HTTP.new(uri.host, uri.port)
+       http.use_ssl = true
+
+       # Send JSON when post_content_type asks for it, form-encoded otherwise.
+       json_post = options[:post_content_type].to_s.include?('json')
+       headers = json_post ? { 'Content-Type' => 'application/json' } : nil
+
+       request = Net::HTTP::Post.new(uri.request_uri, headers)
+
+       if json_post
+         request.body = data.to_json
+       else
+         request.set_form_data(data)
+       end
+
+       response = http.request(request)
+
+       prepare_response(response, options[:format])
+
+       self
+     end
+
+     private
+
+     def prepare_uri(url, options)
+       uri = URI(BASE_URL)
+       uri.query = URI.encode_www_form({ token: @token, url: url }.merge(options))
+       uri
+     end
+
+     def prepare_response(response, format)
+       if format == 'json'
+         # With format=json the page body and statuses come in a JSON envelope.
+         json_response = JSON.parse(response.body)
+         @original_status = json_response['original_status'].to_i
+         @status_code = response.code.to_i
+         @pc_status = json_response['pc_status'].to_i
+         @body = json_response['body']
+         @url = json_response['url']
+       else
+         # Otherwise the statuses arrive as HTTP response headers.
+         @original_status = response['original_status'].to_i
+         @status_code = response.code.to_i
+         @pc_status = response['pc_status'].to_i
+         @url = response['url']
+         @body = response.body
+       end
+     end
+   end
+ end
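
The response handling in `ProxyCrawl::API` above can be exercised without hitting the network: when `format` is `json`, the API wraps the scraped page and its statuses in a JSON envelope that the client lifts into attributes. The sketch below re-creates just that parsing step; the `parse_proxycrawl_envelope` helper and the sample payload are illustrative assumptions (the real gem assigns instance attributes inside `prepare_response` rather than returning a hash):

```ruby
require 'json'

# Hypothetical stand-in for the format == 'json' branch of
# ProxyCrawl::API#prepare_response: parse the envelope and coerce the
# status fields to integers. The sample payload is made-up test data,
# not real API output.
def parse_proxycrawl_envelope(raw_body)
  json = JSON.parse(raw_body)
  {
    original_status: json['original_status'].to_i, # status the target site returned
    pc_status: json['pc_status'].to_i,             # status assigned by ProxyCrawl
    body: json['body'],                            # the scraped page itself
    url: json['url']
  }
end

sample = '{"original_status":"200","pc_status":"200","body":"<html></html>","url":"https://example.com"}'
result = parse_proxycrawl_envelope(sample)
```

The `.to_i` coercion matters because the envelope carries statuses as strings, while callers compare them against integer HTTP codes.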
data/lib/proxycrawl/version.rb ADDED
@@ -0,0 +1,3 @@
+ module Proxycrawl
+   VERSION = "0.1.1"
+ end
data/proxycrawl.gemspec ADDED
@@ -0,0 +1,31 @@
+ # coding: utf-8
+ lib = File.expand_path("../lib", __FILE__)
+ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+ require "proxycrawl/version"
+
+ Gem::Specification.new do |spec|
+   spec.name        = "proxycrawl"
+   spec.version     = Proxycrawl::VERSION
+   spec.platform    = Gem::Platform::RUBY
+   spec.authors     = ["proxycrawl"]
+   spec.email       = ["info@proxycrawl.com"]
+   spec.summary     = %q{ProxyCrawl API client for web scraping and crawling}
+   spec.description = %q{Ruby based client for the ProxyCrawl API that helps developers crawl or scrape thousands of web pages anonymously}
+   spec.homepage    = "https://github.com/proxycrawl/proxycrawl-ruby"
+   spec.license     = "MIT"
+
+   spec.files = `git ls-files -z`.split("\x0").reject do |f|
+     f.match(%r{^(test|spec|features)/})
+   end
+
+   spec.required_ruby_version = '>= 2.0'
+
+   spec.bindir        = "exe"
+   spec.executables   = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
+   spec.require_paths = ["lib"]
+
+   spec.add_development_dependency "rspec", "~> 3.2"
+   spec.add_development_dependency "webmock", "~> 3.4"
+   spec.add_development_dependency "bundler", "~> 1.15"
+   spec.add_development_dependency "rake", "~> 10.0"
+ end
metadata ADDED
@@ -0,0 +1,113 @@
+ --- !ruby/object:Gem::Specification
+ name: proxycrawl
+ version: !ruby/object:Gem::Version
+   version: 0.1.1
+ platform: ruby
+ authors:
+ - proxycrawl
+ autorequire:
+ bindir: exe
+ cert_chain: []
+ date: 2018-05-27 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: rspec
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.2'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.2'
+ - !ruby/object:Gem::Dependency
+   name: webmock
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.4'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.4'
+ - !ruby/object:Gem::Dependency
+   name: bundler
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.15'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.15'
+ - !ruby/object:Gem::Dependency
+   name: rake
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '10.0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '10.0'
+ description: Ruby based client for the ProxyCrawl API that helps developers crawl
+   or scrape thousands of web pages anonymously
+ email:
+ - info@proxycrawl.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".gitignore"
+ - CODE_OF_CONDUCT.md
+ - Gemfile
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - bin/console
+ - bin/setup
+ - lib/proxycrawl.rb
+ - lib/proxycrawl/api.rb
+ - lib/proxycrawl/version.rb
+ - proxycrawl.gemspec
+ homepage: https://github.com/proxycrawl/proxycrawl-ruby
+ licenses:
+ - MIT
+ metadata: {}
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '2.0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.5.2
+ signing_key:
+ specification_version: 4
+ summary: ProxyCrawl API client for web scraping and crawling
+ test_files: []