web_fetch 0.1.3 → 0.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 0db44428d9307f942a78b30e7be62b1134e18771e6633adacfff5f95ea3a2f78
- data.tar.gz: 27ce789c7134dc90f1ae47b9815a4d7fbe72e53edfa5dae53ea6820dffc23cf6
+ metadata.gz: efc9744475219db51912a9786698c869c139ad85334380566124c0f42096beb9
+ data.tar.gz: 358340a89ee0a65519dd0b0cfb7adf6e15cb0e9f1d01b6fc25351394df62ffe4
  SHA512:
- metadata.gz: 7788c4d849fd56d27e0b167d8d2a00cb5b64d019556dfcd3196d755742179e2e7059a4e3745feff7faef6843418a43cb9bc628428b128add145a834cfc9d198e
- data.tar.gz: 9c17c80925224f08099a0717a4afd8fd489effee2012be7c63b59d42f14451b066fb990f2497ae1d8781a1e4ac8eea9eac1239346b30f09bf10b7a556a2caf85
+ metadata.gz: 9c341ee83a5877d1537f6e4afe992fd19b45651b694c4cb18ffa9ce7131d6fd7669eecad2b368f235f586dd0083e556ab203a7e17cf1f458d0eca444b839e619
+ data.tar.gz: a34dc8a42298aa11ae2a4cb24ce90dfc799bed47330db04e919a595a094584ca0183840f039c5aebf1bcd81742f1e2a5cffd514502a36824c026d4d277c539f1
data/README.md CHANGED
@@ -2,20 +2,18 @@

  ## Overview

- WebFetch is an asynchronous HTTP proxy server that accepts multiple requests for HTTP retrieval, immediately returning a token for each request, and then allowing that token to be redeemed later when the entity has fully responded.
+ WebFetch executes concurrent, asynchronous HTTP requests. It is itself an HTTP server implementing a RESTful API, wrapped by a Ruby client interface. Instead of returning a response, WebFetch immediately returns a *promise* which can be redeemed later when the response has been processed.

- This permits issuing multiple HTTP requests in parallel, in a fully encapsulated and external process, without having to resort to multi-threading, multi-processing, or complex non-blocking IO implementations. [EventMachine][1] is used to handle the heavy lifting.
+ This permits issuing multiple HTTP requests in parallel, in a fully encapsulated and external process, without having to resort to multi-threading, multi-processing, or complex non-blocking IO implementations. [EventMachine][1] is used to handle the heavy lifting.

  ![WebFetch architecture][2]

  ## Getting Started

- Although WebFetch runs as a web server and provides all functionality over a RESTful API (see below), the simplest way to use it is with its Ruby client implementation, which wraps the HTTP API for you, using [Faraday][3]. This also serves as a [reference][4] for writing WebFetch clients in other languages.
-
  In your `Gemfile`, add:

  ``` ruby
- gem 'web_fetch', git: 'https://github.com/bobf/web_fetch.git'
+ gem 'web_fetch'
  ```

  and update your bundle:
@@ -24,100 +22,181 @@ and update your bundle:
  bundle install
  ```

- Create, connect to, and wrap a Ruby client object around a new WebFetch server instance, listening as `localhost` on port `8077`:
+ Require WebFetch in your application:

  ``` ruby
  require 'web_fetch'
+ ```
+
+ ### Launch or connect to a server
+
+ Launch the server from your application (recommended for familiarising yourself
+ with WebFetch):
+
+ ``` ruby
  client = WebFetch::Client.create('localhost', 8077)
  ```

- Issue some requests [asynchronously]:
+ Or connect to an existing WebFetch server (recommended for production systems - see [below](#process-management) for more details):

  ``` ruby
- requests = [{ url: 'http://foobar.baz/' },
- { url: 'http://barfoo.baz/foobar',
- headers: { 'User-Agent' => 'Foo Browser' } },
- query: { foo: 'what is foo', bar: 'what is baz' } ]
- jobs = client.gather(requests)
+ client = WebFetch::Client.new('localhost', 8077)
  ```

- Retrieve the responses [synchronously - *any result that has not yet arrived will block until it has arrived while other requests continue to run in parallel*]:
+ ### Create a request
+
+ Create a WebFetch request. Note that the request will not begin until the next step:

  ``` ruby
- responses = []
- jobs.each do |job|
- response = client.retrieve_by_uid(job[:uid])
- responses.push(response)
+ request = WebFetch::Request.new do |req|
+ req.url = 'http://foobar.baz'
+ req.headers = { 'User-Agent' => 'Foo Browser' }
+ req.query = { foobar: 'baz' }
+ req.method = :get
+ req.body = 'foo bar baz'
+ req.custom = { my_id: '123' }
  end
  ```

- See [a working example][5]
+ Only `url` is required. The default HTTP method is `GET`.
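+
+ For example, the smallest possible request needs nothing but a URL (a minimal sketch; the host is a placeholder, as in the examples above):
+
+ ``` ruby
+ request = WebFetch::Request.new { |req| req.url = 'http://foobar.baz' }
+ ```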

- ## HTTP API
+ Anything assigned to `custom` will be returned with the final result (available by calling `#custom` on the result). This may be useful if you need to tag each request with your own custom identifier, for example. Anything you assign here will have no bearing whatsoever on the HTTP request.
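+
+ As a quick sketch (using `client.gather` and `fetch`, covered below; the `my_id` tag is illustrative), a tag set on the request comes back on its result:
+
+ ``` ruby
+ request = WebFetch::Request.new do |req|
+ req.url = 'http://foobar.baz'
+ req.custom = { my_id: '123' }
+ end
+
+ result = client.gather([request]).first.fetch
+ result.custom # returns the tag assigned above
+ ```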

- If you need to use the WebFetch server's HTTP API directly refer to the [Swagger API Reference][6]
+ If you prefer to build a request from a hash, you can call `WebFetch::Request.from_hash`:

- ## Managing the WebFetch process yourself
+ ``` ruby
+ request = WebFetch::Request.from_hash(
+ url: 'http://foobar.baz',
+ headers: { 'User-Agent' => 'Foo Browser' },
+ query: { foobar: 'baz' },
+ method: :get,
+ body: 'foo bar baz',
+ custom: { my_id: '123' }
+ )
+ ```

- You may want to run the WebFetch server yourself rather than instantiate it via the client. For this case, the executable `bin/web_fetch_control` is provided.
+ ### Gather responses

- WebFetch can be started in the terminal with output going to STDOUT or as a daemon.
+ Ask WebFetch to begin gathering your HTTP requests in the background:

- Run the server as a daemon:
+ ``` ruby
+ promises = client.gather([request])
+ ```
+
+ `WebFetch::Client#gather` accepts an array of `WebFetch::Request` objects and immediately returns an array of `WebFetch::Promise` objects. WebFetch will process all requests in the background concurrently.

+ To retrieve the result of a request, call `WebFetch::Promise#fetch`:
+
+ ``` ruby
+ result = promises.first.fetch
+
+ # Available methods:
+ result.body
+ result.headers
+ result.status # HTTP status code
+ result.success? # False if a network error (not HTTP error) occurred
+ result.error # Underlying network error if applicable
  ```
- $ bundle exec bin/web_fetch_control start -- --log /tmp/web_fetch.log
+
+ Note that `WebFetch::Promise#fetch` will block until the result is complete by default. If you want to continue executing other code when the result is not yet ready (e.g. to see if any other results are ready), you can pass `wait: false`:
+
+ ``` ruby
+ result = promises.first.fetch(wait: false)
  ```

- **Note that you should always pass `--log` when running as a daemon otherwise all output will go to the null device.**
+ A special value `:pending` will be returned if the result is still processing.
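+
+ For instance, a simple (if busy) polling sketch using only the calls shown above:
+
+ ``` ruby
+ result = promises.first.fetch(wait: false)
+ result = promises.first.fetch(wait: false) while result == :pending
+ ```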

- Run the server in the terminal:
+ Alternatively, you can call `WebFetch::Promise#complete?` to check if a request has finished before waiting for the response:

+ ``` ruby
+ result = promises.first.fetch if promises.first.complete?
  ```
- $ bundle exec bin/web_fetch_control run -- --port 8080
+
+ ### Fetching results later
+
+ In some cases you may need to fetch the result of a request in a different context from the one in which you initiated it. A unique ID is available for each *Promise* which can be used to fetch the result from a separate *Client* instance:
+
+ ``` ruby
+ client = WebFetch::Client.new('localhost', 8077)
+ promises = client.gather([
+ WebFetch::Request.new { |req| req.url = 'http://foobar.com' }
+ ])
+ uid = promises.first.uid
+
+ # Later ...
+ client = WebFetch::Client.new('localhost', 8077)
+ result = client.fetch(uid)
  ```

- It is further recommended to use a process management tool to monitor the pidfile (pass `--pidfile /path/to/file.pid` to specify an explicit location).
+ This can be useful if your web application initiates requests in one controller action and fetches them in another; the `uid` can be stored in a database and used to fetch the result later on.

- To connect to an existing process, use `WebFetch::Client.new` rather than `WebFetch::Client.create`. For example:
+ ### Stopping the server
+
+ When you have finished using the web server, call `WebFetch::Client#stop`:

  ``` ruby
- WebFetch::Client.new('localhost', 8087)
+ client.stop
  ```

- ## WebFetch Client request options
+ The server will not automatically stop when your program exits.
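+
+ If you want the server shut down even when your script exits unexpectedly, one approach (a sketch using Ruby's built-in `at_exit`) is:
+
+ ``` ruby
+ at_exit { client.stop }
+ ```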

- `WebFetch::Client#gather` accepts an array of hashes which may contain the following parameters:
+ ## Examples

- * `url`: The target URL [string]
- * `headers`: HTTP headers [hash]
- * `query`: Query parameters [hash]
- * `method`: HTTP method (default: `"GET"`) [string]
- * `body`: HTTP body [string]
+ [Runnable examples][5] are provided for more detailed usage.

- These parameters will all be used (where provided) when initiating the HTTP request on the target.
+ ## HTTP API

- Arbitrary parameters can also be passed and will be returned by `#gather` (though they will not be used to construct the HTTP request). This allows tagging requests with arbitrary information if you need to identify them in a particular way. For example, you may want to generate your own unique identifier for a request, in which case you could do:
+ If you need to use the WebFetch server's HTTP API directly, refer to the [Swagger API Reference][6].

- ``` ruby
- client.gather([{ url: 'http://foobar.baz', my_unique_id: '123-456-789' }])
- # [{:request=>{:url=>"http://foobar.baz", :my_unique_id=>"123-456-789"}, :hash=>"7c511911d16e1072363fa1653bdd93df65208901", :uid=>"1fb4ee7a-9fc0-4896-9af2-7cbdf234a468"}]
+ ## Managing the WebFetch process yourself <a name='process-management'></a>
+
+ For production systems it is advised that you run the WebFetch server separately rather than instantiate it via the client. For this case, the executable `bin/web_fetch_control` is provided. Daemonisation is handled by the [daemons][7] gem.
+
+ WebFetch can be started in the terminal with output going to STDOUT or as a daemon.
+
+ Run the server as a daemon:
+
+ ```
+ $ web_fetch_control start
  ```

- ## Logging
+ Run the server in the terminal:

- WebFetch logs to STDOUT by default. An alternative log file can be set either
- by passing `--log /path/to/logfile` to the command line server, or by passing
- `log: '/path/to/logfile'` to `WebFetch::Client.create`:
+ ```
+ $ web_fetch_control run
+ ```
+
+ Stop the server:

  ```
- $ bundle exec bin/web_fetch_server --log /tmp/web_fetch.log
+ $ web_fetch_control stop
  ```

+ To pass options to WebFetch, pass `--` to `web_fetch_control` and add all WebFetch options afterwards.
+
+ Available options:
+
  ```
- client = WebFetch::Client.create('localhost', 8077, log: '/tmp/web_fetch.log')
+ --port 60087
+ --host localhost
+ --pidfile /tmp/web_fetch.pid
+ --log /var/log/web_fetch.log
  ```

+ e.g.:
+
+ ```
+ web_fetch_control run -- --port 8000 --host 0.0.0.0
+ ```
+
+ No pid file will be created unless the `--pidfile` parameter is passed. It is recommended to use a process monitoring tool (e.g. `monit` or `systemd`) to monitor the WebFetch process.
+
+ When running as a daemon, WebFetch will log to the null device so it is advised to always pass `--log` in this case.
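+
+ For example, combining the options listed above when daemonising (paths are illustrative):
+
+ ```
+ $ web_fetch_control start -- --log /var/log/web_fetch.log --pidfile /tmp/web_fetch.pid
+ ```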
+
+ ## Docker
+
+ To use WebFetch in Docker you can either use the provided [`Dockerfile`](docker/Dockerfile) or the public image [`web_fetch/web_fetch`](https://hub.docker.com/r/webfetch/webfetch/).
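+
+ A minimal sketch of running the public image (assuming the image name from the Docker Hub link above; the bundled `Dockerfile` listens on port `8077`):
+
+ ```
+ $ docker run -p 8077:8077 webfetch/webfetch
+ ```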
+
  ## Contributing

  WebFetch uses `rspec` for testing:
@@ -138,12 +217,13 @@ Feel free to fork and create a pull request if you would like to make any change

  ## License

- WebFetch is licensed under the [MIT License][7]. You are encouraged to re-use the code in any way you see fit as long as you give credit to the original author. If you do use the code for any other projects then feel free to let me know but, of course, this is not required.
+ WebFetch is licensed under the [MIT License][8]. You are encouraged to re-use the code in any way you see fit as long as you give credit to the original author. If you do use the code for any other projects then feel free to let me know but, of course, this is not required.

  [1]: https://github.com/eventmachine/eventmachine
  [2]: doc/web_fetch_architecture.png
  [3]: https://github.com/lostisland/faraday
  [4]: lib/web_fetch/client.rb
- [5]: doc/client_example.rb
+ [5]: doc/examples/
  [6]: swagger.yaml
- [7]: LICENSE
+ [7]: https://github.com/thuehlinger/daemons
+ [8]: LICENSE
@@ -10,3 +10,7 @@ en:

  uid_not_found: "Provided `uid` has not yet been requested"
  hash_not_found: "Provided `hash` has not yet been requested"
+
+ pending: "Your request is still being processed"
+
+ no_request: "No active request found for UID: %{uid}"
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ # Run me with:
+ # bundle exec ruby doc/examples/blocking_requests.rb
+ #
+ # rubocop:disable all
+ require 'web_fetch'
+
+ def blocking_requests
+ urls = ['https://rubygems.org/',
+ 'http://lycos.com/',
+ 'http://google.com/']
+
+ requests = urls.map do |url|
+ WebFetch::Request.new do |request|
+ request.url = url
+ end
+ end
+
+ client = WebFetch::Client.create('localhost', 8077)
+ promises = client.gather(requests)
+
+ promises.each do |promise|
+ result = promise.fetch(wait: true) # Will block (default behaviour)
+ puts result.body[0..100]
+ puts "Success: #{result.success?}"
+ end
+
+ client.stop
+ end
+
+ blocking_requests
+ # rubocop:enable all
@@ -0,0 +1,34 @@
+ # frozen_string_literal: true
+
+ # Run me with:
+ # bundle exec ruby doc/examples/non_blocking_requests.rb
+ #
+ # rubocop:disable all
+ require 'web_fetch'
+
+ def non_blocking_requests
+ urls = ['https://rubygems.org/',
+ 'http://lycos.com/',
+ 'http://google.com/']
+
+ requests = urls.map do |url|
+ WebFetch::Request.new do |request|
+ request.url = url
+ end
+ end
+
+ client = WebFetch::Client.create('localhost', 8077)
+ promises = client.gather(requests)
+
+ while promises.any? { |promise| !promise.complete? }
+ promises.each do |promise|
+ result = promise.fetch(wait: false) # Will not block
+ puts result.body[0..100] unless result == :pending
+ end
+ end
+
+ client.stop
+ end
+
+ non_blocking_requests
+ # rubocop:enable all
@@ -0,0 +1,28 @@
+ # frozen_string_literal: true
+
+ # Run me with:
+ # bundle exec ruby doc/examples/use_uid_for_request.rb
+ #
+ # rubocop:disable all
+ require 'web_fetch'
+
+ def use_uid_for_request
+ request = WebFetch::Request.new do |request|
+ request.url = 'https://rubygems.org/'
+ end
+
+ client = WebFetch::Client.create('localhost', 8077)
+ promises = client.gather([request])
+ uid = promises.first.uid
+ while true
+ result = client.fetch(uid, wait: false)
+ break unless result == :pending
+ end
+
+ puts result.body[0..100]
+
+ client.stop
+ end
+
+ use_uid_for_request
+ # rubocop:enable all
data/docker/Dockerfile ADDED
@@ -0,0 +1,3 @@
+ FROM library/ruby
+ RUN gem install web_fetch
+ CMD ["web_fetch_control", "run", "--", "--port", "8077", "--host", "0.0.0.0"]
@@ -4,6 +4,10 @@ module WebFetch
  # Client to be used in application code. Capable of spawning a server and
  # interacting with it to gather requests and retrieve them
  class Client
+ include ClientHttp
+
+ attr_reader :host, :port
+
  def initialize(host, port, options = {})
  @host = host
  @port = port
@@ -38,15 +42,35 @@ module WebFetch
  end

  def gather(requests)
- json = JSON.dump(requests: requests)
+ json = JSON.dump(requests: requests.map(&:to_h))
  response = post('gather', json)
- return nil unless response.success?

- JSON.parse(response.body, symbolize_names: true)[:requests]
+ handle_error(JSON.parse(response.body)['error']) unless response.success?
+
+ requests = JSON.parse(response.body, symbolize_names: true)[:requests]
+ promises(requests)
+ end
+
+ def fetch(uid, options = {})
+ block = options.fetch(:wait, true)
+
+ outcome = block ? retrieve_by_uid(uid) : find_by_uid(uid)
+ no_request_error(uid) if outcome.nil?
+
+ return :pending if outcome[:pending]
+
+ new_result(outcome)
  end

  def retrieve_by_uid(uid)
- response = get('retrieve', uid: uid)
+ response = get("retrieve/#{uid}")
+ return nil unless response.success?
+
+ JSON.parse(response.body, symbolize_names: true)
+ end
+
+ def find_by_uid(uid)
+ response = get("find/#{uid}")
  return nil unless response.success?

  JSON.parse(response.body, symbolize_names: true)
@@ -78,23 +102,29 @@ module WebFetch

  private

- def base_uri
- "http://#{@host}:#{@port}"
+ def handle_error(errors)
+ raise WebFetch::ClientError, errors
  end

- def get(endpoint, params = {})
- conn = Faraday.new(url: base_uri)
- conn.get do |request|
- request.url "/#{endpoint}"
- request.params.merge!(params)
- end
+ def no_request_error(uid)
+ raise RequestNotFoundError, [I18n.t('no_request', uid: uid)]
+ end
+
+ def new_result(outcome)
+ response = outcome[:response]
+ Result.new(
+ body: response[:body],
+ headers: response[:headers],
+ status: response[:status],
+ success: response[:success],
+ error: response[:error],
+ uid: outcome[:uid]
+ )
  end

- def post(endpoint, body)
- conn = Faraday.new(url: base_uri)
- conn.post do |request|
- request.url "/#{endpoint}"
- request.body = body
+ def promises(requests)
+ requests.map do |request|
+ Promise.new(self, uid: request[:uid], request: request[:request])
  end
  end
  end
@@ -0,0 +1,25 @@
+ # frozen_string_literal: true
+
+ module WebFetch
+ module ClientHttp
+ def base_uri
+ "http://#{@host}:#{@port}"
+ end
+
+ def get(endpoint, params = {})
+ conn = Faraday.new(url: base_uri)
+ conn.get do |request|
+ request.url "/#{endpoint}"
+ request.params.merge!(params)
+ end
+ end
+
+ def post(endpoint, body)
+ conn = Faraday.new(url: base_uri)
+ conn.post do |request|
+ request.url "/#{endpoint}"
+ request.body = body
+ end
+ end
+ end
+ end
@@ -0,0 +1,15 @@
+ # frozen_string_literal: true
+
+ module WebFetch
+ class Error < StandardError
+ attr_reader :errors
+
+ def initialize(errors = nil)
+ @errors = errors
+ end
+ end
+
+ class ClientError < Error; end
+
+ class RequestNotFoundError < Error; end
+ end
@@ -3,31 +3,44 @@
  module WebFetch
  # EventMachine layer-specific helpers
  module EventMachineHelpers
- def wait_for_response(deferred, response)
- deferred[:http].callback do
- Logger.debug("HTTP fetch complete for uid: #{deferred[:uid]}")
- deferred[:succeeded] = true
+ def request_async(request)
+ async_request = EM::HttpRequest.new(request[:url])
+ method = request.fetch(:method, 'GET').downcase.to_sym
+ async_request.public_send(
+ method,
+ head: request[:headers],
+ query: request.fetch(:query, {}),
+ body: request.fetch(:body, nil)
+ )
+ end
+
+ def apply_callbacks(request)
+ request[:deferred].callback do
+ Logger.debug("HTTP fetch complete for uid: #{request[:uid]}")
+ request[:succeeded] = true
  end

- deferred[:http].errback do
- Logger.debug("HTTP fetch failed for uid: #{deferred[:uid]}")
- deferred[:failed] = true
+ request[:deferred].errback do
+ Logger.debug("HTTP fetch failed for uid: #{request[:uid]}")
+ request[:failed] = true
  end
+ end

- tick_loop(deferred, response)
+ def wait_for_response(request, response)
+ tick_loop(request, response)
  end

- def tick_loop(deferred, response)
+ def tick_loop(request, response)
  # XXX There may be a much nicer way to wait for an async task to complete
  # before returning a response but I couldn't figure it out, so I used
  # EM.tick_loop which effectively does the same as a Twisted deferred
  # callback chain, just much more explicitly.
  EM.tick_loop do
- if deferred[:succeeded]
- succeed(deferred, response)
+ if request[:succeeded]
+ succeed(request, response)
  :stop
- elsif deferred[:failed]
- fail_(deferred, response)
+ elsif request[:failed]
+ fail_(request, response)
  :stop
  end
  end
@@ -9,6 +9,16 @@ module WebFetch
  response.send_response
  end

+ def pending(result, response)
+ respond_immediately({
+ payload: {
+ uid: result[:request][:uid],
+ pending: true,
+ message: I18n.t(:pending)
+ }
+ }, response)
+ end
+
  def compress(string)
  return string unless accept_gzip?

@@ -35,31 +45,33 @@ module WebFetch
  JSON.parse(@http_post_content, symbolize_names: true)
  end

- def succeed(deferred, response)
+ def succeed(request, response)
  response.status = 200
- response.content = compress(JSON.dump(success(deferred)))
+ response.content = compress(JSON.dump(success(request)))
  response.send_response
+ storage.delete(request[:uid])
  end

- def success(deferred)
- result = deferred[:http]
+ def success(request)
+ result = request[:deferred]
  { response: {
  success: true,
  body: result.response,
  headers: result.headers,
  status: result.response_header.status
  },
- uid: deferred[:uid] }
+ uid: request[:uid] }
  end

- def fail_(deferred, response)
+ def fail_(request, response)
  response.status = 200
- response.content = compress(JSON.dump(failure(deferred)))
+ response.content = compress(JSON.dump(failure(request)))
  response.send_response
+ storage.delete(request[:uid])
  end

- def failure(deferred)
- result = deferred[:http]
+ def failure(request)
+ result = request[:deferred]
  { response: {
  success: false,
  body: result.response,
@@ -67,7 +79,7 @@ module WebFetch
  status: result.response_header.status,
  error: (result.error&.inspect)
  },
- uid: deferred[:uid] }
+ uid: request[:uid] }
  end

  def accept_gzip?