chimera_http_client 1.2.0 → 1.2.1.beta
- checksums.yaml +4 -4
- data/README.markdown +181 -93
- data/TODO.markdown +3 -1
- data/chimera_http_client.gemspec +1 -1
- data/lib/chimera_http_client/queue.rb +16 -1
- data/lib/chimera_http_client/request.rb +5 -1
- data/lib/chimera_http_client/version.rb +1 -1
- metadata +5 -5
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: d5bd262f2489439bd64af6657b8a929217fbbf7d89e410192e7470512b329ea2
+  data.tar.gz: bcc303549d4a428bc2589c558881e63330f88e080e475daeeb28d0d1a9a84ddb
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: '09047e533a15ce537f83839a652de962c1a6931aabd895e4f2ae25386e047874732f163f4c151f2483ddaafd381cfbee58ab390d346826c228564d2d81671027'
+  data.tar.gz: ea9a533ac8ea2307cc0d2c3e074d41cc663d4d18ba9fffde5f78bcf36e43e69609a6b69b816194ad61e84e73a1e2288730ed9ad97a3c99f61eb1e770a1252a52
data/README.markdown
CHANGED
@@ -66,18 +66,28 @@ The basic usage looks like this:
 
 ```ruby
 connection = ChimeraHttpClient::Connection.new(base_url: 'http://localhost/namespace')
+
 response = connection.get!(endpoint, params: params)
 ```
 
 ### Initialization
 
-
+```ruby
+connection = ChimeraHttpClient::Connection.new(
+  base_url: 'http://localhost:3000/v1',
+  cache: cache,
+  deserializer: { error: HtmlParser, response: XMLParser },
+  logger: logger,
+  monitor: monitor,
+  timeout: 2
+)
+```
 
 #### Mandatory initialization parameter `base_url`
 
 The mandatory parameter is **base_url** which should include the host, port and base path to the API endpoints you want to call, e.g. `'http://localhost:3000/v1'`.
 
-Setting the `base_url` is meant to be a comfort feature, as you can then pass short endpoints to each request like
+Setting the `base_url` is meant to be a comfort feature, as you can then pass short endpoints to each request like `users` (yes, feel free to omit leading or trailing slashes). You could set an empty string `''` as `base_url` and then pass fully qualified URLs as the endpoint of each request.
 
 #### Optional initialization parameters
 
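The `deserializer:` option in the initializer example expects objects that respond to `call(body)` (the contract is documented further below). A runnable sketch with plain-Ruby stand-ins — `HtmlParser` / `XMLParser` from the example are placeholders, so a JSON lambda is used here instead:

```ruby
require "json"

# Any object responding to `call(body)` can serve as a deserializer.
# The simplest implementation is a lambda:
json_deserializer = ->(body) { JSON.parse(body, symbolize_names: true) }

# A class with a `call` method works just as well, e.g. for non-JSON bodies:
class PlainTextParser
  def call(body)
    body.to_s.strip
  end
end

parsed = json_deserializer.call('{"name":"Andy"}')
parsed[:name] # "Andy"
```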
@@ -91,34 +101,7 @@ The optional parameters are:
 * `user_agent` - if you would like your calls to identify with a specific user agent
 * `verbose` - the default is `false`, set it to true while debugging issues
 
-
-
-In case the API you are connecting to does not return JSON, you can pass custom deserializers to `Connection.new` or `Queue.new`:
-
-    deserializers: { error: your_error_deserializer, response: your_response_deserializer }
-
-A Deserializer has to be an object on which the method `call` with the parameter `body` can be called:
-
-    custom_deserializer.call(body)
-
-where `body` is the response body (in the default case a JSON object). The class `Deserializer` contains the default objects that are used. They might help you creating your own. Don't forget to make requests with another header than the default `"Content-Type" => "application/json"`, when the API you connect to does not support JSON.
-
-##### Monitoring, metrics, instrumentation
-
-Pass an object as `:monitor` to a connection that defines the method `call` and accepts a hash as parameter.
-
-    monitor.call({...})
-
-It will receive information about every request as soon as it finished. What you do with this information is up for you to implement.
-
-| Field          | Description                                                            |
-|:---------------|:-----------------------------------------------------------------------|
-| `url`          | URL of the endpoint that was called                                    |
-| `method`       | HTTP method: get, post, ...                                            |
-| `status`       | HTTP status code: 200, ...                                             |
-| `runtime`      | the time in seconds it took the request to finish                      |
-| `completed_at` | Time.now.utc.iso8601(3)                                                |
-| `context`      | Whatever you pass as `monitoring_context` to the options of a request  |
+> Detailed information about every parameter can be found below.
 
 ### Request methods
 
@@ -126,9 +109,12 @@ The available methods are:
 
 * `get` / `get!`
 * `post` / `post!`
-* `put` / `put
+* `put` / `put!`
 * `patch` / `patch!`
 * `delete` / `delete!`
+* `head` / `head!`
+* `options` / `options!`
+* `trace` / `trace!`
 
 where the methods ending on a _bang!_ will raise an error (which you should handle in your application) while the others will return an error object.
 
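The bang / non-bang distinction can be sketched without the gem — the stub classes below are hypothetical stand-ins that only mirror the documented behavior (error object returned vs. error raised):

```ruby
# Hypothetical result object mirroring the documented `error?` interface.
StubResult = Struct.new(:code, :body) do
  def error?
    code >= 400
  end
end

class StubConnection
  # Non-bang: always returns a result object the caller can inspect.
  def get(_endpoint)
    StubResult.new(404, "not found")
  end

  # Bang: raises on failure, so the caller must rescue.
  def get!(endpoint)
    result = get(endpoint)
    raise result.body if result.error?
    result
  end
end

conn = StubConnection.new
conn.get("users").error? # true
```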
@@ -143,7 +129,7 @@ connection.get("users/#{id}")
 connection.get("/users/#{id}")
 ```
 
-All forms above
+All forms above are valid and will make a request to the same URL.
 
 * Please take note that _the endpoint can be given as a String, a Symbol, or an Array._
 * While they do no harm, there is _no need to pass leading or trailing `/` in endpoints._
@@ -162,68 +148,20 @@ All request methods expect a mandatory `endpoint` and an optional hash as parameter
 * `cache` - optionally overwrite the cache store set in `Connection` in any request
 * `monitoring_context` - pass additional information you want to collect with your instrumentation `monitor`
 
-
+> Detailed information about every parameter can be found below.
+
+### Example usage
 
 ```ruby
-connection.post(
+response = connection.post(
   :users,
   body: { name: "Andy" },
   params: { origin: "Twitter" },
   headers: { "Authorization" => "Bearer #{token}" },
-  timeout:
-  cache: nil
+  timeout: 5
 )
 ```
 
-#### Basic auth
-
-In case you need to use an API that is protected by **basic_auth** just pass the credentials as optional parameters:
-`username: 'admin', password: 'secret'`
-
-#### Timeout duration
-
-The default timeout duration is **3 seconds**.
-
-If you want to use a different timeout, you can pass the key `timeout` when initializing the `Connection`. You can also overwrite it on every call.
-
-#### Custom logger
-
-By default no logging is happening. If you need request logging, you can pass your custom Logger to the key `logger` when initializing the `Connection`. It will write to `logger.info` when starting and when completing a request.
-
-#### Caching responses
-
-To cache all the reponses of a connection, just pass the optional parameter `cache` to its initializer. You can also overwrite the connection's cache configuration by passing the parameter `cache` to any `get` call.
-
-It could be an instance of an implementation as simple as this:
-
-```ruby
-class Cache
-  def initialize
-    @memory = {}
-  end
-
-  def get(request)
-    @memory[request]
-  end
-
-  def set(request, response)
-    @memory[request] = response
-  end
-end
-```
-
-Or use an adapter for Dalli, Redis, or Rails cache that also support an optional time-to-live `default_ttl` parameter. If you use `Rails.cache` with the adapter `:memory_store` or `:mem_cache_store`, the object you would have to pass looks like this:
-
-```ruby
-require "typhoeus/cache/rails"
-
-cache: Typhoeus::Cache::Rails.new(Rails.cache, default_ttl: 600) # 600 seconds
-```
-
-Read more about how to use it: https://github.com/typhoeus/typhoeus#caching
-
-### Example usage
-
 To use the gem, it is recommended to write wrapper classes for the endpoints used. While it would be possible to use the `get, get!, post, post!, put, put!, patch, patch!, delete, delete!` or also the bare `request.run` methods directly, wrapper classes will unify the usage pattern and be very convenient to use by veterans and newcomers to the team. A wrapper class could look like this:
 
 ```ruby
@@ -296,6 +234,117 @@ To create and fetch a user from a remote service with the `Users` wrapper listed
 user.name # == "Andy"
 ```
 
+### Connection parameter details
+
+#### base_url
+
+#### Cache
+
+To cache the responses of a connection, just pass the optional parameter `cache` to its initializer. You can also overwrite the connection's cache configuration by passing the parameter `cache` to any `get` call.
+
+It could be an instance of an implementation as simple as this:
+
+```ruby
+class Cache
+  def initialize
+    @memory = {}
+  end
+
+  def get(request)
+    @memory[request]
+  end
+
+  def set(request, response)
+    @memory[request] = response
+  end
+end
+```
+
+Or use an adapter for Dalli, Redis, or Rails cache that also supports an optional time-to-live `default_ttl` parameter. If you use `Rails.cache` with the adapter `:memory_store` or `:mem_cache_store`, the object you would have to pass looks like this:
+
+```ruby
+require "typhoeus/cache/rails"
+
+cache: Typhoeus::Cache::Rails.new(Rails.cache, default_ttl: 600) # 600 seconds
+```
+
+Read more about how to use it: https://github.com/typhoeus/typhoeus#caching
+
+#### Custom deserializers
+
+In case the API you are connecting to does not return JSON, you can pass custom deserializers to `Connection.new` or `Queue.new`:
+
+    deserializers: { error: your_error_deserializer, response: your_response_deserializer }
+
+A Deserializer has to be an object on which the method `call` with the parameter `body` can be called:
+
+    custom_deserializer.call(body)
+
+where `body` is the response body (in the default case a JSON object). The class `Deserializer` contains the default objects that are used. They might help you create your own. Don't forget to make requests with another header than the default `"Content-Type" => "application/json"` when the API you connect to does not support JSON.
+
+If you don't want Chimera to deserialize the response body, pass Procs that do nothing:
+
+    deserializer: { error: proc { |body| body }, response: proc { |body| body } }
+
+#### Logger
+
+By default no logging is happening. If you need request logging, you can pass your custom Logger as option `:logger` when initializing the `Connection`. Chimera will write to `logger.info` when starting and when completing a request.
+
+#### Monitoring, metrics, instrumentation
+
+Pass an object as `:monitor` to a connection that defines the method `call` and accepts a hash as parameter.
+
+    monitor.call({...})
+
+It will receive information about every request as soon as it is finished. What you do with this information is up to you to implement.
+
+| Field          | Description                                                            |
+|:---------------|:-----------------------------------------------------------------------|
+| `url`          | URL of the endpoint that was called                                    |
+| `method`       | HTTP method: get, post, ...                                            |
+| `status`       | HTTP status code: 200, ...                                             |
+| `runtime`      | the time in seconds it took the request to finish                      |
+| `completed_at` | Time.now.utc.iso8601(3)                                                |
+| `context`      | Whatever you pass as `monitoring_context` to the options of a request  |
+
+#### Timeout
+
+The default timeout duration is **3 seconds**.
+
+If you want to use a different timeout, you can pass the option `timeout` when initializing the `Connection`. Give the timeout in seconds (it can be below 1 second, just pass `0.5`). You can also overwrite the timeout on every call.
+
+#### User Agent
+
+#### verbose
+
+### Request parameter details
+
+#### endpoint
+
+#### Body
+
+#### Headers
+
+#### Params
+
+#### Basic auth / `username:password`
+
+In case you need to use an API that is protected by **basic_auth**, just pass the credentials as optional parameters:
+`username: 'admin', password: 'secret'`
+
+#### Timeout duration
+
+The default timeout duration is **3 seconds**. You can pass the option `:timeout` to overwrite the Connection default (or its custom setting) on every call.
+
+#### Caching responses
+
+You can inject a caching adapter on a per-request basis; this is also possible when an adapter has been set for the Connection already. Please keep in mind that not all HTTP calls support caching.
+<!-- # TODO: list examples -->
+
+#### monitoring_context
+
+<!-- # TODO: list examples -->
+
 ## The Request class
 
 Usually it does not have to be used directly. It is the class that executes the `Typhoeus::Requests`, raises `Errors` on failing and returns `Response` objects on successful calls.
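The `:monitor` contract documented above only requires an object with `call(hash)`. A runnable sketch — the event values below are fabricated sample data following the documented field list, not output of the gem:

```ruby
require "time"

collected = []
monitor = ->(event) { collected << event } # any object responding to `call` works

# The client would invoke the monitor once per finished request,
# passing the documented fields:
monitor.call(
  url: "http://localhost:3000/v1/users/42", # hypothetical endpoint
  method: :get,
  status: 200,
  runtime: 0.113,
  completed_at: Time.now.utc.iso8601(3),
  context: { request_id: "abc-123" } # whatever was passed as monitoring_context
)

collected.last[:status] # 200
```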
@@ -308,10 +357,10 @@ The `ChimeraHttpClient::Response` objects have the following interface:
 
 * body (content the call returns)
 * code (http code, should be 200 or 2xx)
-* time (for monitoring)
-* response (the full response object, including the request)
 * error? (returns false)
-* parsed_body (returns the result of `deserializer[:response].call(body)`)
+* parsed_body (returns the result of `deserializer[:response].call(body)`, by default it deserializes JSON)
+* response (the full response object, including the request)
+* time (for monitoring)
 
 If your API does not use JSON, but a different format e.g. XML, you can pass a custom deserializer to the Connection.
 
@@ -319,14 +368,14 @@ If your API does not use JSON, but a different format e.g. XML, you can pass a c
 
 All errors inherit from `ChimeraHttpClient::Error` and therefore offer the same attributes:
 
-* code (http error code)
 * body (alias => message)
-*
-* response (the full response object, including the request)
+* code (http error code)
 * error? (returns true)
 * error_class (e.g. ChimeraHttpClient::NotFoundError)
-*
+* response (the full response object, including the request)
+* time (runtime of the request for monitoring)
 * to_json (information to return to the API consumer / respects ENV['CHIMERA_HTTP_CLIENT_LOG_REQUESTS'])
+* to_s (information for logging / respects ENV['CHIMERA_HTTP_CLIENT_LOG_REQUESTS'])
 
 The error classes and their corresponding http error codes:
 
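Because responses and errors both implement `error?`, callers of the non-bang methods can branch instead of rescuing. A sketch with a stand-in object that only mirrors the attribute list documented above (not the gem's actual class):

```ruby
# Stand-in mirroring the documented error attributes (code, body, error?).
FakeChimeraError = Struct.new(:code, :body) do
  def error?
    true
  end

  def message # documented alias of body
    body
  end
end

result = FakeChimeraError.new(404, "user not found")

log_line =
  if result.error?
    "request failed with HTTP #{result.code}: #{result.message}"
  else
    "ok"
  end
# log_line == "request failed with HTTP 404: user not found"
```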
@@ -345,7 +394,10 @@ The error classes and their corresponding http error codes:
 
 ## The Queue class
 
-Instead of making single requests immediately, the ChimeraHttpClient allows to queue requests and run them in **parallel**.
+Instead of making single requests immediately, the ChimeraHttpClient allows to queue requests and run them in **parallel**. The two big benefits are:
+
+* only one HTTP connection has to be created (and closed) for all the requests (if the server supports this)
+* no need for more asynchronicity in your code, just use the aggregated result set
 
 The number of parallel requests is limited by your system. There is a hard limit for 200 concurrent requests. You will have to measure yourself where the sweet spot for optimal performance is - and when things start to get flaky. I recommend to queue not much more than 20 requests before running them.
 
@@ -370,6 +422,42 @@ The only difference is that a parameter to set the HTTP method has to prepended.
 * `:put` / `'put'` / `'PUT'`
 * `:patch` / `'patch'` / `'PATCH'`
 * `:delete` / `'delete'` / `'DELETE'`
+* `:head` / `'head'` / `'HEAD'`
+* `:options` / `'options'` / `'OPTIONS'`
+* `:trace` / `'trace'` / `'TRACE'`
+
+#### Request chaining
+
+In version _1.2.1.beta_ `queue.add` accepts the parameter `success_handler` which takes a proc or lambda that accepts the **two parameters** `queue` and `response`. It will be executed once the given request returns a successful response. In it another request can be queued, which can use parts of the response of the previous successful request.
+
+You could for example first queue a user object to receive some related _id_ and then use this _id_ to fetch this object.
+
+```ruby
+success_handler =
+  proc do |queue, response|
+    neighbour_id = response.parsed_body[:neighbour_id]
+
+    queue.add(:get, "users/#{neighbour_id}")
+
+    response
+  end
+```
+
+It is important to let your success_handler _always_ return the `response` of the original request (unless you are not interested in the response).
+
+Calls defined in a success_handler will be run after all previously queued requests.
+
+```ruby
+queue.add(method, endpoint, success_handler: success_handler) # first
+queue.add(method, other_endpoint) # second
+
+responses = queue.execute
+
+responses[0] # first
+responses[1] # second
+responses[2] # third, the call given in the success_handler
+```
+
 
 ### Executing requests in parallel
 
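The ordering guarantee described in the README diff — success_handler calls run after everything queued earlier — can be simulated without HTTP. The array-based queue below is a plain-Ruby stand-in for the gem's `Queue`, used only to show the ordering:

```ruby
executed = []
queue = []

# Mirrors the documented handler signature: |queue, response|
success_handler = proc do |q, response|
  q << ["third (added by success_handler)", nil]
  response # always hand the original response back
end

queue << ["first", success_handler]
queue << ["second", nil]

# "execute": process entries in order; a handler may append more work,
# which therefore runs after all previously queued entries.
until queue.empty?
  label, handler = queue.shift
  executed << label
  handler&.call(queue, label)
end

executed # ["first", "second", "third (added by success_handler)"]
```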
data/TODO.markdown
CHANGED
@@ -15,6 +15,7 @@ _none known_
 * [x] ~~add (example) to README~~
 * [ ] add logger.warn / .error for error cases (?)
 * [ ] streamline log message
+* [ ] refactor to `monitor` concept, pass in proc, not a logger object
 
 ### ~~Custom De-serializer~~
 
@@ -26,8 +27,9 @@ _none known_
 
 * [x] ~~allow to queue multiple requests~~
 * [x] ~~execute (up to 200) requests in parallel~~
+* [x] allow to pass one proc / block to request to use as on_complete handler
+* [ ] ensure to check timeouts and connection errors are not re-run (conditionally?)
 * [ ] allow to pass one proc / block (for all requests) to use as on_complete handler for each request
-* [ ] allow to pass one proc / block per requests) to use as on_complete handler
 * [ ] add example to README
 
 ### Add server for testing
data/chimera_http_client.gemspec
CHANGED
@@ -16,7 +16,7 @@ Gem::Specification.new do |spec|
     It is lightweight, fast and enables you to queue HTTP requests to run them in parallel
     for better performance and simple aggregating of distributed data. Despite it's simple
     interface it allows for advanced features like using custom deserializers, loggers,
-    caching requests individiually, and instrumentation support
+    caching requests individiually, and custom instrumentation support.
   DESCRIPTION
 
   spec.homepage = "https://github.com/mediafinger/chimera_http_client"
data/lib/chimera_http_client/queue.rb
CHANGED
@@ -20,7 +20,20 @@ module ChimeraHttpClient
 
       hydra.run
 
-      responses = queued_requests.map
+      responses = queued_requests.map do |request|
+        if request.result.nil?
+          options = request.send(:instance_variable_get, "@options")
+          response = request.request.run
+
+          if response.success?
+            ::ChimeraHttpClient::Response.new(response, options)
+          else
+            ::ChimeraHttpClient::Error.new(response, options)
+          end
+        else
+          request.result
+        end
+      end
 
       empty
 
@@ -42,6 +55,8 @@ module ChimeraHttpClient
         deserializer: @deserializer,
         logger: @logger,
         monitor: @monitor,
+        success_handler: options[:success_handler],
+        queue: self,
       }
 
       Request.new(instance_options).create(
data/lib/chimera_http_client/request.rb
CHANGED
@@ -58,7 +58,11 @@ module ChimeraHttpClient
     private
 
     def on_complete_handler(response)
-
+      if response.success?
+        return Response.new(response, @options) unless @options[:success_handler]
+
+        return @options[:success_handler].call(@options[:queue], Response.new(response, @options))
+      end
 
       exception_for(response)
     end
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: chimera_http_client
 version: !ruby/object:Gem::Version
-  version: 1.2.
+  version: 1.2.1.beta
 platform: ruby
 authors:
 - Andreas Finger
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2019-06-
+date: 2019-06-21 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: typhoeus
@@ -225,7 +225,7 @@ description: |
   It is lightweight, fast and enables you to queue HTTP requests to run them in parallel
   for better performance and simple aggregating of distributed data. Despite it's simple
   interface it allows for advanced features like using custom deserializers, loggers,
-  caching requests individiually, and instrumentation support
+  caching requests individiually, and custom instrumentation support.
 email:
 - webmaster@mediafinger.com
 executables: []
@@ -267,9 +267,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
     version: 2.5.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - "
+  - - ">"
   - !ruby/object:Gem::Version
-    version:
+    version: 1.3.1
 requirements: []
 rubygems_version: 3.0.3
 signing_key: