chimera_http_client 1.2.1.beta → 1.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.markdown +105 -182
- data/TODO.markdown +1 -3
- data/chimera_http_client.gemspec +1 -1
- data/lib/chimera_http_client/queue.rb +1 -16
- data/lib/chimera_http_client/request.rb +24 -8
- data/lib/chimera_http_client/version.rb +1 -1
- metadata +6 -6
checksums.yaml
CHANGED

````diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: dc63d4372cf53748047f173b87859a5efe7a2db057a1f41be8ba2f265255b28a
+  data.tar.gz: 1ff54acfcac10f4ffb85428d8c91b47d8d71da878d047a0937b02922f5f6ea05
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: cb24d335a115edfc0ba2408b77372157bcf9e54d10a4e817d603f9923a1dc51151d342c74b181560d1f681a006f7be4e67a16d0e4c4ae5c08b3a6f164776f8b1
+  data.tar.gz: 85fa644f64fcb9ffbef08f18d5e03e1cb3d3633020723b1db5984eb8c51c87534cd0d9c4c8d6ec7005795dd1d035f38e583c3c22cc4718c95c5e9a4164b5fbaf
````
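The SHA256/SHA512 values above are hex digests of the released `metadata.gz` and `data.tar.gz` archives. A minimal sketch of how such digests are produced with Ruby's stdlib `Digest` (the input string here is arbitrary, not the real archive bytes):

```ruby
require "digest"

# Registries publish hexdigests of the archive contents; hashing any
# byte string works the same way. This sample input is made up.
data = "example archive bytes"
sha256 = Digest::SHA256.hexdigest(data)
sha512 = Digest::SHA512.hexdigest(data)

sha256.length # => 64 (hex characters)
sha512.length # => 128 (hex characters)
```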
data/README.markdown
CHANGED
````diff
@@ -66,28 +66,18 @@ The basic usage looks like this:
 
 ```ruby
 connection = ChimeraHttpClient::Connection.new(base_url: 'http://localhost/namespace')
-
 response = connection.get!(endpoint, params: params)
 ```
 
 ### Initialization
 
-
-connection = ChimeraHttpClient::Connection.new(
-  base_url: 'http://localhost:3000/v1',
-  cache: cache,
-  deserializer: { deserializer: { error: HtmlParser, response: XMLParser }},
-  logger: logger,
-  monitor: monitor,
-  timeout: 2
-)
-```
+`connection = ChimeraHttpClient::Connection.new(base_url: 'http://localhost:3000/v1', logger: logger, cache: cache)`
 
 #### Mandatory initialization parameter `base_url`
 
 The mandatory parameter is **base_url** which should include the host, port and base path to the API endpoints you want to call, e.g. `'http://localhost:3000/v1'`.
 
-Setting the `base_url` is meant to be a comfort feature, as you can then pass short endpoints to each request like
+Setting the `base_url` is meant to be a comfort feature, as you can then pass short endpoints to each request like `/users`. You could set an empty string `''` as `base_url` and then pass full qualified URLs as endpoint of the requests.
 
 #### Optional initialization parameters
 
````
````diff
@@ -95,13 +85,40 @@ The optional parameters are:
 
 * `cache` - an instance of your cache solution, can be overwritten in any request
 * `deserializers` - custom methods to deserialize the response body, below more details
-* `logger` - an instance of a logger class that implements `#info` and `#
+* `logger` - an instance of a logger class that implements `#info`, `#warn` and `#error` methods
 * `monitor` - to collect metrics about requests, the basis for your instrumentation needs
 * `timeout` - the timeout for all requests, can be overwritten in any request, the default are 3 seconds
 * `user_agent` - if you would like your calls to identify with a specific user agent
 * `verbose` - the default is `false`, set it to true while debugging issues
 
-
+##### Custom deserializers
+
+In case the API you are connecting to does not return JSON, you can pass custom deserializers to `Connection.new` or `Queue.new`:
+
+    deserializers: { error: your_error_deserializer, response: your_response_deserializer }
+
+A Deserializer has to be an object on which the method `call` with the parameter `body` can be called:
+
+    custom_deserializer.call(body)
+
+where `body` is the response body (in the default case a JSON object). The class `Deserializer` contains the default objects that are used. They might help you creating your own. Don't forget to make requests with another header than the default `"Content-Type" => "application/json"`, when the API you connect to does not support JSON.
+
+##### Monitoring, metrics, instrumentation
+
+Pass an object as `:monitor` to a connection that defines the method `call` and accepts a hash as parameter.
+
+    monitor.call({...})
+
+It will receive information about every request as soon as it finished. What you do with this information is up for you to implement.
+
+| Field          | Description                                                            |
+|:---------------|:-----------------------------------------------------------------------|
+| `url`          | URL of the endpoint that was called                                    |
+| `method`       | HTTP method: get, post, ...                                            |
+| `status`       | HTTP status code: 200, ...                                             |
+| `runtime`      | the time in seconds it took the request to finish                      |
+| `completed_at` | Time.now.utc.iso8601(3)                                                |
+| `context`      | Whatever you pass as `monitoring_context` to the options of a request  |
 
 ### Request methods
 
````
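The deserializer contract added in the README lines above is just "an object responding to `call(body)`". A self-contained illustration of two valid deserializers (the variable names here are made up for the example, they are not part of the gem):

```ruby
require "json"

# Any callable works as a deserializer: a lambda, a proc, or an
# object defining #call. This one parses JSON and symbolizes keys.
symbolizing_deserializer = lambda do |body|
  JSON.parse(body, symbolize_names: true)
end

# Passing the raw body through untouched is also a valid deserializer,
# matching the "Proc that does nothing" mentioned elsewhere in the README.
identity_deserializer = proc { |body| body }

body = '{"name":"Andy"}'
symbolizing_deserializer.call(body) # => { name: "Andy" }
identity_deserializer.call(body)    # => '{"name":"Andy"}'
```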
````diff
@@ -109,12 +126,9 @@ The available methods are:
 
 * `get` / `get!`
 * `post` / `post!`
-* `put` / `put
+* `put` / `put`
 * `patch` / `patch!`
 * `delete` / `delete!`
-* `head` / `head!`
-* `options` / `options!`
-* `trace` / `trace!`
 
 where the methods ending on a _bang!_ will raise an error (which you should handle in your application) while the others will return an error object.
 
````
````diff
@@ -129,7 +143,7 @@ connection.get("users/#{id}")
 connection.get("/users/#{id}")
 ```
 
-All forms above
+All forms above ave valid and will make a request to the same URL.
 
 * Please take note that _the endpoint can be given as a String, a Symbol, or an Array._
 * While they do no harm, there is _no need to pass leading or trailing `/` in endpoints._
````
````diff
@@ -148,20 +162,79 @@ All request methods expect a mandatory `endpoint` and an optional hash as parameters:
 * `cache` - optionally overwrite the cache store set in `Connection` in any request
 * `monitoring_context` - pass additional information you want to collect with your instrumentation `monitor`
 
-
-
-### Example usage
+Example:
 
 ```ruby
-
+connection.post(
   :users,
   body: { name: "Andy" },
   params: { origin: `Twitter`},
   headers: { "Authorization" => "Bearer #{token}" },
-  timeout:
+  timeout: 10,
+  cache: nil
 )
 ```
 
+#### Basic auth
+
+In case you need to use an API that is protected by **basic_auth** just pass the credentials as optional parameters:
+`username: 'admin', password: 'secret'`
+
+#### Timeout duration
+
+The default timeout duration is **3 seconds**.
+
+If you want to use a different timeout, you can pass the key `timeout` when initializing the `Connection`. You can also overwrite it on every call.
+
+#### Custom logger
+
+By default no logging is happening. If you need request logging, you can pass your custom Logger to the key `logger` when initializing the `Connection`. It will write to `logger.info` when starting and when completing a request.
+
+The message passed to the logger is a hash with the following fields:
+
+| Key          | Description                                 |
+|:-------------|:--------------------------------------------|
+| `message`    | indicator if a call was started or finished |
+| `method`     | the HTTP method used                        |
+| `url`        | the requested URL                           |
+| `code`       | HTTP status code                            |
+| `runtime`    | time the request took in ms                 |
+| `user_agent` | the user_agent used to open the connection  |
+
+#### Caching responses
+
+To cache all the reponses of a connection, just pass the optional parameter `cache` to its initializer. You can also overwrite the connection's cache configuration by passing the parameter `cache` to any `get` call.
+
+It could be an instance of an implementation as simple as this:
+
+```ruby
+class Cache
+  def initialize
+    @memory = {}
+  end
+
+  def get(request)
+    @memory[request]
+  end
+
+  def set(request, response)
+    @memory[request] = response
+  end
+end
+```
+
+Or use an adapter for Dalli, Redis, or Rails cache that also support an optional time-to-live `default_ttl` parameter. If you use `Rails.cache` with the adapter `:memory_store` or `:mem_cache_store`, the object you would have to pass looks like this:
+
+```ruby
+require "typhoeus/cache/rails"
+
+cache: Typhoeus::Cache::Rails.new(Rails.cache, default_ttl: 600) # 600 seconds
+```
+
+Read more about how to use it: https://github.com/typhoeus/typhoeus#caching
+
+### Example usage
+
 To use the gem, it is recommended to write wrapper classes for the endpoints used. While it would be possible to use the `get, get!, post, post!, put, put!, patch, patch!, delete, delete!` or also the bare `request.run` methods directly, wrapper classes will unify the usage pattern and be very convenient to use by veterans and newcomers to the team. A wrapper class could look like this:
 
 ```ruby
````
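The `get(request)` / `set(request, response)` contract shown in the README's cache example can be exercised standalone. The sketch below renames the class to `MemoryCache` (a hypothetical name) and uses plain strings as stand-ins for request objects:

```ruby
# Minimal cache following the get/set contract the README describes.
# Real usage would receive request objects as keys; strings stand in here.
class MemoryCache
  def initialize
    @memory = {}
  end

  def get(request)
    @memory[request]
  end

  def set(request, response)
    @memory[request] = response
  end
end

cache = MemoryCache.new
cache.set("GET /users/1", { name: "Andy" })
cache.get("GET /users/1") # => { name: "Andy" }
cache.get("GET /users/2") # => nil (cache miss)
```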
````diff
@@ -234,117 +307,6 @@ To create and fetch a user from a remote service with the `Users` wrapper listed
 user.name # == "Andy"
 ```
 
-### Connection parameter details
-
-#### base_url
-
-#### Cache
-
-To cache the responses of a connection, just pass the optional parameter `cache` to its initializer. You can also overwrite the connection's cache configuration by passing the parameter `cache` to any `get` call.
-
-It could be an instance of an implementation as simple as this:
-
-```ruby
-class Cache
-  def initialize
-    @memory = {}
-  end
-
-  def get(request)
-    @memory[request]
-  end
-
-  def set(request, response)
-    @memory[request] = response
-  end
-end
-```
-
-Or use an adapter for Dalli, Redis, or Rails cache that also support an optional time-to-live `default_ttl` parameter. If you use `Rails.cache` with the adapter `:memory_store` or `:mem_cache_store`, the object you would have to pass looks like this:
-
-```ruby
-require "typhoeus/cache/rails"
-
-cache: Typhoeus::Cache::Rails.new(Rails.cache, default_ttl: 600) # 600 seconds
-```
-
-Read more about how to use it: https://github.com/typhoeus/typhoeus#caching
-
-#### Custom deserializers
-
-In case the API you are connecting to does not return JSON, you can pass custom deserializers to `Connection.new` or `Queue.new`:
-
-    deserializers: { error: your_error_deserializer, response: your_response_deserializer }
-
-A Deserializer has to be an object on which the method `call` with the parameter `body` can be called:
-
-    custom_deserializer.call(body)
-
-where `body` is the response body (in the default case a JSON object). The class `Deserializer` contains the default objects that are used. They might help you creating your own. Don't forget to make requests with another header than the default `"Content-Type" => "application/json"`, when the API you connect to does not support JSON.
-
-If you don't want that Chimera deserialize the response body, pass a Proc that does nothing:
-
-    deserializer: { response: proc { |body| body }, response: proc { |body| body } }
-
-#### Logger
-
-By default no logging is happening. If you need request logging, you can pass your custom Logger as option `:logger` when initializing the `Connection`. Chimera will write to `logger.info` when starting and when completing a request.
-
-#### Monitoring, metrics, instrumentation
-
-Pass an object as `:monitor` to a connection that defines the method `call` and accepts a hash as parameter.
-
-    monitor.call({...})
-
-It will receive information about every request as soon as it finished. What you do with this information is up for you to implement.
-
-| Field          | Description                                                            |
-|:---------------|:----------------------------------------------------------------------|
-| `url`          | URL of the endpoint that was called                                    |
-| `method`       | HTTP method: get, post, ...                                            |
-| `status`       | HTTP status code: 200, ...                                             |
-| `runtime`      | the time in seconds it took the request to finish                      |
-| `completed_at` | Time.now.utc.iso8601(3)                                                |
-| `context`      | Whatever you pass as `monitoring_context` to the options of a request  |
-
-#### Timeout
-
-The default timeout duration is **3 seconds**.
-
-If you want to use a different timeout, you can pass the option `timeout` when initializing the `Connection`. Give the timeout in seconds (it can be below 1 second, just pass `0.5`). You can also overwrite the time out on every call.
-
-#### User Agent
-
-#### verbose
-
-### Request parameter details
-
-#### endpoint
-
-#### Body
-
-#### Headers
-
-#### Params
-
-#### Basic auth / `username:password`
-
-In case you need to use an API that is protected by **basic_auth** just pass the credentials as optional parameters:
-`username: 'admin', password: 'secret'`
-
-#### Timeout duration
-
-The default timeout duration is **3 seconds**. You can pass the option `:timeout` to overwrite the Connection default or its custom setting it on every call.
-
-#### Caching responses
-
-You inject a caching adapter on a per request basis, this is also possible when an adapter has been set for the Connection already. Please keep in mind that not all HTTP calls support caching.
-<!-- # TODO: list examples -->
-
-#### monitoring_context
-
-<!-- # TODO: list examples -->
-
 ## The Request class
 
 Usually it does not have to be used directly. It is the class that executes the `Typhoeus::Requests`, raises `Errors` on failing and returns `Response` objects on successful calls.
````
````diff
@@ -357,10 +319,10 @@ The `ChimeraHttpClient::Response` objects have the following interface:
 
 * body (content the call returns)
 * code (http code, should be 200 or 2xx)
-* error? (returns false)
-* parsed_body (returns the result of `deserializer[:response].call(body)` by default it deserializes JSON)
-* response (the full response object, including the request)
 * time (for monitoring)
+* response (the full response object, including the request)
+* error? (returns false)
+* parsed_body (returns the result of `deserializer[:response].call(body)`)
 
 If your API does not use JSON, but a different format e.g. XML, you can pass a custom deserializer to the Connection.
 
````
````diff
@@ -368,14 +330,14 @@ If your API does not use JSON, but a different format e.g. XML, you can pass a c
 
 All errors inherit from `ChimeraHttpClient::Error` and therefore offer the same attributes:
 
-* body (alias => message)
 * code (http error code)
+* body (alias => message)
+* time (for monitoring)
+* response (the full response object, including the request)
 * error? (returns true)
 * error_class (e.g. ChimeraHttpClient::NotFoundError)
-* response (the full response object, including the request)
-* time (runtime of the request for monitoring)
-* to_json (information to return to the API consumer / respects ENV['CHIMERA_HTTP_CLIENT_LOG_REQUESTS'])
 * to_s (information for logging / respects ENV['CHIMERA_HTTP_CLIENT_LOG_REQUESTS'])
+* to_json (information to return to the API consumer / respects ENV['CHIMERA_HTTP_CLIENT_LOG_REQUESTS'])
 
 The error classes and their corresponding http error codes:
 
````
````diff
@@ -394,10 +356,7 @@ The error classes and their corresponding http error codes:
 
 ## The Queue class
 
-Instead of making single requests immediately, the ChimeraHttpClient allows to queue requests and run them in **parallel**.
-
-* only one HTTP connection has to be created (and closed) for all the requests (if the server supports this)
-* no need for more asynchronicity in your code, just use the aggregated result set
+Instead of making single requests immediately, the ChimeraHttpClient allows to queue requests and run them in **parallel**.
 
 The number of parallel requests is limited by your system. There is a hard limit for 200 concurrent requests. You will have to measure yourself where the sweet spot for optimal performance is - and when things start to get flaky. I recommend to queue not much more than 20 requests before running them.
 
````
````diff
@@ -422,42 +381,6 @@ The only difference is that a parameter to set the HTTP method has to prepended.
 * `:put` / `'put'` / `'PUT'`
 * `:patch` / `'patch'` / `'PATCH'`
 * `:delete` / `'delete'` / `'DELETE'`
-* `:head` / `'head'` / `'HEAD'`
-* `:options` / `'options'` / `'OPTIONS'`
-* `:trace` / `'trace'` / `'TRACE'`
-
-#### Request chaining
-
-In version _1.2.1.beta_ `queue.add` accepts the parameter `success_handler` which takes a proc or lambda that accepts the **two parameters** `queue` and `response`. It will be executed once the given request returns a successful response. In it another request can be queued, which can use parts of the response of the previous successful request.
-
-You could for example first queue a user object to receive some related _id_ and then use this _id_ to fetch this object.
-
-```ruby
-success_handler =
-  proc do |queue, response|
-    neighbour_id = response.parsed_body[:neighbour_id]
-
-    queue.add(:get, "users/#{neighbour_id}")
-
-    response
-  end
-```
-
-It is important to let your success_handler _always_ return the `response` of the original request (unless you are not interested in the response).
-
-Calls defined in a success_handler will be run after all previously queued requests.
-
-```ruby
-queue.add(method, endpoint, success_handler: success_handler) # first
-queue.add(method, other_endpoint) # second
-
-responses = queue.execute
-
-responses[0] # first
-responses[1] # second
-responses[2] # third, the call given in the success_handler
-```
-
 
 ### Executing requests in parallel
 
````
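The `monitor` contract the revised README describes is likewise just a callable receiving a hash per finished request. A self-contained sketch collecting the metrics fields from the README's table (all field values below are invented sample data, not real request output):

```ruby
require "time"

# Collects every metrics hash a connection's monitor would receive.
metrics = []
monitor = proc { |data| metrics << data }

# The client invokes the monitor with a hash like this after each request:
monitor.call(
  url: "http://localhost:3000/v1/users/42",
  method: :get,
  status: 200,
  runtime: 0.123, # seconds, per the README's monitoring table
  completed_at: Time.now.utc.iso8601(3),
  context: { feature: "user_lookup" } # from the monitoring_context option
)

metrics.last[:status] # => 200
```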
data/TODO.markdown
CHANGED
````diff
@@ -15,7 +15,6 @@ _none known_
 * [x] ~~add (example) to README~~
 * [ ] add logger.warn / .error for error cases (?)
 * [ ] streamline log message
-* [ ] refactor to `monitor` concept, pass in proc, not a logger object
 
 ### ~~Custom De-serializer~~
 
````
````diff
@@ -27,9 +26,8 @@ _none known_
 
 * [x] ~~allow to queue multiple requests~~
 * [x] ~~execute (up to 200) requests in parallel~~
-* [x] allow to pass one proc / block to request to use as on_complete handler
-* [ ] ensure to check timeouts and connection errors are not re-run (conditionally?)
 * [ ] allow to pass one proc / block (for all requests) to use as on_complete handler for each request
+* [ ] allow to pass one proc / block per requests) to use as on_complete handler
 * [ ] add example to README
 
 ### Add server for testing
````
data/chimera_http_client.gemspec
CHANGED
````diff
@@ -16,7 +16,7 @@ Gem::Specification.new do |spec|
     It is lightweight, fast and enables you to queue HTTP requests to run them in parallel
     for better performance and simple aggregating of distributed data. Despite it's simple
     interface it allows for advanced features like using custom deserializers, loggers,
-    caching requests individiually, and
+    caching requests individiually, and instrumentation support (soon to be implemented).
   DESCRIPTION
 
   spec.homepage = "https://github.com/mediafinger/chimera_http_client"
````
data/lib/chimera_http_client/queue.rb
CHANGED

````diff
@@ -20,20 +20,7 @@ module ChimeraHttpClient
 
       hydra.run
 
-      responses = queued_requests.map
-        if request.result.nil?
-          options = request.send(:instance_variable_get, "@options")
-          response = request.request.run
-
-          if response.success?
-            ::ChimeraHttpClient::Response.new(response, options)
-          else
-            ::ChimeraHttpClient::Error.new(response, options)
-          end
-        else
-          request.result
-        end
-      end
+      responses = queued_requests.map { |request| request.result }
 
       empty
 
````
````diff
@@ -55,8 +42,6 @@ module ChimeraHttpClient
         deserializer: @deserializer,
         logger: @logger,
         monitor: @monitor,
-        success_handler: options[:success_handler],
-        queue: self,
       }
 
       Request.new(instance_options).create(
````
data/lib/chimera_http_client/request.rb
CHANGED

````diff
@@ -16,6 +16,7 @@ module ChimeraHttpClient
       @result
     end
 
+    # rubocop:disable Metrics/MethodLength
     def create(url:, method:, body: nil, options: {}, headers: {})
       request_params = {
         method: method,
````
````diff
@@ -44,25 +45,40 @@ module ChimeraHttpClient
           completed_at: Time.now.utc.iso8601(3), context: options[:monitoring_context]
         }
       )
-
-
+
+        @options[:logger]&.info(
+          {
+            message: "Completed Chimera HTTP Request",
+            method: method.upcase,
+            url: url,
+            code: response.code,
+            runtime: runtime,
+            user_agent: Typhoeus::Config.user_agent,
+          }
+        )
 
         @result = on_complete_handler(response)
       end
 
-      @options[:logger]&.info(
+      @options[:logger]&.info(
+        {
+          message: "Starting Chimera HTTP Request",
+          method: method.upcase,
+          url: url,
+          code: nil,
+          runtime: 0,
+          user_agent: Typhoeus::Config.user_agent,
+        }
+      )
 
       self
     end
+    # rubocop:enable Metrics/MethodLength
 
     private
 
     def on_complete_handler(response)
-      if response.success?
-        return Response.new(response, @options) unless @options[:success_handler]
-
-        return @options[:success_handler].call(@options[:queue], Response.new(response, @options))
-      end
+      return Response.new(response, @options) if response.success?
 
       exception_for(response)
     end
````
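The request diff above replaces the single log call with structured hash payloads passed to `logger.info`. Any stdlib-style logger accepts these, since `Logger` stringifies non-String messages with `#inspect`. A standalone sketch (the payload values are sample data, not real request output):

```ruby
require "logger"
require "stringio"

# A standard Logger writing to an in-memory buffer; its #info accepts
# the hash payloads the request code builds, stringifying them via #inspect.
buffer = StringIO.new
logger = Logger.new(buffer)

logger.info(
  message: "Completed Chimera HTTP Request",
  method: "GET",
  url: "http://localhost:3000/v1/users",
  code: 200,
  runtime: 42, # milliseconds, per the README's logging table
  user_agent: "chimera_http_client"
)

buffer.string.include?("Completed Chimera HTTP Request") # => true
```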
metadata
CHANGED
````diff
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: chimera_http_client
 version: !ruby/object:Gem::Version
-  version: 1.
+  version: 1.3.0
 platform: ruby
 authors:
 - Andreas Finger
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2019-
+date: 2019-10-31 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: typhoeus
````
````diff
@@ -225,7 +225,7 @@ description: |
   It is lightweight, fast and enables you to queue HTTP requests to run them in parallel
   for better performance and simple aggregating of distributed data. Despite it's simple
   interface it allows for advanced features like using custom deserializers, loggers,
-  caching requests individiually, and
+  caching requests individiually, and instrumentation support (soon to be implemented).
 email:
 - webmaster@mediafinger.com
 executables: []
````
````diff
@@ -267,11 +267,11 @@ required_ruby_version: !ruby/object:Gem::Requirement
       version: 2.5.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - "
+  - - ">="
     - !ruby/object:Gem::Version
-      version:
+      version: '0'
 requirements: []
-rubygems_version: 3.0.
+rubygems_version: 3.0.4
 signing_key:
 specification_version: 4
 summary: General http client functionality to quickly connect to JSON REST API endpoints
````