lhc 9.4.0 → 9.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,54 +0,0 @@
- # Authentication Interceptor
-
- Add the auth interceptor to your basic set of LHC interceptors.
-
- ```ruby
- LHC.config.interceptors = [LHC::Auth]
- ```
-
- ## Bearer Authentication
-
- ```ruby
- LHC.get('http://local.ch', auth: { bearer: -> { access_token } })
- ```
-
- Adds the following header to the request:
- ```
- 'Authorization': 'Bearer 123456'
- ```
-
- Assuming that, at the time the request runs, the method `access_token` returns `123456`.
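A minimal sketch of where such an `access_token` could come from, assuming the token is kept in the Rails cache under a hypothetical key (in a real application it would come from your identity provider):

```ruby
# Hypothetical example: resolve the bearer token lazily at request time.
access_token = -> { Rails.cache.fetch('client_access_token') { '123456' } }
LHC.get('http://local.ch', auth: { bearer: access_token })
```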
-
- ## Basic Authentication
-
- ```ruby
- LHC.get('http://local.ch', auth: { basic: { username: 'steve', password: 'can' } })
- ```
-
- Adds the following header to the request:
- ```
- 'Authorization': 'Basic c3RldmU6Y2Fu'
- ```
-
- This is the Base64-encoded form of the credentials `"username:password"` (here `"steve:can"`).
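For reference, this header value can be reproduced with Ruby's standard library:

```ruby
require 'base64'

# Basic auth encodes "username:password" without line breaks.
Base64.strict_encode64('steve:can') # => "c3RldmU6Y2Fu"
```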
-
- # Reauthenticate
-
- The current implementation can only offer reauthentication for _client access tokens_. For this to work, the following has to be given:
-
- * You have configured and implemented `LHC::Auth.refresh_client_token = -> { TokenRefreshUtil.client_access_token(true) }`, which, when called, forces a refresh of the token and returns the new value. This implementation is also expected to invalidate caches if necessary.
- * Your interceptors contain `LHC::Auth` and `LHC::Retry`, where `LHC::Retry` comes _after_ `LHC::Auth` in the chain (see the sketch after this list).
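Putting both requirements together, a setup could look like this (a sketch; `TokenRefreshUtil` stands in for whatever helper your application uses to fetch a fresh client access token):

```ruby
# LHC::Retry must come after LHC::Auth so the retried request
# is sent with the refreshed token.
LHC.config.interceptors = [LHC::Auth, LHC::Retry]
LHC::Auth.refresh_client_token = -> { TokenRefreshUtil.client_access_token(true) }
```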
-
- ## Bearer Authentication with client access token
-
- Reauthentication will be initiated if:
-
- * the setup is correct
- * `response.success?` is false and an `LHC::Unauthorized` was observed
- * reauthentication hasn't already been attempted once
-
- If this is the case, the following happens:
-
- * the client token is refreshed by calling `refresh_client_token`
- * the authentication header is updated with the new token
- * `LHC::Retry` is triggered by adding `retry: { max: 1 }` to the request options
@@ -1,68 +0,0 @@
- # Caching Interceptor
-
- Add the cache interceptor to your basic set of LHC interceptors.
-
- ```ruby
- LHC.config.interceptors = [LHC::Caching]
- ```
-
- You can configure your own cache (default Rails.cache) and logger (default Rails.logger):
-
- ```ruby
- LHC::Caching.cache = ActiveSupport::Cache::MemoryStore.new
- LHC::Caching.logger = Logger.new(STDOUT)
- ```
-
- Caching is not enabled by default, even if you added the interceptor to your basic set of interceptors.
- If you want requests to be stored in and served from the cache, you have to enable caching per request.
-
- ```ruby
- LHC.get('http://local.ch', cache: true)
- ```
-
- You can also enable caching when configuring an endpoint in LHS.
-
- ```ruby
- class Feedbacks < LHS::Service
-   endpoint '{+datastore}/v2/feedbacks', cache: true
- end
- ```
-
- Only GET requests are cached by default. If you want to cache any other request method, configure it explicitly:
-
- ```ruby
- LHC.get('http://local.ch', cache: { methods: [:get] })
- ```
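For example, to cache POST requests for a particular call (a sketch; only do this when those responses are actually safe to cache):

```ruby
# Hypothetical: declare :post as cacheable for this request.
LHC.post('http://local.ch', cache: { methods: [:post] })
```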
-
- Responses served from cache are marked as served from cache:
-
- ```ruby
- response = LHC.get('http://local.ch', cache: true)
- response.from_cache? # true
- ```
-
- ## Options
-
- ```ruby
- LHC.get('http://local.ch', cache: { key: 'key', expires_in: 1.day, race_condition_ttl: 15.seconds, use: ActiveSupport::Cache::MemoryStore.new })
- ```
-
- `expires_in` - lets the cache entry expire after the given duration.
-
- `key` - sets the key that is used for caching. Every key is prefixed with `LHC_CACHE(v1): `.
-
- `race_condition_ttl` - very useful in situations where a cache entry is used very frequently and is under heavy load.
- When an entry expires under heavy load, several different processes would try to read the data natively and then all try to write it to the cache.
- To avoid that, the first process to find an expired cache entry bumps the cache expiration time by the value set in `race_condition_ttl`.
-
- `use` - sets an explicit cache to be used for this request. If this option is missing, `LHC::Caching.cache` is used.
-
- ## Testing
-
- Add to your spec_helper.rb:
-
- ```ruby
- require 'lhc/test/cache_helper.rb'
- ```
-
- This will initialize a MemoryStore cache for the LHC::Caching interceptor and reset the cache before every test.
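With the cache helper in place, caching behaviour can be asserted directly in a spec (a sketch assuming WebMock is used to stub the HTTP call):

```ruby
it 'serves the second request from cache' do
  stub_request(:get, 'http://local.ch').to_return(body: 'Website')
  LHC.get('http://local.ch', cache: true)
  expect(LHC.get('http://local.ch', cache: true).from_cache?).to be true
end
```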
@@ -1,17 +0,0 @@
- # Default Timeout Interceptor
-
- Applies default timeout values to all requests made with LHC in an application.
-
- ```ruby
- LHC.config.interceptors = [LHC::DefaultTimeout]
- ```
-
- * `timeout` default: 15 seconds
- * `connecttimeout` default: 2 seconds
-
- ## Overwrite defaults
-
- ```ruby
- LHC::DefaultTimeout.timeout = 5 # seconds
- LHC::DefaultTimeout.connecttimeout = 3 # seconds
- ```
@@ -1,29 +0,0 @@
- # Logging Interceptor
-
- The logging interceptor logs all requests done with LHC to the Rails logs.
-
- ## Installation
-
- ```ruby
- LHC.config.interceptors = [LHC::Logging]
-
- LHC::Logging.logger = Rails.logger
- ```
-
- ## What and how it logs
-
- The logging interceptor logs basic information about the request and the response:
-
- ```ruby
- LHC.get('http://local.ch')
- # Before LHC request<70128730317500> GET http://local.ch at 2018-05-23T07:53:19+02:00 Params={} Headers={"User-Agent"=>"Typhoeus - https://github.com/typhoeus/typhoeus", "Expect"=>""}
- # After LHC response for request<70128730317500>: GET http://local.ch at 2018-05-23T07:53:28+02:00 Time=0ms URL=http://local.ch:80/
- ```
-
- ## Configure
-
- You can configure the logger being used by the logging interceptor:
-
- ```ruby
- LHC::Logging.logger = Another::Logger
- ```
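For example, LHC requests could be written to a dedicated log file instead of the main Rails log (a sketch; the file path is an assumption):

```ruby
# Write LHC request/response lines to their own log file.
LHC::Logging.logger = Logger.new(Rails.root.join('log', 'lhc.log'))
```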
@@ -1,68 +0,0 @@
- # Monitoring Interceptor
-
- The monitoring interceptor reports all requests done with LHC to a given StatsD instance.
-
- ## Installation
-
- ```ruby
- LHC.config.interceptors = [LHC::Monitoring]
- ```
-
- You also have to configure StatsD in order for the monitoring interceptor to report anything.
-
- ```ruby
- LHC::Monitoring.statsd = <your-instance-of-statsd>
- ```
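Any StatsD client instance can be plugged in here, for example one created with the statsd-ruby gem (a sketch; host and port are assumptions):

```ruby
# An instance of the statsd-ruby client pointing at the local StatsD agent.
LHC::Monitoring.statsd = Statsd.new('127.0.0.1', 8125)
```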
-
- ### Environment
-
- By default, the monitoring interceptor uses Rails.env to determine the environment. In case you want to configure that, use:
-
- ```ruby
- LHC::Monitoring.env = ENV['DEPLOYMENT_TYPE'] || Rails.env
- ```
-
- ## What it tracks
-
- It tracks request attempts with `before_request` and `after_request` (counts).
-
- In case your workers/processes are getting killed due to time constraints,
- you are able to detect deltas by comparing the `before_request` and `after_request` counts:
-
- ```ruby
- "lhc.<app_name>.<env>.<host>.<http_method>.before_request", 1
- "lhc.<app_name>.<env>.<host>.<http_method>.after_request", 1
- ```
-
- In case of a successful response, it reports the response code as a count and the response time as a gauge value.
-
- ```ruby
- LHC.get('http://local.ch')
-
- "lhc.<app_name>.<env>.<host>.<http_method>.count", 1
- "lhc.<app_name>.<env>.<host>.<http_method>.200", 1
- "lhc.<app_name>.<env>.<host>.<http_method>.time", 43
- ```
-
- Timeouts are also reported:
-
- ```ruby
- "lhc.<app_name>.<env>.<host>.<http_method>.timeout", 1
- ```
-
- All dots in the host are replaced with underscores (`_`), because the dot is the default separator in Graphite.
-
- ## Configure
-
- It is possible to set the key for the monitoring interceptor on a per-request basis:
-
- ```ruby
- LHC.get('http://local.ch', monitoring_key: 'local_website')
-
- "local_website.count", 1
- "local_website.200", 1
- "local_website.time", 43
- ```
-
- If you use this approach, you need to add all namespaces (app, environment, etc.) to the key yourself.
@@ -1,18 +0,0 @@
- # Prometheus Interceptor
-
- Logs basic request/response information to Prometheus.
-
- ```ruby
- require 'prometheus/client'
- LHC::Prometheus.client = Prometheus::Client
- LHC::Prometheus.namespace = 'web_location_app'
- LHC.config.interceptors = [LHC::Prometheus]
- ```
-
- ```ruby
- LHC.get('http://local.ch')
- ```
-
- - Creates a Prometheus counter that receives additional meta information for `:code`, `:success` and `:timeout`.
-
- - Creates a Prometheus histogram for response times in milliseconds.
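To expose the recorded metrics to a Prometheus scraper, the prometheus-client gem provides a Rack exporter middleware; a minimal sketch (how and where you mount it depends on your application):

```ruby
# config.ru: serve the default registry's metrics under /metrics.
require 'prometheus/middleware/exporter'
use Prometheus::Middleware::Exporter
```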
@@ -1,24 +0,0 @@
- # Retry Interceptor
-
- If you enable the retry interceptor, you can have LHC retry requests for you:
-
- ```ruby
- LHC.config.interceptors = [LHC::Retry]
- response = LHC.get('http://local.ch', retry: true)
- ```
-
- It retries the request internally up to 3 times (the default) before it passes the last response back or raises an error for the last response.
-
- Keep in mind that all other interceptors run for every single retry.
-
- ## Limit the number of retries when making the request
-
- ```ruby
- LHC.get('http://local.ch', retry: { max: 1 })
- ```
-
- ## Change the default maximum number of retries of the retry interceptor
-
- ```ruby
- LHC::Retry.max = 3
- ```
@@ -1,19 +0,0 @@
- # Rollbar Interceptor
-
- Forwards errors to Rollbar when exceptions occur during HTTP requests.
-
- ```ruby
- LHC.config.interceptors = [LHC::Rollbar]
- ```
-
- ```ruby
- LHC.get('http://local.ch')
- ```
-
- If the request raises an exception, the interceptor forwards the request and response objects to Rollbar, which contain all the necessary data.
-
- ## Forward additional parameters
-
- ```ruby
- LHC.get('http://local.ch', rollbar: { tracking_key: 'this particular request' })
- ```
@@ -1,23 +0,0 @@
- # Zipkin
-
- Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures ([Zipkin Distributed Tracing](https://zipkin.io/)).
-
- Add the zipkin interceptor to your basic set of LHC interceptors.
-
- ```ruby
- LHC.config.interceptors = [LHC::Zipkin]
- ```
-
- The following configuration needs to happen in the application that wants to run this interceptor:
-
- 1. Add `gem 'zipkin-tracer'` to your Gemfile.
- 2. Add the necessary Rack middleware and configuration:
-
- ```ruby
- config.middleware.use ZipkinTracer::RackHandler, {
-   service_name: 'service-name', # name your service will be known as in zipkin
-   service_port: 80, # the port information that is sent along the trace
-   json_api_host: 'http://zipkin-collector', # the zipkin endpoint
-   sample_rate: 1 # sample rate, where 1 = 100% of all requests, and 0.1 is 10% of all requests
- }
- ```
@@ -1,24 +0,0 @@
- Request
- ===
-
- The request class handles the HTTP request,
- implements the interceptor pattern,
- loads configured endpoints,
- generates URLs from URL templates,
- and raises exceptions for any response code that does not indicate success (2xx).
-
- → [Read more about exceptions](exceptions.md)
-
- ```ruby
- request.response #<LHC::Response> the associated response.
-
- request.options #<Hash> the options used for creating the request.
-
- request.params # access request params
-
- request.headers # access request headers
-
- request.url #<String> URL that is used for doing the request
-
- request.method #<Symbol> provides the used http-method
- ```
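For instance, the same accessors can be reached through the response object after a call (a short sketch; the comments describe the documented return types rather than exact values):

```ruby
response = LHC.get('http://local.ch')
response.request.url    # the URL that was used for the request
response.request.method # the HTTP method as a symbol, e.g. :get
```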
@@ -1,19 +0,0 @@
- Response
- ===
-
- ```ruby
- response.request #<LHC::Request> the associated request.
-
- response.data #<OpenStruct> in case response body contains parsable JSON.
- response.data.something.nested
-
- response.body #<String>
-
- response.code #<Fixnum>
-
- response.headers #<Hash>
-
- response.time #<Fixnum> Provides response time in ms.
-
- response.timeout? #true|false
- ```
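As an illustration of `response.data` with nested JSON (the endpoint and payload are assumptions, purely for illustration):

```ruby
# Assuming the body is {"feedback": {"created_at": "2018-05-23"}}
response = LHC.get('http://local.ch/v2/feedbacks/1')
response.data.feedback.created_at # => "2018-05-23"
```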