lhc 12.1.3 → 13.1.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: c1d43867c7519eb1697f118fb44668142aa46bcdfb13d725be6617c74f4ed47f
-  data.tar.gz: cdfa710bc5c7ae5297eeceec6858738ad414e828237038b92a3dcbaa9c088fed
+  metadata.gz: 98300a9c64d910094f32a96b4428c71aa4d1d8927e1dc9e4b939c1119e28333c
+  data.tar.gz: 7cb93bf8911d06c5fbd015b36b2ccc62c3ff211a1e70a2aa10dfd8fe447be878
 SHA512:
-  metadata.gz: 9e11b8821556d2bb0e40b91cb080379fc82a3a90453fe36dd6127b4d26a571d065637275988ba03b46d96f6217a697022d0a6aa46532ab20a5d46e1ef5dbadfd
-  data.tar.gz: 192cabec15b2d1881a14505343f2e2f4218fad00ffaac4935dd3d1edbdc6132edb6b9ae0834f2d556dfbcacca9221f473f39dba37071ec40949e407bc953813f
+  metadata.gz: 24d10469629a7bb4942bd2a77fabc74910a98179e5998fef6e502fa88e25d583274becd33897f4f509ed97e3e9a3ed2d246145ac9a0908ae3b367d4d0f00ca9d
+  data.tar.gz: c22f9044dbc3a2052c3487eb014f04e86791017da1b3d438d6c9e48e63d30bf4bf5e19663854f3b1a2ba675ad240da84e3c528a0ca25037759a3576109275657
@@ -1,4 +1,4 @@
 source 'https://rubygems.org/'

 gemspec
-gem 'activesupport', '~> 5.0.0'
+gem 'activesupport', '~> 5.2'
@@ -1,4 +1,4 @@
 source 'https://rubygems.org/'

 gemspec
-gem 'activesupport', '~> 6.0.0'
+gem 'activesupport', '~> 6.0'
data/README.md CHANGED
@@ -73,6 +73,10 @@ use it like:
   * [Installation](#installation-1)
   * [Environment](#environment)
   * [What it tracks](#what-it-tracks)
+    * [Before and after request tracking](#before-and-after-request-tracking)
+    * [Response tracking](#response-tracking)
+    * [Timeout tracking](#timeout-tracking)
+    * [Caching tracking](#caching-tracking)
   * [Configure](#configure-1)
   * [Prometheus Interceptor](#prometheus-interceptor)
   * [Retry Interceptor](#retry-interceptor)
@@ -94,6 +98,8 @@ use it like:



+
+
 ## Basic methods

 Available are `get`, `post`, `put` & `delete`.
@@ -263,7 +269,7 @@ You can also use URL templates, when [configuring endpoints](#configuring-endpoints):
 LHC.configure do |c|
   c.endpoint(:find_feedback, 'http://datastore/v2/feedbacks/{id}')
 end
-
+
 LHC.get(:find_feedback, params:{ id: 123 }) # GET http://datastore/v2/feedbacks/123
 ```

@@ -276,7 +282,7 @@ Working and configuring timeouts is important, to ensure your app stays alive wh
 LHC forwards two timeout options directly to typhoeus:

 `timeout` (in seconds) - The maximum time in seconds that you allow the libcurl transfer operation to take. Normally, name lookups can take a considerable time and limiting operations to less than a few seconds risk aborting perfectly normal operations. This option may cause libcurl to use the SIGALRM signal to timeout system calls.
-`connecttimeout` (in seconds) - It should contain the maximum time in seconds that you allow the connection phase to the server to take. This only limits the connection phase, it has no impact once it has connected. Set to zero to switch to the default built-in connection timeout - 300 seconds.
+`connecttimeout` (in seconds) - It should contain the maximum time in seconds that you allow the connection phase to the server to take. This only limits the connection phase, it has no impact once it has connected. Set to zero to switch to the default built-in connection timeout - 300 seconds.

 ```ruby
 LHC.get('http://local.ch', timeout: 5, connecttimeout: 1)
@@ -481,7 +487,7 @@ You can configure global placeholders, that are used when generating urls from u
   c.placeholder(:datastore, 'http://datastore')
   c.endpoint(:feedbacks, '{+datastore}/feedbacks', { params: { has_reviews: true } })
 end
-
+
 LHC.get(:feedbacks) # http://datastore/v2/feedbacks
 ```

@@ -600,7 +606,6 @@ You can configure your own cache (default Rails.cache) and logger (default Rails.logger):

 ```ruby
 LHC::Caching.cache = ActiveSupport::Cache::MemoryStore.new
-LHC::Caching.logger = Logger.new(STDOUT)
 ```

 Caching is not enabled by default, although you added it to your basic set of interceptors.
@@ -631,6 +636,18 @@ Responses served from cache are marked as served from cache:
 response.from_cache? # true
 ```

+You can also configure a central HTTP cache to be used by the `LHC::Caching` interceptor.
+
+If you configure both a local and a central cache, LHC will perform multi-level caching:
+LHC will try to retrieve cached information from the central cache first and, in case of a miss, from the local cache, while writing back into both.
+
+```ruby
+LHC::Caching.central = {
+  read: 'redis://$PASSWORD@central-http-cache-replica.namespace:6379/0',
+  write: 'redis://$PASSWORD@central-http-cache-master.namespace:6379/0'
+}
+```
+
 ##### Options

 ```ruby
@@ -643,7 +660,7 @@ Responses served from cache are marked as served from cache:

 `race_condition_ttl` - very useful in situations where a cache entry is used very frequently and is under heavy load.
 If a cache expires and due to heavy load several different processes will try to read data natively and then they all will try to write to cache.
-To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in `cache_race_condition_ttl`.
+To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in `race_condition_ttl`.

 `use` - Set an explicit cache to be used for this request. If this option is missing `LHC::Caching.cache` is used.

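The multi-level read/write order described above (central first, then local, writing into both) can be sketched with plain hashes standing in for the actual cache stores; the class and key names are illustrative, not LHC's internal API:

```ruby
# Sketch of multi-level caching: read from the central cache first,
# fall back to the local cache on a miss, and write into both levels.
# Plain hashes stand in for the Redis/local stores (illustrative only).
class MultilevelCacheSketch
  def initialize(central:, local:)
    @central = central
    @local = local
  end

  def fetch(key)
    @central[key] || @local[key]
  end

  def write(key, value)
    @central[key] = value
    @local[key] = value
  end
end

central = {}
local = { 'GET http://datastore/v2/feedbacks' => 'cached-body' }
cache = MultilevelCacheSketch.new(central: central, local: local)

cache.fetch('GET http://datastore/v2/feedbacks') # central misses, local serves
cache.write('GET http://local.ch', 'fresh-body') # written to both levels
```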
@@ -729,14 +746,18 @@ LHC::Monitoring.env = ENV['DEPLOYMENT_TYPE'] || Rails.env

 It tracks request attempts with `before_request` and `after_request` (counts).

-In case your workers/processes are getting killed due limited time constraints,
+In case your workers/processes are getting killed due to limited time constraints,
 you are able to detect deltas by relying on "before_request" and "after_request" counts:

+###### Before and after request tracking
+
 ```ruby
 "lhc.<app_name>.<env>.<host>.<http_method>.before_request", 1
 "lhc.<app_name>.<env>.<host>.<http_method>.after_request", 1
 ```

+###### Response tracking
+
 In case of a successful response it reports the response code with a count and the response time with a gauge value.

 ```ruby
@@ -747,6 +768,17 @@ In case of a successful response it reports the response code with a count and the response time with a gauge value.
 "lhc.<app_name>.<env>.<host>.<http_method>.time", 43
 ```

+In case of an unsuccessful response it reports the response code with a count but no time:
+
+```ruby
+LHC.get('http://local.ch')
+
+"lhc.<app_name>.<env>.<host>.<http_method>.count", 1
+"lhc.<app_name>.<env>.<host>.<http_method>.500", 1
+```
+
+###### Timeout tracking
+
 Timeouts are also reported:

 ```ruby
@@ -755,6 +787,30 @@ Timeouts are also reported:

 All the dots in the host are getting replaced with underscore, because dot is the default separator in graphite.

+###### Caching tracking
+
+When you want to track caching stats please make sure you have enabled both the `LHC::Caching` and the `LHC::Monitoring` interceptor.
+
+Make sure that `LHC::Caching` is listed before the `LHC::Monitoring` interceptor when configuring interceptors:
+
+```ruby
+LHC.configure do |c|
+  c.interceptors = [LHC::Caching, LHC::Monitoring]
+end
+```
+
+If a response was served from cache it tracks:
+
+```ruby
+"lhc.<app_name>.<env>.<host>.<http_method>.cache.hit", 1
+```
+
+If a response was not served from cache it tracks:
+
+```ruby
+"lhc.<app_name>.<env>.<host>.<http_method>.cache.miss", 1
+```
+
 ##### Configure

 It is possible to set the key for Monitoring Interceptor on per request basis:
@@ -780,7 +836,7 @@ Logs basic request/response information to prometheus.
 LHC.configure do |c|
   c.interceptors = [LHC::Prometheus]
 end
-
+
 LHC::Prometheus.client = Prometheus::Client
 LHC::Prometheus.namespace = 'web_location_app'
 ```
@@ -802,7 +858,7 @@ If you enable the retry interceptor, you can have LHC retry requests for you:
 LHC.configure do |c|
   c.interceptors = [LHC::Retry]
 end
-
+
 response = LHC.get('http://local.ch', retry: true)
 ```

@@ -877,15 +933,15 @@ The throttle interceptor allows you to raise an exception if a predefined quota 
 end
 ```
 ```ruby
-options = {
+options = {
   throttle: {
-    track: true, # enables tracking of current limit/remaining requests of rate-limiting
-    break: '80%', # quota in percent after which errors are raised. Percentage symbol is optional, values will be converted to integer (e.g. '23.5' will become 23)
-    provider: 'local.ch', # name of the provider under which throttling tracking is aggregated,
-    limit: { header: 'Rate-Limit-Limit' }, # either a hard-coded integer, or a hash pointing at the response header containing the limit value
-    remaining: { header: 'Rate-Limit-Remaining' }, # a hash pointing at the response header containing the current amount of remaining requests
-    expires: { header: 'Rate-Limit-Reset' } # a hash pointing at the response header containing the timestamp when the quota will reset
-  }
+    track: true,
+    break: '80%',
+    provider: 'local.ch',
+    limit: { header: 'Rate-Limit-Limit' },
+    remaining: { header: 'Rate-Limit-Remaining' },
+    expires: { header: 'Rate-Limit-Reset' }
+  }
 }

 LHC.get('http://local.ch', options)
@@ -895,6 +951,22 @@ LHC.get('http://local.ch', options)
 # raises LHC::Throttle::OutOfQuota: Reached predefined quota for local.ch
 ```

+**Options Description**
+* `track`: enables tracking of current limit/remaining requests of rate-limiting
+* `break`: quota in percent after which errors are raised. The percentage symbol is optional; values will be converted to integers (e.g. '23.5' will become 23)
+* `provider`: name of the provider under which throttling tracking is aggregated
+* `limit`:
+  * a hard-coded integer
+  * a hash pointing at the response header containing the limit value
+  * a proc that receives the response as argument and returns the limit value
+* `remaining`:
+  * a hash pointing at the response header containing the current amount of remaining requests
+  * a proc that receives the response as argument and returns the current amount of remaining requests
+* `expires`:
+  * a hash pointing at the response header containing the timestamp when the quota will reset
+  * a proc that receives the response as argument and returns the timestamp when the quota will reset
+
+
 #### Zipkin

 ** Zipkin 0.33 breaks our current implementation of the Zipkin interceptor **
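The proc variants of `limit`, `remaining`, and `expires` listed in the throttle options above can be wired up like this (a sketch: the response stub and the `Rate-Limit-*` header names are illustrative, following the hash-based examples earlier in the README):

```ruby
# Illustrative proc-based throttle options: each proc receives the
# response and extracts the relevant value from its headers.
options = {
  throttle: {
    track: true,
    break: '80%',
    provider: 'local.ch',
    limit: ->(response) { response.headers['Rate-Limit-Limit'].to_i },
    remaining: ->(response) { response.headers['Rate-Limit-Remaining'].to_i },
    expires: ->(response) { response.headers['Rate-Limit-Reset'].to_i }
  }
}

# Stub response, just to show how the procs would be called:
StubResponse = Struct.new(:headers)
response = StubResponse.new({ 'Rate-Limit-Limit' => '100', 'Rate-Limit-Remaining' => '20' })

options[:throttle][:limit].call(response)     # => 100
options[:throttle][:remaining].call(response) # => 20
```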
@@ -1,6 +1,5 @@
 jobs:
   include:
-  - cider-ci/jobs/rspec-activesupport-4.yml
   - cider-ci/jobs/rspec-activesupport-5.yml
   - cider-ci/jobs/rspec-activesupport-6.yml
   - cider-ci/jobs/rubocop.yml
@@ -21,14 +21,15 @@ Gem::Specification.new do |s|

   s.requirements << 'Ruby >= 2.0.0'

-  s.add_dependency 'activesupport', '>= 4.2'
+  s.add_dependency 'activesupport', '>= 5.2'
   s.add_dependency 'addressable'
   s.add_dependency 'typhoeus', '>= 0.11'

   s.add_development_dependency 'geminabox'
   s.add_development_dependency 'prometheus-client', '~> 0.7.1'
   s.add_development_dependency 'pry'
-  s.add_development_dependency 'rails', '>= 4.2'
+  s.add_development_dependency 'rails', '>= 5.2'
+  s.add_development_dependency 'redis'
   s.add_development_dependency 'rspec-rails', '>= 3.0.0'
   s.add_development_dependency 'rubocop', '~> 0.57.1'
   s.add_development_dependency 'rubocop-rspec', '~> 1.26.0'
@@ -64,8 +64,10 @@ class LHC::Error < StandardError
   end

   def to_s
-    return response if response.is_a?(String)
+    return response.to_s unless response.is_a?(LHC::Response)
     request = response.request
+    return unless request.is_a?(LHC::Request)
+
     debug = []
     debug << [request.method, request.url].map { |str| self.class.fix_invalid_encoding(str) }.join(' ')
     debug << "Options: #{request.options}"
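The guards added to `LHC::Error#to_s` follow a common defensive pattern: fall back to a plain string when the wrapped object is not what the happy path expects. In isolation, with stand-in classes rather than LHC's real `LHC::Response`/`LHC::Request`:

```ruby
# Stand-in sketch of the guard pattern above; FakeRequest/FakeResponse
# are illustrative substitutes for LHC::Request/LHC::Response.
FakeRequest = Struct.new(:method, :url)
FakeResponse = Struct.new(:request)

def describe(response)
  # not a rich response object: stringify whatever we were given
  return response.to_s unless response.is_a?(FakeResponse)
  request = response.request
  # a response without a proper request: nothing sensible to describe
  return unless request.is_a?(FakeRequest)
  "#{request.method} #{request.url}"
end

describe('plain error text')                                         # => "plain error text"
describe(FakeResponse.new(nil))                                      # => nil
describe(FakeResponse.new(FakeRequest.new(:get, 'http://local.ch'))) # => "get http://local.ch"
```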
@@ -29,4 +29,8 @@ class LHC::Interceptor
   def self.dup
     self
   end
+
+  def all_interceptor_classes
+    @all_interceptors ||= LHC::Interceptors.new(request).all.map(&:class)
+  end
 end
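Moving `all_interceptor_classes` from `LHC::Auth` (below) into the `LHC::Interceptor` base class makes the helper available to every interceptor subclass. The inheritance effect in isolation, with stand-in classes that are not LHC's real ones:

```ruby
# Stand-in sketch: a helper defined on the base class is inherited by
# every subclass, so each interceptor can query the configured set.
class BaseInterceptor
  def all_interceptor_classes
    # illustrative fixed list; LHC derives this from the request instead
    [AuthInterceptor, CacheInterceptor]
  end
end

class AuthInterceptor < BaseInterceptor; end
class CacheInterceptor < BaseInterceptor; end

AuthInterceptor.new.all_interceptor_classes  # both subclasses inherit the helper
CacheInterceptor.new.all_interceptor_classes
```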
@@ -75,10 +75,6 @@ class LHC::Auth < LHC::Interceptor
     @refresh_client_token_option ||= auth_options[:refresh_client_token] || refresh_client_token
   end

-  def all_interceptor_classes
-    @all_interceptors ||= LHC::Interceptors.new(request).all.map(&:class)
-  end
-
   def auth_options
     request.options[:auth] || {}
   end
@@ -3,69 +3,104 @@
 class LHC::Caching < LHC::Interceptor
   include ActiveSupport::Configurable

-  config_accessor :cache, :logger
+  config_accessor :cache, :central

+  # to control cache invalidation across all applications in case of
+  # breaking changes within this interceptor
+  # that do not lead to cache invalidation otherwise
   CACHE_VERSION = '1'

   # Options forwarded to the cache
   FORWARDED_OPTIONS = [:expires_in, :race_condition_ttl]

+  class MultilevelCache
+
+    def initialize(central: nil, local: nil)
+      @central = central
+      @local = local
+    end
+
+    def fetch(key)
+      central_response = @central[:read].fetch(key) if @central && @central[:read].present?
+      if central_response
+        puts %Q{[LHC] served from central cache: "#{key}"}
+        return central_response
+      end
+      local_response = @local.fetch(key) if @local
+      if local_response
+        puts %Q{[LHC] served from local cache: "#{key}"}
+        return local_response
+      end
+    end
+
+    def write(key, content, options)
+      @central[:write].write(key, content, options) if @central && @central[:write].present?
+      @local.write(key, content, options) if @local.present?
+    end
+  end
+
   def before_request
     return unless cache?(request)
-    deprecation_warning(request.options)
-    options = options(request.options)
-    key = key(request, options[:key])
-    response_data = cache_for(options).fetch(key)
-    return unless response_data
-    logger&.info "Served from cache: #{key}"
+    return if response_data.blank?
     from_cache(request, response_data)
   end

   def after_response
     return unless response.success?
-    request = response.request
     return unless cache?(request)
-    options = options(request.options)
-    cache_for(options).write(
+    return if response_data.present?
+    multilevel_cache.write(
       key(request, options[:key]),
       to_cache(response),
-      cache_options(options)
+      cache_options
     )
   end

   private

-  # return the cache for the given options
-  def cache_for(options)
+  # from cache
+  def response_data
+    # stop calling the multi-level cache if it already returned nil for this interceptor instance
+    return @response_data if defined? @response_data
+    @response_data ||= multilevel_cache.fetch(key(request, options[:key]))
+  end
+
+  # performs read/write (fetch/write) on all configured cache levels (e.g. local & central)
+  def multilevel_cache
+    MultilevelCache.new(
+      central: central_cache,
+      local: local_cache
+    )
+  end
+
+  # returns the local cache, either configured for the entire LHC
+  # or configured locally for that particular request
+  def local_cache
     options.fetch(:use, cache)
   end

+  def central_cache
+    return nil if central.blank? || (central[:read].blank? && central[:write].blank?)
+    {}.tap do |options|
+      options[:read] = ActiveSupport::Cache::RedisCacheStore.new(url: central[:read]) if central[:read].present?
+      options[:write] = ActiveSupport::Cache::RedisCacheStore.new(url: central[:write]) if central[:write].present?
+    end
+  end
+
   # do we even need to bother with this interceptor?
   # based on the options, this method will
   # return false if this interceptor cannot work
   def cache?(request)
     return false unless request.options[:cache]
-    options = options(request.options)
-    cache_for(options) &&
+    (local_cache || central_cache) &&
       cached_method?(request.method, options[:methods])
   end

-  # returns the request_options
-  # will map deprecated options to the new format
-  def options(request_options)
-    options = (request_options[:cache] == true) ? {} : request_options[:cache].dup
-    map_deprecated_options!(request_options, options)
+  def options
+    options = (request.options[:cache] == true) ? {} : request.options[:cache].dup
     options
   end

-  # maps `cache_key` -> `key`, `cache_expires_in` -> `expires_in` and so on
-  def map_deprecated_options!(request_options, options)
-    deprecated_keys(request_options).each do |deprecated_key|
-      new_key = deprecated_key.to_s.gsub(/^cache_/, '').to_sym
-      options[new_key] = request_options[deprecated_key]
-    end
-  end
-
   # converts json we read from the cache to an LHC::Response object
   def from_cache(request, data)
     raw = Typhoeus::Response.new(data)
@@ -104,24 +139,10 @@ class LHC::Caching < LHC::Interceptor

   # extracts the options that should be forwarded to
   # the cache
-  def cache_options(input = {})
-    input.each_with_object({}) do |(key, value), result|
+  def cache_options
+    options.each_with_object({}) do |(key, value), result|
       result[key] = value if key.in? FORWARDED_OPTIONS
       result
     end
   end
-
-  # grabs the deprecated keys from the request options
-  def deprecated_keys(request_options)
-    request_options.keys.select { |k| k =~ /^cache_.*/ }.sort
-  end
-
-  # emits a deprecation warning if necessary
-  def deprecation_warning(request_options)
-    unless deprecated_keys(request_options).empty?
-      ActiveSupport::Deprecation.warn(
-        "Cache options have changed! #{deprecated_keys(request_options).join(', ')} are deprecated and will be removed in future versions."
-      )
-    end
-  end
 end
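One detail worth noting in the new `response_data` method above: it memoizes with `defined?` so that a cache miss (`nil`) is remembered and the multi-level cache is not queried again, which plain `||=` would do. The pattern in isolation (class and method names are illustrative):

```ruby
# `||=` re-runs the computation whenever the memoized value is nil or
# false; guarding with `defined?` caches the nil result too.
class CacheLookup
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def response_data
    return @response_data if defined?(@response_data)
    @calls += 1
    @response_data = expensive_fetch
  end

  def expensive_fetch
    nil # stands in for a multi-level cache miss
  end
end

lookup = CacheLookup.new
lookup.response_data
lookup.response_data
lookup.calls # => 1, the nil miss was memoized
```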