lhc 12.1.2 → 13.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 7c7bdb4bd4a91af430b69b11f503806f9fdc55cae4154dad422ac425c91b5c14
-   data.tar.gz: 625949e2cdbf59636411283b7ce9e6db35273ffb54612c9e3aa5a52d49c0d928
+   metadata.gz: 4795a0c91e246b2139465a030a8f98e0ba2deb07a48664632524b00ad2f02403
+   data.tar.gz: 821adbc1157dbd699ad9a40f556a3ea7e94088aa8a320de6685b475a11596351
  SHA512:
-   metadata.gz: 0a88ba1534ec293a977484aea27613fc99c7329368b210924b74325904f28a77ac8d6b7e504b3228ee8a39b3f453b8efca1d4fdf354872b0d35177625803b68d
-   data.tar.gz: 2af73f13c98546e6cca30e35ca3bba4c7b2980790fe1ac35de3a339d63883f9d55e5d624932bdafac315dda1c58fa9e809739a5364f289ec73556c213c78f731
+   metadata.gz: 9f8a0035d222617beeafce69dfadb17138d5eb31ea6fed900b6aca4582bbafb332b564e3557f42a1dbbcb287f2e6c55fe100c5e74f92e916f0686172b06f9f07
+   data.tar.gz: '058260bffcbacc6edac359e9852a519c3b49fe902bf48a3c7635226c43c563f73c1308ca11fbf6096213ade013ffc40e22c22e4901e455628b19e0d87cc74c7d'
@@ -1,4 +1,4 @@
  source 'https://rubygems.org/'

  gemspec
- gem 'activesupport', '~> 5.0.0'
+ gem 'activesupport', '~> 5.2'
@@ -1,4 +1,4 @@
  source 'https://rubygems.org/'

  gemspec
- gem 'activesupport', '~> 6.0.0'
+ gem 'activesupport', '~> 6.0'
data/README.md CHANGED
@@ -94,6 +94,7 @@ use it like:



+
  ## Basic methods

  Available are `get`, `post`, `put` & `delete`.
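For orientation, a minimal sketch of how these basic methods are typically called (the URLs are placeholders and the JSON body handling is an assumption, not taken from this diff):

```ruby
# GET with query parameters
feedbacks = LHC.get('http://datastore/v2/feedbacks', params: { has_reviews: true })
feedbacks.body # the raw response body

# POST and PUT with an explicit body (placeholder payloads)
LHC.post('http://datastore/v2/feedbacks', body: { recommended: true }.to_json)
LHC.put('http://datastore/v2/feedbacks/123', body: { recommended: false }.to_json)

# DELETE
LHC.delete('http://datastore/v2/feedbacks/123')
```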
@@ -263,7 +264,7 @@ You can also use URL templates, when [configuring endpoints](#configuring-endpoi
  LHC.configure do |c|
    c.endpoint(:find_feedback, 'http://datastore/v2/feedbacks/{id}')
  end
-
+
  LHC.get(:find_feedback, params:{ id: 123 }) # GET http://datastore/v2/feedbacks/123
  ```

@@ -276,7 +277,7 @@ Working and configuring timeouts is important, to ensure your app stays alive wh
  LHC forwards two timeout options directly to typhoeus:

  `timeout` (in seconds) - The maximum time in seconds that you allow the libcurl transfer operation to take. Normally, name lookups can take a considerable time and limiting operations to less than a few seconds risk aborting perfectly normal operations. This option may cause libcurl to use the SIGALRM signal to timeout system calls.
- `connecttimeout` (in seconds) - It should contain the maximum time in seconds that you allow the connection phase to the server to take. This only limits the connection phase, it has no impact once it has connected. Set to zero to switch to the default built-in connection timeout - 300 seconds.
+ `connecttimeout` (in seconds) - It should contain the maximum time in seconds that you allow the connection phase to the server to take. This only limits the connection phase, it has no impact once it has connected. Set to zero to switch to the default built-in connection timeout - 300 seconds.

  ```ruby
  LHC.get('http://local.ch', timeout: 5, connecttimeout: 1)
@@ -481,7 +482,7 @@ You can configure global placeholders, that are used when generating urls from u
    c.placeholder(:datastore, 'http://datastore')
    c.endpoint(:feedbacks, '{+datastore}/feedbacks', { params: { has_reviews: true } })
  end
-
+
  LHC.get(:feedbacks) # http://datastore/v2/feedbacks
  ```

@@ -600,7 +601,6 @@ You can configure your own cache (default Rails.cache) and logger (default Rails

  ```ruby
  LHC::Caching.cache = ActiveSupport::Cache::MemoryStore.new
- LHC::Caching.logger = Logger.new(STDOUT)
  ```

  Caching is not enabled by default, although you added it to your basic set of interceptors.
@@ -631,6 +631,18 @@ Responses served from cache are marked as served from cache:
  response.from_cache? # true
  ```

+ You can also configure a central HTTP cache that is used by the `LHC::Caching` interceptor.
+
+ If you configure both a local and a central cache, LHC performs multi-level caching:
+ it first tries to retrieve cached information from the central cache and, in case of a miss, from the local cache, while writing responses back into both.
+
+ ```ruby
+ LHC::Caching.central = {
+   read: 'redis://$PASSWORD@central-http-cache-replica.namespace:6379/0',
+   write: 'redis://$PASSWORD@central-http-cache-master.namespace:6379/0'
+ }
+ ```
+
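Combined with the default local cache, a full multi-level setup might look like this sketch (using `Rails.cache` as the local store is an assumption based on the defaults described in this README):

```ruby
LHC::Caching.cache = Rails.cache # local cache (the default)
LHC::Caching.central = {
  read: 'redis://$PASSWORD@central-http-cache-replica.namespace:6379/0',
  write: 'redis://$PASSWORD@central-http-cache-master.namespace:6379/0'
}

# A miss in both stores fetches from the origin and writes the response back
# into the central and the local cache; later calls are served from cache.
LHC.get('http://local.ch', cache: true)
```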
  ##### Options

  ```ruby
@@ -643,7 +655,7 @@ Responses served from cache are marked as served from cache:

  `race_condition_ttl` - very useful in situations where a cache entry is used very frequently and is under heavy load.
  If a cache expires and due to heavy load several different processes will try to read data natively and then they all will try to write to cache.
- To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in `cache_race_condition_ttl`.
+ To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in `race_condition_ttl`.

  `use` - Set an explicit cache to be used for this request. If this option is missing `LHC::Caching.cache` is used.
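To make these options concrete, a hedged sketch of passing them per request (the values are illustrative):

```ruby
LHC.get('http://local.ch', cache: {
  expires_in: 5.minutes,           # forwarded to the underlying cache store
  race_condition_ttl: 15.seconds,  # forwarded to the underlying cache store
  key: 'feedbacks',                # explicit cache key instead of the generated one
  use: ActiveSupport::Cache::MemoryStore.new # explicit cache for this request only
})
```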
@@ -729,7 +741,7 @@ LHC::Monitoring.env = ENV['DEPLOYMENT_TYPE'] || Rails.env

  It tracks request attempts with `before_request` and `after_request` (counts).

- In case your workers/processes are getting killed due limited time constraints,
+ In case your workers/processes are getting killed due limited time constraints,
  you are able to detect deltas with relying on "before_request", and "after_request" counts:

  ```ruby
@@ -780,7 +792,7 @@ Logs basic request/response information to prometheus.
  LHC.configure do |c|
    c.interceptors = [LHC::Prometheus]
  end
-
+
  LHC::Prometheus.client = Prometheus::Client
  LHC::Prometheus.namespace = 'web_location_app'
  ```
@@ -802,7 +814,7 @@ If you enable the retry interceptor, you can have LHC retry requests for you:
  LHC.configure do |c|
    c.interceptors = [LHC::Retry]
  end
-
+
  response = LHC.get('http://local.ch', retry: true)
  ```

@@ -877,15 +889,15 @@ The throttle interceptor allows you to raise an exception if a predefined quota
  end
  ```
  ```ruby
- options = {
+ options = {
    throttle: {
-     track: true, # enables tracking of current limit/remaining requests of rate-limiting
-     break: '80%', # quota in percent after which errors are raised. Percentage symbol is optional, values will be converted to integer (e.g. '23.5' will become 23)
-     provider: 'local.ch', # name of the provider under which throttling tracking is aggregated,
-     limit: { header: 'Rate-Limit-Limit' }, # either a hard-coded integer, or a hash pointing at the response header containing the limit value
-     remaining: { header: 'Rate-Limit-Remaining' }, # a hash pointing at the response header containing the current amount of remaining requests
-     expires: { header: 'Rate-Limit-Reset' } # a hash pointing at the response header containing the timestamp when the quota will reset
-   }
+     track: true,
+     break: '80%',
+     provider: 'local.ch',
+     limit: { header: 'Rate-Limit-Limit' },
+     remaining: { header: 'Rate-Limit-Remaining' },
+     expires: { header: 'Rate-Limit-Reset' }
+   }
  }

  LHC.get('http://local.ch', options)
@@ -895,6 +907,22 @@ LHC.get('http://local.ch', options)
  # raises LHC::Throttle::OutOfQuota: Reached predefined quota for local.ch
  ```

+ **Options Description**
+ * `track`: enables tracking of the current limit/remaining requests of rate-limiting
+ * `break`: quota in percent after which errors are raised. The percentage symbol is optional; values are converted to integers (e.g. '23.5' becomes 23)
+ * `provider`: name of the provider under which throttling tracking is aggregated
+ * `limit`:
+   * a hard-coded integer
+   * a hash pointing at the response header containing the limit value
+   * a proc that receives the response as argument and returns the limit value
+ * `remaining`:
+   * a hash pointing at the response header containing the current amount of remaining requests
+   * a proc that receives the response as argument and returns the current amount of remaining requests
+ * `expires`:
+   * a hash pointing at the response header containing the timestamp when the quota will reset
+   * a proc that receives the response as argument and returns the timestamp when the quota will reset
+
+
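To make the proc variants concrete, a hedged sketch (the header name `X-Quota-Remaining` is illustrative; each proc receives the LHC response of the throttled request):

```ruby
LHC.get('http://local.ch', throttle: {
  track: true,
  break: '80%',
  provider: 'local.ch',
  limit: ->(*) { 10_000 },                                                # limit computed in code
  remaining: ->(response) { response.headers['X-Quota-Remaining'].to_i }, # derived from a header
  expires: ->(*) { Time.zone.now + 1.hour }                               # next reset time
})
```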
  #### Zipkin

  ** Zipkin 0.33 breaks our current implementation of the Zipkin interceptor **
@@ -1,6 +1,5 @@
  jobs:
    include:
-     - cider-ci/jobs/rspec-activesupport-4.yml
      - cider-ci/jobs/rspec-activesupport-5.yml
      - cider-ci/jobs/rspec-activesupport-6.yml
      - cider-ci/jobs/rubocop.yml
@@ -21,14 +21,15 @@ Gem::Specification.new do |s|

    s.requirements << 'Ruby >= 2.0.0'

-   s.add_dependency 'activesupport', '>= 4.2'
+   s.add_dependency 'activesupport', '>= 5.2'
    s.add_dependency 'addressable'
    s.add_dependency 'typhoeus', '>= 0.11'

    s.add_development_dependency 'geminabox'
    s.add_development_dependency 'prometheus-client', '~> 0.7.1'
    s.add_development_dependency 'pry'
-   s.add_development_dependency 'rails', '>= 4.2'
+   s.add_development_dependency 'rails', '>= 5.2'
+   s.add_development_dependency 'redis'
    s.add_development_dependency 'rspec-rails', '>= 3.0.0'
    s.add_development_dependency 'rubocop', '~> 0.57.1'
    s.add_development_dependency 'rubocop-rspec', '~> 1.26.0'
@@ -3,69 +3,98 @@
  class LHC::Caching < LHC::Interceptor
    include ActiveSupport::Configurable

-   config_accessor :cache, :logger
+   config_accessor :cache, :central

+   # to control cache invalidation across all applications in case of
+   # breaking changes within this interceptor
+   # that do not lead to cache invalidation otherwise
    CACHE_VERSION = '1'

    # Options forwarded to the cache
    FORWARDED_OPTIONS = [:expires_in, :race_condition_ttl]

+   class MultilevelCache
+
+     def initialize(central: nil, local: nil)
+       @central = central
+       @local = local
+     end
+
+     def fetch(key)
+       central_response = @central[:read].fetch(key) if @central && @central[:read].present?
+       if central_response
+         puts %Q{[LHC] served from central cache: "#{key}"}
+         return central_response
+       end
+       local_response = @local.fetch(key) if @local
+       if local_response
+         puts %Q{[LHC] served from local cache: "#{key}"}
+         return local_response
+       end
+     end
+
+     def write(key, content, options)
+       @central[:write].write(key, content, options) if @central && @central[:write].present?
+       @local.write(key, content, options) if @local.present?
+     end
+   end
+
    def before_request
      return unless cache?(request)
-     deprecation_warning(request.options)
-     options = options(request.options)
      key = key(request, options[:key])
-     response_data = cache_for(options).fetch(key)
+     response_data = multilevel_cache.fetch(key)
      return unless response_data
-     logger&.info "Served from cache: #{key}"
      from_cache(request, response_data)
    end

    def after_response
      return unless response.success?
-     request = response.request
      return unless cache?(request)
-     options = options(request.options)
-     cache_for(options).write(
+     multilevel_cache.write(
        key(request, options[:key]),
        to_cache(response),
-       cache_options(options)
+       cache_options
      )
    end

    private

-   # return the cache for the given options
-   def cache_for(options)
+   # performs read/write (fetch/write) on all configured cache levels (e.g. local & central)
+   def multilevel_cache
+     MultilevelCache.new(
+       central: central_cache,
+       local: local_cache
+     )
+   end
+
+   # returns the local cache either configured for the entire LHC
+   # or configured locally for that particular request
+   def local_cache
      options.fetch(:use, cache)
    end

+   def central_cache
+     return nil if central.blank? || (central[:read].blank? && central[:write].blank?)
+     {}.tap do |options|
+       options[:read] = ActiveSupport::Cache::RedisCacheStore.new(url: central[:read]) if central[:read].present?
+       options[:write] = ActiveSupport::Cache::RedisCacheStore.new(url: central[:write]) if central[:write].present?
+     end
+   end
+
    # do we even need to bother with this interceptor?
    # based on the options, this method will
    # return false if this interceptor cannot work
    def cache?(request)
      return false unless request.options[:cache]
-     options = options(request.options)
-     cache_for(options) &&
+     (local_cache || central_cache) &&
        cached_method?(request.method, options[:methods])
    end

-   # returns the request_options
-   # will map deprecated options to the new format
-   def options(request_options)
-     options = (request_options[:cache] == true) ? {} : request_options[:cache].dup
-     map_deprecated_options!(request_options, options)
+   def options
+     options = (request.options[:cache] == true) ? {} : request.options[:cache].dup
      options
    end

-   # maps `cache_key` -> `key`, `cache_expires_in` -> `expires_in` and so on
-   def map_deprecated_options!(request_options, options)
-     deprecated_keys(request_options).each do |deprecated_key|
-       new_key = deprecated_key.to_s.gsub(/^cache_/, '').to_sym
-       options[new_key] = request_options[deprecated_key]
-     end
-   end
-
    # converts json we read from the cache to an LHC::Response object
    def from_cache(request, data)
      raw = Typhoeus::Response.new(data)
@@ -104,24 +133,10 @@ class LHC::Caching < LHC::Interceptor

    # extracts the options that should be forwarded to
    # the cache
-   def cache_options(input = {})
-     input.each_with_object({}) do |(key, value), result|
+   def cache_options
+     options.each_with_object({}) do |(key, value), result|
        result[key] = value if key.in? FORWARDED_OPTIONS
        result
      end
    end
-
-   # grabs the deprecated keys from the request options
-   def deprecated_keys(request_options)
-     request_options.keys.select { |k| k =~ /^cache_.*/ }.sort
-   end
-
-   # emits a deprecation warning if necessary
-   def deprecation_warning(request_options)
-     unless deprecated_keys(request_options).empty?
-       ActiveSupport::Deprecation.warn(
-         "Cache options have changed! #{deprecated_keys(request_options).join(', ')} are deprecated and will be removed in future versions."
-       )
-     end
-   end
  end
@@ -3,7 +3,6 @@
  require 'active_support/duration'

  class LHC::Throttle < LHC::Interceptor
-
    class OutOfQuota < StandardError
    end

@@ -21,8 +20,7 @@ class LHC::Throttle < LHC::Interceptor

    def after_response
      options = response.request.options.dig(:throttle)
-     return unless options
-     return unless options.dig(:track)
+     return unless throttle?(options)
      self.class.track ||= {}
      self.class.track[options.dig(:provider)] = {
        limit: limit(options: options[:limit], response: response),
@@ -33,6 +31,10 @@ class LHC::Throttle < LHC::Interceptor

    private

+   def throttle?(options)
+     [options&.dig(:track), response.headers].none?(&:blank?)
+   end
+
    def break_when_quota_reached!
      options = request.options.dig(:throttle)
      track = (self.class.track || {}).dig(options[:provider])
@@ -46,36 +48,39 @@ class LHC::Throttle < LHC::Interceptor
    end

    def limit(options:, response:)
-     @limit ||= begin
-       if options.is_a?(Integer)
+     @limit ||=
+       if options.is_a?(Proc)
+         options.call(response)
+       elsif options.is_a?(Integer)
          options
-       elsif options.is_a?(Hash) && options[:header] && response.headers.present?
+       elsif options.is_a?(Hash) && options[:header]
          response.headers[options[:header]]&.to_i
        end
-     end
    end

    def remaining(options:, response:)
-     @remaining ||= begin
-       if options.is_a?(Hash) && options[:header] && response.headers.present?
-         response.headers[options[:header]]&.to_i
+     @remaining ||=
+       begin
+         if options.is_a?(Proc)
+           options.call(response)
+         elsif options.is_a?(Hash) && options[:header]
+           response.headers[options[:header]]&.to_i
+         end
        end
-     end
    end

    def expires(options:, response:)
-     @expires ||= begin
-       if options.is_a?(Hash) && options[:header] && response.headers.present?
-         convert_expires(response.headers[options[:header]]&.to_i)
-       else
-         convert_expires(options)
-       end
-     end
+     @expires ||= convert_expires(read_expire_option(options, response))
+   end
+
+   def read_expire_option(options, response)
+     (options.is_a?(Hash) && options[:header]) ? response.headers[options[:header]] : options
    end

    def convert_expires(value)
-     if value.is_a?(Integer)
-       Time.zone.at(value).to_datetime
-     end
+     return if value.blank?
+     return value.call(response) if value.is_a?(Proc)
+     return Time.parse(value) if value.match(/GMT/)
+     Time.zone.at(value.to_i).to_datetime
    end
  end
@@ -4,7 +4,6 @@ module LHC
    class Railtie < Rails::Railtie
      initializer "lhc.configure_rails_initialization" do
        LHC::Caching.cache ||= Rails.cache
-       LHC::Caching.logger ||= Rails.logger
      end
    end
  end
@@ -3,9 +3,8 @@
  require 'lhc'

  RSpec.configure do |config|
-   LHC::Caching.cache = ActiveSupport::Cache::MemoryStore.new
-
    config.before(:each) do
+     LHC::Caching.cache = ActiveSupport::Cache::MemoryStore.new
      LHC::Caching.cache.clear
      LHC::Throttle.track = nil
    end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module LHC
-   VERSION ||= '12.1.2'
+   VERSION ||= '13.0.0'
  end
@@ -0,0 +1,138 @@
+ # frozen_string_literal: true
+
+ require 'rails_helper'
+
+ describe LHC::Caching do
+   let(:redis_url) { 'redis://localhost:6379/0' }
+   let(:redis_cache) do
+     spy('ActiveSupport::Cache::RedisCacheStore')
+   end
+
+   before do
+     Rails.cache.clear
+     LHC.config.interceptors = [LHC::Caching]
+     ActiveSupport::Cache::RedisCacheStore.new(url: redis_url).clear
+     allow(ActiveSupport::Cache::RedisCacheStore).to receive(:new).and_return(redis_cache)
+     allow(redis_cache).to receive(:present?).and_return(true)
+   end
+
+   let!(:request_stub) do
+     stub_request(:get, "http://local.ch/")
+       .to_return(body: '<h1>Hi there</h1>')
+   end
+
+   def request
+     LHC.get('http://local.ch', cache: true)
+   end
+
+   def response_has_been_cached_and_served_from_cache!
+     original_response = request
+     cached_response = request
+
+     expect(original_response.body).to eq cached_response.body
+     expect(original_response.code).to eq cached_response.code
+     expect(original_response.headers).to eq cached_response.headers
+     expect(original_response.options[:return_code]).to eq cached_response.options[:return_code]
+     expect(original_response.mock).to eq cached_response.mock
+
+     assert_requested request_stub, times: 1
+   end
+
+   context 'only local cache has been configured' do
+     before do
+       LHC::Caching.cache = Rails.cache
+     end
+
+     it 'serves a response from local cache without trying the central cache' do
+       expect(Rails.cache).to receive(:fetch).at_least(:once).and_call_original
+       expect(Rails.cache).to receive(:write).and_call_original
+       expect(-> { response_has_been_cached_and_served_from_cache! })
+         .to output(%Q{[LHC] served from local cache: "LHC_CACHE(v1): GET http://local.ch"\n}).to_stdout
+     end
+   end
+
+   context 'local and central cache have been configured' do
+     before do
+       LHC::Caching.cache = Rails.cache
+       LHC::Caching.central = {
+         read: redis_url,
+         write: redis_url
+       }
+     end
+
+     context 'found in central cache' do
+       it 'serves it from central cache if found there' do
+         expect(redis_cache).to receive(:fetch).and_return(nil, body: '<h1>Hi there</h1>', code: 200, headers: nil, return_code: nil, mock: :webmock)
+         expect(redis_cache).to receive(:write).and_return(true)
+         expect(Rails.cache).to receive(:fetch).and_call_original
+         expect(Rails.cache).to receive(:write).and_call_original
+         expect(-> { response_has_been_cached_and_served_from_cache! })
+           .to output(%Q{[LHC] served from central cache: "LHC_CACHE(v1): GET http://local.ch"\n}).to_stdout
+       end
+     end
+
+     context 'not found in central cache' do
+       it 'serves it from local cache if found there' do
+         expect(redis_cache).to receive(:fetch).and_return(nil, nil)
+         expect(redis_cache).to receive(:write).and_return(true)
+         expect(Rails.cache).to receive(:fetch).at_least(:once).and_call_original
+         expect(Rails.cache).to receive(:write).and_call_original
+         expect(-> { response_has_been_cached_and_served_from_cache! })
+           .to output(%Q{[LHC] served from local cache: "LHC_CACHE(v1): GET http://local.ch"\n}).to_stdout
+       end
+     end
+   end
+
+   context 'only central read configured' do
+     before do
+       LHC::Caching.cache = Rails.cache
+       LHC::Caching.central = {
+         read: redis_url
+       }
+     end
+
+     it 'still serves responses from cache, but does not write them back' do
+       expect(redis_cache).to receive(:fetch).and_return(nil, body: '<h1>Hi there</h1>', code: 200, headers: nil, return_code: nil, mock: :webmock)
+       expect(redis_cache).not_to receive(:write)
+       expect(Rails.cache).to receive(:fetch).and_call_original
+       expect(Rails.cache).to receive(:write).and_call_original
+       expect(-> { response_has_been_cached_and_served_from_cache! })
+         .to output(%Q{[LHC] served from central cache: "LHC_CACHE(v1): GET http://local.ch"\n}).to_stdout
+     end
+   end
+
+   context 'only central write configured' do
+     before do
+       LHC::Caching.cache = Rails.cache
+       LHC::Caching.central = {
+         write: redis_url
+       }
+     end
+
+     it 'still writes responses to cache, but does not retrieve them from there' do
+       expect(redis_cache).not_to receive(:fetch)
+       expect(redis_cache).to receive(:write).and_return(true)
+       expect(Rails.cache).to receive(:fetch).at_least(:once).and_call_original
+       expect(Rails.cache).to receive(:write).and_call_original
+       expect(-> { response_has_been_cached_and_served_from_cache! })
+         .to output(%Q{[LHC] served from local cache: "LHC_CACHE(v1): GET http://local.ch"\n}).to_stdout
+     end
+   end
+
+   context 'central cache configured only' do
+     before do
+       LHC::Caching.cache = nil
+       LHC::Caching.central = {
+         read: redis_url,
+         write: redis_url
+       }
+     end
+
+     it 'does not inquire the local cache for information neither to write them' do
+       expect(redis_cache).to receive(:fetch).and_return(nil, body: '<h1>Hi there</h1>', code: 200, headers: nil, return_code: nil, mock: :webmock)
+       expect(redis_cache).to receive(:write).and_return(true)
+       expect(-> { response_has_been_cached_and_served_from_cache! })
+         .to output(%Q{[LHC] served from central cache: "LHC_CACHE(v1): GET http://local.ch"\n}).to_stdout
+     end
+   end
+ end
@@ -20,17 +20,6 @@ describe LHC::Caching do
      default_cache.clear
    end

-   it 'maps deprecated cache options' do
-     expected_options = { expires_in: 5.minutes, race_condition_ttl: 15.seconds }
-     expected_key = "LHC_CACHE(v1): key"
-     expect(default_cache).to receive(:write).with(expected_key, anything, expected_options)
-     expect(lambda {
-       LHC.get('http://local.ch', cache: true, cache_expires_in: 5.minutes, cache_key: 'key', cache_race_condition_ttl: 15.seconds)
-     }).to output(
-       /Cache options have changed! cache_expires_in, cache_key, cache_race_condition_ttl are deprecated and will be removed in future versions./
-     ).to_stderr
-   end
-
    it 'does cache' do
      expect(default_cache).to receive(:fetch)
      expect(default_cache).to receive(:write)
@@ -3,66 +3,64 @@
  require 'rails_helper'

  describe LHC::Throttle do
+   let(:options_break) { false }
+   let(:options_expires) { { header: 'reset' } }
+   let(:options_limit) { { header: 'limit' } }
+   let(:options_remaining) { { header: 'remaining' } }
    let(:provider) { 'local.ch' }
-   let(:limit) { 10000 }
-   let(:remaining) { 1900 }
+   let(:quota_limit) { 10_000 }
+   let(:quota_remaining) { 1900 }
+   let(:quota_reset) { (Time.zone.now + 1.hour).to_i }
    let(:options) do
      {
        throttle: {
          provider: provider,
          track: true,
-         limit: limit_options,
-         remaining: { header: 'Rate-Limit-Remaining' },
-         expires: { header: 'Rate-Limit-Reset' },
-         break: break_option
+         limit: options_limit,
+         remaining: options_remaining,
+         expires: options_expires,
+         break: options_break
        }
      }
    end
-   let(:limit_options) { { header: 'Rate-Limit-Limit' } }
-   let(:break_option) { false }
-   let(:expires_in) { (Time.zone.now + 1.hour).to_i }

    before(:each) do
      LHC::Throttle.track = nil
      LHC.config.interceptors = [LHC::Throttle]

-     stub_request(:get, 'http://local.ch')
-       .to_return(
-         headers: {
-           'Rate-Limit-Limit' => limit,
-           'Rate-Limit-Remaining' => remaining,
-           'Rate-Limit-Reset' => expires_in
-         }
-       )
+     stub_request(:get, 'http://local.ch').to_return(
+       headers: { 'limit' => quota_limit, 'remaining' => quota_remaining, 'reset' => quota_reset }
+     )
    end

    it 'tracks the request limits based on response data' do
      LHC.get('http://local.ch', options)
-     expect(LHC::Throttle.track[provider][:limit]).to eq limit
-     expect(LHC::Throttle.track[provider][:remaining]).to eq remaining
+     expect(LHC::Throttle.track[provider][:limit]).to eq quota_limit
+     expect(LHC::Throttle.track[provider][:remaining]).to eq quota_remaining
    end

    context 'fix predefined integer for limit' do
-     let(:limit_options) { 1000 }
+     let(:options_limit) { 1000 }

      it 'tracks the limit based on initialy provided data' do
        LHC.get('http://local.ch', options)
-       expect(LHC::Throttle.track[provider][:limit]).to eq limit_options
+       expect(LHC::Throttle.track[provider][:limit]).to eq options_limit
      end
    end

    context 'breaks' do
-     let(:break_option) { '80%' }
+     let(:options_break) { '80%' }

      it 'hit the breaks if throttling quota is reached' do
        LHC.get('http://local.ch', options)
-       expect(-> {
-         LHC.get('http://local.ch', options)
-       }).to raise_error(LHC::Throttle::OutOfQuota, 'Reached predefined quota for local.ch')
+       expect { LHC.get('http://local.ch', options) }.to raise_error(
+         LHC::Throttle::OutOfQuota,
+         'Reached predefined quota for local.ch'
+       )
      end

      context 'still within quota' do
-       let(:break_option) { '90%' }
+       let(:options_break) { '90%' }

        it 'does not hit the breaks' do
          LHC.get('http://local.ch', options)
@@ -72,17 +70,14 @@ describe LHC::Throttle do
    end

    context 'no response headers' do
-     before do
-       stub_request(:get, 'http://local.ch')
-         .to_return(status: 200)
-     end
+     before { stub_request(:get, 'http://local.ch').to_return(status: 200) }

      it 'does not raise an exception' do
        LHC.get('http://local.ch', options)
      end

      context 'no remaining tracked, but break enabled' do
-       let(:break_option) { '90%' }
+       let(:options_break) { '90%' }

        it 'does not fail if a remaining was not tracked yet' do
          LHC.get('http://local.ch', options)
@@ -92,15 +87,150 @@ describe LHC::Throttle do
    end

    context 'expires' do
-     let(:break_option) { '80%' }
+     let(:options_break) { '80%' }

      it 'attempts another request if the quota expired' do
        LHC.get('http://local.ch', options)
-       expect(-> {
-         LHC.get('http://local.ch', options)
-       }).to raise_error(LHC::Throttle::OutOfQuota, 'Reached predefined quota for local.ch')
+       expect { LHC.get('http://local.ch', options) }.to raise_error(
+         LHC::Throttle::OutOfQuota,
+         'Reached predefined quota for local.ch'
+       )
        Timecop.travel(Time.zone.now + 2.hours)
        LHC.get('http://local.ch', options)
      end
    end
+
+   describe 'configuration values as Procs' do
+     describe 'calculate "limit" in proc' do
+       let(:options_limit) do
+         ->(*) { 10_000 }
+       end
+
+       before(:each) do
+         LHC.get('http://local.ch', options)
+       end
+
+       context 'breaks' do
+         let(:options_break) { '80%' }
+
+         it 'hit the breaks if throttling quota is reached' do
+           expect { LHC.get('http://local.ch', options) }.to raise_error(
+             LHC::Throttle::OutOfQuota,
+             'Reached predefined quota for local.ch'
+           )
+         end
+
+         context 'still within quota' do
+           let(:options_break) { '90%' }
+
+           it 'does not hit the breaks' do
+             LHC.get('http://local.ch', options)
+           end
+         end
+       end
+     end
+
+     describe 'calculate "remaining" in proc' do
+       let(:quota_current) { 8100 }
+       let(:options_remaining) do
+         ->(response) { (response.headers['limit']).to_i - (response.headers['current']).to_i }
+       end
+
+       before(:each) do
+         stub_request(:get, 'http://local.ch').to_return(
+           headers: { 'limit' => quota_limit, 'current' => quota_current, 'reset' => quota_reset }
+         )
+         LHC.get('http://local.ch', options)
+       end
+
+       context 'breaks' do
+         let(:options_break) { '80%' }
+
+         it 'hit the breaks if throttling quota is reached' do
+           expect { LHC.get('http://local.ch', options) }.to raise_error(
+             LHC::Throttle::OutOfQuota,
+             'Reached predefined quota for local.ch'
+           )
+         end
+
+         context 'still within quota' do
+           let(:options_break) { '90%' }
+
+           it 'does not hit the breaks' do
+             LHC.get('http://local.ch', options)
+           end
+         end
+       end
+     end
+
+     describe 'calculate "reset" in proc' do
+       let(:options_expires) { ->(*) { Time.zone.now + 1.second } }
+
+       before(:each) do
+         stub_request(:get, 'http://local.ch').to_return(
+           headers: { 'limit' => quota_limit, 'remaining' => quota_remaining }
+         )
+         LHC.get('http://local.ch', options)
+       end
+
+       context 'breaks' do
+         let(:options_break) { '80%' }
+
+         it 'hit the breaks if throttling quota is reached' do
+           expect { LHC.get('http://local.ch', options) }.to raise_error(
+             LHC::Throttle::OutOfQuota,
+             'Reached predefined quota for local.ch'
+           )
+         end
+
+         context 'still within quota' do
+           let(:options_break) { '90%' }
+
+           it 'does not hit the breaks' do
+             LHC.get('http://local.ch', options)
+           end
+         end
+       end
+     end
+   end
+
+   describe 'parsing reset time given in prose' do
+     let(:quota_reset) { (Time.zone.now + 1.day).strftime('%A, %B %d, %Y 12:00:00 AM GMT').to_s }
+
+     before { LHC.get('http://local.ch', options) }
+
+     context 'breaks' do
+       let(:options_break) { '80%' }
+
+       it 'hit the breaks if throttling quota is reached' do
+         expect { LHC.get('http://local.ch', options) }.to raise_error(
+           LHC::Throttle::OutOfQuota,
+           'Reached predefined quota for local.ch'
+         )
+       end
+
+       context 'still within quota' do
+         let(:options_break) { '90%' }
+
+         it 'does not hit the breaks' do
+           LHC.get('http://local.ch', options)
+         end
+       end
+     end
+   end
+
+   context 'when value is empty' do
+     let(:quota_reset) { nil }
+
+     before do
+       stub_request(:get, 'http://local.ch').to_return(
+         headers: { 'limit' => quota_limit, 'remaining' => quota_remaining }
+       )
+       LHC.get('http://local.ch', options)
+     end
+
+     it 'still runs' do
+       LHC.get('http://local.ch', options)
+     end
+   end
  end
@@ -3,6 +3,7 @@
  require 'pry'
  require 'webmock/rspec'
  require 'lhc'
+ require 'lhc/rspec'
  require 'timecop'

  Dir[File.join(__dir__, "support/**/*.rb")].each { |f| require f }
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: lhc
  version: !ruby/object:Gem::Version
-   version: 12.1.2
+   version: 13.0.0
  platform: ruby
  authors:
  - https://github.com/local-ch/lhc/contributors
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-08-20 00:00:00.000000000 Z
+ date: 2020-09-28 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: activesupport
@@ -16,14 +16,14 @@ dependencies:
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: '4.2'
+         version: '5.2'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: '4.2'
+         version: '5.2'
  - !ruby/object:Gem::Dependency
    name: addressable
    requirement: !ruby/object:Gem::Requirement
@@ -100,14 +100,28 @@ dependencies:
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: '4.2'
+         version: '5.2'
    type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: '4.2'
+         version: '5.2'
+ - !ruby/object:Gem::Dependency
+   name: redis
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
  - !ruby/object:Gem::Dependency
    name: rspec-rails
    requirement: !ruby/object:Gem::Requirement
@@ -193,7 +207,6 @@ files:
  - ".rubocop.yml"
  - ".ruby-version"
  - Gemfile
- - Gemfile.activesupport4
  - Gemfile.activesupport5
  - Gemfile.activesupport6
  - LICENSE
@@ -203,7 +216,6 @@ files:
  - cider-ci/bin/bundle
  - cider-ci/bin/ruby_install
  - cider-ci/bin/ruby_version
- - cider-ci/jobs/rspec-activesupport-4.yml
  - cider-ci/jobs/rspec-activesupport-5.yml
  - cider-ci/jobs/rspec-activesupport-6.yml
  - cider-ci/jobs/rubocop.yml
@@ -335,6 +347,7 @@ files:
  - spec/interceptors/caching/hydra_spec.rb
  - spec/interceptors/caching/main_spec.rb
  - spec/interceptors/caching/methods_spec.rb
+ - spec/interceptors/caching/multilevel_cache_spec.rb
  - spec/interceptors/caching/options_spec.rb
  - spec/interceptors/caching/parameters_spec.rb
  - spec/interceptors/caching/response_status_spec.rb
@@ -488,6 +501,7 @@ test_files:
  - spec/interceptors/caching/hydra_spec.rb
  - spec/interceptors/caching/main_spec.rb
  - spec/interceptors/caching/methods_spec.rb
+ - spec/interceptors/caching/multilevel_cache_spec.rb
  - spec/interceptors/caching/options_spec.rb
  - spec/interceptors/caching/parameters_spec.rb
  - spec/interceptors/caching/response_status_spec.rb
@@ -1,4 +0,0 @@
- source 'https://rubygems.org/'
-
- gemspec
- gem 'activesupport', '~> 4.2.11'
@@ -1,28 +0,0 @@
- rspec-active-support-v4:
-   name: 'rspec with ActiveSupport v4'
-
-   run_when:
-     'some HEAD has been updated':
-       type: branch
-       include_match: ^.*$
-
-   context:
-
-     script_defaults:
-       template_environment_variables: true
-
-     task_defaults:
-       environment_variables:
-         ACTIVESUPPORT: '4'
-         BUNDLER: '_1.17.3_'
-
-       max_trials: 2
-       dispatch_storm_delay_duration: 1 Seconds
-       include:
-         - cider-ci/task_components/ruby.yml
-         - cider-ci/task_components/bundle.yml
-         - cider-ci/task_components/rspec.yml
-
-     tasks:
-       all-rspec:
-         name: All rspec tests, using ActiveSupport v4