apisonator 3.0.1.1 → 3.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: bd4ac963d174350e0e93513a0bfe572110c9e90659f7907277996358c9761902
- data.tar.gz: 3c327a805079b402cc65330cb4ff3e84a247ef9c57c781a9e4e103be283cc8a5
+ metadata.gz: a812d330bdcb770421ed926c478c8a8b48087ba33ad28ab80317a48db935f432
+ data.tar.gz: 5bc8b40c69fbf5a6a680b71692dc66433746b053c0dbf3dcfe6e3d4aacd656af
  SHA512:
- metadata.gz: 52f2e4795a62a1ed314b21d1b3839dc78c2e6f8ee5bcb6fce02d9b5cd1ced3004915a78927b9fd667d210a2bb547fe34f27844dd267cdb405ff36eb629d7e1c8
- data.tar.gz: df81a8aceb61a99c1151f3418c156ce70e1c18a9e2ab28db80d9f35424bf73fe82cfe46bed02c6da5e229cb212344432dd603520cd1dea8940e4448c14e6c3e8
+ metadata.gz: 807c4d8cfc932e600780f4ba97fa5cfa3afcaca04ad0e27395e92fca31407d96e360412a290566c78955e5cf12895a5f4a70e589474de045bf3f0f7cf21311af
+ data.tar.gz: 17bf0d7c783ee36b78a885a441846edeaf8906185ab183c91ffc7f0c7f7c13e45b4237c0c7adb1a228794b3f64d4a0729f6eb8360a59910021d65a0b99c1edc1
data/CHANGELOG.md CHANGED
@@ -2,6 +2,80 @@

  Notable changes to Apisonator will be tracked in this document.

+ ## 3.3.1 - 2021-02-11
+
+ ### Fixed
+
+ - Usages with `#0` (set to 0) no longer generate unnecessary stats keys in Redis
+   ([#258](https://github.com/3scale/apisonator/pull/258)).
+
+ ## 3.3.0 - 2021-02-09
+
+ ### Added
+
+ - Rake task to delete stats keys set to 0 in the DB left there because of [this
+   issue](https://github.com/3scale/apisonator/pull/247)
+   ([#250](https://github.com/3scale/apisonator/pull/250)).
+
+ ### Fixed
+
+ - Made the worker more reliable when configured in async mode. Now it handles
+   connection errors better
+   ([#253](https://github.com/3scale/apisonator/pull/253)),
+   ([#254](https://github.com/3scale/apisonator/pull/254)), and
+   ([#255](https://github.com/3scale/apisonator/pull/255)).
+
+ ### Changed
+
+ - Updated async-redis to v0.5.1
+   ([#251](https://github.com/3scale/apisonator/pull/251)).
+
+ ## 3.2.1 - 2021-01-22
+
+ ### Fixed
+
+ - Reports of 0 hits no longer generate unnecessary stats keys in Redis
+   ([#247](https://github.com/3scale/apisonator/pull/247)).
+
+ ## 3.2.0 - 2021-01-19
+
+ ### Added
+
+ - New endpoint in the internal API to get the provider key for a given (token,
+   service_id) pair ([#243](https://github.com/3scale/apisonator/pull/243)).
+
+ ### Changed
+
+ - The config file used when running in a Docker image now parses "1" and "true"
+   (case-insensitive) as true
+   ([#245](https://github.com/3scale/apisonator/pull/245)).
+
+ ### Fixed
+
+ - Fixed some metrics of the internal API that were not being counted
+   correctly ([#244](https://github.com/3scale/apisonator/pull/244)).
+
+
+ ## 3.1.0 - 2020-10-14
+
+ ### Added
+
+ - Prometheus metrics for the internal API
+   ([#236](https://github.com/3scale/apisonator/pull/236)).
+ - Docs with a detailed explanation about how counter updates are performed
+   ([#239](https://github.com/3scale/apisonator/pull/239)).
+
+ ### Changed
+
+ - NotifyJobs are run only when the service ID is explicitly defined
+   ([#238](https://github.com/3scale/apisonator/pull/238)).
+
+ ### Fixed
+
+ - Fixed corner case that raised "TransactionTimestampNotWithinRange" in notify
+   jobs ([#235](https://github.com/3scale/apisonator/pull/235)).
+
+
  ## 3.0.1.1 - 2020-07-28

  ### Changed
data/Gemfile.base CHANGED
@@ -15,11 +15,15 @@ platform :ruby do
  end

  group :test do
+   # Newer versions of rack-test don't work well with rspec-api-documentation.
+   # See https://github.com/rack/rack-test/pull/223 &
+   # https://github.com/zipmark/rspec_api_documentation/issues/342
+   gem 'rack-test', '= 0.8.2'
+
    gem 'benchmark-ips', '~> 2.7.2'
    gem 'mocha', '~> 1.3'
    gem 'nokogiri', '~> 1.10.8'
    gem 'pkg-config', '~> 1.1.7'
-   gem 'rack-test', '~> 0.8.2'
    gem 'resque_unit', '~> 0.4.4', source: 'https://rubygems.org'
    gem 'test-unit', '~> 3.2.6'
    gem 'resque_spec', '~> 0.17.0'
@@ -53,13 +57,14 @@ gem 'rake', '~> 13.0'
  gem 'builder', '= 3.2.3'
  # Use a patched resque to allow reusing their Airbrake Failure class
  gem 'resque', git: 'https://github.com/3scale/resque', branch: '3scale'
+ gem 'redis-namespace', '~>1.8.0'
  gem 'rack', '~> 2.1.4'
  gem 'sinatra', '~> 2.0.3'
  gem 'sinatra-contrib', '~> 2.0.3'
  # Optional external error logging services
  gem 'bugsnag', '~> 6', require: nil
  gem 'yabeda-prometheus', '~> 0.5.0'
- gem 'async-redis', '~> 0.5'
+ gem 'async-redis', '~> 0.5.1'
  gem 'falcon', '~> 0.35'

  # Use a patched redis-rb that fixes an issue when trying to connect with
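Both Gemfiles tighten `async-redis` from `~> 0.5` to `~> 0.5.1`. The practical effect of the tighter pessimistic constraint can be checked with Ruby's own `Gem::Requirement` (a stdlib sketch; the version numbers are just illustrations):

```ruby
require 'rubygems' # Gem::Requirement and Gem::Version ship with Ruby

old_req = Gem::Requirement.new('~> 0.5')   # allows >= 0.5, < 1.0
new_req = Gem::Requirement.new('~> 0.5.1') # allows >= 0.5.1, < 0.6

v = Gem::Version.new('0.6.0')
puts old_req.satisfied_by?(v) # true:  the old constraint admits 0.6.0
puts new_req.satisfied_by?(v) # false: the new one stays on the 0.5.x line
```

This is why the lockfiles can resolve `async-redis (0.5.1)` but would now reject a hypothetical 0.6.0 release.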
data/Gemfile.lock CHANGED
@@ -35,7 +35,7 @@ GIT
  PATH
    remote: .
    specs:
-     apisonator (3.0.1.1)
+     apisonator (3.3.1)

  GEM
    remote: https://rubygems.org/
@@ -70,7 +70,7 @@ GEM
        async (~> 1.14)
      async-pool (0.2.0)
        async (~> 1.8)
-     async-redis (0.5.0)
+     async-redis (0.5.1)
        async (~> 1.8)
        async-io (~> 1.10)
        async-pool (~> 0.2)
@@ -142,7 +142,7 @@ GEM
      net-scp (1.2.1)
        net-ssh (>= 2.6.5)
      net-ssh (4.2.0)
-     nio4r (2.5.2)
+     nio4r (2.5.4)
      nokogiri (1.10.9)
        mini_portile2 (~> 2.4.0)
      parslet (1.8.2)
@@ -178,7 +178,7 @@ GEM
      rack-test (0.8.2)
        rack (>= 1.0, < 3)
      rake (13.0.1)
-     redis-namespace (1.6.0)
+     redis-namespace (1.8.0)
        redis (>= 3.0.4)
      resque_spec (0.17.0)
        resque (>= 1.19.0)
@@ -241,7 +241,7 @@ GEM
      thread_safe (0.3.6)
      tilt (2.0.8)
      timecop (0.9.1)
-     timers (4.3.0)
+     timers (4.3.2)
      toml (0.2.0)
        parslet (~> 1.8.0)
      tzinfo (1.2.7)
@@ -267,7 +267,7 @@ PLATFORMS
  DEPENDENCIES
    airbrake (= 4.3.1)
    apisonator!
-   async-redis (~> 0.5)
+   async-redis (~> 0.5.1)
    async-rspec
    aws-sdk (= 2.4.2)
    benchmark-ips (~> 2.7.2)
@@ -288,9 +288,10 @@ DEPENDENCIES
    pry-doc (~> 0.11.1)
    puma!
    rack (~> 2.1.4)
-   rack-test (~> 0.8.2)
+   rack-test (= 0.8.2)
    rake (~> 13.0)
    redis!
+   redis-namespace (~> 1.8.0)
    resque!
    resque_spec (~> 0.17.0)
    resque_unit (~> 0.4.4)!
data/Gemfile.on_prem.lock CHANGED
@@ -35,7 +35,7 @@ GIT
  PATH
    remote: .
    specs:
-     apisonator (3.0.1.1)
+     apisonator (3.3.1)

  GEM
    remote: https://rubygems.org/
@@ -67,7 +67,7 @@ GEM
        async (~> 1.14)
      async-pool (0.2.0)
        async (~> 1.8)
-     async-redis (0.5.0)
+     async-redis (0.5.1)
        async (~> 1.8)
        async-io (~> 1.10)
        async-pool (~> 0.2)
@@ -131,7 +131,7 @@ GEM
      net-scp (1.2.1)
        net-ssh (>= 2.6.5)
      net-ssh (4.2.0)
-     nio4r (2.5.2)
+     nio4r (2.5.4)
      nokogiri (1.10.9)
        mini_portile2 (~> 2.4.0)
      parslet (1.8.2)
@@ -166,7 +166,7 @@ GEM
      rack-test (0.8.2)
        rack (>= 1.0, < 3)
      rake (13.0.1)
-     redis-namespace (1.6.0)
+     redis-namespace (1.8.0)
        redis (>= 3.0.4)
      resque_spec (0.17.0)
        resque (>= 1.19.0)
@@ -227,7 +227,7 @@ GEM
      thread_safe (0.3.6)
      tilt (2.0.8)
      timecop (0.9.1)
-     timers (4.3.0)
+     timers (4.3.2)
      toml (0.2.0)
        parslet (~> 1.8.0)
      tzinfo (1.2.7)
@@ -250,7 +250,7 @@ PLATFORMS

  DEPENDENCIES
    apisonator!
-   async-redis (~> 0.5)
+   async-redis (~> 0.5.1)
    async-rspec
    benchmark-ips (~> 2.7.2)
    bugsnag (~> 6)
@@ -269,9 +269,10 @@ DEPENDENCIES
    pry-doc (~> 0.11.1)
    puma!
    rack (~> 2.1.4)
-   rack-test (~> 0.8.2)
+   rack-test (= 0.8.2)
    rake (~> 13.0)
    redis!
+   redis-namespace (~> 1.8.0)
    resque!
    resque_spec (~> 0.17.0)
    resque_unit (~> 0.4.4)!
data/Rakefile CHANGED
@@ -261,27 +261,49 @@ task :reschedule_failed_jobs do
       "Pending failed jobs: #{result[:failed_current]}."
  end

- desc 'Delete stats of services marked for deletion'
  namespace :stats do
+   desc 'Delete stats of services marked for deletion'
    task :cleanup, [:redis_urls, :log_deleted_keys] do |_, args|
-     redis_urls = args[:redis_urls] && args[:redis_urls].split(' ')
+     redis_conns = redis_conns(args[:redis_urls])

-     if redis_urls.nil? || redis_urls.empty?
+     if redis_conns.empty?
        puts 'No Redis URLs specified'
        exit(false)
      end

-     redis_clients = redis_urls.map do |redis_url|
-       parsed_uri = URI.parse(ThreeScale::Backend::Storage::Helpers.send(
-         :to_redis_uri, redis_url)
-       )
-       Redis.new(host: parsed_uri.host, port: parsed_uri.port)
+     ThreeScale::Backend::Stats::Cleaner.delete!(
+       redis_conns, log_deleted_keys: logger_for_deleted_keys(args[:log_deleted_keys])
+     )
+   end
+
+   desc 'Delete stats keys set to 0'
+   task :delete_stats_keys_set_to_0, [:redis_urls, :log_deleted_keys] do |_, args|
+     redis_conns = redis_conns(args[:redis_urls])
+
+     if redis_conns.empty?
+       puts 'No Redis URLs specified'
+       exit(false)
      end

-     log_deleted = args[:log_deleted_keys] == 'true' ? STDOUT : nil
+     ThreeScale::Backend::Stats::Cleaner.delete_stats_keys_set_to_0(
+       redis_conns, log_deleted_keys: logger_for_deleted_keys(args[:log_deleted_keys])
+     )
+   end
+ end

-     ThreeScale::Backend::Stats::Cleaner.delete!(
-       redis_clients, log_deleted_keys: log_deleted
-     )
-   end
- end
+ def redis_conns(urls)
+   redis_urls = urls && urls.split(' ')
+
+   return [] if redis_urls.nil? || redis_urls.empty?
+
+   redis_urls.map do |redis_url|
+     parsed_uri = URI.parse(ThreeScale::Backend::Storage::Helpers.send(
+       :to_redis_uri, redis_url)
+     )
+     Redis.new(host: parsed_uri.host, port: parsed_uri.port)
+   end
+ end
+
+ def logger_for_deleted_keys(arg_log_deleted_keys)
+   arg_log_deleted_keys == 'true' ? STDOUT : nil
+ end
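The refactor above extracts the URL handling shared by both tasks into a `redis_conns` helper. Its argument parsing can be sketched with the standard library alone (the real helper also normalizes each URL via `Storage::Helpers` and opens a `Redis` client per URI; those parts are omitted so the sketch stays self-contained):

```ruby
require 'uri'

# Split the space-separated rake argument into one parsed URI per shard,
# returning [] when no URLs were given (the tasks then abort with a message).
def redis_uris(urls)
  list = urls && urls.split(' ')
  return [] if list.nil? || list.empty?
  list.map { |url| URI.parse(url) }
end

p redis_uris(nil)                                         # []
p redis_uris('redis://a:6379 redis://b:6380').map(&:port) # [6379, 6380]
```

Returning `[]` for both `nil` and empty input is what lets each task keep a single `redis_conns.empty?` guard instead of the previous two-condition check.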
@@ -7,6 +7,14 @@ module ThreeScale
    ServiceToken.exists?(token, service_id) ? 200 : 404
  end

+ get '/:token/:service_id/provider_key' do |token, service_id|
+   if ServiceToken.exists?(token, service_id)
+     { status: :found, provider_key: Service.provider_key_for(service_id) }.to_json
+   else
+     respond_with_404('token/service combination not found'.freeze)
+   end
+ end
+
  post '/' do
    check_tokens_param!

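On success, the new endpoint serializes a two-key hash. A minimal sketch of that body, using only the serialization visible in the hunk (the provider key value here is a made-up placeholder, not a real key):

```ruby
require 'json'

# Success body as built by the handler above; symbol values serialize
# to JSON strings, so :found becomes "found".
body = { status: :found, provider_key: 'example_pkey' }.to_json
puts body # {"status":"found","provider_key":"example_pkey"}

parsed = JSON.parse(body)
puts parsed['provider_key'] # example_pkey
```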
@@ -6,32 +6,13 @@ module ThreeScale
    respond_with_404('service not found') unless Service.exists?(params[:service_id])
  end

- # This is very slow and needs to be disabled until the performance
- # issues are solved. In the meanwhile, the job will just return OK.
- =begin
- delete '' do |service_id|
-   delete_stats_job_attrs = api_params Stats::DeleteJobDef
-   delete_stats_job_attrs[:service_id] = service_id
-   delete_stats_job_attrs[:from] = delete_stats_job_attrs[:from].to_i
-   delete_stats_job_attrs[:to] = delete_stats_job_attrs[:to].to_i
-   begin
-     Stats::DeleteJobDef.new(delete_stats_job_attrs).run_async
-   rescue DeleteServiceStatsValidationError => e
-     [400, headers, { status: :error, error: e.message }.to_json]
-   else
-     { status: :to_be_deleted }.to_json
-   end
- =end
-
- # This is an alternative to the above. It just adds the service to a
- # Redis set to marked is as "to be deleted".
- # Later a script can read that set and actually delete the keys.
- # Read the docs of the Stats::Cleaner class for more details.
+ # This adds the service to a Redis set to mark it as "to be deleted".
+ # Later a script can read that set and actually delete the keys. Read
+ # the docs of the Stats::Cleaner class for more details.
  #
- # Notice that this method ignores the "from" and "to" parameters. When
- # system calls this method, they're always interested in deleting all
- # the keys. They were just passing "from" and "to" to make the
- # implementation of the option above easier.
+ # Notice that this method ignores the "from" and "to" parameters used in
+ # previous versions. When system calls this method, they're always
+ # interested in deleting all the keys.
  delete '' do |service_id|
    Stats::Cleaner.mark_service_to_be_deleted(service_id)
    { status: :to_be_deleted }.to_json
@@ -40,10 +40,8 @@ module ThreeScale
  private

  def self.first_traffic(service_id, application_id)
-   key = Stats::Keys.applications_key_prefix(
-     Stats::Keys.service_key_prefix(service_id)
-   )
-   if storage.sadd(key, encode_key(application_id))
+   if storage.sadd(Stats::Keys.set_of_apps_with_traffic(service_id),
+                   encode_key(application_id))
      EventStorage.store(:first_traffic,
                         { service_id: service_id,
                           application_id: application_id,
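Both the old and new versions of `first_traffic` rely on `SADD` returning a truthy value only the first time a member is added, which is what makes the `:first_traffic` event fire exactly once per application. The same first-write detection can be sketched with an in-memory `Set` standing in for Redis (no server needed):

```ruby
require 'set'

apps_with_traffic = Set.new

# Set#add? mirrors SADD's semantics: it returns non-nil only when the
# member was newly added, so the event would be stored exactly once.
first_traffic = ->(app_id) { !apps_with_traffic.add?(app_id).nil? }

puts first_traffic.call('app_1') # true  -> would store the :first_traffic event
puts first_traffic.call('app_1') # false -> already seen, nothing stored
```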
@@ -1,6 +1,7 @@
  require '3scale/backend/configuration/loader'
  require '3scale/backend/environment'
  require '3scale/backend/configurable'
+ require '3scale/backend/errors'

  module ThreeScale
    module Backend
@@ -31,8 +32,6 @@ module ThreeScale

  CONFIG_DELETE_STATS_BATCH_SIZE = 50
  private_constant :CONFIG_DELETE_STATS_BATCH_SIZE
- CONFIG_DELETE_STATS_PARTITION_BATCH_SIZE = 1000
- private_constant :CONFIG_DELETE_STATS_PARTITION_BATCH_SIZE

  @configuration = Configuration::Loader.new

@@ -53,7 +52,7 @@ module ThreeScale
  config.add_section(:analytics_redis, :server,
                     :connect_timeout, :read_timeout, :write_timeout)
  config.add_section(:hoptoad, :service, :api_key)
- config.add_section(:stats, :bucket_size, :delete_batch_size, :delete_partition_batch_size)
+ config.add_section(:stats, :bucket_size, :delete_batch_size)
  config.add_section(:redshift, :host, :port, :dbname, :user, :password)
  config.add_section(:statsd, :host, :port)
  config.add_section(:internal_api, :user, :password)
@@ -77,9 +76,6 @@ module ThreeScale
  master_metrics = [:transactions, :transactions_authorize]
  config.master.metrics = Struct.new(*master_metrics).new

- # Default config
- config.master_service_id = 1
-
  # This setting controls whether the listener can create event buckets in
  # Redis. We do not want all the listeners creating buckets yet, as we do
  # not know exactly the rate at which we can send events to Kinesis
@@ -127,9 +123,6 @@ module ThreeScale
  config.stats.delete_batch_size = parse_int(config.stats.delete_batch_size,
                                             CONFIG_DELETE_STATS_BATCH_SIZE)

- config.stats.delete_partition_batch_size = parse_int(config.stats.delete_partition_batch_size,
-                                                      CONFIG_DELETE_STATS_PARTITION_BATCH_SIZE)
-
  # often we don't have a log_file setting - generate it here from
  # the log_path setting.
  log_file = config.log_file
@@ -292,12 +292,6 @@ module ThreeScale
    end
  end

- class DeleteServiceStatsValidationError < Error
-   def initialize(service_id, msg)
-     super "Delete stats job context validation error. Service: #{service_id}. Error: #{msg}"
-   end
- end
-
  class EndUsersNoLongerSupported < BadRequest
    def initialize
      super 'End-users are no longer supported, do not specify the user_id parameter'.freeze
@@ -32,25 +32,6 @@ module ThreeScale
      DEFAULT_WAIT_BEFORE_FETCHING_MORE_JOBS
  end

- def pop_from_queue
-   begin
-     encoded_job = @redis.blpop(*@queues, timeout: @fetch_timeout)
-   rescue Redis::BaseConnectionError, Errno::ECONNREFUSED, Errno::EPIPE => e
-     raise RedisConnectionError.new(e.message)
-   rescue Redis::CommandError => e
-     # Redis::CommandError from redis-rb can be raised for multiple
-     # reasons, so we need to check the error message to distinguish
-     # connection errors from the rest.
-     if e.message == 'ERR Connection timed out'.freeze
-       raise RedisConnectionError.new(e.message)
-     else
-       raise e
-     end
-   end
-
-   encoded_job
- end
-
  def fetch
    encoded_job = pop_from_queue
    return nil if encoded_job.nil? || encoded_job.empty?
@@ -99,10 +80,11 @@ module ThreeScale

  # Re-instantiate Redis instance. This is needed to recover from
  # Errno::EPIPE, not sure if there are others.
- @redis = ThreeScale::Backend::QueueStorage.connection(
-   ThreeScale::Backend.environment,
-   ThreeScale::Backend.configuration
+ @redis = Redis::Namespace.new(
+   WorkerAsync.const_get(:RESQUE_REDIS_NAMESPACE),
+   redis: QueueStorage.connection(Backend.environment, Backend.configuration)
  )
+
  # If there is a different kind of error, it's probably a
  # programming error. Like sending an invalid blpop command to
  # Redis. In that case, let the worker crash.
@@ -111,12 +93,36 @@ module ThreeScale
    end
  end

+ rescue Exception => e
+   Worker.logger.notify(e)
+ ensure
    job_queue.close
  end

  def shutdown
    @shutdown = true
  end
+
+ private
+
+ def pop_from_queue
+   begin
+     encoded_job = @redis.blpop(*@queues, timeout: @fetch_timeout)
+   rescue Redis::BaseConnectionError, Errno::ECONNREFUSED, Errno::EPIPE => e
+     raise RedisConnectionError.new(e.message)
+   rescue Redis::CommandError => e
+     # Redis::CommandError from redis-rb can be raised for multiple
+     # reasons, so we need to check the error message to distinguish
+     # connection errors from the rest.
+     if e.message == 'ERR Connection timed out'.freeze
+       raise RedisConnectionError.new(e.message)
+     else
+       raise e
+     end
+   end
+
+   encoded_job
+ end
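The relocated `pop_from_queue` translates only genuine connection failures into `RedisConnectionError`, so the worker reconnects on those and still crashes on programming errors. That classification can be sketched without a Redis server (standard `Errno` classes; `RuntimeError` stands in for `Redis::CommandError`, which this sketch does not import):

```ruby
class RedisConnectionError < StandardError; end

def classify(error)
  case error
  when Errno::ECONNREFUSED, Errno::EPIPE
    # Always a connection problem: worth reconnecting and retrying.
    RedisConnectionError.new(error.message)
  when RuntimeError # stand-in for Redis::CommandError
    # Command errors are ambiguous: only the timeout message means a
    # connection problem; anything else is likely a programming error.
    if error.message == 'ERR Connection timed out'
      RedisConnectionError.new(error.message)
    else
      error
    end
  else
    error
  end
end

puts classify(Errno::EPIPE.new).class                             # RedisConnectionError
puts classify(RuntimeError.new('ERR Connection timed out')).class # RedisConnectionError
puts classify(RuntimeError.new('ERR syntax error')).class         # RuntimeError
```

Keeping the "crash on unknown errors" path is deliberate: a malformed `blpop` call should surface immediately rather than loop forever through reconnects.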