eventq 4.0.0 → 4.2.0
- checksums.yaml +4 -4
- data/README.md +77 -14
- data/lib/eventq/eventq_aws/aws_calculate_visibility_timeout.rb +53 -25
- data/lib/eventq/eventq_aws/aws_queue_worker.rb +16 -10
- data/lib/eventq/eventq_base/nonce_manager.rb +67 -15
- data/lib/eventq/eventq_base/queue.rb +6 -0
- data/lib/eventq/eventq_rabbitmq/rabbitmq_queue_worker.rb +26 -10
- data/lib/eventq/queue_worker.rb +1 -1
- data/lib/eventq.rb +2 -4
- metadata +34 -20
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c0bbacb646ab33aa08c10738ceeba304c17c0c5f80024b0b8c5e22cb4a11e41a
+  data.tar.gz: d05f206a2231a46750ab172426725dc7a7347acfd084c6371ea888955182042c
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 8daf3d93b434bae5ca09c3d2c51e9058df10f3355efbfcf116bd44eb52fd4769209f90a5638204de5e9eff93b418f07596bd746d33db5510a5c5a3471fea1f7f
+  data.tar.gz: 2cf23e913f766e2f2dff17cb1ea47dfbb5b6c1f476728113225cf50801ade03dfcacdfec810a44a8ac2c3c50d84b3af2fc91bbbda1dda7ea5ea2eb324040e7c2
data/README.md
CHANGED
@@ -54,6 +54,7 @@ A subscription queue should be defined to receive any events raised for the subs
 - **allow_retry** [Bool] [Optional] [Default=false] This determines if the queue should allow processing failures to be retried.
 - **allow_retry_back_off** [Bool] [Optional] [Default=false] This is used to specify if failed messages that retry should incrementally backoff.
+- **allow_exponential_back_off** [Bool] [Optional] [Default=false] This is used to specify if failed messages that retry should exponentially back off.
 - **retry_back_off_grace** [Int] [Optional] [Default=0] This is the number of times to allow retries without applying retry back off if enabled.
 - **dlq** [EventQ::Queue] [Optional] [Default=nil] A queue that will receive the messages which were not successfully processed after the maximum number of receives by consumers. This is created at the same time as the parent queue.
 - **max_retry_attempts** [Int] [Optional] [Default=5] This is used to specify the max number of times an event should be allowed to retry before failing.
@@ -63,21 +64,77 @@ A subscription queue should be defined to receive any events raised for the subs
 - **require_signature** [Bool] [Optional] [Default=false] This is used to specify if messages within this queue must be signed.
 - **retry_delay** [Int] [Optional] [Default=30000] This is used to specify the time delay in milliseconds before a failed message is re-added to the subscription queue.
 - **retry_back_off_weight** [Int] [Optional] [Default=1] Additional multiplier for the timeout backoff. Normally used when `retry_delay` is too small (eg: 30ms) in order to get meaningful backoff values.
+- **retry_jitter_ratio** [Int] [Optional] [Default=0] Amount of randomness for retry delays, in percent, to avoid a bulk of retries hitting again at the same time. 0% means no randomness, while 100% means full randomness. With full randomness, a random number between 0 and the calculated retry delay will be chosen for the delay.
 
 **Example**
 
 ```ruby
-# Create a queue that allows retries and accepts a maximum of
+# Create a queue that allows retries and accepts a maximum of 5 retries with a 20 second delay between retries.
 class DataChangeAddressQueue < Queue
   def initialize
     @name = 'Data.Change.Address'
     @allow_retry = true
-    @retry_delay =
-    @max_retry_attempts =
+    @retry_delay = 20_000
+    @max_retry_attempts = 5
   end
 end
 ```
 
+**Retry Strategies**
+
+In distributed systems, it is expected that some events will fail.
+Thankfully, those events can be put "on hold" and will be processed again after a given waiting time.
+The attributes affecting your retry strategy the most are:
+* `retry_delay` (base duration that events wait before being reprocessed)
+* `max_receive_count` and `max_retry_attempts` (limiting how often an event can be seen / processed)
+* `allow_retry`, `allow_retry_back_off` and `allow_exponential_back_off` (defining whether retries are allowed and how the duration between retries is calculated)
+
+If only `allow_retry` is set to `true`, while `allow_retry_back_off` and `allow_exponential_back_off` remain `false`, the duration between retries will be `retry_delay` each time ("fixed back off").
+So there is a fixed duration between retries, as in the example for `DataChangeAddressQueue` above.
+With the configuration of that class, the event will be retried 5 times, with at least 20 seconds between retries.
+Therefore the final retry will have happened after `retry_delay * max_retry_attempts`, which comes to 100 seconds here.
+
+If `allow_retry_back_off` is also set to `true`, the duration between retries will scale with the number of retries ("incremental back off").
+So the first retry will happen after `retry_delay`, the second after `2 * retry_delay`, the third after `3 * retry_delay` and so on.
+The retries will therefore be spread further apart each time.
+The last retry will be processed after `(max_retry_attempts * (max_retry_attempts + 1)) / 2 * retry_delay`.
+In the example above, that comes to 300 seconds until the last retry.
+
+If `allow_exponential_back_off` is also set to `true`, the duration between retries will double each time ("exponential back off").
+So the first retry will happen after `retry_delay`, the second after `2 * retry_delay`, the third after `4 * retry_delay` and so on.
+The last retry will be processed after `(2^max_retry_attempts - 1) * retry_delay`.
+In the example above, that comes to 620 seconds until the last retry.
+
+You can run experiments on your retry configuration using [plot_visibility_timeout.rb](https://github.com/Sage/eventq/blob/master/utilities/plot_visibility_timeout.rb), which will output the retry duration on each retry given your settings.
+
+![Graph comparing back off strategies](images/back-off-strategy.png)
+
+**Randomness**
+
+By default, there is no randomness in your retry strategy.
+However, that means that with a fixed 20 second back off, many events that overloaded your service will all come back after exactly 20 seconds, overloading it again.
+It can therefore be useful to introduce randomness into your retry duration, so that events which initially hit the queue at the same time are spread out when scheduled for retry.
+
+The attribute `retry_jitter_ratio` allows you to configure how much randomness ("jitter") is applied to the retry duration.
+Let's assume we have `retry_delay = 20_000` (20 seconds).
+Then `retry_jitter_ratio` would have the following effect:
+* 0 means no randomness, so a retry duration of 20 seconds is used every time
+* 20 means 20% randomness, so the duration will be randomly chosen between 80% and 100% of the value, i.e. between 16 and 20 seconds
+* 50 means 50% randomness, i.e. between 10 and 20 seconds
+* 80 means 80% randomness, i.e. between 4 and 20 seconds
+* 100 means 100% randomness, i.e. between 0 and 20 seconds
+
+In the graphs below you can see how adding 50% randomness can help avoid overloading the service.
+In the first graph ("Fixed Retry Duration"), all failures hit the queue again after exactly 20 seconds.
+This leads to only a couple of events succeeding, as the others fail due to too many concurrent requests running into locks etc.
+In the second graph ("Randomised Retry Duration"), however, the events are randomly spread out over the next 10 to 20 seconds.
+This means fewer events hit the service concurrently, allowing it to successfully process more events and to process all of them in a shorter time, reducing the overall load on the service.
+
+![Graph showing that events overload the service repeatedly with fixed retry duration](images/fixed-retry-duration.png)
+
+![Graph showing that events are spread out on retries when randomising retry duration](images/randomised-retry-duration.png)
+
 ### SubscriptionManager
 
 In order to receive events within a subscription queue it must subscribe to the type of the event it should receive.
@@ -109,10 +166,10 @@ This method is called to unsubscribe a queue.
 
 **Example**
 
-#create an
+#create an instance of the queue definition
 queue = DateChangeAddressQueue.new
 
-#
+#unsubscribe the queue definition
 subscription_manager.unsubscribe(queue)
 
 
@@ -321,17 +378,17 @@ This method is called to verify connection to an event_type (topic/exchange).
 
 ## Development
 
-
-
-To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in the file, `EVENTQ_VERSION`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+### Setup
 
+After checking out the repo, run `bin/setup` to install dependencies.
+You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+To install this gem onto your local machine, run `bundle exec rake install`.
 
 ### Preparing the Docker images
 
 Run the setup script of eventq to build the environment. This will create the `eventq` image.
 
-    $
-    $ ./setup.sh
+    $ ./script/setup.sh
 
 ### Running the tests
 
@@ -345,13 +402,19 @@ You will also need to comment out the AWS_* environment variables in the `docker
 
 Run the whole test suite:
 
-    $
-    $ ./test.sh
+    $ ./script/test.sh
 
 You can run the specs that don't depend on an AWS account with:
 
-    $
-
+    $ ./script/test.sh --tag ~integration
+
+### Release new version
+
+To release a new version, first update the version number in the file [`EVENTQ_VERSION`](https://github.com/Sage/eventq/blob/master/EVENTQ_VERSION).
+With that change merged to `master`, just [draft a new release](https://github.com/Sage/eventq/releases/new) with the same version you specified in `EVENTQ_VERSION`.
+Use "Generate Release Notes" to generate details for this release.
+
+This will create a git tag for the version and triggers the GitHub [workflow to publish the new gem](https://github.com/Sage/eventq/actions/workflows/publish.yml) (defined in [publish.yml](https://github.com/Sage/eventq/blob/master/.github/workflows/publish.yml)) to [rubygems.org](https://rubygems.org).
 
 ## Contributing
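The back off arithmetic in the new README section can be checked with a short standalone sketch (the helper names below are illustrative, not part of the gem):

```ruby
# Cumulative time until the last retry for the three back off strategies
# described in the README, using the DataChangeAddressQueue example settings.
RETRY_DELAY = 20_000        # milliseconds, as in the queue settings
MAX_RETRY_ATTEMPTS = 5

def delay_for(attempt, strategy)
  case strategy
  when :fixed       then RETRY_DELAY                       # retry_delay every time
  when :incremental then RETRY_DELAY * attempt             # 1x, 2x, 3x, ...
  when :exponential then RETRY_DELAY * 2**(attempt - 1)    # 1x, 2x, 4x, ...
  end
end

def seconds_until_last_retry(strategy)
  (1..MAX_RETRY_ATTEMPTS).sum { |attempt| delay_for(attempt, strategy) } / 1000
end

puts seconds_until_last_retry(:fixed)        # => 100
puts seconds_until_last_retry(:incremental)  # => 300
puts seconds_until_last_retry(:exponential)  # => 620
```

These match the 100, 300 and 620 second totals stated in the README text.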
data/lib/eventq/eventq_aws/aws_calculate_visibility_timeout.rb
CHANGED
@@ -5,7 +5,7 @@ module EventQ
     # Class responsible to know how to calculate message Visibility Timeout for Amazon SQS
     class CalculateVisibilityTimeout
       def initialize(max_timeout:, logger: EventQ.logger)
-        @max_timeout = max_timeout
+        @max_timeout = seconds_to_ms(max_timeout)
         @logger = logger
       end
@@ -14,28 +14,33 @@ module EventQ
       # @param retry_attempts [Integer] Current retry
       # @param queue_settings [Hash] Queue settings
       # @option allow_retry_back_off [Bool] Enables/Disables backoff strategy
+      # @option allow_exponential_back_off [Bool] Enables/Disables exponential backoff strategy
       # @option max_retry_delay [Integer] Maximum amount of time a retry will take in ms
       # @option retry_back_off_grace [Integer] Amount of retries to wait before starting to backoff
       # @option retry_back_off_weight [Integer] Multiplier for the backoff retry
+      # @option retry_jitter_ratio [Integer] Ratio of how much jitter to apply to the backoff retry
       # @option retry_delay [Integer] Amount of time to wait until retry in ms
       # @return [Integer] the calculated visibility timeout in seconds
       def call(retry_attempts:, queue_settings:)
-        @retry_attempts
+        @retry_attempts = retry_attempts
 
-        @allow_retry_back_off
-        @
-        @
-        @
-        @
+        @allow_retry_back_off = queue_settings.fetch(:allow_retry_back_off)
+        @allow_exponential_back_off = queue_settings.fetch(:allow_exponential_back_off)
+        @max_retry_delay = queue_settings.fetch(:max_retry_delay)
+        @retry_back_off_grace = queue_settings.fetch(:retry_back_off_grace)
+        @retry_back_off_weight = queue_settings.fetch(:retry_back_off_weight)
+        @retry_jitter_ratio = queue_settings.fetch(:retry_jitter_ratio)
+        @retry_delay = queue_settings.fetch(:retry_delay)
 
-        if @allow_retry_back_off && retry_past_grace_period?
-
-          visibility_timeout = check_for_max_timeout(visibility_timeout)
+        visibility_timeout = if @allow_retry_back_off && retry_past_grace_period?
+          timeout_with_back_off
         else
-
+          timeout_without_back_off
         end
 
-        visibility_timeout
+        visibility_timeout = apply_jitter(visibility_timeout) if @retry_jitter_ratio > 0
+
+        ms_to_seconds(visibility_timeout)
       end
 
       private
@@ -47,34 +52,57 @@ module EventQ
       end
 
       def timeout_without_back_off
-
+        @retry_delay
       end
 
       def timeout_with_back_off
         factor = @retry_attempts - @retry_back_off_grace
+        weighted_retry_delay = @retry_delay * @retry_back_off_weight
 
-        visibility_timeout =
-
-
-
-          logger.debug { "[#{self.class}] - Max message back off retry delay reached: #{max_retry_delay}" }
-          visibility_timeout = max_retry_delay
+        visibility_timeout = if @allow_exponential_back_off
+          weighted_retry_delay * 2 ** (factor - 1)
+        else
+          weighted_retry_delay * factor
         end
 
-        visibility_timeout
+        visibility_timeout = check_for_max_retry_delay(visibility_timeout)
+        check_for_max_timeout(visibility_timeout)
+      end
+
+      def apply_jitter(visibility_timeout)
+        ratio = @retry_jitter_ratio / 100.0
+        min_visibility_timeout = (visibility_timeout * (1 - ratio)).to_i
+        rand(min_visibility_timeout..visibility_timeout)
       end
 
       def ms_to_seconds(value)
        value / 1000
       end
 
+      def seconds_to_ms(value)
+        value * 1000
+      end
+
+      def check_for_max_retry_delay(visibility_timeout)
+        return visibility_timeout if visibility_timeout <= @max_retry_delay
+
+        logger.debug do
+          "[#{self.class}] - Max message back off retry delay reached: #{ms_to_seconds(@max_retry_delay)}"
+        end
+
+        @max_retry_delay
+      end
+
       def check_for_max_timeout(visibility_timeout)
-        if visibility_timeout
-
-
+        return visibility_timeout if visibility_timeout <= @max_timeout
+
+        logger.debug do
+          "[#{self.class}] - AWS max visibility timeout of 12 hours has been exceeded. "\
+          "Setting message retry delay to 12 hours."
         end
-
+
+        @max_timeout
       end
     end
   end
-end
+end
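The jitter step added to `CalculateVisibilityTimeout` can be exercised in isolation; the sketch below repeats the same arithmetic outside the class to show that a ratio of 50 always yields a value between 50% and 100% of the input timeout:

```ruby
# Standalone copy of the apply_jitter arithmetic from the class above.
def apply_jitter(visibility_timeout_ms, retry_jitter_ratio)
  return visibility_timeout_ms if retry_jitter_ratio.zero?

  ratio = retry_jitter_ratio / 100.0
  min_timeout = (visibility_timeout_ms * (1 - ratio)).to_i
  rand(min_timeout..visibility_timeout_ms)
end

# With a 20 second timeout and 50% jitter, every sample lands in 10_000..20_000 ms.
samples = Array.new(1_000) { apply_jitter(20_000, 50) }
puts samples.all? { |t| (10_000..20_000).cover?(t) }  # => true
```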
data/lib/eventq/eventq_aws/aws_queue_worker.rb
CHANGED
@@ -119,16 +119,7 @@ module EventQ
 
       EventQ.logger.warn("[#{self.class}] - Message Id: #{args.id}. Rejected requesting retry. Attempts: #{retry_attempts}")
 
-      visibility_timeout =
-        retry_attempts: retry_attempts,
-        queue_settings: {
-          allow_retry_back_off: queue.allow_retry_back_off,
-          max_retry_delay: queue.max_retry_delay,
-          retry_back_off_grace: queue.retry_back_off_grace,
-          retry_back_off_weight: queue.retry_back_off_weight,
-          retry_delay: queue.retry_delay
-        }
-      )
+      visibility_timeout = calculate_visibility_timeout(retry_attempts, queue)
 
       EventQ.logger.debug { "[#{self.class}] - Sending message for retry. Message TTL: #{visibility_timeout}" }
       poller.change_message_visibility_timeout(msg, visibility_timeout)
@@ -136,6 +127,21 @@ module EventQ
         context.call_on_retry_block(message)
       end
     end
+
+    def calculate_visibility_timeout(retry_attempts, queue)
+      @calculate_visibility_timeout.call(
+        retry_attempts: retry_attempts,
+        queue_settings: {
+          allow_retry_back_off: queue.allow_retry_back_off,
+          allow_exponential_back_off: queue.allow_exponential_back_off,
+          max_retry_delay: queue.max_retry_delay,
+          retry_back_off_grace: queue.retry_back_off_grace,
+          retry_back_off_weight: queue.retry_back_off_weight,
+          retry_jitter_ratio: queue.retry_jitter_ratio,
+          retry_delay: queue.retry_delay
+        }
+      )
+    end
   end
 end
 end
data/lib/eventq/eventq_base/nonce_manager.rb
CHANGED
@@ -1,10 +1,24 @@
 module EventQ
+  class NonceManagerNotConfiguredError < StandardError; end
+
   class NonceManager
 
-    def self.configure(server:,timeout:10000,lifespan:3600)
+    def self.configure(server:,timeout:10000,lifespan:3600, pool_size: 5, pool_timeout: 5)
       @server_url = server
       @timeout = timeout
       @lifespan = lifespan
+      @pool_size = pool_size
+      @pool_timeout = pool_timeout
+
+      @redis_pool = begin
+        require 'connection_pool'
+        require 'redis'
+
+        ConnectionPool.new(size: @pool_size, timeout: @pool_timeout) do
+          Redis.new(url: @server_url)
+        end
+      end
+      @configured = true
     end
 
     def self.server_url
@@ -19,39 +33,77 @@ module EventQ
       @lifespan
     end
 
-    def self.
-
-
+    def self.pool_size
+      @pool_size
+    end
+
+    def self.pool_timeout
+      @pool_timeout
+    end
+
+    def self.lock(nonce)
+      # act as if successfully locked if no nonce manager is configured - makes it a no-op
+      return true if !configured?
+
+      successfully_locked = false
+      with_redis_connection do |conn|
+        successfully_locked = conn.set(nonce, 1, ex: lifespan, nx: true)
       end
 
-
-      lock = Redlock::Client.new([ @server_url ]).lock(nonce, @timeout)
-      if lock == false
+      if !successfully_locked
         EventQ.log(:info, "[#{self.class}] - Message has already been processed: #{nonce}")
-        return false
       end
 
-
+      successfully_locked
     end
 
+    # if the message was successfully processed, lock for another lifespan length
+    # so it isn't reprocessed
     def self.complete(nonce)
-
-
+      return true if !configured?
+
+      with_redis_connection do |conn|
+        conn.expire(nonce, lifespan)
       end
-
+
+      true
     end
 
+    # if it failed, unlock immediately so that retries can kick in
     def self.failed(nonce)
-
-
+      return true if !configured?
+
+      with_redis_connection do |conn|
+        conn.del(nonce)
       end
-
+
+      true
     end
 
     def self.reset
       @server_url = nil
       @timeout = nil
       @lifespan = nil
+      @pool_size = nil
+      @pool_timeout = nil
+      @configured = false
+      @redis_pool.reload(&:close)
+    end
+
+    def self.configured?
+      @configured == true
+    end
+
+    private
+
+    def self.with_redis_connection
+      if !configured?
+        raise NonceManagerNotConfiguredError, 'Unable to checkout redis connection from pool, nonce manager has not been configured. Call .configure on NonceManager.'
+      end
+
+      @redis_pool.with do |conn|
+        yield conn
+      end
     end
   end
 end
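The rewritten `NonceManager` drops `redlock` in favour of a plain Redis `SET ... NX EX` issued through a `connection_pool`. Its deduplication rests on `SET` with `nx: true` returning truthy only for the first caller; a minimal in-memory stand-in (not the real `redis` client) illustrates that contract:

```ruby
# In-memory stand-in for the one Redis call NonceManager's lock depends on:
# SET key value NX EX returns true only when the key was newly created.
class FakeRedis
  def initialize
    @store = {}
  end

  def set(key, value, ex:, nx:)
    return false if nx && @store.key?(key)   # NX: never overwrite an existing key
    @store[key] = value                      # EX (the TTL) is ignored in this sketch
    true
  end

  def del(key)
    @store.delete(key)
  end
end

redis = FakeRedis.new
puts redis.set('nonce-123', 1, ex: 3600, nx: true)  # => true  (first delivery wins the lock)
puts redis.set('nonce-123', 1, ex: 3600, nx: true)  # => false (duplicate delivery is rejected)

redis.del('nonce-123')                              # NonceManager.failed does this
puts redis.set('nonce-123', 1, ex: 3600, nx: true)  # => true  (a retry can lock again)
```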
data/lib/eventq/eventq_base/queue.rb
CHANGED
@@ -2,6 +2,7 @@ module EventQ
   class Queue
     attr_accessor :allow_retry
     attr_accessor :allow_retry_back_off
+    attr_accessor :allow_exponential_back_off
     attr_accessor :dlq
     attr_accessor :max_retry_attempts
     attr_accessor :max_retry_delay
@@ -11,6 +12,7 @@ module EventQ
     attr_accessor :retry_delay
     attr_accessor :retry_back_off_grace
     attr_accessor :retry_back_off_weight
+    attr_accessor :retry_jitter_ratio
     # Character delimiter between namespace and queue name. Default = '-'
     attr_accessor :namespace_delimiter
     # Flag to control that the queue runs in isolation of auto creating the topic it belongs to
@@ -20,6 +22,8 @@ module EventQ
       @allow_retry = false
       # Default retry back off settings
       @allow_retry_back_off = false
+      # Default exponential back off settings
+      @allow_exponential_back_off = false
       # Default max receive count is 30
       @max_receive_count = 30
       # Default max retry attempts is 5
@@ -34,6 +38,8 @@ module EventQ
       @retry_back_off_grace = 0
       # Multiplier for the backoff retry in case retry_delay is too small
       @retry_back_off_weight = 1
+      # Ratio of how much jitter to apply to the retry delay
+      @retry_jitter_ratio = 0
       @isolated = false
     end
   end
data/lib/eventq/eventq_rabbitmq/rabbitmq_queue_worker.rb
CHANGED
@@ -78,16 +78,7 @@ module EventQ
     message.retry_attempts += 1
     retry_attempts = message.retry_attempts - queue.retry_back_off_grace
     retry_attempts = 1 if retry_attempts < 1
-
-    if queue.allow_retry_back_off == true
-      message_ttl = retry_attempts * queue.retry_delay
-      if (retry_attempts * queue.retry_delay) > queue.max_retry_delay
-        EventQ.logger.debug { "[#{self.class}] - Max message back off retry delay reached." }
-        message_ttl = queue.max_retry_delay
-      end
-    else
-      message_ttl = queue.retry_delay
-    end
+    message_ttl = retry_delay(queue, retry_attempts)
 
     EventQ.logger.debug { "[#{self.class}] - Sending message for retry. Message TTL: #{message_ttl}" }
     retry_exchange.publish(serialize_message(message), :expiration => message_ttl)
@@ -128,6 +119,31 @@ module EventQ
         raise "Unrecognized status: #{status}"
       end
     end
+
+    def retry_delay(queue, retry_attempts)
+      return queue.retry_delay unless queue.allow_retry_back_off
+
+      message_ttl = if queue.allow_exponential_back_off
+        queue.retry_delay * 2 ** (retry_attempts - 1)
+      else
+        queue.retry_delay * retry_attempts
+      end
+
+      if message_ttl > queue.max_retry_delay
+        EventQ.logger.debug { "[#{self.class}] - Max message back off retry delay reached." }
+        message_ttl = queue.max_retry_delay
+      end
+
+      apply_jitter(queue.retry_jitter_ratio, message_ttl)
+    end
+
+    def apply_jitter(retry_jitter_ratio, message_ttl)
+      return message_ttl if retry_jitter_ratio == 0
+
+      ratio = retry_jitter_ratio / 100.0
+      min_message_ttl = (message_ttl * (1 - ratio)).to_i
+      rand(min_message_ttl..message_ttl)
+    end
   end
 end
 end
data/lib/eventq/queue_worker.rb
CHANGED
@@ -127,7 +127,7 @@ module EventQ
 
       EventQ.logger.debug("[#{self.class}] - Message received. Id: #{message.id}. Retry Attempts: #{retry_attempts}")
 
-      if (!EventQ::NonceManager.
+      if (!EventQ::NonceManager.lock(message.id))
         EventQ.logger.warn("[#{self.class}] - Duplicate Message received. Id: #{message.id}. Ignoring message.")
         status = :duplicate
         return status, message_args
data/lib/eventq.rb
CHANGED
@@ -1,5 +1,6 @@
+# frozen_string_literal: true
+
 require 'securerandom'
-require 'redlock'
 require 'class_kit'
 require 'hash_kit'
 require 'oj'
@@ -22,6 +23,3 @@ require_relative 'eventq/eventq_base/nonce_manager'
 require_relative 'eventq/eventq_base/signature_providers'
 require_relative 'eventq/eventq_base/exceptions'
 require_relative 'eventq/queue_worker'
-
-
-
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: eventq
 version: !ruby/object:Gem::Version
-  version: 4.
+  version: 4.2.0
 platform: ruby
 authors:
 - SageOne
-autorequire:
+autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2025-01-21 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport
@@ -16,14 +16,14 @@ dependencies:
     requirements:
     - - "~>"
      - !ruby/object:Gem::Version
-        version: '
+        version: '6'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
      - !ruby/object:Gem::Version
-        version: '
+        version: '6'
 - !ruby/object:Gem::Dependency
   name: bundler
   requirement: !ruby/object:Gem::Requirement
@@ -72,14 +72,14 @@ dependencies:
     requirements:
     - - "~>"
      - !ruby/object:Gem::Version
-        version: '
+        version: '13'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
      - !ruby/object:Gem::Version
-        version: '
+        version: '13'
 - !ruby/object:Gem::Dependency
   name: rspec
   requirement: !ruby/object:Gem::Requirement
@@ -123,33 +123,47 @@ dependencies:
      - !ruby/object:Gem::Version
        version: 0.18.0
 - !ruby/object:Gem::Dependency
-  name: aws-sdk-
+  name: aws-sdk-core
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
      - !ruby/object:Gem::Version
-        version: '
+        version: '0'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
      - !ruby/object:Gem::Version
-        version: '
+        version: '0'
 - !ruby/object:Gem::Dependency
   name: aws-sdk-sns
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
      - !ruby/object:Gem::Version
-        version: '
+        version: '0'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
      - !ruby/object:Gem::Version
-        version: '
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: aws-sdk-sqs
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
 - !ruby/object:Gem::Dependency
   name: bunny
   requirement: !ruby/object:Gem::Requirement
@@ -221,7 +235,7 @@ dependencies:
    - !ruby/object:Gem::Version
      version: '0'
 - !ruby/object:Gem::Dependency
-  name:
+  name: connection_pool
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
@@ -295,7 +309,7 @@ homepage: https://github.com/sage/eventq
 licenses:
 - MIT
 metadata: {}
-post_install_message:
+post_install_message:
 rdoc_options: []
 require_paths:
 - lib
@@ -310,8 +324,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
 requirements: []
-rubygems_version: 3.
-signing_key:
+rubygems_version: 3.4.20
+signing_key:
 specification_version: 4
 summary: EventQ is a pub/sub system that uses async notifications and message queues
 test_files: []