redis_queued_locks 0.0.38 → 0.0.39
- checksums.yaml +4 -4
- data/CHANGELOG.md +18 -0
- data/README.md +58 -15
- data/Rakefile +14 -5
- data/lib/redis_queued_locks/acquier/acquire_lock/try_to_lock.rb +35 -73
- data/lib/redis_queued_locks/acquier/acquire_lock/yield_with_expire.rb +2 -2
- data/lib/redis_queued_locks/acquier/acquire_lock.rb +18 -9
- data/lib/redis_queued_locks/acquier/extend_lock_ttl.rb +22 -4
- data/lib/redis_queued_locks/acquier/lock_info.rb +3 -3
- data/lib/redis_queued_locks/acquier/locks.rb +3 -3
- data/lib/redis_queued_locks/acquier/release_all_locks.rb +1 -1
- data/lib/redis_queued_locks/acquier/release_lock.rb +1 -1
- data/lib/redis_queued_locks/client.rb +15 -3
- data/lib/redis_queued_locks/data.rb +0 -1
- data/lib/redis_queued_locks/version.rb +2 -2
- metadata +3 -3
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: df02925d34d26ec7181e33c783a2c368f84e2981a1b6249da100f0fc19515d5c
+  data.tar.gz: 4ae526151618eecba0ac733677e2d3e5dd8ae2558058c8407b693a6085e712d0
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 44f6546626e39b0fd1378a2cdcd8d72f2d394cba7478cbd2594c3b50b60cd2d484d0d4fca584a09391f5a5cc68c275b3b8f5c7fdf52d0c8c55976518bf3fe03b
+  data.tar.gz: 6c1251c02654e7b816e993d8e4a24c7cbf92ea0d908ba2d3efb60b99f711d0c7ce93902199f29c40e35b42fac2d76de4693380b72d01df0192c43e630e2cdf8e
data/CHANGELOG.md CHANGED
@@ -1,5 +1,23 @@
 ## [Unreleased]
 
+## [0.0.39] - 2024-03-31
+### Added
+- Logging:
+  - added new log `[redis_queued_locks.fail_fast_or_limits_reached__dequeue]`;
+- Client:
+  - `#extend_lock_ttl` implementation;
+### Changed
+- Removed `RedisQueuedLocks::Debugger.debug(...)` injections;
+- Instrumentation:
+  - the `:at` payload field of the `"redis_queued_locks.explicit_lock_release"` event and
+    the `"redis_queued_locks.explicit_all_locks_release"` event was changed from `Integer` to `Float`
+    in order to reflect micro/nano seconds too, for a more accurate time value;
+- Lock information:
+  - lock information extraction now uses `RedisClient#pipelined` instead of `RedisClient#multi` because
+    it is more suitable for information-oriented logic (the queue information extraction works via `pipelined` invocations, for example);
+- Logging:
+  - the log text is passed as the `message` (not as the `progname`) according to the `Logger#debug` signature;
+
 ## [0.0.38] - 2024-03-28
 ### Changed
 - Minor update (dropped useless constant);
data/README.md CHANGED
@@ -1,16 +1,17 @@
-# RedisQueuedLocks
+# RedisQueuedLocks · [![Gem Version](https://badge.fury.io/rb/redis_queued_locks.svg)](https://badge.fury.io/rb/redis_queued_locks)
 
 Distributed locks with "lock acquisition queue" capabilities based on the Redis Database.
 
 Provides flexible invocation flow, parametrized limits (lock request ttl, lock ttls, queue ttls, fast failing, etc), logging and instrumentation.
 
-Each lock request is put into the request queue (each lock is hosted by its own queue, separately from other queues) and processed in order of priority (FIFO). Each lock request lives for some period of time (RTTL), which guarantees that the request queue will never get stuck.
+Each lock request is put into the request queue (each lock is hosted by its own queue, separately from other queues) and processed in order of priority (FIFO). Each lock request lives for some period of time (RTTL) (with requeue capabilities), which guarantees that the request queue will never get stuck.
 
 ---
 
 ## Table of Contents
 
 - [Requirements](#requirements)
+- [Experience](#experience)
 - [Algorithm](#algorithm)
 - [Installation](#installation)
 - [Setup](#setup)
@@ -30,6 +31,7 @@ Each lock request is put into the request queue (each lock is hosted by its own
   - [keys](#keys---get-list-of-taken-locks-and-queues)
   - [locks_info](#locks_info---get-list-of-locks-with-their-info)
   - [queues_info](#queues_info---get-list-of-queues-with-their-info)
+  - [clear_dead_requests](#clear_dead_requests)
 - [Instrumentation](#instrumentation)
   - [Instrumentation Events](#instrumentation-events)
 - [Roadmap](#roadmap)
@@ -43,6 +45,14 @@ Each lock request is put into the request queue (each lock is hosted by its own
 
 - Redis Version: `~> 7.x`;
 - Redis Protocol: `RESP3`;
+- gem `redis-client`: `~> 0.20`;
+
+---
+
+### Experience
+
+- Battle-tested on huge Ruby projects in production: `~1500` locks per second are obtained and released on an ongoing basis;
+- Works well with the `hiredis` driver enabled (it is enabled by default on our projects where `redis_queued_locks` is used);
 
 ---
 
@@ -156,6 +166,7 @@ client = RedisQueuedLocks::Client.new(redis_client) do |config|
   # - "[redis_queued_locks.start_try_to_lock_cycle]" (logs "lock_key", "queue_ttl", "acq_id");
   # - "[redis_queued_locks.dead_score_reached__reset_acquier_position]" (logs "lock_key", "queue_ttl", "acq_id");
   # - "[redis_queued_locks.lock_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "acq_time");
+  # - "[redis_queued_locks.fail_fast_or_limits_reached__dequeue]" (logs "lock_key", "queue_ttl", "acq_id");
   # - by default uses VoidLogger that does nothing;
   config.logger = RedisQueuedLocks::Logging::VoidLogger
 
@@ -170,8 +181,8 @@ client = RedisQueuedLocks::Client.new(redis_client) do |config|
   # - "[redis_queued_locks.try_lock.get_first_from_queue]" (logs "lock_key", "queue_ttl", "acq_id", "first_acq_id_in_queue");
   # - "[redis_queued_locks.try_lock.exit__queue_ttl_reached]" (logs "lock_key", "queue_ttl", "acq_id");
   # - "[redis_queued_locks.try_lock.exit__no_first]" (logs "lock_key", "queue_ttl", "acq_id", "first_acq_id_in_queue", "<current_lock_data>");
-  # - "[redis_queued_locks.try_lock.
-  # - "[redis_queued_locks.try_lock.
+  # - "[redis_queued_locks.try_lock.exit__lock_still_obtained]" (logs "lock_key", "queue_ttl", "acq_id", "first_acq_id_in_queue", "locked_by_acq_id", "<current_lock_data>");
+  # - "[redis_queued_locks.try_lock.obtain_free_to_acquire]" (logs "lock_key", "queue_ttl", "acq_id");
   config.log_lock_try = false
 end
 ```
@@ -194,6 +205,7 @@ end
 
 - [keys](#keys---get-list-of-taken-locks-and-queues)
 - [locks_info](#locks_info---get-list-of-locks-with-their-info)
 - [queues_info](#queues_info---get-list-of-queues-with-their-info)
+- [clear_dead_requests](#clear_dead_requests)
 
 ---
 
@@ -336,7 +348,7 @@ See `#lock` method [documentation](#lock---obtain-a-lock).
 
 - get the lock information;
 - returns `nil` if lock does not exist;
-- lock data (`Hash<
+- lock data (`Hash<String,String|Integer>`):
   - `"lock_key"` - `string` - lock key in redis;
   - `"acq_id"` - `string` - acquier identifier (process_id/thread_id/fiber_id/ractor_id/identity);
   - `"ts"` - `integer`/`epoch` - the time the lock was obtained;
@@ -385,7 +397,7 @@ rql.lock_info("your_lock_name")
 
 - the score is represented as a timestamp of when the lock request was made;
 - represents the acquier identifiers and their scores as an array of hashes;
 - returns `nil` if the lock queue does not exist;
-- lock queue data (`Hash<
+- lock queue data (`Hash<String,String|Array<Hash<String|Numeric>>>`):
   - `"lock_queue"` - `string` - lock queue key in redis;
   - `"queue"` - `array` - an array of lock requests (array of hashes):
     - `"acq_id"` - `string` - acquier identifier (process_id/thread_id/fiber_id/ractor_id/identity by default);
@@ -493,7 +505,31 @@ Return:
 
 #### #extend_lock_ttl
 
--
+- Extend the lock's TTL (in milliseconds);
+- returns `{ ok: true, result: :ttl_extended }` when the TTL is extended;
+- returns `{ ok: false, result: :async_expire_or_no_lock }` when the lock is not found or the lock expires during
+  some step of the invocation (see the **Important** section below);
+- **Important**:
+  - the method is non-atomic because Redis does not provide an atomic command for TTL/PTTL extension;
+  - the method consists of two commands:
+    - (1) read the current PTTL;
+    - (2) set a new TTL calculated as "current PTTL + additional milliseconds";
+  - what can happen during these steps:
+    - the lock expires between the commands or before the first command;
+    - the lock expires before the second command;
+    - the lock expires AND is newly acquired by another process (so you will extend a
+      totally new lock with a fresh PTTL);
+  - use it at your own risk and consider the async nature when calling this method;
+
+```ruby
+rql.extend_lock_ttl("my_lock", 5_000) # NOTE: add 5_000 milliseconds
+
+# => `ok` case
+{ ok: true, result: :ttl_extended }
+
+# => `failed` case
+{ ok: false, result: :async_expire_or_no_lock }
+```
 
 ---
 
@@ -505,7 +541,7 @@ Return:
 
 - `:with_info` - `Boolean` - `false` by default (for details see [#locks_info](#locks_info---get-list-of-locks-with-their-info));
 - returns:
   - `Set<String>` (for `with_info: false`);
-  - `Set<Hash<Symbol,Any>>` (for `with_info: true`). See
+  - `Set<Hash<Symbol,Any>>` (for `with_info: true`). See [#locks_info](#locks_info---get-list-of-locks-with-their-info) for details;
 
 ```ruby
 rql.locks # or rql.locks(scan_size: 123)
@@ -532,10 +568,10 @@ rql.locks # or rql.locks(scan_size: 123)
 
 - uses redis `SCAN` under the hood;
 - accepts:
   - `:scan_size` - `Integer` - (`config[:key_extraction_batch_size]` by default);
-  - `:with_info` - `Boolean` - `false` by default (for details see [queues_info](#queues_info---get-list-of-queues-with-their-info));
+  - `:with_info` - `Boolean` - `false` by default (for details see [#queues_info](#queues_info---get-list-of-queues-with-their-info));
 - returns:
   - `Set<String>` (for `with_info: false`);
-  - `Set<Hash<Symbol,Any>>` (for `with_info: true`). See
+  - `Set<Hash<Symbol,Any>>` (for `with_info: true`). See [#locks_info](#locks_info---get-list-of-locks-with-their-info) for details;
 
 ```ruby
 rql.queues # or rql.queues(scan_size: 123)
@@ -645,11 +681,18 @@ rql.queues_info # or rql.queues_info(scan_size: 123)
 
    {"acq_id"=>"rql:acq:38529/4460/4480/4360/66093702f24a3129", "score"=>1711606640.540808}]},
  ...}>
 ```
+---
+
+#### #clear_dead_requests
+
+- soon
 
 ---
 
 ## Instrumentation
 
+- [Instrumentation Events](#instrumentation-events)
+
 An instrumentation layer is encapsulated in the `instrumenter` object stored in the [config](#configuration) (`RedisQueuedLocks::Client#config[:instrumenter]`).
 
 The instrumenter object should provide a `notify(event, payload)` method with the following signature:
@@ -701,7 +744,7 @@ Detailed event semantics and payload structure:
 
 - `"redis_queued_locks.explicit_lock_release"`
   - an event that signals an explicit lock release (invoked via `RedisQueuedLock#unlock`);
   - payload:
-    - `:at` - `
+    - `:at` - `float`/`epoch` - the time when the lock was released;
     - `:rel_time` - `float`/`milliseconds` - time spent on lock releasing;
     - `:lock_key` - `string` - released lock (lock name);
     - `:lock_key_queue` - `string` - released lock queue (lock queue name);
@@ -709,7 +752,7 @@ Detailed event semantics and payload structure:
 
   - an event that signals an explicit all-locks release (invoked via `RedisQueuedLock#clear_locks`);
   - payload:
     - `:rel_time` - `float`/`milliseconds` - time spent on the "release all locks" operation;
-    - `:at` - `
+    - `:at` - `float`/`epoch` - the time when the operation ended;
     - `:rel_keys` - `integer` - released redis keys count (`released queue keys` + `released lock keys`);
 
 ---
@@ -717,7 +760,7 @@ Detailed event semantics and payload structure:
 
 ## Roadmap
 
 - Semantic Error objects for unexpected Redis errors;
--
+- better specs :) with 100% test coverage;
 - per-block-holding-the-lock sidecar `Ractor` and `in progress queue` in RedisDB that will extend
   the acquired lock for long-running blocks of code (that are invoked "under" the lock
   whose ttl may expire before the block execution completes). It only makes sense for non-`timed` locks;
@@ -726,8 +769,8 @@ Detailed event semantics and payload structure:
 
 - structured logging (separated docs);
 - GitHub Actions CI;
 - `RedisQueuedLocks::Acquier::Try.try_to_lock` - detailed successful result analysis;
-- better code stylization and interesting refactorings;
-- dead
+- better code stylization and interesting refactorings (observers);
+- dead requests cleanup;
 - statistics with UI;
 
 ---
data/Rakefile CHANGED
@@ -2,11 +2,20 @@
 
 require 'bundler/gem_tasks'
 require 'rspec/core/rake_task'
-
-RSpec::Core::RakeTask.new(:spec)
-
+require 'rubocop'
 require 'rubocop/rake_task'
+require 'rubocop-performance'
+require 'rubocop-rspec'
+require 'rubocop-rake'
+
+RuboCop::RakeTask.new(:rubocop) do |t|
+  config_path = File.expand_path(File.join('.rubocop.yml'), __dir__)
+  t.options = ['--config', config_path]
+  t.requires << 'rubocop-rspec'
+  t.requires << 'rubocop-performance'
+  t.requires << 'rubocop-rake'
+end
 
-
+RSpec::Core::RakeTask.new(:rspec)
 
-task default:
+task default: :rspec
data/lib/redis_queued_locks/acquier/acquire_lock/try_to_lock.rb CHANGED
@@ -42,12 +42,12 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
       if log_lock_try
         run_non_critical do
-          logger.debug
+          logger.debug do
             "[redis_queued_locks.try_lock.start] " \
             "lock_key => '#{lock_key}' " \
             "queue_ttl => #{queue_ttl} " \
             "acq_id => '#{acquier_id}'"
-
+          end
         end
       end
 
@@ -55,12 +55,12 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
     result = redis.with do |rconn|
       if log_lock_try
         run_non_critical do
-          logger.debug
+          logger.debug do
             "[redis_queued_locks.try_lock.rconn_fetched] " \
             "lock_key => '#{lock_key}' " \
             "queue_ttl => #{queue_ttl} " \
             "acq_id => '#{acquier_id}'"
-
+          end
         end
       end
 
@@ -74,25 +74,21 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
         inter_result = :fail_fast_no_try
       else
         # Step 1: add an acquier to the lock acquirement queue
-
+        rconn.call('ZADD', lock_key_queue, 'NX', acquier_position, acquier_id)
 
         if log_lock_try
           run_non_critical do
-            logger.debug
+            logger.debug do
               "[redis_queued_locks.try_lock.acq_added_to_queue] " \
               "lock_key => '#{lock_key}' " \
               "queue_ttl => #{queue_ttl} " \
               "acq_id => '#{acquier_id}'"
-
+            end
           end
         end
 
-        RedisQueuedLocks.debug(
-          "Step #1: add to the queue (#{acquier_id}). [ZADD to the queue: #{res}]"
-        )
-
         # Step 2.1: drop expired acquiers from the lock queue
-
+        rconn.call(
           'ZREMRANGEBYSCORE',
           lock_key_queue,
           '-inf',
@@ -101,58 +97,44 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
         if log_lock_try
           run_non_critical do
-            logger.debug
+            logger.debug do
               "[redis_queued_locks.try_lock.remove_expired_acqs] " \
               "lock_key => '#{lock_key}' " \
               "queue_ttl => #{queue_ttl} " \
               "acq_id => '#{acquier_id}'"
-
+            end
           end
         end
 
-        RedisQueuedLocks.debug(
-          "Step #2: drop the expired waiting acquiers from the queue. [ZREMRANGE: #{res}]"
-        )
-
         # Step 3: get the actual acquier waiting in the queue
         waiting_acquier = Array(rconn.call('ZRANGE', lock_key_queue, '0', '0')).first
 
         if log_lock_try
           run_non_critical do
-            logger.debug
+            logger.debug do
               "[redis_queued_locks.try_lock.get_first_from_queue] " \
               "lock_key => '#{lock_key}' " \
               "queue_ttl => #{queue_ttl} " \
               "acq_id => '#{acquier_id}' " \
               "first_acq_id_in_queue => '#{waiting_acquier}'"
-
+            end
          end
        end
 
-        RedisQueuedLocks.debug(
-          "Step #3: which process in the queue is waiting now. " \
-          "[ZRANGE <next process>: #{waiting_acquier} :: <current process>: #{acquier_id}]"
-        )
-
         # Step PRE-4.x: check if the request time limit is reached
         # (when the current try self-removes itself from queue (queue ttl has come))
         if waiting_acquier == nil
           if log_lock_try
             run_non_critical do
-              logger.debug
+              logger.debug do
                 "[redis_queued_locks.try_lock.exit__queue_ttl_reached] " \
                 "lock_key => '#{lock_key}' " \
                 "queue_ttl => #{queue_ttl} " \
                 "acq_id => '#{acquier_id}'"
-
+              end
            end
          end
 
-          RedisQueuedLocks.debug(
-            "Step PRE-ROLLBACK #0: the lock acquirement time limit (queue ttl) is reached. exiting. " \
-            "[Our position: #{acquier_id}. queue_ttl: #{queue_ttl}]"
-          )
-
           inter_result = :dead_score_reached
         # Step 4: check the actual acquier: is it ours? are we ready to lock?
         elsif waiting_acquier != acquier_id
@@ -160,59 +142,41 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
           if log_lock_try
             run_non_critical do
-              logger.debug
+              logger.debug do
                 "[redis_queued_locks.try_lock.exit__no_first] " \
                 "lock_key => '#{lock_key}' " \
                 "queue_ttl => #{queue_ttl} " \
                 "acq_id => '#{acquier_id}' " \
                 "first_acq_id_in_queue => '#{waiting_acquier}' " \
                 "<current_lock_data> => <<#{rconn.call('HGETALL', lock_key).to_h}>>"
-
+              end
            end
          end
 
-          RedisQueuedLocks.debug(
-            "Step ROLLBACK #1: the keys are not the same. exiting. " \
-            "[Waiting: #{waiting_acquier}. Needed: #{acquier_id}]"
-          )
-
           inter_result = :acquier_is_not_first_in_queue
         else
           # NOTE: our time has come! let's try to acquire the lock!
 
-          # Step 5: check if the our lock is already acquired
+          # Step 5: find the lock -> check if the our lock is already acquired
           locked_by_acquier = rconn.call('HGET', lock_key, 'acq_id')
 
-          # rubocop:disable Layout/LineLength
-          RedisQueuedLocks.debug(
-            "Step #5: look for the required lock. " \
-            "[HGET<#{lock_key}>: " \
-            "#{(locked_by_acquier == nil) ? 'not taken' : "taken by the process <#{locked_by_acquier}>"}"
-          )
-          # rubocop:enable Layout/LineLength
-
           if locked_by_acquier
             # Step ROLLBACK 2: required lock is still acquired. retry!
 
             if log_lock_try
               run_non_critical do
-                logger.debug
-                "[redis_queued_locks.try_lock.
+                logger.debug do
+                  "[redis_queued_locks.try_lock.exit__lock_still_obtained] " \
                   "lock_key => '#{lock_key}' " \
                   "queue_ttl => #{queue_ttl} " \
                   "acq_id => '#{acquier_id}' " \
                   "first_acq_id_in_queue => '#{waiting_acquier}' " \
                   "locked_by_acq_id => '#{locked_by_acquier}' " \
                   "<current_lock_data> => <<#{rconn.call('HGETALL', lock_key).to_h}>>"
-
+                end
              end
            end
 
-            RedisQueuedLocks.debug(
-              "Step ROLLBACK #2: the key is already taken. doing nothing. " \
-              "[Taken by the process: #{locked_by_acquier}]"
-            )
-
             inter_result = :lock_is_still_acquired
           else
             # NOTE: required lock is free and ready to be acquired! acquire!
@@ -220,16 +184,6 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
             # Step 6.1: remove our acquier from waiting queue
             transact.call('ZREM', lock_key_queue, acquier_id)
 
-            RedisQueuedLocks.debug(
-              'Step #4: remove our current process from the queue. [ZREM]'
-            )
-
-            # rubocop:disable Layout/LineLength
-            RedisQueuedLocks.debug(
-              "===> <FINAL> Step #6: assign the lock to the process [HSET<#{lock_key}>: #{acquier_id}]"
-            )
-            # rubocop:enable Layout/LineLength
-
             # Step 6.2: acquire a lock and store an info about the acquier
             transact.call(
               'HSET',
@@ -245,12 +199,12 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
             if log_lock_try
               run_non_critical do
-                logger.debug
-                "[redis_queued_locks.try_lock.
+                logger.debug do
+                  "[redis_queued_locks.try_lock.obtain_free_to_acquire] " \
                   "lock_key => '#{lock_key}' " \
                   "queue_ttl => #{queue_ttl} " \
                   "acq_id => '#{acquier_id}'"
-
+                end
              end
            end
          end
@@ -297,18 +251,26 @@ module RedisQueuedLocks::Acquier::AcquireLock::TryToLock
 
     # rubocop:enable Metrics/MethodLength, Metrics/PerceivedComplexity
 
     # @param redis [RedisClient]
+    # @param logger [::Logger,#debug]
+    # @param lock_key [String]
     # @param lock_key_queue [String]
+    # @param queue_ttl [Integer]
     # @param acquier_id [String]
     # @return [Hash<Symbol,Any>] Format: { ok: true/false, result: Any }
     #
     # @api private
     # @since 0.1.0
-    def dequeue_from_lock_queue(redis, lock_key_queue, acquier_id)
+    def dequeue_from_lock_queue(redis, logger, lock_key, lock_key_queue, queue_ttl, acquier_id)
       result = redis.call('ZREM', lock_key_queue, acquier_id)
 
-
-
-
+      run_non_critical do
+        logger.debug do
+          "[redis_queued_locks.fail_fast_or_limits_reached__dequeue] " \
+          "lock_key => '#{lock_key}' " \
+          "queue_ttl => '#{queue_ttl}' " \
+          "acq_id => '#{acquier_id}'"
+        end
+      end
 
       RedisQueuedLocks::Data[ok: true, result: result]
     end
data/lib/redis_queued_locks/acquier/acquire_lock/yield_with_expire.rb CHANGED
@@ -40,12 +40,12 @@ module RedisQueuedLocks::Acquier::AcquireLock::YieldWithExpire
 
     end
   ensure
     run_non_critical do
-      logger.debug
+      logger.debug do
        "[redis_queued_locks.expire_lock] " \
        "lock_key => '#{lock_key}' " \
        "queue_ttl => #{queue_ttl} " \
        "acq_id => '#{acquier_id}'"
-
+      end
     end
     redis.call('EXPIRE', lock_key, '0')
   end
data/lib/redis_queued_locks/acquier/acquire_lock.rb CHANGED
@@ -165,15 +165,24 @@ module RedisQueuedLocks::Acquier::AcquireLock
 
       hold_time: nil, # NOTE: in milliseconds
       rel_time: nil # NOTE: in milliseconds
     }
-
+
+    acq_dequeue = proc do
+      dequeue_from_lock_queue(
+        redis, logger,
+        lock_key,
+        lock_key_queue,
+        queue_ttl,
+        acquier_id
+      )
+    end
 
     run_non_critical do
-      logger.debug
+      logger.debug do
        "[redis_queued_locks.start_lock_obtaining] " \
        "lock_key => '#{lock_key}' " \
        "queue_ttl => #{queue_ttl} " \
        "acq_id => '#{acquier_id}'"
-
+      end
     end
 
     # Step 2: try to lock with timeout
@@ -183,12 +192,12 @@ module RedisQueuedLocks::Acquier::AcquireLock
 
     # Step 2.1: cyclically try to obtain the lock
     while acq_process[:should_try]
       run_non_critical do
-        logger.debug
+        logger.debug do
          "[redis_queued_locks.start_try_to_lock_cycle] " \
          "lock_key => '#{lock_key}' " \
          "queue_ttl => #{queue_ttl} " \
          "acq_id => '#{acquier_id}'"
-
+        end
      end
 
      # Step 2.X: check the actual score: is it in queue ttl limit or not?
@@ -197,12 +206,12 @@ module RedisQueuedLocks::Acquier::AcquireLock
 
       acquier_position = RedisQueuedLocks::Resource.calc_initial_acquier_position
 
       run_non_critical do
-        logger.debug
+        logger.debug do
          "[redis_queued_locks.dead_score_reached__reset_acquier_position] " \
          "lock_key => '#{lock_key}' " \
          "queue_ttl => #{queue_ttl} " \
          "acq_id => '#{acquier_id}'"
-
+        end
      end
    end
 
@@ -230,13 +239,13 @@ module RedisQueuedLocks::Acquier::AcquireLock
 
     # Step 2.1: analyze an acquirement attempt
     if ok
       run_non_critical do
-        logger.debug
+        logger.debug do
          "[redis_queued_locks.lock_obtained] " \
          "lock_key => '#{result[:lock_key]}' " \
          "queue_ttl => #{queue_ttl} " \
          "acq_id => '#{acquier_id}' " \
          "acq_time => #{acq_time} (ms)"
-
+        end
      end
 
      # Step X (instrumentation): lock obtained
data/lib/redis_queued_locks/acquier/extend_lock_ttl.rb CHANGED
@@ -3,17 +3,35 @@
 
 # @api private
 # @since 0.1.0
 module RedisQueuedLocks::Acquier::ExtendLockTTL
+  # @return [String]
+  #
+  # @api private
+  # @since 0.1.0
+  EXTEND_LOCK_PTTL = <<~LUA_SCRIPT.strip.tr("\n", '').freeze
+    local new_lock_pttl = redis.call("PTTL", KEYS[1]) + ARGV[1];
+    return redis.call("PEXPIRE", KEYS[1], new_lock_pttl);
+  LUA_SCRIPT
+
   class << self
     # @param redis_client [RedisClient]
     # @param lock_name [String]
     # @param milliseconds [Integer]
-    # @
-    # @return [?]
+    # @return [Hash<Symbol,Boolean|Symbol>]
     #
     # @api private
     # @since 0.1.0
-    def extend_lock_ttl(redis_client, lock_name, milliseconds
-
+    def extend_lock_ttl(redis_client, lock_name, milliseconds)
+      lock_key = RedisQueuedLocks::Resource.prepare_lock_key(lock_name)
+
+      # NOTE: EVAL signature -> <lua script>, (keys number), *(keys), *(arguments)
+      result = redis_client.call('EVAL', EXTEND_LOCK_PTTL, 1, lock_key, milliseconds)
+      # TODO: upload scripts to the redis
+
+      if result == 1
+        RedisQueuedLocks::Data[ok: true, result: :ttl_extended]
+      else
+        RedisQueuedLocks::Data[ok: false, result: :async_expire_or_no_lock]
+      end
     end
   end
 end
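The `<<~...LUA_SCRIPT.strip.tr("\n", '')` chain in `ExtendLockTTL` packs the multi-line Lua source into a single line before it is sent via `EVAL`: the squiggly heredoc removes the leading indentation, `strip` drops the trailing newline, and `tr` removes the interior ones. A minimal self-contained sketch of just that flattening step (pure string processing, no Redis connection involved):

```ruby
# Reproduces the script-flattening trick used for EXTEND_LOCK_PTTL:
# a multi-line Lua script becomes a single-line frozen string that is
# convenient to pass as the first EVAL argument.
lua_script = <<~LUA_SCRIPT.strip.tr("\n", '').freeze
  local new_lock_pttl = redis.call("PTTL", KEYS[1]) + ARGV[1];
  return redis.call("PEXPIRE", KEYS[1], new_lock_pttl);
LUA_SCRIPT

puts lua_script
# The result contains no newline characters; the `;` statement
# separators keep the two Lua statements distinct on one line.
```

Since the whole script runs inside a single `EVAL`, the PTTL read and the PEXPIRE write execute atomically on the Redis side, unlike the two-command variant described in the README's `#extend_lock_ttl` notes.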
data/lib/redis_queued_locks/acquier/lock_info.rb CHANGED
@@ -21,9 +21,9 @@ module RedisQueuedLocks::Acquier::LockInfo
 
     def lock_info(redis_client, lock_name)
       lock_key = RedisQueuedLocks::Resource.prepare_lock_key(lock_name)
 
-      result = redis_client.
-
-
+      result = redis_client.pipelined do |pipeline|
+        pipeline.call('HGETALL', lock_key)
+        pipeline.call('PTTL', lock_key)
       end
 
       if result == nil
data/lib/redis_queued_locks/acquier/locks.rb CHANGED
@@ -50,9 +50,9 @@ module RedisQueuedLocks::Acquier::Locks
 
     # Step X: iterate each lock and extract their info
     lock_keys.each do |lock_key|
       # Step 1: extract lock info from redis
-      lock_info = redis_client.
-
-
+      lock_info = redis_client.pipelined do |pipeline|
+        pipeline.call('HGETALL', lock_key)
+        pipeline.call('PTTL', lock_key)
       end.yield_self do |result| # Step 2: format the result
         # Step 2.X: lock is released
         if result == nil
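The switch from `multi` to `pipelined` in `LockInfo` and `Locks` relies on one contract: `RedisClient#pipelined` buffers the commands issued inside the block and returns their replies as an array, in call order. A minimal sketch of that contract using a hypothetical in-memory stub (the `FakePipeline`/`FakeRedisClient` classes are illustrative stand-ins, not the real `redis-client` gem):

```ruby
# Hypothetical stub mimicking the reply-ordering contract of
# RedisClient#pipelined: commands are buffered during the block and
# their replies come back as an array, in the order they were issued.
class FakePipeline
  attr_reader :replies

  def initialize(store)
    @store = store
    @replies = []
  end

  def call(command, key)
    @replies <<
      case command
      when 'HGETALL' then @store.fetch(key, {})
      when 'PTTL' then @store.key?(key) ? 5_000 : -2 # -2 means "no such key" in Redis
      end
  end
end

class FakeRedisClient
  def initialize(store)
    @store = store
  end

  def pipelined
    pipeline = FakePipeline.new(@store)
    yield(pipeline)
    pipeline.replies
  end
end

store = { 'rql:lock:my_lock' => { 'acq_id' => 'rql:acq:123/456/1/2/identity', 'ts' => '1711606640' } }

# Replies are positional: first HGETALL's hash, then PTTL's integer.
hgetall_reply, pttl_reply = FakeRedisClient.new(store).pipelined do |pipeline|
  pipeline.call('HGETALL', 'rql:lock:my_lock')
  pipeline.call('PTTL', 'rql:lock:my_lock')
end
```

Unlike `multi`, a pipeline makes no transactional guarantee between the two reads, which is acceptable here because the code only gathers informational snapshots.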
data/lib/redis_queued_locks/acquier/release_all_locks.rb CHANGED
@@ -28,7 +28,7 @@ module RedisQueuedLocks::Acquier::ReleaseAllLocks
 
     def release_all_locks(redis, batch_size, instrumenter, logger)
       rel_start_time = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
       fully_release_all_locks(redis, batch_size) => { ok:, result: }
-      time_at = Time.now.
+      time_at = Time.now.to_f
       rel_end_time = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
       rel_time = ((rel_end_time - rel_start_time) * 1_000).ceil(2)
 
data/lib/redis_queued_locks/acquier/release_lock.rb CHANGED
@@ -34,7 +34,7 @@ module RedisQueuedLocks::Acquier::ReleaseLock
 
     rel_start_time = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
     fully_release_lock(redis, lock_key, lock_key_queue) => { ok:, result: }
-    time_at = Time.now.
+    time_at = Time.now.to_f
     rel_end_time = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
     rel_time = ((rel_end_time - rel_start_time) * 1_000).ceil(2)
 
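The `time_at = Time.now.to_f` change above is what turns the `:at` instrumentation field from an `Integer` into a `Float`: `to_i` truncates the epoch timestamp to whole seconds, while `to_f` keeps the fractional (micro/nano second) part. A minimal sketch using a fixed timestamp (the epoch value matches the ones shown in the README examples):

```ruby
# A fixed point in time with a sub-second component.
time = Time.at(1_711_606_640.540808)

int_at = time.to_i   # whole seconds only: the fractional part is truncated
float_at = time.to_f # keeps the sub-second part of the epoch value

puts int_at
puts float_at
```

Instrumentation consumers comparing `:at` across versions should expect a `Float` from 0.0.39 on.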
data/lib/redis_queued_locks/client.rb CHANGED
@@ -249,9 +249,22 @@ class RedisQueuedLocks::Client
 
     RedisQueuedLocks::Acquier::QueueInfo.queue_info(redis_client, lock_name)
   end
 
+  # This method is non-atomic because redis does not provide an atomic command for TTL/PTTL extension.
+  # So the method is split into two commands:
+  #   (1) read the current pttl
+  #   (2) set a new ttl that is calculated as "current pttl + additional milliseconds"
+  # What can happen during these steps:
+  #   - lock is expired between the commands or before the first command;
+  #   - lock is expired before the second command;
+  #   - lock is expired AND newly acquired by another process (so you will extend a
+  #     totally new lock with fresh PTTL);
+  # Use it at your own risk and consider the async nature when calling this method.
+  #
   # @param lock_name [String]
   # @param milliseconds [Integer] How many milliseconds should be added.
-  # @return [
+  # @return [Hash<Symbol,Boolean|Symbol>]
+  #   - { ok: true, result: :ttl_extended }
+  #   - { ok: false, result: :async_expire_or_no_lock }
   #
   # @api public
   # @since 0.1.0
@@ -259,8 +272,7 @@ class RedisQueuedLocks::Client
 
     RedisQueuedLocks::Acquier::ExtendLockTTL.extend_lock_ttl(
       redis_client,
       lock_name,
-      milliseconds
-      config[:logger]
+      milliseconds
     )
   end
 
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: redis_queued_locks
 version: !ruby/object:Gem::Version
-  version: 0.0.
+  version: 0.0.39
 platform: ruby
 authors:
 - Rustam Ibragimov
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2024-03-
+date: 2024-03-31 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: redis-client
@@ -107,7 +107,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.3.7
 signing_key:
 specification_version: 4
 summary: Queued distributed locks based on Redis.