ruby_reactor 0.3.1 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -16,6 +16,8 @@ end
 
  This will give you access to the `test_reactor` helper method and all custom matchers.
 
+ > For reactors that use `with_lock`, `with_semaphore`, `with_rate_limit`, or `with_period`, see [Testing Coordination Primitives](#testing-coordination-primitives) — it covers the `be_skipped`, `be_locked`, `have_available_tokens`, `have_held_tokens`, `have_rate_limit_count`, and `be_period_marked` matchers, plus patterns for testing async snooze and escalation.
+
  ## Basic Usage
 
  ### The `test_reactor` Helper
@@ -543,6 +545,175 @@ end
 
  ---
 
+ ## Testing Coordination Primitives
+
+ Reactors that declare `with_lock`, `with_semaphore`, `with_rate_limit`, or `with_period` can be tested with both vanilla execution assertions and dedicated state matchers that read the live Redis state via the configured storage adapter.
+
+ ### Test environment requirements
+
+ The matchers ship with the standard test setup — once `RubyReactor::RSpec.configure(config)` runs in your `spec_helper.rb`, they're available. They require:
+
+ - A real Redis (the in-memory test mode does not back the primitives).
+ - A clean Redis between tests — typically `redis.flushdb` in a `before` block — so leftover lock owners, semaphore tokens, rate-limit counters and period markers don't leak across examples.
+
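Both requirements can be wired up in one place; a minimal `spec_helper.rb` sketch (the `TEST_REDIS_URL` fallback and the `config.before` hook are assumptions about your suite, not part of the gem):

```ruby
# spec/spec_helper.rb — sketch under the assumptions noted above.
require "ruby_reactor/rspec"
require "redis"

RSpec.configure do |config|
  # Registers the test_reactor helper and the custom matchers.
  RubyReactor::RSpec.configure(config)

  # Point at a real Redis; the in-memory test mode does not back the
  # coordination primitives. (The URL here is a hypothetical default.)
  redis = Redis.new(url: ENV.fetch("TEST_REDIS_URL", "redis://localhost:6379/15"))

  # Wipe lock owners, semaphore tokens, rate-limit counters and period
  # markers between examples so state never leaks across tests.
  config.before { redis.flushdb }
end
```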
+ ### The `Skipped` result
+
+ `RubyReactor::Skipped` is a `Success` subclass returned in two cases: a `with_period` bucket has already been claimed, or a step explicitly returns `RubyReactor.Skipped(reason: "...")` to halt cleanly (no compensation runs). Use `be_skipped` to distinguish it from a plain `Success`:
+
+ ```ruby
+ result = MonthlyReportReactor.run(org_id: 7)
+ expect(result).to be_skipped                   # any Skipped
+ expect(result).to be_skipped.because(:period)  # gate hit
+ expect(result).to be_skipped.at_step(:second)  # step return
+ ```
+
+ `Skipped` still satisfies `success?`, so legacy `if result.success?` callers continue to work; `result.skipped?` discriminates.
+
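Toy stand-ins (not the gem's real classes) make the subclass relationship concrete, and show why every `case` in this release checks `Skipped` before `Success`:

```ruby
# Toy classes mirroring the documented hierarchy: Skipped is a Success
# subclass, so a `case` must list Skipped first or the Success branch
# swallows it.
class Success; end

class Skipped < Success
  attr_reader :reason
  def initialize(reason)
    @reason = reason
  end
end

def classify(result)
  case result
  when Skipped then :skipped   # must precede the Success branch
  when Success then :success
  end
end

puts classify(Skipped.new(:period))  # => skipped
puts classify(Success.new)           # => success
```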
+ ### Asserting lock state
+
+ ```ruby
+ it "releases the lock after a successful run" do
+   RefundOrderReactor.run(order_id: 42)
+
+   expect("order:42").not_to be_locked
+ end
+
+ it "holds the lock for the duration of a long step" do
+   thread = Thread.new { LongRefundReactor.run(order_id: 42) }
+   # Give the executor a moment to acquire
+   sleep 0.05
+
+   expect("order:42").to be_locked
+
+   # `thread.value` joins the thread, so the run has finished (and the
+   # lock is released) by the time it returns.
+   thread.value
+   expect("order:42").not_to be_locked
+ end
+
+ it "raises when contention is hit inline" do
+   redis.hset("lock:order:42", "owner", "someone_else")
+   redis.hset("lock:order:42", "count", "1")
+
+   expect("order:42").to be_locked.by("someone_else")
+   expect { RefundOrderReactor.run(order_id: 42) }
+     .to raise_error(RubyReactor::Lock::AcquisitionError)
+ end
+ ```
+
+ The `be_locked` matcher takes the **user-provided lock key** (without the internal `lock:` prefix). Use the `.by(owner)` chain to assert ownership — typically the `context_id` of the top-level execution, or a value you seeded yourself as above.
+
+ ### Asserting semaphore state
+
+ ```ruby
+ it "returns the token to the pool on success" do
+   3.times { ApiCallReactor.run }
+
+   expect("api_limit").to have_available_tokens(5)
+   expect("api_limit").to have_held_tokens(0)
+ end
+
+ it "exhausts capacity when held externally" do
+   # Drain the same 5-token pool the reactor uses.
+   s = RubyReactor::Semaphore.new("api_limit", limit: 5)
+   5.times { s.acquire }
+
+   expect("api_limit").to have_available_tokens(0)
+   expect("api_limit").to have_held_tokens(5)
+
+   expect { ApiCallReactor.run }.to raise_error(RubyReactor::Semaphore::AcquisitionError)
+ end
+ ```
+
+ Both matchers take the **user-provided semaphore name** (without the `semaphore:` prefix).
+
+ ### Asserting rate-limit state
+
+ ```ruby
+ it "counts each call against the per-second window" do
+   3.times { ChargeReactor.run(account_id: 42) }
+
+   expect("stripe:42").to have_rate_limit_count(3).for(:second)
+ end
+
+ it "raises inline once the window is full" do
+   3.times { ChargeReactor.run(account_id: 42) }
+
+   expect { ChargeReactor.run(account_id: 42) }
+     .to raise_error(RubyReactor::RateLimit::ExceededError) do |e|
+       expect(e.period_name).to eq("second")
+       expect(e.retry_after_seconds).to eq(1)
+     end
+ end
+ ```
+
+ `have_rate_limit_count(n).for(period)` looks at the **current** bucket for the given `period` (use the same symbol or integer seconds you passed to `with_rate_limit`). For multi-window limits, assert each window separately:
+
+ ```ruby
+ expect("stripe:42").to have_rate_limit_count(3).for(:second)
+ expect("stripe:42").to have_rate_limit_count(3).for(:minute)
+ ```
+
+ ### Asserting period markers
+
+ ```ruby
+ it "marks the bucket after the first successful run" do
+   MonthlyReportReactor.run(org_id: 7)
+   expect("monthly_report:7").to be_period_marked.for(:month)
+ end
+
+ it "does not mark the bucket when the run fails" do
+   FailingMonthlyReactor.run(org_id: 7)
+   expect("monthly_report:7").not_to be_period_marked.for(:month)
+ end
+
+ it "skips a second call in the same bucket" do
+   MonthlyReportReactor.run(org_id: 7)
+   result = MonthlyReportReactor.run(org_id: 7)
+
+   expect(result).to be_skipped.because(:period)
+ end
+ ```
+
+ `be_period_marked.for(period)` checks the marker at the **current** bucket. To verify the marker's TTL behavior, drop to direct Redis: `redis.ttl(RubyReactor::Period.key("monthly_report:7", :month))`.
+
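For integer `every` values, the "current bucket" is plain integer division on epoch time (the `with_period` docs give `index = time.to_i / every` and the marker key shape `period:<base>:<bucket_id>`). A quick sketch, with `hourly_report:7` as a hypothetical key base:

```ruby
# Sliding-bucket id for an integer `every`, per the documented formula.
every      = 3600                          # one-hour sliding bucket
t          = Time.utc(2024, 1, 1, 12, 30)  # epoch 1_704_112_200
bucket_id  = t.to_i / every
marker_key = "period:hourly_report:7:#{bucket_id}"

puts bucket_id   # => 473364
puts marker_key  # => period:hourly_report:7:473364
```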
+ ### Testing async snooze behavior
+
+ The Sidekiq worker rescues `Lock::AcquisitionError`, `Semaphore::AcquisitionError`, and `RateLimit::ExceededError` and reschedules via `perform_in`. To test that wiring without spinning up Sidekiq:
+
+ ```ruby
+ it "reschedules with retry_after on rate limit hits" do
+   redis.set("rate:stripe:42:second:#{Time.now.to_i}", "999")
+
+   serialized = RubyReactor::ContextSerializer.serialize(
+     RubyReactor::Context.new({ account_id: 42 }, ChargeReactor)
+   )
+
+   expect(RubyReactor::SidekiqWorkers::Worker)
+     .to receive(:perform_in)
+     .with(a_value_between(1.0, 2.0), serialized, "ChargeReactor", 1)
+
+   RubyReactor::SidekiqWorkers::Worker.new.perform(serialized, "ChargeReactor")
+ end
+ ```
+
+ For lock/semaphore the same pattern works — the `perform_in` delay is `lock_snooze_base_delay + rand(0..lock_snooze_jitter)`. Pin those knobs to deterministic values (`jitter = 0`) in tests that assert on the exact delay.
+
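A pure-Ruby sketch of that delay formula shows why pinning the jitter works; the values here are illustrative (the base matches the library default, the jitter is pinned for the test):

```ruby
# Documented snooze delay: base + rand(0..jitter). With jitter pinned to
# zero, rand(0..0) always yields 0, so the delay is exactly the base and
# a spec can assert on it deterministically.
lock_snooze_base_delay = 5
lock_snooze_jitter     = 0

delay = lock_snooze_base_delay + rand(0..lock_snooze_jitter)
puts delay  # => 5
```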
+ ### Testing snooze escalation
+
+ When the snooze cap (`lock_snooze_max_attempts`) is reached, the worker stops rescheduling and marks the context as failed:
+
+ ```ruby
+ it "marks the context as failed after the snooze cap" do
+   # Global config: restore the default in an after hook so other
+   # examples keep their expected cap.
+   RubyReactor.configuration.lock_snooze_max_attempts = 3
+
+   redis.hset("lock:order:42", "owner", "other")
+   redis.hset("lock:order:42", "count", "1")
+
+   context = RubyReactor::Context.new({ order_id: 42 }, RefundOrderReactor)
+   serialized = RubyReactor::ContextSerializer.serialize(context)
+
+   expect(RubyReactor::SidekiqWorkers::Worker).not_to receive(:perform_in)
+
+   RubyReactor::SidekiqWorkers::Worker.new
+     .perform(serialized, "RefundOrderReactor", 3)
+ end
+ ```
+
  ## Complete Examples
 
  ### Testing a Payment Workflow
@@ -810,3 +981,14 @@ end
  | `have_retried_step(name)` | Assert step was retried |
  | `.times(count)` | Chain: assert retry count |
  | `have_validation_error(field)` | Assert input validation error on field |
+ | `be_skipped` | Assert result is a `RubyReactor::Skipped` (period gate or step return) |
+ | `.because(reason)` | Chain: assert the skip reason matches |
+ | `.at_step(name)` | Chain: assert the halting step (step-returned Skipped only) |
+ | `be_locked` | Assert an exclusive lock is currently held for the given key |
+ | `.by(owner)` | Chain: assert the lock owner (typically a `context_id`) |
+ | `have_available_tokens(n)` | Assert `n` semaphore tokens are still in the pool |
+ | `have_held_tokens(n)` | Assert `n` semaphore tokens are currently checked out |
+ | `have_rate_limit_count(n)` | Assert the current rate-limit bucket count |
+ | `.for(period)` | Chain (required): which window to check (`:second`, `:minute`, …, or integer seconds) |
+ | `be_period_marked` | Assert a `with_period` bucket has been marked |
+ | `.for(period)` | Chain (required): which bucket granularity to check |
@@ -7,7 +7,8 @@ module RubyReactor
    class Configuration
      include Singleton
 
-     attr_writer :sidekiq_queue, :sidekiq_retry_count, :logger, :async_router
+     attr_writer :sidekiq_queue, :sidekiq_retry_count, :logger, :async_router,
+                 :lock_snooze_base_delay, :lock_snooze_jitter, :lock_snooze_max_attempts
 
      def sidekiq_queue
        @sidekiq_queue ||= :default
@@ -17,6 +18,22 @@ module RubyReactor
        @sidekiq_retry_count ||= 3
      end
 
+     # Base seconds the Sidekiq worker waits before re-checking a contended lock.
+     def lock_snooze_base_delay
+       @lock_snooze_base_delay ||= 5
+     end
+
+     # Extra random seconds added to the base delay to avoid a thundering herd.
+     def lock_snooze_jitter
+       @lock_snooze_jitter ||= 5
+     end
+
+     # How many times a single job can snooze on lock contention before it is
+     # marked as failed. Set to :infinity to never escalate.
+     def lock_snooze_max_attempts
+       @lock_snooze_max_attempts ||= 20
+     end
+
      def logger
        @logger ||= Logger.new($stderr)
      end
@@ -0,0 +1,130 @@
+ # frozen_string_literal: true
+
+ module RubyReactor
+   module Dsl
+     module Lockable
+       def self.included(base)
+         base.extend(ClassMethods)
+       end
+
+       module ClassMethods
+         attr_reader :lock_config, :semaphore_config, :period_config, :rate_limit_config
+
+         # Propagate lock/semaphore/period/rate-limit config to subclasses;
+         # without this a subclass of a configured reactor would silently lose
+         # those settings.
+         def inherited(subclass)
+           super
+           subclass.instance_variable_set(:@lock_config, @lock_config) if @lock_config
+           subclass.instance_variable_set(:@semaphore_config, @semaphore_config) if @semaphore_config
+           subclass.instance_variable_set(:@period_config, @period_config) if @period_config
+           subclass.instance_variable_set(:@rate_limit_config, @rate_limit_config) if @rate_limit_config
+         end
+
+         # Configure locking for this reactor
+         # @param ttl [Integer] Time to live in seconds (default: 60)
+         # @param wait [Integer] Time to wait for lock in seconds (default: 0)
+         # @param auto_extend [Boolean] When true (default), a background thread
+         #   refreshes the lock TTL every ttl/3 seconds while the reactor runs,
+         #   protecting steps that may legitimately outlast `ttl`. Pass `false`
+         #   to disable and rely solely on `ttl` for expiry.
+         # @yield [inputs] Block that returns the lock key string
+         def with_lock(ttl: 60, wait: 0, auto_extend: true, &block)
+           @lock_config = {
+             ttl: ttl,
+             wait: wait,
+             auto_extend: auto_extend,
+             key_proc: block
+           }
+         end
+
+         # Configure semaphore for this reactor
+         # @param limit [Integer] Maximum concurrent executions
+         # @param wait [Integer] Time to wait for a token in seconds (default: 0)
+         # @yield [inputs] Block that returns the semaphore key string
+         def with_semaphore(limit:, wait: 0, &block)
+           @semaphore_config = {
+             limit: limit,
+             wait: wait,
+             key_proc: block
+           }
+         end
+
+         # Configure a calendar-aligned dedup window for this reactor. The
+         # reactor will run at most once per bucket per key; subsequent calls
+         # in the same bucket return `RubyReactor::Skipped` without executing
+         # any steps.
+         #
+         # Note: `with_period` is *dedup*, not *concurrency*. Two concurrent
+         # racers can both see no marker and both run. Pair with `with_lock`
+         # for true at-most-one semantics within the bucket.
+         #
+         # @param every [Symbol, Integer] :minute / :hour / :day / :week /
+         #   :month / :year, or an integer number of seconds for a sliding
+         #   bucket (index = `time.to_i / every`).
+         # @yield [inputs] Block that returns the period key base. The final
+         #   Redis marker key is `period:<base>:<bucket_id>`.
+         def with_period(every:, &block)
+           # Validate eagerly so misconfiguration surfaces at class load time.
+           RubyReactor::Period.period_seconds(every)
+
+           @period_config = {
+             every: every,
+             key_proc: block
+           }
+         end
+
+         # Configure rate limiting for this reactor (fixed-window counter).
+         # Pass either a single window via `limit:` + `period:`, or a hash of
+         # windows via `limits:` for layered API quotas.
+         #
+         # @example Single window
+         #   with_rate_limit(limit: 3, period: :second) { |i| "stripe:#{i[:account_id]}" }
+         #
+         # @example Multi-window (3/sec AND 100/min AND 5000/hr)
+         #   with_rate_limit(
+         #     limits: { second: 3, minute: 100, hour: 5000 }
+         #   ) { |i| "stripe:#{i[:account_id]}" }
+         #
+         # @param limit [Integer] requests per period (single-window form)
+         # @param period [Symbol, Integer] :second / :minute / :hour / :day /
+         #   :week / :month / :year, or integer seconds (single-window form)
+         # @param limits [Hash{Symbol,Integer => Integer}] mapping of period
+         #   unit to limit (multi-window form)
+         # @yield [inputs] Block returning the rate-limit key base.
+         def with_rate_limit(limit: nil, period: nil, limits: nil, &block)
+           normalized = normalize_rate_limit_args(limit, period, limits)
+
+           @rate_limit_config = {
+             limits: normalized,
+             key_proc: block
+           }
+         end
+
+         private
+
+         def normalize_rate_limit_args(limit, period, limits)
+           if limits
+             raise ArgumentError, "with_rate_limit: use either :limits, or :limit + :period, not both" if limit || period
+
+             limits.map do |period_key, limit_val|
+               {
+                 period_seconds: RubyReactor::Period.period_seconds(period_key),
+                 limit: Integer(limit_val),
+                 name: period_key.to_s
+               }
+             end
+           elsif limit && period
+             [{
+               period_seconds: RubyReactor::Period.period_seconds(period),
+               limit: Integer(limit),
+               name: period.to_s
+             }]
+           else
+             raise ArgumentError, "with_rate_limit requires :limit + :period, or :limits"
+           end
+         end
+       end
+     end
+   end
+ end
14
14
 
15
15
  def handle_step_result(step_config, result, resolved_arguments)
16
16
  case result
17
+ when RubyReactor::Skipped
18
+ # Important: must come before the Success branch — Skipped < Success.
19
+ handle_skipped(step_config, result)
17
20
  when RubyReactor::Success
18
21
  handle_success(step_config, result, resolved_arguments)
19
22
  when RubyReactor::MaxRetriesExhaustedFailure
@@ -53,6 +56,22 @@ module RubyReactor
53
56
 
54
57
  private
55
58
 
59
+ # A step returned `RubyReactor.Skipped(...)`. Halt cleanly: record the
60
+ # event in the trace, do NOT push to the undo stack (so existing
61
+ # completed steps stay as-is — no compensation), and stamp the step
62
+ # name on the result so the caller can see who halted.
63
+ def handle_skipped(step_config, result)
64
+ @step_results[step_config.name] = result
65
+ result.instance_variable_set(:@step_name, step_config.name) if result.step_name.nil?
66
+ @context.execution_trace << {
67
+ type: :skipped,
68
+ step: step_config.name,
69
+ timestamp: Time.now,
70
+ reason: result.reason
71
+ }
72
+ result
73
+ end
74
+
56
75
  def handle_success(step_config, result, resolved_arguments)
57
76
  validate_step_output(step_config, result.value, resolved_arguments)
58
77
  @step_results[step_config.name] = result
@@ -33,6 +33,11 @@ module RubyReactor
    # If a step returns RetryQueuedResult, we need to stop and return it
    return result if result.is_a?(RetryQueuedResult)
 
+   # If a step returns Skipped, halt the reactor cleanly (no
+   # compensation). Must be checked BEFORE Failure / Success because
+   # Skipped is a Success subclass.
+   return result if result.is_a?(RubyReactor::Skipped)
+
    # If a step returns Failure, we need to stop execution and return it
    return result if result.is_a?(RubyReactor::Failure)
@@ -34,9 +34,22 @@ module RubyReactor
        }
      )
      @result = nil
+     @acquired_lock = nil
+     @acquired_semaphore = nil
    end
 
    def execute
+     skipped = check_period_gate
+     if skipped
+       @result = skipped
+       update_context_status(@result)
+       save_context
+       return @result
+     end
+
+     check_rate_limit
+     acquire_locks
+
      input_validator = InputValidator.new(@reactor_class, @context)
      input_validator.validate!
 
@@ -49,18 +62,25 @@ module RubyReactor
 
      @result = @step_executor.execute_all_steps
      update_context_status(@result)
+     mark_period_on_success(@result)
      handle_interrupt(@result) if @result.is_a?(RubyReactor::InterruptResult)
      @result
+   rescue RubyReactor::Lock::AcquisitionError,
+          RubyReactor::Semaphore::AcquisitionError,
+          RubyReactor::RateLimit::ExceededError => e
+     raise e
    rescue StandardError => e
      @result = @result_handler.handle_execution_error(e)
      update_context_status(@result)
      @result
    ensure
+     release_locks
      save_context
    end
 
    def resume_execution
      @context.status = :running
+     acquire_locks
      prepare_for_resume
      save_context
 
@@ -71,15 +91,21 @@ module RubyReactor
      end
 
      update_context_status(@result)
+     mark_period_on_success(@result)
 
      handle_interrupt(@result) if @result.is_a?(RubyReactor::InterruptResult)
 
      @result
+   rescue RubyReactor::Lock::AcquisitionError,
+          RubyReactor::Semaphore::AcquisitionError,
+          RubyReactor::RateLimit::ExceededError => e
+     raise e
    rescue StandardError => e
      handle_resume_error(e)
      update_context_status(@result)
      @result
    ensure
+     release_locks
      save_context
    end
 
@@ -110,12 +136,122 @@ module RubyReactor
 
    private
 
+   def acquire_locks
+     acquire_exclusive_lock if @reactor_class.respond_to?(:lock_config) && @reactor_class.lock_config
+     acquire_semaphore if @reactor_class.respond_to?(:semaphore_config) && @reactor_class.semaphore_config
+   end
+
+   # Consume one slot from each configured rate-limit window. Raises
+   # `RubyReactor::RateLimit::ExceededError` (carrying a `retry_after_seconds`
+   # hint) if any window is full. Only consulted on initial `execute`; resumes
+   # never re-check (a paused reactor must not block itself on resume).
+   def check_rate_limit
+     return unless @reactor_class.respond_to?(:rate_limit_config) && @reactor_class.rate_limit_config
+
+     config = @reactor_class.rate_limit_config
+     key_base = config[:key_proc].call(@context.inputs)
+
+     RubyReactor::RateLimit.new(key_base, limits: config[:limits]).check_and_increment!
+   end
+
+   # Returns a Skipped result if the period bucket is already marked, else nil.
+   # Only consulted on initial `execute`; resumes never re-check (a paused run
+   # must not skip itself when its own marker eventually appears).
+   def check_period_gate
+     return nil unless @reactor_class.respond_to?(:period_config) && @reactor_class.period_config
+
+     config = @reactor_class.period_config
+     key = period_key(config)
+     return nil unless RubyReactor.configuration.storage_adapter.period_seen?(key)
+
+     RubyReactor::Skipped.new(reason: :period, period_key: key)
+   end
+
+   def mark_period_on_success(result)
+     return unless @reactor_class.respond_to?(:period_config) && @reactor_class.period_config
+     return unless result.is_a?(RubyReactor::Success)
+     return if result.is_a?(RubyReactor::Skipped)
+
+     config = @reactor_class.period_config
+     ttl = RubyReactor::Period.ttl_seconds(config[:every])
+     RubyReactor.configuration.storage_adapter.period_mark(period_key(config), ttl)
+   end
+
+   def period_key(config)
+     base = config[:key_proc].call(@context.inputs)
+     RubyReactor::Period.key(base, config[:every])
+   end
+
+   def acquire_exclusive_lock
+     config = @reactor_class.lock_config
+     key = config[:key_proc].call(@context.inputs)
+
+     # Use root context ID as owner to allow re-entrancy across nested reactors
+     owner = (@context.root_context || @context).context_id
+
+     lock = RubyReactor::Lock.new(
+       key,
+       owner: owner,
+       ttl: config[:ttl],
+       wait: contention_wait(config[:wait]),
+       auto_extend: config.fetch(:auto_extend, true)
+     )
+     lock.acquire
+     @acquired_lock = lock
+   end
+
+   def acquire_semaphore
+     config = @reactor_class.semaphore_config
+     key = config[:key_proc].call(@context.inputs)
+     limit = config[:limit]
+
+     semaphore = RubyReactor::Semaphore.new(key, limit: limit, wait: contention_wait(config[:wait]))
+     semaphore.acquire
+     @acquired_semaphore = semaphore
+   end
+
+   # Inside a Sidekiq worker we'd rather snooze the job via perform_in than
+   # tie up the worker thread on a BLPOP / sleep loop. The non-blocking path
+   # fails fast and the Worker rescue branch reschedules.
+   def contention_wait(configured_wait)
+     return 0 if @context.inline_async_execution
+
+     configured_wait
+   end
+
+   def release_locks
+     release_one("semaphore", @acquired_semaphore) if @acquired_semaphore
+     @acquired_semaphore = nil
+
+     return unless @acquired_lock
+
+     release_one("lock", @acquired_lock)
+     @acquired_lock = nil
+   end
+
+   def release_one(kind, primitive)
+     released = primitive.release
+     return if released
+
+     RubyReactor.configuration.logger.warn(
+       "RubyReactor #{kind} '#{primitive.key}' was not held at release time " \
+       "(likely TTL expired or owner changed)"
+     )
+   rescue StandardError => e
+     # Never let release break the ensure chain — log and move on.
+     RubyReactor.configuration.logger.warn(
+       "RubyReactor failed to release #{kind} '#{primitive.key}': #{e.message}"
+     )
+   end
+
    def update_context_status(result)
      return unless result
 
      case result
      when RubyReactor::AsyncResult
        @context.status = :running
+     when RubyReactor::Skipped
+       @context.status = :skipped
      when RubyReactor::Success
        @context.status = :completed
      when RubyReactor::Failure
@@ -148,8 +284,15 @@ module RubyReactor
      @result = @step_executor.execute_all_steps
    else
      case result
-     when RetryQueuedResult, RubyReactor::Failure, RubyReactor::AsyncResult, RubyReactor::InterruptResult
-       # Step was requeued, failed, or handed off to async - return the result
+     # Skipped must be listed before Success (Skipped < Success) so the
+     # halt path wins over the "continue with remaining steps" path.
+     when RubyReactor::Skipped,
+          RetryQueuedResult,
+          RubyReactor::Failure,
+          RubyReactor::AsyncResult,
+          RubyReactor::InterruptResult
+       # Terminal: step was skipped, requeued, failed, paused, or handed
+       # off to async. Return the result as-is.
        @result = result
      when RubyReactor::Success
        # Step succeeded, continue with remaining steps