journaled 6.2.1 → 6.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e2e688048716e5c43e64a60599615e6df3020065155b4acbffdbd3ac7341a460
- data.tar.gz: 0c18dd4b468168d6d4d96c52e65f29ca7d480c0549cb41e668fd73be6149bda0
+ metadata.gz: 95198670d7f94155a1de312881e9f74d1ad0f80fe10b65080cf6db8f27332ab4
+ data.tar.gz: 43c8dc50365e0a68aed87cb4cb69a2632f23113e4c9f3e1cc71ec239659713df
  SHA512:
- metadata.gz: 166663bbc460b1ab3bc88924bf5a28f7444ab2bdcde42ad1aee46cc9c5fa532a4623203c58588e5dd96ec7d9064c8cf511193b465e09943a666e754b23f4ba7c
- data.tar.gz: 8ab6e8278746681bb4216a5afb6f6fee8e6337ad6399247ea702b27741ad838810bebcdc1ac17ab7e8470e56de73f9fe0a572ba5ea2004908a2f290762e71c18
+ metadata.gz: 5a06d677d4e1aaa042f2c57d4d6a3fd6d9e72c657d737b7dacff976caac0a08e6a48101a968dbb9c4a23f2f523a10cdf66383ce8b637193cbce2c0fd404fe0a2
+ data.tar.gz: fb7a06748b42787878ad50d126be1bc8607ae0e3140235cd4fc4505cfcb2e239cc612b97de323c7dec050ca25bc2fa365323dd4a07230642e59e75ee52012bd8
data/README.md CHANGED
@@ -164,6 +164,25 @@ Journaling provides a number of different configuation options that can be set i
  Journaled.outbox_base_class_name = 'EventsRecord'
  ```
 
+ #### `Journaled.outbox_processing_mode` (default: `:batch`)
+
+ **Only relevant when using `Journaled::Outbox::Adapter`.**
+
+ Controls how events are sent to Kinesis. Two modes are available:
+
+ - **`:batch`** (default) - Uses the Kinesis `put_records` batch API for high throughput. Events are sent in parallel batches, allowing multiple workers to run concurrently. Best for most use cases where strict ordering is not required.
+
+ - **`:guaranteed_order`** - Uses the Kinesis `put_record` single-event API to send events sequentially. Events are processed one at a time in order, stopping on the first transient failure to preserve ordering. Use this when you need strict ordering guarantees per partition key. Note: The current implementation requires single-threaded processing, but future optimizations may support batching and multi-threading by partition key.
+
+ Example:
+ ```ruby
+ # For high throughput (default)
+ Journaled.outbox_processing_mode = :batch
+
+ # For guaranteed ordering
+ Journaled.outbox_processing_mode = :guaranteed_order
+ ```
+
  #### ActiveJob `set` options
 
  Both model-level directives accept additional options to be passed into ActiveJob's `set` method:
@@ -182,6 +201,8 @@ journal_attributes :email, enqueue_with: { priority: 20, queue: 'journaled' }
 
  Journaled includes a built-in Outbox-style delivery adapter with horizontally scalable workers.
 
+ By default, the Outbox adapter uses the Kinesis `put_records` batch API for high-throughput event processing, allowing multiple workers to process events in parallel. If you require strict ordering guarantees per partition key, you can configure sequential processing mode (see configuration options below).
+
  **Setup:**
 
  This feature requires creating database tables and is completely optional. Existing users are unaffected.
@@ -207,6 +228,16 @@ Journaled.delivery_adapter = Journaled::Outbox::Adapter
  # Optional: Customize worker behavior (these are the defaults)
  Journaled.worker_batch_size = 500 # Max events per Kinesis batch (Kinesis API limit)
  Journaled.worker_poll_interval = 5 # Seconds between polls
+
+ # Optional: Configure processing mode (default: :batch)
+ # - :batch - Uses Kinesis put_records batch API for high throughput (default)
+ #            Events are sent in parallel batches. Multiple workers can run concurrently.
+ # - :guaranteed_order - Uses Kinesis put_record single-event API for sequential processing
+ #                       Events are sent one at a time in order. Use this if you need
+ #                       strict ordering guarantees per partition key. The current
+ #                       implementation processes events single-threaded, though future
+ #                       optimizations may support batching/multi-threading by partition key.
+ Journaled.outbox_processing_mode = :batch
  ```
 
  **Note:** When using the Outbox adapter, you do **not** need to configure an ActiveJob queue adapter (skip step 1 of Installation). The Outbox adapter uses the `journaled_outbox_events` table for event storage and its own worker daemons for processing, making it independent of ActiveJob. Transactional batching still works seamlessly with the Outbox adapter.
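The `worker_batch_size` default of 500 lines up with the per-call record limit of the Kinesis `put_records` API noted in the comment above. As a quick illustration (the helper below is ours, not part of the gem), the number of API calls needed to drain a backlog is just a ceiling division:

```ruby
# Hypothetical helper (not gem code): how many put_records calls a backlog
# needs, given Kinesis's 500-records-per-call limit.
KINESIS_PUT_RECORDS_LIMIT = 500

def api_calls_needed(event_count)
  (event_count / KINESIS_PUT_RECORDS_LIMIT.to_f).ceil
end

api_calls_needed(500)  # => 1
api_calls_needed(1200) # => 3
```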
@@ -217,6 +248,8 @@ Journaled.worker_poll_interval = 5 # Seconds between polls
  bundle exec rake journaled_worker:work
  ```
 
+ **Note:** In `:batch` mode (the default), you can run multiple worker processes concurrently for horizontal scaling. In `:guaranteed_order` mode, the current implementation is optimized for running a single worker to maintain ordering guarantees.
+
  4. **Monitoring:**
 
  The system emits `ActiveSupport::Notifications` events:
@@ -29,12 +29,20 @@ module Journaled
 
      # Fetch a batch of events for processing using SELECT FOR UPDATE
      #
+     # In :guaranteed_order mode, uses blocking lock to ensure sequential processing.
+     # In :batch mode, uses SKIP LOCKED to allow parallel workers.
+     #
      # @return [Array<Journaled::Outbox::Event>] Events locked for processing
      def self.fetch_batch_for_update
-       ready_to_process
-         .limit(Journaled.worker_batch_size)
-         .lock
-         .to_a
+       query = ready_to_process.limit(Journaled.worker_batch_size)
+
+       lock_clause = if Journaled.outbox_processing_mode == :guaranteed_order
+                       'FOR UPDATE'
+                     else
+                       'FOR UPDATE SKIP LOCKED'
+                     end
+
+       query.lock(lock_clause).to_a
      end
 
      # Requeue a failed event for processing
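The branch in `fetch_batch_for_update` reduces to a pure function over the configured mode. Extracted as a standalone sketch (the function name is ours), with the locking semantics spelled out:

```ruby
# Illustrative extraction of the lock-clause choice above (not gem code).
# FOR UPDATE SKIP LOCKED lets concurrent workers claim disjoint rows;
# plain FOR UPDATE makes a second worker block behind the first, which is
# what preserves strict ordering in :guaranteed_order mode.
def lock_clause(mode)
  mode == :guaranteed_order ? 'FOR UPDATE' : 'FOR UPDATE SKIP LOCKED'
end

lock_clause(:batch)            # => "FOR UPDATE SKIP LOCKED"
lock_clause(:guaranteed_order) # => "FOR UPDATE"
```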
@@ -1,98 +1,108 @@
  # frozen_string_literal: true
 
  module Journaled
-   # Sends batches of events to Kinesis using the PutRecord single-event API
+   # Sends batches of events to Kinesis using the PutRecords batch API
    #
    # This class handles:
-   # - Sending events individually to support guaranteed ordering
+   # - Sending events in batches to improve throughput
    # - Handling failures on a per-event basis
    # - Classifying errors as transient vs permanent
    #
    # Returns structured results for the caller to handle event state management.
    class KinesisBatchSender
-     FailedEvent = Struct.new(:event, :error_code, :error_message, :transient, keyword_init: true) do
-       def transient?
-         transient
-       end
-
-       def permanent?
-         !transient
-       end
-     end
-
-     PERMANENT_ERROR_CLASSES = [
-       Aws::Kinesis::Errors::ValidationException,
+     # Per-record error codes that indicate permanent failures (bad event data)
+     PERMANENT_ERROR_CODES = [
+       'ValidationException',
      ].freeze
 
      # Send a batch of database events to Kinesis
      #
-     # Sends events one at a time to guarantee ordering. Stops on first transient failure.
+     # Uses put_records batch API. Groups events by stream and sends each group as a batch.
      #
      # @param events [Array<Journaled::Outbox::Event>] Events to send
      # @return [Hash] Result with:
      #   - succeeded: Array of successfully sent events
-     #   - failed: Array of FailedEvent structs (only permanent failures)
+     #   - failed: Array of FailedEvent structs (both transient and permanent failures)
      def send_batch(events)
-       result = { succeeded: [], failed: [] }
+       # Group events by stream since put_records requires all records to go to the same stream
+       events.group_by(&:stream_name).each_with_object({ succeeded: [], failed: [] }) do |(stream_name, stream_events), result|
+         batch_result = send_stream_batch(stream_name, stream_events)
+         result[:succeeded].concat(batch_result[:succeeded])
+         result[:failed].concat(batch_result[:failed])
+       end
+     end
 
-       events.each do |event|
-         event_result = send_event(event)
-         if event_result.is_a?(FailedEvent)
-           if event_result.transient?
-             emit_transient_failure_metric
-             break
-           else
-             result[:failed] << event_result
-           end
-         else
-           result[:succeeded] << event_result
-         end
+     private
+
+     def send_stream_batch(stream_name, stream_events)
+       records = build_records(stream_events)
+
+       begin
+         response = kinesis_client.put_records(stream_name:, records:)
+         process_response(response, stream_events)
+       rescue Aws::Kinesis::Errors::ValidationException
+         # Re-raise batch-level validation errors (configuration issues)
+         # These indicate invalid stream name, batch too large, etc.
+         # Not event data problems - requires manual intervention
+         raise
+       rescue StandardError => e
+         # Handle transient errors (throttling, network issues, service unavailable)
+         handle_transient_batch_error(e, stream_events)
        end
+     end
 
-       result
+     def build_records(stream_events)
+       stream_events.map do |event|
+         {
+           data: event.event_data.merge(id: event.id).to_json,
+           partition_key: event.partition_key,
+         }
+       end
      end
 
-     private
+     def process_response(response, stream_events)
+       succeeded = []
+       failed = []
 
-     # Send a single event to Kinesis
-     #
-     # @param event [Journaled::Outbox::Event] Event to send
-     # @return [Journaled::Outbox::Event, FailedEvent] The event on success, or FailedEvent on failure
-     def send_event(event)
-       # Merge the DB-generated ID into the event data before sending to Kinesis
-       event_data_with_id = event.event_data.merge(id: event.id)
+       response.records.each_with_index do |record_result, index|
+         event = stream_events[index]
 
-       kinesis_client.put_record(
-         stream_name: event.stream_name,
-         data: event_data_with_id.to_json,
-         partition_key: event.partition_key,
-       )
+         if record_result.error_code
+           failed << create_failed_event(event, record_result)
+         else
+           succeeded << event
+         end
+       end
 
-       event
-     rescue *PERMANENT_ERROR_CLASSES => e
-       Rails.logger.error("Kinesis event send failed (permanent): #{e.class} - #{e.message}")
-       FailedEvent.new(
-         event:,
-         error_code: e.class.to_s,
-         error_message: e.message,
-         transient: false,
-       )
-     rescue StandardError => e
-       Rails.logger.error("Kinesis event send failed (transient): #{e.class} - #{e.message}")
-       FailedEvent.new(
+       { succeeded:, failed: }
+     end
+
+     def create_failed_event(event, record_result)
+       Journaled::KinesisFailedEvent.new(
          event:,
-         error_code: e.class.to_s,
-         error_message: e.message,
-         transient: true,
+         error_code: record_result.error_code,
+         error_message: record_result.error_message,
+         transient: PERMANENT_ERROR_CODES.exclude?(record_result.error_code),
        )
      end
 
-     def kinesis_client
-       @kinesis_client ||= KinesisClientFactory.build
+     def handle_transient_batch_error(error, stream_events)
+       Rails.logger.error("Kinesis batch send failed (transient): #{error.class} - #{error.message}")
+
+       failed = stream_events.map do |event|
+         Journaled::KinesisFailedEvent.new(
+           event:,
+           error_code: error.class.to_s,
+           error_message: error.message,
+           transient: true,
+         )
+       end
+
+       { succeeded: [], failed: }
      end
 
-     def emit_transient_failure_metric
-       ActiveSupport::Notifications.instrument('journaled.kinesis_batch_sender.transient_failure')
+     def kinesis_client
+       @kinesis_client ||= KinesisClientFactory.build
      end
    end
  end
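A `put_records` call can partially fail: the request succeeds overall while individual records carry an `error_code`. A simplified stand-in for the per-record handling above — the `Record` struct mimics the shape of a response entry, whereas the gem's real code operates on `Aws::Kinesis` response objects:

```ruby
# Simplified stand-in for per-record result handling (not gem code).
# error_code is nil on success, a code string on failure.
Record = Struct.new(:error_code, :error_message, keyword_init: true)

PERMANENT_ERROR_CODES = ['ValidationException'].freeze

def classify(records)
  succeeded, failed = records.partition { |r| r.error_code.nil? }
  permanent, transient = failed.partition { |r| PERMANENT_ERROR_CODES.include?(r.error_code) }
  { succeeded: succeeded.size, permanent: permanent.size, transient: transient.size }
end

records = [
  Record.new,                                                       # delivered
  Record.new(error_code: 'ProvisionedThroughputExceededException'), # retry later
  Record.new(error_code: 'ValidationException'),                    # mark failed
]
classify(records) # one success, one transient failure, one permanent failure
```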
@@ -0,0 +1,18 @@
+ # frozen_string_literal: true
+
+ module Journaled
+   # Represents a failed event from Kinesis send operations
+   #
+   # Used by both KinesisBatchSender and KinesisSequentialSender to represent
+   # events that failed to send to Kinesis, along with error details and whether
+   # the failure is transient (retriable) or permanent.
+   KinesisFailedEvent = Struct.new(:event, :error_code, :error_message, :transient, keyword_init: true) do
+     def transient?
+       transient
+     end
+
+     def permanent?
+       !transient
+     end
+   end
+ end
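Because `KinesisFailedEvent` is a plain `Struct`, its predicates can be exercised standalone. The definition below is reproduced from the new file; the `nil` event and the error strings are placeholders for illustration:

```ruby
# Struct reproduced from the new kinesis_failed_event.rb for demonstration.
KinesisFailedEvent = Struct.new(:event, :error_code, :error_message, :transient, keyword_init: true) do
  def transient?
    transient
  end

  def permanent?
    !transient
  end
end

failure = KinesisFailedEvent.new(event: nil, error_code: 'InternalFailure',
                                 error_message: 'try again', transient: true)
failure.transient? # => true
failure.permanent? # => false
```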
@@ -0,0 +1,85 @@
+ # frozen_string_literal: true
+
+ module Journaled
+   # Sends batches of events to Kinesis using the PutRecord single-event API
+   #
+   # This class handles:
+   # - Sending events individually in order to support guaranteed ordering
+   # - Stopping on first transient failure to preserve ordering
+   # - Classifying errors as transient vs permanent
+   #
+   # Returns structured results for the caller to handle event state management.
+   class KinesisSequentialSender
+     PERMANENT_ERROR_CLASSES = [
+       Aws::Kinesis::Errors::ValidationException,
+     ].freeze
+
+     # Send a batch of database events to Kinesis
+     #
+     # Sends events one at a time to guarantee ordering. Stops on first transient failure.
+     #
+     # @param events [Array<Journaled::Outbox::Event>] Events to send
+     # @return [Hash] Result with:
+     #   - succeeded: Array of successfully sent events
+     #   - failed: Array of FailedEvent structs (only permanent failures)
+     def send_batch(events)
+       result = { succeeded: [], failed: [] }
+
+       events.each do |event|
+         event_result = send_event(event)
+         if event_result.is_a?(Journaled::KinesisFailedEvent)
+           if event_result.transient?
+             emit_transient_failure_metric
+             break
+           else
+             result[:failed] << event_result
+           end
+         else
+           result[:succeeded] << event_result
+         end
+       end
+
+       result
+     end
+
+     private
+
+     # Send a single event to Kinesis
+     #
+     # @param event [Journaled::Outbox::Event] Event to send
+     # @return [Journaled::Outbox::Event, FailedEvent] The event on success, or FailedEvent on failure
+     def send_event(event)
+       kinesis_client.put_record(
+         stream_name: event.stream_name,
+         data: event.event_data.merge(id: event.id).to_json,
+         partition_key: event.partition_key,
+       )
+
+       event
+     rescue *PERMANENT_ERROR_CLASSES => e
+       Rails.logger.error("[Journaled] Kinesis event send failed (permanent): #{e.class} - #{e.message}")
+       Journaled::KinesisFailedEvent.new(
+         event:,
+         error_code: e.class.to_s,
+         error_message: e.message,
+         transient: false,
+       )
+     rescue StandardError => e
+       Rails.logger.error("[Journaled] Kinesis event send failed (transient): #{e.class} - #{e.message}")
+       Journaled::KinesisFailedEvent.new(
+         event:,
+         error_code: e.class.to_s,
+         error_message: e.message,
+         transient: true,
+       )
+     end
+
+     def kinesis_client
+       @kinesis_client ||= KinesisClientFactory.build
+     end
+
+     def emit_transient_failure_metric
+       ActiveSupport::Notifications.instrument('journaled.kinesis_sequential_sender.transient_failure')
+     end
+   end
+ end
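The ordering contract of the sequential sender is easiest to see with the Kinesis call stubbed out: processing stops at the first transient failure, so later events are never sent out of order. A sketch — the event hashes and outcomes are invented for illustration:

```ruby
# Sketch of the stop-on-first-transient-failure loop (not gem code).
def send_sequentially(events)
  result = { succeeded: [], failed: [], remaining: [] }
  events.each_with_index do |event, i|
    case event[:outcome]
    when :ok        then result[:succeeded] << event[:name]
    when :permanent then result[:failed] << event[:name] # marked failed, keep going
    when :transient                                      # stop here to preserve order
      result[:remaining] = events[i..].map { |e| e[:name] }
      break
    end
  end
  result
end

events = [
  { name: 'a', outcome: :ok },
  { name: 'b', outcome: :transient },
  { name: 'c', outcome: :ok },
]
send_sequentially(events)
# 'a' is delivered; 'b' and 'c' remain for the next poll, still in order
```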
@@ -6,31 +6,36 @@ module Journaled
    #
    # This class handles the core business logic of:
    # - Fetching events from the database (with FOR UPDATE)
-   # - Sending them to Kinesis one at a time to guarantee ordering
+   # - Sending them to Kinesis (batch API or sequential)
    # - Handling successful deliveries (deleting events)
    # - Handling permanent failures (marking with failed_at)
-   # - Handling ephemeral failures (stopping processing and committing)
+   # - Handling transient failures (leaving unlocked for retry)
    #
-   # Events are processed one at a time to guarantee ordering. If an event fails
-   # with an ephemeral error, processing stops and the transaction commits
-   # (deleting successes and marking permanent failures), then the loop re-enters.
+   # Supports two modes based on Journaled.outbox_processing_mode:
+   # - :batch - Uses put_records API for high throughput with parallel workers
+   # - :guaranteed_order - Uses put_record API for sequential processing
    #
    # All operations happen within a single database transaction for consistency.
    # The Worker class delegates to this for actual event processing.
    class BatchProcessor
      def initialize
-       @batch_sender = KinesisBatchSender.new
+       @batch_sender = if Journaled.outbox_processing_mode == :guaranteed_order
+                         KinesisSequentialSender.new
+                       else
+                         KinesisBatchSender.new
+                       end
      end
 
      # Process a single batch of events
      #
      # Wraps the entire batch processing in a single transaction:
      # 1. SELECT FOR UPDATE (claim events)
-     # 2. Send to Kinesis (batch sender handles one-at-a-time and short-circuiting)
+     # 2. Send to Kinesis (batch API or sequential, based on mode)
      # 3. Delete successful events
-     # 4. Mark failed events (batch sender only returns permanent failures)
+     # 4. Mark permanently failed events
+     # 5. Leave transient failures untouched (will be retried)
      #
-     # @return [Hash] Statistics with :succeeded, :failed_permanently counts
+     # @return [Hash] Statistics with :succeeded, :failed_permanently, :failed_transiently counts
      def process_batch
        ActiveRecord::Base.transaction do
          events = Event.fetch_batch_for_update
@@ -38,20 +43,24 @@ module Journaled
 
          result = batch_sender.send_batch(events)
 
-         # Delete successful events
          Event.where(id: result[:succeeded].map(&:id)).delete_all if result[:succeeded].any?
 
-         # Mark failed events
-         mark_events_as_failed(result[:failed]) if result[:failed].any?
+         permanent_failures = result[:failed].select(&:permanent?)
+         transient_failures = result[:failed].select(&:transient?)
+
+         mark_events_as_failed(permanent_failures) if permanent_failures.any?
 
          Rails.logger.info(
            "[journaled] Batch complete: #{result[:succeeded].count} succeeded, " \
-           "#{result[:failed].count} marked as failed (batch size: #{events.count})",
+           "#{permanent_failures.count} permanently failed, " \
+           "#{transient_failures.count} transiently failed (will retry) " \
+           "(batch size: #{events.count})",
          )
 
          {
            succeeded: result[:succeeded].count,
-           failed_permanently: result[:failed].count,
+           failed_permanently: permanent_failures.count,
+           failed_transiently: transient_failures.count,
          }
        end
      end
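The processor's new stats shape can be derived with a stand-in failure struct that answers `permanent?`/`transient?`, mirroring the two `select` calls above (names and sample data are ours):

```ruby
# Stand-in for the failure split in process_batch (not gem code).
Failure = Struct.new(:transient, keyword_init: true) do
  def transient? = transient
  def permanent? = !transient
end

def stats_for(succeeded:, failed:)
  permanent_failures = failed.select(&:permanent?)
  transient_failures = failed.select(&:transient?)
  {
    succeeded: succeeded.size,
    failed_permanently: permanent_failures.size,
    failed_transiently: transient_failures.size,
  }
end

failed = [Failure.new(transient: true), Failure.new(transient: false), Failure.new(transient: true)]
stats_for(succeeded: %i[a b], failed: failed)
# two succeeded, one permanent failure, two transient failures
```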
@@ -12,13 +12,14 @@ module Journaled
 
      # Emit batch processing metrics
      #
-     # @param stats [Hash] Processing statistics with :succeeded, :failed_permanently
+     # @param stats [Hash] Processing statistics with :succeeded, :failed_permanently, :failed_transiently
      def emit_batch_metrics(stats)
-       total_events = stats[:succeeded] + stats[:failed_permanently]
+       total_events = stats[:succeeded] + stats[:failed_permanently] + stats[:failed_transiently]
 
        emit_metric('journaled.worker.batch_process', value: total_events)
        emit_metric('journaled.worker.batch_sent', value: stats[:succeeded])
-       emit_metric('journaled.worker.batch_failed', value: stats[:failed_permanently])
+       emit_metric('journaled.worker.batch_failed_permanently', value: stats[:failed_permanently])
+       emit_metric('journaled.worker.batch_failed_transiently', value: stats[:failed_transiently])
      end
 
      # Collect and emit queue metrics
@@ -60,9 +60,8 @@ module Journaled
          break
        end
 
-       events_processed = 0
        begin
-         events_processed = process_batch
+         process_batch
          emit_metrics_if_needed
        rescue StandardError => e
          Rails.logger.error("Worker error: #{e.class} - #{e.message}")
@@ -71,20 +70,13 @@ module Journaled
 
        break if shutdown_requested
 
-       # Only sleep if no events were processed to prevent excessive polling on empty table
-       sleep(Journaled.worker_poll_interval) if events_processed.zero?
+       sleep(Journaled.worker_poll_interval)
      end
    end
 
    def process_batch
      stats = processor.process_batch
 
-     instrument_batch_results(stats)
-
-     stats[:succeeded] + stats[:failed_permanently]
-   end
-
-   def instrument_batch_results(stats)
      metric_emitter.emit_batch_metrics(stats)
    end
 
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Journaled
-   VERSION = "6.2.1"
+   VERSION = "6.2.2"
  end
data/lib/journaled.rb CHANGED
@@ -12,7 +12,9 @@ require 'journaled/delivery_adapter'
  require 'journaled/delivery_adapters/active_job_adapter'
  require 'journaled/outbox/adapter'
  require 'journaled/kinesis_client_factory'
+ require 'journaled/kinesis_failed_event'
  require 'journaled/kinesis_batch_sender'
+ require 'journaled/kinesis_sequential_sender'
  require 'journaled/outbox/batch_processor'
  require 'journaled/outbox/metric_emitter'
  require 'journaled/outbox/worker'
@@ -31,8 +33,9 @@ module Journaled
    mattr_writer(:transactional_batching_enabled) { true }
 
    # Worker configuration (for Outbox-style event processing)
-   mattr_accessor(:worker_batch_size) { 1000 }
-   mattr_accessor(:worker_poll_interval) { 1 } # seconds
+   mattr_accessor(:worker_batch_size) { 500 }
+   mattr_accessor(:worker_poll_interval) { 0.5 } # seconds
+   mattr_accessor(:outbox_processing_mode) { :batch } # :batch or :guaranteed_order
 
    def self.transactional_batching_enabled?
      Thread.current[:journaled_transactional_batching_enabled] || @@transactional_batching_enabled
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: journaled
  version: !ruby/object:Gem::Version
-   version: 6.2.1
+   version: 6.2.2
  platform: ruby
  authors:
  - Jake Lipson
@@ -274,6 +274,8 @@ files:
  - lib/journaled/errors.rb
  - lib/journaled/kinesis_batch_sender.rb
  - lib/journaled/kinesis_client_factory.rb
+ - lib/journaled/kinesis_failed_event.rb
+ - lib/journaled/kinesis_sequential_sender.rb
  - lib/journaled/outbox/adapter.rb
  - lib/journaled/outbox/batch_processor.rb
  - lib/journaled/outbox/metric_emitter.rb