journaled 6.0.0 → 6.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +134 -3
- data/app/jobs/journaled/delivery_job.rb +1 -48
- data/app/models/journaled/outbox/event.rb +60 -0
- data/app/models/journaled/writer.rb +1 -11
- data/db/migrate/1_install_uuid_generate_v7.rb +27 -0
- data/db/migrate/2_create_journaled_events.rb +22 -0
- data/lib/journaled/connection.rb +1 -11
- data/lib/journaled/delivery_adapter.rb +41 -0
- data/lib/journaled/delivery_adapters/active_job_adapter.rb +75 -0
- data/lib/journaled/engine.rb +3 -1
- data/lib/journaled/kinesis_batch_sender.rb +98 -0
- data/lib/journaled/kinesis_client_factory.rb +62 -0
- data/lib/journaled/outbox/adapter.rb +96 -0
- data/lib/journaled/outbox/batch_processor.rb +86 -0
- data/lib/journaled/outbox/metric_emitter.rb +84 -0
- data/lib/journaled/outbox/worker.rb +135 -0
- data/lib/journaled/version.rb +1 -1
- data/lib/journaled.rb +14 -15
- data/lib/tasks/journaled_worker.rake +8 -0
- metadata +17 -7

checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: bb4df3fde5197a7e78aa3adff771292b722d866c0c8ac21bc3b0f1629c760a4a
+  data.tar.gz: 20132d03f33b880433969c3604ca8cf234a54e67797abe38dd78cf5511ee04bf
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 9d540540921c7da3fc35599cb1c8232d0b898dbd7d1816b4b8a1960eaf5afc0b23f6052a6ed9c22e15c92eaf9fc6abe1fe67df73a97751d8442a58c599b56baa
+  data.tar.gz: 1a905b5b5e657f58fa5d9ebdd62defc012033ccbba7d4a0fa9f771fad9b3fc9412a3049da2838a18cef07a8500089161860d6fc4ed15e63111cd03f5728770e6

data/README.md CHANGED

@@ -22,9 +22,11 @@ durable, eventually consistent record that discrete events happened.
 
 ## Installation
 
-1.
-
-
+1. **Configure a queue adapter** (only required if using the default ActiveJob delivery adapter):
+
+   If you haven't already,
+   [configure ActiveJob](https://guides.rubyonrails.org/active_job_basics.html)
+   to use one of the following queue adapters:
 
 - `:delayed_job` (via `delayed_job_active_record`)
 - `:que`
@@ -52,6 +54,8 @@ to use one of the following queue adapters:
 
 This configuration isn't necessary for applications running Rails 8+.
 
+**Note:** If you plan to use the [Outbox-style Event Processing](#outbox-style-event-processing-optional) (Outbox adapter), you can skip this step entirely, as the Outbox adapter does not use ActiveJob.
+
 2. To integrate Journaled into your application, simply include the gem in your
 app's Gemfile.
 
@@ -129,6 +133,37 @@ Journaling provides a number of different configuation options that can be set i
 
 The number of seconds before the :http_handler should timeout while waiting for a HTTP response.
 
+#### `Journaled.delivery_adapter` (default: `Journaled::DeliveryAdapters::ActiveJobAdapter`)
+
+Determines how events are delivered to Kinesis. Two options are available:
+
+- **`Journaled::DeliveryAdapters::ActiveJobAdapter`** (default) - Enqueues events to ActiveJob. Requires a DB-backed queue adapter (see Installation).
+
+- **`Journaled::Outbox::Adapter`** - Stores events in a database table and processes them via separate worker daemons. See [Outbox-style Event Processing](#outbox-style-event-processing-optional) for setup instructions.
+
+Example:
+```ruby
+# Use the Outbox-style adapter
+Journaled.delivery_adapter = Journaled::Outbox::Adapter
+```
+
+#### `Journaled.outbox_base_class_name` (default: `'ActiveRecord::Base'`)
+
+**Only relevant when using `Journaled::Outbox::Adapter`.**
+
+Specifies which ActiveRecord base class the Outbox event storage model (`Journaled::Outbox::Event`) should use for its database connection. This is useful for multi-database setups where you want to store events in a separate database.
+
+Example:
+```ruby
+# Store outbox events in a separate database
+class EventsRecord < ActiveRecord::Base
+  self.abstract_class = true
+  connects_to database: { writing: :events, reading: :events }
+end
+
+Journaled.outbox_base_class_name = 'EventsRecord'
+```
+
 #### ActiveJob `set` options
 
 Both model-level directives accept additional options to be passed into ActiveJob's `set` method:
@@ -143,6 +178,102 @@ has_audit_log enqueue_with: { priority: 30 }
 # Or for custom journaling:
 journal_attributes :email, enqueue_with: { priority: 20, queue: 'journaled' }
 ```
+##### Outbox-style Event Processing (Optional)
+
+Journaled includes a built-in Outbox-style delivery adapter with horizontally scalable workers.
+
+**Setup:**
+
+This feature requires creating database tables and is completely optional. Existing users are unaffected.
+
+1. **Install migrations:**
+
+```bash
+rake journaled:install:migrations
+rails db:migrate
+```
+
+This creates a table for storing events:
+- `journaled_outbox_events` - Queue of events to be processed (includes `failed_at` column for tracking failures)
+
+2. **Configure to use the database adapter:**
+
+```ruby
+# config/initializers/journaled.rb
+
+# Use the Outbox-style adapter instead of ActiveJob
+Journaled.delivery_adapter = Journaled::Outbox::Adapter
+
+# Optional: Customize worker behavior (these are the defaults)
+Journaled.worker_batch_size = 500    # Max events per Kinesis batch (Kinesis API limit)
+Journaled.worker_poll_interval = 5   # Seconds between polls
+```
+
+**Note:** When using the Outbox adapter, you do **not** need to configure an ActiveJob queue adapter (skip step 1 of Installation). The Outbox adapter uses the `journaled_outbox_events` table for event storage and its own worker daemons for processing, making it independent of ActiveJob. Transactional batching still works seamlessly with the Outbox adapter.
+
+3. **Start worker daemon(s):**
+
+```bash
+bundle exec rake journaled_worker:work
+```
+
+4. **Monitoring:**
+
+The system emits `ActiveSupport::Notifications` events:
+
+```ruby
+# config/initializers/journaled.rb
+
+# Emitted for every batch processed (regardless of outcome)
+ActiveSupport::Notifications.subscribe('journaled.worker.batch_process') do |name, start, finish, id, payload|
+  Statsd.increment('journaled.worker.batches', tags: ["worker:#{payload[:worker_id]}"])
+end
+
+# Emitted for successfully sent events
+ActiveSupport::Notifications.subscribe('journaled.worker.batch_sent') do |name, start, finish, id, payload|
+  Statsd.increment('journaled.worker.events_sent', payload[:event_count], tags: ["worker:#{payload[:worker_id]}"])
+end
+
+# Emitted for permanently failed events (marked as failed in database)
+ActiveSupport::Notifications.subscribe('journaled.worker.batch_failed') do |name, start, finish, id, payload|
+  Statsd.increment('journaled.worker.events_failed', payload[:event_count], tags: ["worker:#{payload[:worker_id]}"])
+end
+
+# Emitted for transiently failed events (will be retried)
+ActiveSupport::Notifications.subscribe('journaled.worker.batch_errored') do |name, start, finish, id, payload|
+  Statsd.increment('journaled.worker.events_errored', payload[:event_count], tags: ["worker:#{payload[:worker_id]}"])
+end
+
+# Emitted once per minute with queue statistics
+ActiveSupport::Notifications.subscribe('journaled.worker.queue_metrics') do |name, start, finish, id, payload|
+  Statsd.gauge('journaled.worker.queue.total', payload[:total_count], tags: ["worker:#{payload[:worker_id]}"])
+  Statsd.gauge('journaled.worker.queue.workable', payload[:workable_count], tags: ["worker:#{payload[:worker_id]}"])
+  Statsd.gauge('journaled.worker.queue.erroring', payload[:erroring_count], tags: ["worker:#{payload[:worker_id]}"])
+  Statsd.gauge('journaled.worker.queue.oldest_age_seconds', payload[:oldest_age_seconds], tags: ["worker:#{payload[:worker_id]}"]) if payload[:oldest_age_seconds]
+end
+```
+
+Queue metrics payload includes:
+- `total_count` - Total number of events in the queue (including failed)
+- `workable_count` - Events ready to be processed (not failed)
+- `erroring_count` - Events with errors but not yet marked as permanently failed
+- `oldest_non_failed_timestamp` - Timestamp of the oldest non-failed event (extracted from UUID v7)
+- `oldest_age_seconds` - Age in seconds of the oldest non-failed event
+
+Note: Metrics are collected in a background thread to avoid blocking the main worker loop.
+
+5. **Failed Events:**
+
+Inspect and requeue failed events:
+
+```ruby
+# Find failed events
+Journaled::Outbox::Event.failed.where(stream_name: 'my_stream')
+
+# Requeue a failed event (clears failure info and resets attempts)
+failed_event = Journaled::Outbox::Event.failed.find(123)
+failed_event.requeue!
+```
 
 ### Attribution
 

data/app/jobs/journaled/delivery_job.rb CHANGED

@@ -2,8 +2,6 @@
 
 module Journaled
   class DeliveryJob < ApplicationJob
-    DEFAULT_REGION = 'us-east-1'
-
     rescue_from(Aws::Kinesis::Errors::InternalFailure, Aws::Kinesis::Errors::ServiceUnavailable, Aws::Kinesis::Errors::Http503Error) do |e|
       Rails.logger.error "Kinesis Error - Server Error occurred - #{e.class}"
       raise KinesisTemporaryFailure
@@ -20,16 +18,6 @@ module Journaled
       journal! if Journaled.enabled?
     end
 
-    def kinesis_client_config
-      {
-        region: ENV.fetch('AWS_DEFAULT_REGION', DEFAULT_REGION),
-        retry_limit: 0,
-        http_idle_timeout: Journaled.http_idle_timeout,
-        http_open_timeout: Journaled.http_open_timeout,
-        http_read_timeout: Journaled.http_read_timeout,
-      }.merge(credentials)
-    end
-
     private
 
     KinesisRecord = Struct.new(:serialized_event, :partition_key, :stream_name, keyword_init: true) do
@@ -51,42 +39,7 @@ module Journaled
     end
 
     def kinesis_client
-
-    end
-
-    def credentials
-      if ENV.key?('JOURNALED_IAM_ROLE_ARN')
-        {
-          credentials: iam_assume_role_credentials,
-        }
-      else
-        legacy_credentials_hash_if_present
-      end
-    end
-
-    def legacy_credentials_hash_if_present
-      if ENV.key?('RUBY_AWS_ACCESS_KEY_ID')
-        {
-          access_key_id: ENV.fetch('RUBY_AWS_ACCESS_KEY_ID'),
-          secret_access_key: ENV.fetch('RUBY_AWS_SECRET_ACCESS_KEY'),
-        }
-      else
-        {}
-      end
-    end
-
-    def sts_client
-      Aws::STS::Client.new({
-        region: ENV.fetch('AWS_DEFAULT_REGION', DEFAULT_REGION),
-      }.merge(legacy_credentials_hash_if_present))
-    end
-
-    def iam_assume_role_credentials
-      @iam_assume_role_credentials ||= Aws::AssumeRoleCredentials.new(
-        client: sts_client,
-        role_arn: ENV.fetch('JOURNALED_IAM_ROLE_ARN'),
-        role_session_name: "JournaledAssumeRoleAccess",
-      )
+      @kinesis_client ||= KinesisClientFactory.build
     end
 
     class KinesisTemporaryFailure < NotTrulyExceptionalError

data/app/models/journaled/outbox/event.rb ADDED

@@ -0,0 +1,60 @@
+# frozen_string_literal: true
+
+module Journaled
+  module Outbox
+    # ActiveRecord model for Outbox-style event processing
+    #
+    # This model is only used when the Outbox::Adapter is configured.
+    # Events are stored in the database and processed by worker daemons.
+    #
+    # Successfully delivered events are deleted immediately.
+    # Failed events are marked with failed_at and can be queried or requeued.
+    class Event < Journaled.outbox_base_class_name.constantize
+      self.table_name = 'journaled_outbox_events'
+
+      self.record_timestamps = false # use db default
+
+      skip_audit_log
+
+      attribute :event_data, :json
+
+      validates :event_type, :event_data, :partition_key, :stream_name, presence: true
+
+      scope :ready_to_process, -> {
+        where(failed_at: nil)
+          .order(:id)
+      }
+
+      scope :failed, -> { where.not(failed_at: nil) }
+
+      # Fetch a batch of events for processing using SELECT FOR UPDATE
+      #
+      # @return [Array<Journaled::Outbox::Event>] Events locked for processing
+      def self.fetch_batch_for_update
+        ready_to_process
+          .limit(Journaled.worker_batch_size)
+          .lock
+          .to_a
+      end
+
+      # Requeue a failed event for processing
+      #
+      # Clears failure information so the event can be retried.
+      #
+      # @return [Boolean] Whether the requeue was successful
+      def requeue!
+        update!(
+          failed_at: nil,
+          failure_reason: nil,
+        )
+      end
+
+      # Get the oldest non-failed event's timestamp
+      #
+      # @return [Time, nil] The timestamp of the oldest event, or nil if no events exist
+      def self.oldest_non_failed_timestamp
+        ready_to_process.order(:id).limit(1).pick(:created_at)
+      end
+    end
+  end
+end
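
Note: the model above defines the query surface an operator would use when triaging outbox failures. A minimal console sketch, grounded only in the scopes and methods shown in this diff (the stream name used below is hypothetical):

```ruby
# Count events still waiting to be delivered
Journaled::Outbox::Event.ready_to_process.count

# Inspect why a particular event was parked
event = Journaled::Outbox::Event.failed.find_by(stream_name: 'my_stream') # hypothetical stream name
event&.failure_reason

# Clear the failure so a worker picks the event up again on its next poll
event&.requeue!
```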

data/app/models/journaled/writer.rb CHANGED

@@ -42,23 +42,13 @@ class Journaled::Writer
     events.group_by(&:journaled_enqueue_opts).each do |enqueue_opts, batch|
       job_opts = enqueue_opts.reverse_merge(priority: Journaled.job_priority)
       ActiveSupport::Notifications.instrument('journaled.batch.enqueue', batch: batch, **job_opts) do
-        Journaled
+        Journaled.delivery_adapter.deliver(events: batch, enqueue_opts: job_opts)
 
         batch.each { |event| ActiveSupport::Notifications.instrument('journaled.event.enqueue', event: event, **job_opts) }
       end
     end
   end
 
-  def self.delivery_perform_args(events)
-    events.map do |event|
-      {
-        serialized_event: event.journaled_attributes.to_json,
-        partition_key: event.journaled_partition_key,
-        stream_name: event.journaled_stream_name,
-      }
-    end
-  end
-
   private
 
   attr_reader :journaled_event

data/db/migrate/1_install_uuid_generate_v7.rb ADDED

@@ -0,0 +1,27 @@
+# frozen_string_literal: true
+
+class InstallUuidGenerateV7 < ActiveRecord::Migration[7.2]
+  def up
+    # Enable pgcrypto extension for gen_random_bytes()
+    enable_extension 'pgcrypto'
+
+    # Install UUID v7 generation function
+    # Source: https://github.com/Betterment/postgresql-uuid-generate-v7
+    execute <<-SQL.squish
+      CREATE OR REPLACE FUNCTION uuid_generate_v7()
+      RETURNS uuid
+      LANGUAGE plpgsql
+      PARALLEL SAFE
+      AS $$
+        DECLARE
+          unix_time_ms CONSTANT bytea NOT NULL DEFAULT substring(int8send((extract(epoch FROM clock_timestamp()) * 1000)::bigint) from 3);
+          buffer bytea NOT NULL DEFAULT unix_time_ms || gen_random_bytes(10);
+        BEGIN
+          buffer = set_byte(buffer, 6, (b'0111' || get_byte(buffer, 6)::bit(4))::bit(8)::int);
+          buffer = set_byte(buffer, 8, (b'10' || get_byte(buffer, 8)::bit(6))::bit(8)::int);
+          RETURN encode(buffer, 'hex');
+        END
+      $$;
+    SQL
+  end
+end

data/db/migrate/2_create_journaled_events.rb ADDED

@@ -0,0 +1,22 @@
+# frozen_string_literal: true
+
+class CreateJournaledEvents < ActiveRecord::Migration[7.2]
+  def change
+    # UUID v7 primary key (auto-generated by database using uuid_generate_v7())
+    create_table :journaled_outbox_events, id: :uuid, default: -> { "uuid_generate_v7()" } do |t|
+      # Event identification and data
+      t.string :event_type, null: false
+      t.jsonb :event_data, null: false
+      t.string :partition_key, null: false
+      t.string :stream_name, null: false
+
+      t.text :failure_reason
+
+      t.datetime :failed_at
+      t.datetime :created_at, null: false, default: -> { "clock_timestamp()" }
+    end
+
+    # Index for querying failed events
+    add_index :journaled_outbox_events, :failed_at
+  end
+end

data/lib/journaled/connection.rb CHANGED

@@ -22,17 +22,7 @@ module Journaled
      end
 
      def connection
-
-        Delayed::Job.connection
-      elsif Journaled.queue_adapter == 'good_job'
-        GoodJob::BaseRecord.connection
-      elsif Journaled.queue_adapter == 'que'
-        Que::ActiveRecord::Model.connection
-      elsif Journaled.queue_adapter == 'test' && Rails.env.test?
-        ActiveRecord::Base.connection
-      else
-        raise "Unsupported adapter: #{Journaled.queue_adapter}"
-      end
+        Journaled.delivery_adapter.transaction_connection
      end
    end
 

data/lib/journaled/delivery_adapter.rb ADDED

@@ -0,0 +1,41 @@
+# frozen_string_literal: true
+
+module Journaled
+  # Base class for delivery adapters
+  #
+  # Journaled ships with two delivery adapters:
+  # - Journaled::DeliveryAdapters::ActiveJobAdapter (default) - delivers via ActiveJob
+  # - Journaled::Outbox::Adapter - delivers via Outbox-style workers
+  #
+  class DeliveryAdapter
+    # Delivers a batch of events
+    #
+    # @param events [Array] Array of journaled events to deliver
+    # @param enqueue_opts [Hash] Options for delivery (priority, queue, wait, wait_until, etc.)
+    # @return [void]
+    def self.deliver(events:, enqueue_opts:) # rubocop:disable Lint/UnusedMethodArgument
+      raise NoMethodError, "#{name} must implement .deliver(events:, enqueue_opts:)"
+    end
+
+    # Returns the database connection to use for transactional batching
+    #
+    # This allows delivery adapters to specify which database connection should be used
+    # when staging events during a transaction. This is only needed if you want to support
+    # transactional batching with your adapter.
+    #
+    # @return [ActiveRecord::ConnectionAdapters::AbstractAdapter]
+    def self.transaction_connection
+      raise NoMethodError, "#{name} must implement .transaction_connection"
+    end
+
+    # Validates that the adapter is properly configured
+    #
+    # Called during Rails initialization in production mode. Raise an error if the adapter
+    # is not configured correctly (e.g., missing required dependencies, invalid configuration).
+    #
+    # @return [void]
+    def self.validate_configuration!
+      # Default: no validation required
+    end
+  end
+end
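
Note: the base class above defines the contract that both bundled adapters implement. For reference, a third-party adapter would subclass it and override the same three class methods. A minimal sketch; the `MyLogAdapter` class and its logging behavior are hypothetical and purely illustrate the interface:

```ruby
class MyLogAdapter < Journaled::DeliveryAdapter
  # Called by Journaled::Writer with a batch of events and ActiveJob-style options.
  def self.deliver(events:, enqueue_opts:)
    events.each { |event| Rails.logger.info("journaled event: #{event.journaled_attributes.to_json}") }
  end

  # Used for transactional batching; point it at whichever database stages the events.
  def self.transaction_connection
    ActiveRecord::Base.connection
  end

  # Raise here if required configuration is missing; the engine calls this outside dev/test.
  def self.validate_configuration!
    true
  end
end

Journaled.delivery_adapter = MyLogAdapter
```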

data/lib/journaled/delivery_adapters/active_job_adapter.rb ADDED

@@ -0,0 +1,75 @@
+# frozen_string_literal: true
+
+module Journaled
+  module DeliveryAdapters
+    # Default delivery adapter that uses ActiveJob
+    #
+    # This adapter enqueues events to Journaled::DeliveryJob which
+    # sends them to Kinesis. This is the default behavior and maintains
+    # backward compatibility with previous versions of the gem.
+    class ActiveJobAdapter < Journaled::DeliveryAdapter
+      # Delivers events by enqueueing them to Journaled::DeliveryJob
+      #
+      # @param events [Array] Array of journaled events to deliver
+      # @param enqueue_opts [Hash] Options for ActiveJob (priority, queue, wait, wait_until, etc.)
+      # @return [void]
+      def self.deliver(events:, enqueue_opts:)
+        Journaled::DeliveryJob.set(enqueue_opts).perform_later(*delivery_perform_args(events))
+      end
+
+      # Serializes events into the format expected by DeliveryJob
+      #
+      # @param events [Array] Array of journaled events
+      # @return [Array<Hash>] Array of serialized event hashes
+      def self.delivery_perform_args(events)
+        events.map do |event|
+          {
+            serialized_event: event.journaled_attributes.to_json,
+            partition_key: event.journaled_partition_key,
+            stream_name: event.journaled_stream_name,
+          }
+        end
+      end
+
+      # Returns the database connection to use for transactional batching
+      #
+      # This is determined by the configured queue adapter, since ActiveJob
+      # enqueues jobs to the same database that should be used for transactions.
+      #
+      # @return [ActiveRecord::ConnectionAdapters::AbstractAdapter] The connection to use
+      def self.transaction_connection
+        queue_adapter = Journaled.queue_adapter
+
+        if queue_adapter.in? %w(delayed delayed_job)
+          Delayed::Job.connection
+        elsif queue_adapter == 'good_job'
+          GoodJob::BaseRecord.connection
+        elsif queue_adapter == 'que'
+          Que::ActiveRecord::Model.connection
+        elsif queue_adapter == 'test' && Rails.env.test?
+          ActiveRecord::Base.connection
+        else
+          raise "Unsupported queue adapter: #{queue_adapter}"
+        end
+      end
+
+      # Validates that a supported queue adapter is configured
+      #
+      # @return [void]
+      def self.validate_configuration!
+        unless Journaled::SUPPORTED_QUEUE_ADAPTERS.include?(Journaled.queue_adapter)
+          raise <<~MSG
+            Journaled has detected an unsupported ActiveJob queue adapter: `:#{Journaled.queue_adapter}`
+
+            Journaled jobs must be enqueued transactionally to your primary database.
+
+            Please install the appropriate gems and set `queue_adapter` to one of the following:
+            #{Journaled::SUPPORTED_QUEUE_ADAPTERS.map { |a| "- `:#{a}`" }.join("\n")}
+
+            Read more at https://github.com/Betterment/journaled
+          MSG
+        end
+      end
+    end
+  end
+end

data/lib/journaled/engine.rb CHANGED

@@ -2,9 +2,11 @@
 
 module Journaled
   class Engine < ::Rails::Engine
+    engine_name 'journaled'
+
     config.after_initialize do
       ActiveSupport.on_load(:active_job) do
-        Journaled.
+        Journaled.delivery_adapter.validate_configuration! unless Journaled.development_or_test?
       end
 
       ActiveSupport.on_load(:active_record) do

data/lib/journaled/kinesis_batch_sender.rb ADDED

@@ -0,0 +1,98 @@
+# frozen_string_literal: true
+
+module Journaled
+  # Sends batches of events to Kinesis using the PutRecord single-event API
+  #
+  # This class handles:
+  # - Sending events individually to support guaranteed ordering
+  # - Handling failures on a per-event basis
+  # - Classifying errors as transient vs permanent
+  #
+  # Returns structured results for the caller to handle event state management.
+  class KinesisBatchSender
+    FailedEvent = Struct.new(:event, :error_code, :error_message, :transient, keyword_init: true) do
+      def transient?
+        transient
+      end
+
+      def permanent?
+        !transient
+      end
+    end
+
+    PERMANENT_ERROR_CLASSES = [
+      Aws::Kinesis::Errors::ValidationException,
+    ].freeze
+
+    # Send a batch of database events to Kinesis
+    #
+    # Sends events one at a time to guarantee ordering. Stops on first transient failure.
+    #
+    # @param events [Array<Journaled::Outbox::Event>] Events to send
+    # @return [Hash] Result with:
+    #   - succeeded: Array of successfully sent events
+    #   - failed: Array of FailedEvent structs (only permanent failures)
+    def send_batch(events)
+      result = { succeeded: [], failed: [] }
+
+      events.each do |event|
+        event_result = send_event(event)
+        if event_result.is_a?(FailedEvent)
+          if event_result.transient?
+            emit_transient_failure_metric
+            break
+          else
+            result[:failed] << event_result
+          end
+        else
+          result[:succeeded] << event_result
+        end
+      end
+
+      result
+    end
+
+    private
+
+    # Send a single event to Kinesis
+    #
+    # @param event [Journaled::Outbox::Event] Event to send
+    # @return [Journaled::Outbox::Event, FailedEvent] The event on success, or FailedEvent on failure
+    def send_event(event)
+      # Merge the DB-generated ID into the event data before sending to Kinesis
+      event_data_with_id = event.event_data.merge(id: event.id)
+
+      kinesis_client.put_record(
+        stream_name: event.stream_name,
+        data: event_data_with_id.to_json,
+        partition_key: event.partition_key,
+      )
+
+      event
+    rescue *PERMANENT_ERROR_CLASSES => e
+      Rails.logger.error("Kinesis event send failed (permanent): #{e.class} - #{e.message}")
+      FailedEvent.new(
+        event:,
+        error_code: e.class.to_s,
+        error_message: e.message,
+        transient: false,
+      )
+    rescue StandardError => e
+      Rails.logger.error("Kinesis event send failed (transient): #{e.class} - #{e.message}")
+      FailedEvent.new(
+        event:,
+        error_code: e.class.to_s,
+        error_message: e.message,
+        transient: true,
+      )
+    end
+
+    def kinesis_client
+      @kinesis_client ||= KinesisClientFactory.build
+    end
+
+    def emit_transient_failure_metric
+      ActiveSupport::Notifications.instrument('journaled.kinesis_batch_sender.transient_failure')
+    end
+  end
+end

data/lib/journaled/kinesis_client_factory.rb ADDED

@@ -0,0 +1,62 @@
+# frozen_string_literal: true
+
+module Journaled
+  class KinesisClientFactory
+    DEFAULT_REGION = 'us-east-1'
+
+    def self.build
+      new.client
+    end
+
+    def client
+      @client ||= Aws::Kinesis::Client.new(config)
+    end
+
+    private
+
+    def config
+      {
+        region: ENV.fetch('AWS_DEFAULT_REGION', DEFAULT_REGION),
+        retry_limit: 0,
+        http_idle_timeout: Journaled.http_idle_timeout,
+        http_open_timeout: Journaled.http_open_timeout,
+        http_read_timeout: Journaled.http_read_timeout,
+      }.merge(credentials)
+    end
+
+    def credentials
+      if ENV.key?('JOURNALED_IAM_ROLE_ARN')
+        {
+          credentials: iam_assume_role_credentials,
+        }
+      else
+        legacy_credentials_hash_if_present
+      end
+    end
+
+    def legacy_credentials_hash_if_present
+      if ENV.key?('RUBY_AWS_ACCESS_KEY_ID')
+        {
+          access_key_id: ENV.fetch('RUBY_AWS_ACCESS_KEY_ID'),
+          secret_access_key: ENV.fetch('RUBY_AWS_SECRET_ACCESS_KEY'),
+        }
+      else
+        {}
+      end
+    end
+
+    def sts_client
+      @sts_client ||= Aws::STS::Client.new({
+        region: ENV.fetch('AWS_DEFAULT_REGION', DEFAULT_REGION),
+      }.merge(legacy_credentials_hash_if_present))
+    end
+
+    def iam_assume_role_credentials
+      @iam_assume_role_credentials ||= Aws::AssumeRoleCredentials.new(
+        client: sts_client,
+        role_arn: ENV.fetch('JOURNALED_IAM_ROLE_ARN'),
+        role_session_name: "JournaledAssumeRoleAccess",
+      )
+    end
+  end
+end

data/lib/journaled/outbox/adapter.rb ADDED

@@ -0,0 +1,96 @@
+# frozen_string_literal: true
+
+module Journaled
+  module Outbox
+    # Outbox-style delivery adapter for custom event processing
+    #
+    # This adapter stores events in a database table instead of enqueuing to ActiveJob.
+    # Events are processed by separate worker daemons that poll the database.
+    #
+    # Setup:
+    # 1. Generate migrations: rails generate journaled:database_events
+    # 2. Run migrations: rails db:migrate
+    # 3. Configure: Journaled.delivery_adapter = Journaled::Outbox::Adapter
+    # 4. Start workers: bundle exec rake journaled_worker:work
+    class Adapter < Journaled::DeliveryAdapter
+      class TableNotFoundError < StandardError; end
+
+      # Delivers events by inserting them into the database
+      #
+      # @param events [Array] Array of journaled events to deliver
+      # @param ** [Hash] Additional options (ignored, for interface compatibility)
+      # @return [void]
+      def self.deliver(events:, **)
+        check_table_exists!
+
+        records = events.map do |event|
+          # Exclude the application-level id - the database will generate its own using uuid_generate_v7()
+          event_data = event.journaled_attributes.except(:id)
+
+          {
+            event_type: event.journaled_attributes[:event_type],
+            event_data:,
+            partition_key: event.journaled_partition_key,
+            stream_name: event.journaled_stream_name,
+          }
+        end
+
+        # rubocop:disable Rails/SkipsModelValidations
+        Event.insert_all(records) if records.any?
+        # rubocop:enable Rails/SkipsModelValidations
+      end
+
+      # Check if the required database table exists
+      #
+      # @raise [TableNotFoundError] if the table doesn't exist
+      def self.check_table_exists!
+        return if @table_exists
+
+        unless Event.table_exists?
+          raise TableNotFoundError, <<~ERROR
+            Journaled::Outbox::Adapter requires the 'journaled_outbox_events' table.
+
+            To create the required tables, run:
+
+              rake journaled:install:migrations
+              rails db:migrate
+
+            For more information, see the README:
+            https://github.com/Betterment/journaled#outbox-style-event-processing-optional
+          ERROR
+        end
+
+        @table_exists = true
+      end
+
+      # Returns the database connection to use for transactional batching
+      #
+      # The Outbox adapter uses the same database as the Outbox events table,
+      # since events are staged in memory and then written to journaled_events
+      # within the same transaction.
+      #
+      # @return [ActiveRecord::ConnectionAdapters::AbstractAdapter] The connection to use
+      def self.transaction_connection
+        Event.connection
+      end
+
+      # Validates that PostgreSQL is being used
+      #
+      # The Outbox adapter requires PostgreSQL for UUID v7 support and row-level locking
+      #
+      # @raise [StandardError] if the database adapter is not PostgreSQL
+      def self.validate_configuration!
+        return if Event.connection.adapter_name == 'PostgreSQL'
+
+        raise <<~ERROR
+          Journaled::Outbox::Adapter requires PostgreSQL database adapter.
+
+          Current adapter: #{Event.connection.adapter_name}
+
+          The Outbox pattern uses PostgreSQL-specific features like UUID v7 generation
+          and row-level locking for distributed worker coordination. Other databases are not supported.
+        ERROR
+      end
+    end
+  end
+end

data/lib/journaled/outbox/batch_processor.rb ADDED

@@ -0,0 +1,86 @@
+# frozen_string_literal: true
+
+module Journaled
+  module Outbox
+    # Processes batches of outbox events
+    #
+    # This class handles the core business logic of:
+    # - Fetching events from the database (with FOR UPDATE)
+    # - Sending them to Kinesis one at a time to guarantee ordering
+    # - Handling successful deliveries (deleting events)
+    # - Handling permanent failures (marking with failed_at)
+    # - Handling ephemeral failures (stopping processing and committing)
+    #
+    # Events are processed one at a time to guarantee ordering. If an event fails
+    # with an ephemeral error, processing stops and the transaction commits
+    # (deleting successes and marking permanent failures), then the loop re-enters.
+    #
+    # All operations happen within a single database transaction for consistency.
+    # The Worker class delegates to this for actual event processing.
+    class BatchProcessor
+      def initialize
+        @batch_sender = KinesisBatchSender.new
+      end
+
+      # Process a single batch of events
+      #
+      # Wraps the entire batch processing in a single transaction:
+      # 1. SELECT FOR UPDATE (claim events)
+      # 2. Send to Kinesis (batch sender handles one-at-a-time and short-circuiting)
+      # 3. Delete successful events
+      # 4. Mark failed events (batch sender only returns permanent failures)
+      #
+      # @return [Hash] Statistics with :succeeded, :failed_permanently counts
+      def process_batch
+        ActiveRecord::Base.transaction do
+          events = Event.fetch_batch_for_update
+          Rails.logger.info("[journaled] Processing batch of #{events.count} events")
+
+          result = batch_sender.send_batch(events)
+
+          # Delete successful events
+          Event.where(id: result[:succeeded].map(&:id)).delete_all if result[:succeeded].any?
+
+          # Mark failed events
+          mark_events_as_failed(result[:failed]) if result[:failed].any?
+
+          Rails.logger.info(
+            "[journaled] Batch complete: #{result[:succeeded].count} succeeded, " \
+            "#{result[:failed].count} marked as failed (batch size: #{events.count})",
+          )
+
+          {
+            succeeded: result[:succeeded].count,
+            failed_permanently: result[:failed].count,
+          }
+        end
+      end
+
+      private
+
+      attr_reader :batch_sender
+
+      # Mark events as permanently failed
+      # Sets: failed_at = NOW, failure_reason = per-event message
+      def mark_events_as_failed(failed_events)
+        now = Time.current
+
+        records = failed_events.map do |failed_event|
+          failed_event.event.attributes.except('created_at').merge(
+            failed_at: now,
+            failure_reason: "#{failed_event.error_code}: #{failed_event.error_message}",
+          )
+        end
+
+        # rubocop:disable Rails/SkipsModelValidations
+        Event.upsert_all(
+          records,
+          unique_by: :id,
+          on_duplicate: :update,
+          update_only: %i(failed_at failure_reason),
+        )
+        # rubocop:enable Rails/SkipsModelValidations
+      end
+    end
+  end
+end

data/lib/journaled/outbox/metric_emitter.rb ADDED

@@ -0,0 +1,84 @@
+# frozen_string_literal: true
+
+module Journaled
+  module Outbox
+    # Handles metric emission for the Worker
+    #
+    # This class is responsible for collecting and emitting metrics about the outbox queue.
+    class MetricEmitter
+      def initialize(worker_id:)
+        @worker_id = worker_id
+      end
+
+      # Emit batch processing metrics
+      #
+      # @param stats [Hash] Processing statistics with :succeeded, :failed_permanently
+      def emit_batch_metrics(stats)
+        total_events = stats[:succeeded] + stats[:failed_permanently]
+
+        emit_metric('journaled.worker.batch_process', value: total_events)
+        emit_metric('journaled.worker.batch_sent', value: stats[:succeeded])
+        emit_metric('journaled.worker.batch_failed', value: stats[:failed_permanently])
+      end
+
+      # Collect and emit queue metrics
+      #
+      # This calculates various queue statistics and emits individual metrics for each.
+      def emit_queue_metrics
+        metrics = calculate_queue_metrics
+
+        emit_metric('journaled.worker.queue_total_count', value: metrics[:total_count])
+        emit_metric('journaled.worker.queue_workable_count', value: metrics[:workable_count])
+        emit_metric('journaled.worker.queue_erroring_count', value: metrics[:erroring_count])
+        emit_metric('journaled.worker.queue_oldest_age_seconds', value: metrics[:oldest_age_seconds])
+
+        Rails.logger.info(
+          "Queue metrics: total=#{metrics[:total_count]}, " \
+          "workable=#{metrics[:workable_count]}, " \
+          "erroring=#{metrics[:erroring_count]}, " \
+          "oldest_age=#{metrics[:oldest_age_seconds].round(2)}s",
+        )
+      end
+
+      private
+
+      attr_reader :worker_id
+
+      # Emit a single metric notification
+      #
+      # @param event_name [String] The name of the metric event
+      # @param payload [Hash] Additional payload data (event_count, value, etc.)
+      def emit_metric(event_name, payload)
+        ActiveSupport::Notifications.instrument(
+          event_name,
+          payload.merge(worker_id:),
+        )
+      end
+
+      # Calculate queue metrics
+      #
+      # @return [Hash] Metrics including counts and oldest event timestamp
+      def calculate_queue_metrics
+        # Use a single query with COUNT(*) FILTER to calculate all counts in one table scan
+        result = Event.connection.select_one(
+          Event.select(
+            'COUNT(*) AS total_count',
+            'COUNT(*) FILTER (WHERE failed_at IS NULL) AS workable_count',
+            'COUNT(*) FILTER (WHERE failure_reason IS NOT NULL AND failed_at IS NULL) AS erroring_count',
+            'MIN(created_at) FILTER (WHERE failed_at IS NULL) AS oldest_non_failed_timestamp',
+          ).to_sql,
+        )
+
+        oldest_timestamp = result['oldest_non_failed_timestamp']
+        oldest_age_seconds = oldest_timestamp ? Time.current - oldest_timestamp : 0
+
+        {
+          total_count: result['total_count'],
+          workable_count: result['workable_count'],
+          erroring_count: result['erroring_count'],
+          oldest_age_seconds:,
+        }
+      end
+    end
+  end
+end

data/lib/journaled/outbox/worker.rb ADDED

@@ -0,0 +1,135 @@
+# frozen_string_literal: true
+
+module Journaled
+  module Outbox
+    # Worker daemon for processing Outbox-style events
+    #
+    # This worker polls the database for pending events and sends them to Kinesis in batches.
+    # Multiple workers can run concurrently and will coordinate using row-level locking.
+    #
+    # The Worker handles the daemon lifecycle (start/stop, signal handling, run loop) and
+    # delegates actual batch processing to BatchProcessor.
+    #
+    # Usage:
+    #   worker = Journaled::Outbox::Worker.new
+    #   worker.start  # Blocks until shutdown signal received
+    class Worker
+      def initialize
+        @worker_id = "#{Socket.gethostname}-#{Process.pid}"
+        self.running = false
+        @processor = BatchProcessor.new
+        @metric_emitter = MetricEmitter.new(worker_id: @worker_id)
+        self.shutdown_requested = false
+        @last_metrics_emission = Time.current
+      end
+
+      # Start the worker (blocks until shutdown)
+      def start
+        check_prerequisites!
+
+        self.running = true
+        Rails.logger.info("Journaled worker starting (id: #{worker_id})")
+
+        setup_signal_handlers
+
+        run_loop
+      ensure
+        self.running = false
+        Rails.logger.info("Journaled worker stopped (id: #{worker_id})")
+      end
+
+      # Request graceful shutdown
+      def shutdown
+        self.shutdown_requested = true
+      end
+
+      # Check if worker is still running
+      def running?
+        running
+      end
+
+      private
+
+      attr_reader :worker_id, :processor, :metric_emitter
+      attr_accessor :shutdown_requested, :running, :last_metrics_emission
+
+      def run_loop
+        loop do
+          if shutdown_requested
+            Rails.logger.info("Shutdown requested for worker #{worker_id}")
+            break
+          end
+
+          events_processed = 0
+          begin
+            events_processed = process_batch
+            emit_metrics_if_needed
+          rescue StandardError => e
+            Rails.logger.error("Worker error: #{e.class} - #{e.message}")
+            Rails.logger.error(e.backtrace.join("\n"))
+          end
+
+          break if shutdown_requested
+
+          # Only sleep if no events were processed to prevent excessive polling on empty table
+          sleep(Journaled.worker_poll_interval) if events_processed.zero?
+        end
+      end
+
+      def process_batch
+        stats = processor.process_batch
+
+        instrument_batch_results(stats)
+
+        stats[:succeeded] + stats[:failed_permanently]
+      end
+
+      def instrument_batch_results(stats)
+        metric_emitter.emit_batch_metrics(stats)
+      end
+
+      def check_prerequisites!
+        unless Event.table_exists?
+          raise <<~ERROR
+            The 'journaled_outbox_events' table does not exist.
+
+            To create the required table, run:
+
+              rails generate journaled:database_events
+              rails db:migrate
+          ERROR
+        end
+
+        Rails.logger.info("Prerequisites check passed")
+      end
+
+      def setup_signal_handlers
+        %w(INT TERM).each do |signal|
+          Signal.trap(signal) do
+            shutdown
+          end
+        end
+      end
+
+      # Emit metrics if the interval has elapsed
+      def emit_metrics_if_needed
+        return unless Time.current - last_metrics_emission >= 60
+
+        # Collect and emit metrics in a background thread to avoid blocking the main loop
+        Thread.new do
+          collect_and_emit_metrics
+        rescue StandardError => e
+          Rails.logger.error("Error collecting metrics: #{e.class} - #{e.message}")
+          Rails.logger.error(e.backtrace.join("\n"))
+        end
+
+        self.last_metrics_emission = Time.current
+      end
+
+      # Collect and emit queue metrics
+      def collect_and_emit_metrics
+        metric_emitter.emit_queue_metrics
+      end
+    end
+  end
+end
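
Note: the diff lists lib/tasks/journaled_worker.rake (+8 lines) but does not include its contents. Based only on the Worker API shown above (`new`, `start`, `shutdown`), a standalone runner might look like the following hypothetical sketch; it is not the actual rake task shipped in the gem:

```ruby
# Hypothetical runner, assumed to execute inside a booted Rails environment
# (e.g. via `rails runner`):
worker = Journaled::Outbox::Worker.new
worker.start # installs its own INT/TERM handlers and blocks until shutdown is requested
```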

data/lib/journaled/version.rb CHANGED

data/lib/journaled.rb CHANGED

@@ -8,6 +8,14 @@ require "journaled/engine"
 require "journaled/current"
 require "journaled/errors"
 require 'journaled/connection'
+require 'journaled/delivery_adapter'
+require 'journaled/delivery_adapters/active_job_adapter'
+require 'journaled/outbox/adapter'
+require 'journaled/kinesis_client_factory'
+require 'journaled/kinesis_batch_sender'
+require 'journaled/outbox/batch_processor'
+require 'journaled/outbox/metric_emitter'
+require 'journaled/outbox/worker'
 
 module Journaled
   SUPPORTED_QUEUE_ADAPTERS = %w(delayed delayed_job good_job que).freeze
@@ -18,8 +26,14 @@ module Journaled
   mattr_accessor(:http_open_timeout) { 2 }
   mattr_accessor(:http_read_timeout) { 60 }
   mattr_accessor(:job_base_class_name) { 'ActiveJob::Base' }
+  mattr_accessor(:outbox_base_class_name) { 'ActiveRecord::Base' }
+  mattr_accessor(:delivery_adapter) { Journaled::DeliveryAdapters::ActiveJobAdapter }
   mattr_writer(:transactional_batching_enabled) { true }
 
+  # Worker configuration (for Outbox-style event processing)
+  mattr_accessor(:worker_batch_size) { 1000 }
+  mattr_accessor(:worker_poll_interval) { 1 } # seconds
+
   def self.transactional_batching_enabled?
     Thread.current[:journaled_transactional_batching_enabled] || @@transactional_batching_enabled
   end
@@ -56,21 +70,6 @@ module Journaled
     job_base_class_name.constantize.queue_adapter_name
   end
 
-  def self.detect_queue_adapter!
-    unless SUPPORTED_QUEUE_ADAPTERS.include?(queue_adapter)
-      raise <<~MSG
-        Journaled has detected an unsupported ActiveJob queue adapter: `:#{queue_adapter}`
-
-        Journaled jobs must be enqueued transactionally to your primary database.
-
-        Please install the appropriate gems and set `queue_adapter` to one of the following:
-        #{SUPPORTED_QUEUE_ADAPTERS.map { |a| "- `:#{a}`" }.join("\n")}
-
-        Read more at https://github.com/Betterment/journaled
-      MSG
-    end
-  end
-
   def self.tagged(**tags)
     existing_tags = Current.tags
     tag!(**tags)
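
Note: the accessors added above are the configuration surface for the new delivery path (the code defaults shown are a batch size of 1000 events and a 1-second poll interval). A consolidated initializer sketch; the values below are illustrative, not recommendations:

```ruby
# config/initializers/journaled.rb -- illustrative values only
Journaled.delivery_adapter = Journaled::Outbox::Adapter
Journaled.outbox_base_class_name = 'ActiveRecord::Base'
Journaled.worker_batch_size = 500   # events fetched per SELECT ... FOR UPDATE batch
Journaled.worker_poll_interval = 5  # seconds to sleep when no events were processed
```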

metadata CHANGED

@@ -1,17 +1,16 @@
 --- !ruby/object:Gem::Specification
 name: journaled
 version: !ruby/object:Gem::Version
-  version: 6.
+  version: 6.2.0
 platform: ruby
 authors:
 - Jake Lipson
 - Corey Alexander
 - Cyrus Eslami
 - John Mileham
-autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 1980-01-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activejob
@@ -89,7 +88,7 @@ dependencies:
     requirements:
     - - ">="
      - !ruby/object:Gem::Version
-       version: '7.
+       version: '7.2'
    - - "<"
      - !ruby/object:Gem::Version
        version: '8.1'
@@ -99,7 +98,7 @@ dependencies:
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
-       version: '7.
+       version: '7.2'
    - - "<"
      - !ruby/object:Gem::Version
        version: '8.1'
@@ -255,9 +254,12 @@ files:
 - app/models/journaled/event.rb
 - app/models/journaled/json_schema_model/validator.rb
 - app/models/journaled/not_truly_exceptional_error.rb
+- app/models/journaled/outbox/event.rb
 - app/models/journaled/writer.rb
 - config/initializers/change_protection.rb
 - config/spring.rb
+- db/migrate/1_install_uuid_generate_v7.rb
+- db/migrate/2_create_journaled_events.rb
 - journaled_schemas/base_event.json
 - journaled_schemas/journaled/audit_log/event.json
 - journaled_schemas/journaled/change.json
@@ -266,12 +268,21 @@ files:
 - lib/journaled/audit_log.rb
 - lib/journaled/connection.rb
 - lib/journaled/current.rb
+- lib/journaled/delivery_adapter.rb
+- lib/journaled/delivery_adapters/active_job_adapter.rb
 - lib/journaled/engine.rb
 - lib/journaled/errors.rb
+- lib/journaled/kinesis_batch_sender.rb
+- lib/journaled/kinesis_client_factory.rb
+- lib/journaled/outbox/adapter.rb
+- lib/journaled/outbox/batch_processor.rb
+- lib/journaled/outbox/metric_emitter.rb
+- lib/journaled/outbox/worker.rb
 - lib/journaled/relation_change_protection.rb
 - lib/journaled/rspec.rb
 - lib/journaled/transaction_ext.rb
 - lib/journaled/version.rb
+- lib/tasks/journaled_worker.rake
 homepage: http://github.com/Betterment/journaled
 licenses:
 - MIT
@@ -311,8 +322,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
 requirements: []
-rubygems_version: 3.
-signing_key:
+rubygems_version: 3.6.8
 specification_version: 4
 summary: Journaling for Betterment apps.
 test_files: []