deimos-ruby 1.22.5 → 1.23.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 997678ce4ed796037a0f554a7d0897a9ab58f810f437573da7684a6c23fa5d40
- data.tar.gz: e13fc14fb0bf985a02c38ff628a527198aa1e3bbf1eaab52510ea63832a03f37
+ metadata.gz: 78613a211afa5b7bc3f065691e02c1417fb5e91568105bf81c812aabdf2480c4
+ data.tar.gz: fe640113779b11d13be54be53d434f2865ced8965dafd7493e44c679bb00d704
  SHA512:
- metadata.gz: 90062c59b953e4fff9b1f5ddb0c5107e08f98028a7e221c78637bd9cb57c8d13c081a7417dedad57274e4b2df470a58d31ed5a1a1fd985f853ecca9ab53527fc
- data.tar.gz: fa07d49754b91fdfc48fd6edb46a1906f057ddbd9d0b11d0a66d14382ae1907a0ff7fbd2526e4653e6fc74ed53e78db78203fc043621111193fca370e50da777
+ metadata.gz: ef4de06e3c106abd55703bf5dafa5fa19fa3425b8aca0081704ae1464e1993f94eb9e418b42f609a4b2771133df770b2139a94fddfbfd55ea2c803706124ba6c
+ data.tar.gz: '028fcf4753d456e552af8ad3dd725f0f4448848f88c8dcee542d503f6ddba7daf1aa8e46d4512e43faadee2f7717e910692479e322fd740f1bdf808a2864a74e'
@@ -17,7 +17,7 @@ jobs:
  BUNDLE_WITHOUT: development:test

  steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3

  - name: Set up Ruby 2.7
    uses: ruby/setup-ruby@v1
@@ -39,7 +39,7 @@ jobs:
  ruby: [ '2.6', '2.7', '3.0', '3.1' ]

  steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3
  - uses: ruby/setup-ruby@v1
    with:
      ruby-version: ${{ matrix.ruby }}
data/CHANGELOG.md CHANGED
@@ -7,6 +7,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

  ## UNRELEASED

+ # 1.23.0 - 2024-01-09
+
+ - Fix: Fixed handler metric for status:received, status:success in batch consumption
+ - Feature: Allow pre-processing of messages prior to bulk consumption
+ - Feature: Add global configuration for a custom `bulk_import_id_generator` proc for all consumers
+ - Feature: Add individual configuration for a custom `bulk_import_id_generator` proc per consumer
+ - Feature: Add global `replace_associations` value for all consumers
+ - Feature: Add individual `replace_associations` value for individual consumers
+ - Feature: `should_consume?` method accepts BatchRecord associations
+ - Feature: Reintroduce `filter_records` for bulk filtering of records prior to insertion
+ - Feature: Return valid and invalid records saved during consumption for further processing in `batch_consumption.valid_records` and `batch_consumption.invalid_records` ActiveSupport Notifications
+
  # 1.22.5 - 2023-07-18
  - Fix: Fixed buffer overflow crash with DB producer.

data/README.md CHANGED
@@ -189,6 +189,12 @@ produced by Phobos and RubyKafka):
  * exception_object
  * messages - the batch of messages (in the form of `Deimos::KafkaMessage`s)
    that failed - this should have only a single message in the batch.
+ * `batch_consumption.valid_records` - sent when the consumer has successfully upserted records. Limited by `max_db_batch_size`.
+   * consumer: class of the consumer that upserted these records
+   * records: Records upserted into the DB (of type `ActiveRecord::Base`)
+ * `batch_consumption.invalid_records` - sent when the consumer has rejected records returned from `filter_records`. Limited by `max_db_batch_size`.
+   * consumer: class of the consumer that rejected these records
+   * records: Rejected records (of type `Deimos::ActiveRecordConsume::BatchRecord`)

  Similarly:
  ```ruby
@@ -100,6 +100,8 @@ offset_commit_threshold|0|Number of messages that can be processed before their
  offset_retention_time|nil|The time period that committed offsets will be retained, in seconds. Defaults to the broker setting.
  heartbeat_interval|10|Interval between heartbeats; must be less than the session window.
  backoff|`(1000..60_000)`|Range representing the minimum and maximum number of milliseconds to back off after a consumer error.
+ replace_associations|nil|Whether to delete existing associations for records during bulk consumption for this consumer. If no value is specified, the provided/default value from the `consumers` configuration will be used.
+ bulk_import_id_generator|nil|Block to determine the `bulk_import_id` generated during bulk consumption. If no block is specified, the provided/default block from the `consumers` configuration will be used.

  ## Defining Database Pollers

@@ -172,6 +174,8 @@ consumers.backoff|`(1000..60_000)`|Range representing the minimum and maximum nu
  consumers.reraise_errors|false|Default behavior is to swallow uncaught exceptions and log to the metrics provider. Set this to true to instead raise all errors. Note that raising an error will ensure that the message cannot be processed - if there is a bad message which will always raise that error, your consumer will not be able to proceed past it and will be stuck forever until you fix your code. See also the `fatal_error` configuration. This is automatically set to true when using the `TestHelpers` module in RSpec.
  consumers.report_lag|false|Whether to send the `consumer_lag` metric. This requires an extra thread per consumer.
  consumers.fatal_error|`proc { false }`|Block taking an exception, payload and metadata and returning true if this should be considered a fatal error and false otherwise. E.g. you can use this to always fail if the database is unavailable. Not needed if reraise_errors is set to true.
+ consumers.replace_associations|true|Whether to delete existing associations for records during bulk consumption prior to inserting new associated records.
+ consumers.bulk_import_id_generator|`proc { SecureRandom.uuid }`|Block to determine the `bulk_import_id` generated during bulk consumption. The block will be used for all bulk consumers unless explicitly set for individual consumers.

  ## Producer Configuration

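Putting the global and per-consumer settings together, a configuration might look like the following sketch. The consumer class, topic, and group names are hypothetical, and the exact DSL calls should be checked against your Deimos version:

```ruby
require 'securerandom'

Deimos.configure do
  # Global defaults for all bulk consumers:
  consumers.replace_associations = false
  consumers.bulk_import_id_generator = proc { SecureRandom.uuid }

  consumer do
    class_name 'MyBatchConsumer'   # hypothetical consumer class
    topic 'my-topic'               # hypothetical topic
    group_id 'my-group'
    # Per-consumer overrides take precedence over the `consumers` defaults:
    replace_associations true
    bulk_import_id_generator(proc { "#{Time.now.to_i}-#{SecureRandom.hex(4)}" })
  end
end
```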
@@ -28,18 +28,14 @@ module Deimos
        zip(metadata[:keys]).
        map { |p, k| Deimos::Message.new(p, nil, key: k) }

-     tags = %W(topic:#{metadata[:topic]})
-
-     Deimos.instrument('ar_consumer.consume_batch', tags) do
-       # The entire batch should be treated as one transaction so that if
-       # any message fails, the whole thing is rolled back or retried
-       # if there is deadlock
-       Deimos::Utils::DeadlockRetry.wrap(tags) do
-         if @compacted || self.class.config[:no_keys]
-           update_database(compact_messages(messages))
-         else
-           uncompacted_update(messages)
-         end
+     tag = metadata[:topic]
+     Deimos.config.tracer.active_span.set_tag('topic', tag)
+
+     Deimos.instrument('ar_consumer.consume_batch', tag) do
+       if @compacted || self.class.config[:no_keys]
+         update_database(compact_messages(messages))
+       else
+         uncompacted_update(messages)
        end
      end
    end
@@ -93,8 +89,9 @@ module Deimos
      end

      # @param _record [ActiveRecord::Base]
+     # @param _associations [Hash]
      # @return [Boolean]
-     def should_consume?(_record)
+     def should_consume?(_record, _associations=nil)
        true
      end

@@ -155,8 +152,13 @@ module Deimos
      # @return [void]
      def upsert_records(messages)
        record_list = build_records(messages)
-       record_list.filter!(self.method(:should_consume?).to_proc)
-
+       invalid = filter_records(record_list)
+       if invalid.any?
+         ActiveSupport::Notifications.instrument('batch_consumption.invalid_records', {
+           records: invalid,
+           consumer: self.class
+         })
+       end
        return if record_list.empty?

        key_col_proc = self.method(:key_columns).to_proc
@@ -165,13 +167,31 @@ module Deimos
        updater = MassUpdater.new(@klass,
                                  key_col_proc: key_col_proc,
                                  col_proc: col_proc,
-                                 replace_associations: self.class.config[:replace_associations])
-       updater.mass_update(record_list)
+                                 replace_associations: self.class.replace_associations,
+                                 bulk_import_id_generator: self.class.bulk_import_id_generator)
+       ActiveSupport::Notifications.instrument('batch_consumption.valid_records', {
+         records: updater.mass_update(record_list),
+         consumer: self.class
+       })
+     end
+
+     # @param record_list [BatchRecordList]
+     # @return [Array<BatchRecord>]
+     def filter_records(record_list)
+       record_list.filter!(self.method(:should_consume?).to_proc)
+     end
+
+     # Process messages prior to saving to database
+     # @param _messages [Array<Deimos::Message>]
+     # @return [Void]
+     def pre_process(_messages)
+       nil
      end

      # @param messages [Array<Deimos::Message>]
      # @return [BatchRecordList]
      def build_records(messages)
+       pre_process(messages)
        records = messages.map do |m|
          attrs = if self.method(:record_attributes).parameters.size == 2
                    record_attributes(m.payload, m.key)
@@ -189,7 +209,8 @@ module Deimos

        BatchRecord.new(klass: @klass,
                        attributes: attrs,
-                       bulk_import_column: col)
+                       bulk_import_column: col,
+                       bulk_import_id_generator: self.class.bulk_import_id_generator)
      end
      BatchRecordList.new(records.compact)
    end
@@ -199,9 +220,11 @@ module Deimos
      # deleted records.
      # @return [void]
      def remove_records(messages)
-       clause = deleted_query(messages)
+       Deimos::Utils::DeadlockRetry.wrap(Deimos.config.tracer.active_span.get_tag('topic')) do
+         clause = deleted_query(messages)

-       clause.delete_all
+         clause.delete_all
+       end
      end
    end
  end
@@ -17,16 +17,17 @@ module Deimos
      # @return [String] The column name to use for bulk IDs - defaults to `bulk_import_id`.
      attr_accessor :bulk_import_column

-     delegate :valid?, to: :record
+     delegate :valid?, :errors, :send, :attributes, to: :record

      # @param klass [Class < ActiveRecord::Base]
      # @param attributes [Hash] the full attribute list, including associations.
      # @param bulk_import_column [String]
-     def initialize(klass:, attributes:, bulk_import_column: nil)
+     # @param bulk_import_id_generator [Proc]
+     def initialize(klass:, attributes:, bulk_import_column: nil, bulk_import_id_generator: nil)
        @klass = klass
        if bulk_import_column
          self.bulk_import_column = bulk_import_column
-         self.bulk_import_id = SecureRandom.uuid
+         self.bulk_import_id = bulk_import_id_generator&.call
          attributes[bulk_import_column] = bulk_import_id
        end
        attributes = attributes.with_indifferent_access
@@ -43,7 +44,7 @@ module Deimos
        return if @klass.column_names.include?(self.bulk_import_column.to_s)

        raise "Create bulk_import_id on the #{@klass.table_name} table." \
-         ' Run rails g deimos:bulk_import_id {table} to create the migration.'
+             ' Run rails g deimos:bulk_import_id {table} to create the migration.'
      end

      # @return [Class < ActiveRecord::Base]
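With the generator injected, the bulk import ID no longer has to be a UUID: any zero-argument proc returning a string will do. A hypothetical timestamped generator, as might be passed in as `bulk_import_id_generator`:

```ruby
require 'securerandom'

# Hypothetical generator: a sortable timestamp prefix plus random suffix,
# instead of the default SecureRandom.uuid.
id_generator = proc { "#{Time.now.utc.strftime('%Y%m%d%H%M%S')}-#{SecureRandom.hex(4)}" }

bulk_import_id = id_generator.call
bulk_import_id # e.g. "20240109153012-9f8a2c1d"
```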
@@ -17,10 +17,19 @@ module Deimos
        self.bulk_import_column = records.first&.bulk_import_column&.to_sym
      end

-     # Filter out any invalid records.
+     # Filter and return removed invalid batch records by the specified method
      # @param method [Proc]
+     # @return [Array<BatchRecord>]
      def filter!(method)
-       self.batch_records.delete_if { |record| !method.call(record.record) }
+       self.batch_records, invalid = self.batch_records.partition do |batch_record|
+         case method.parameters.size
+         when 2
+           method.call(batch_record.record, batch_record.associations)
+         else
+           method.call(batch_record.record)
+         end
+       end
+       invalid
      end

      # Get the original ActiveRecord objects.
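`filter!` dispatches on the predicate's parameter count so that both one-argument and two-argument `should_consume?` implementations keep working. The mechanics can be sketched in plain Ruby, with hashes standing in for `BatchRecord` objects:

```ruby
# Sketch of arity-aware filtering: records failing the predicate are
# removed in place and returned.
def filter_batch!(batch_records, predicate)
  kept, invalid = batch_records.partition do |br|
    case predicate.parameters.size
    when 2
      predicate.call(br[:record], br[:associations])
    else
      predicate.call(br[:record])
    end
  end
  batch_records.replace(kept)
  invalid
end

records = [
  { record: 5,  associations: { 'status' => 'ok' } },
  { record: 50, associations: { 'status' => 'bad' } }
]

# A two-argument predicate receives both the record and its associations:
invalid = filter_batch!(records, proc { |rec, assoc| rec <= 10 && assoc['status'] == 'ok' })
records.size  # => 1
invalid.size  # => 1
```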
@@ -19,9 +19,11 @@ module Deimos
      # @param key_col_proc [Proc<Class < ActiveRecord::Base>]
      # @param col_proc [Proc<Class < ActiveRecord::Base>]
      # @param replace_associations [Boolean]
-     def initialize(klass, key_col_proc: nil, col_proc: nil, replace_associations: true)
+     def initialize(klass, key_col_proc: nil, col_proc: nil,
+                    replace_associations: true, bulk_import_id_generator: nil)
        @klass = klass
        @replace_associations = replace_associations
+       @bulk_import_id_generator = bulk_import_id_generator

        @key_cols = {}
        @key_col_proc = key_col_proc
@@ -69,7 +71,7 @@ module Deimos
      def import_associations(record_list)
        record_list.fill_primary_keys!

-       import_id = @replace_associations ? SecureRandom.uuid : nil
+       import_id = @replace_associations ? @bulk_import_id_generator&.call : nil
        record_list.associations.each do |assoc|
          sub_records = record_list.map { |r| r.sub_records(assoc.name, import_id) }.flatten
          next unless sub_records.any?
@@ -82,9 +84,16 @@ module Deimos
      end

      # @param record_list [BatchRecordList]
+     # @return [Array<ActiveRecord::Base>]
      def mass_update(record_list)
-       save_records_to_database(record_list)
-       import_associations(record_list) if record_list.associations.any?
+       # The entire batch should be treated as one transaction so that if
+       # any message fails, the whole thing is rolled back or retried
+       # if there is deadlock
+       Deimos::Utils::DeadlockRetry.wrap(Deimos.config.tracer.active_span.get_tag('topic')) do
+         save_records_to_database(record_list)
+         import_associations(record_list) if record_list.associations.any?
+       end
+       record_list.records
      end

    end
@@ -35,6 +35,16 @@ module Deimos
        config[:bulk_import_id_column]
      end

+     # @return [Proc]
+     def bulk_import_id_generator
+       config[:bulk_import_id_generator]
+     end
+
+     # @return [Boolean]
+     def replace_associations
+       config[:replace_associations]
+     end
+
      # @param val [Boolean] Turn pre-compaction of the batch on or off. If true,
      #   only the last message for each unique key in a batch is processed.
      # @return [void]
@@ -79,6 +79,7 @@ module Deimos # rubocop:disable Metrics/ModuleLength

    # @!visibility private
    # @param kafka_config [FigTree::ConfigStruct]
+   # rubocop:disable Metrics/PerceivedComplexity, Metrics/AbcSize
    def self.configure_producer_or_consumer(kafka_config)
      klass = kafka_config.class_name.constantize
      klass.class_eval do
@@ -90,11 +91,18 @@ module Deimos # rubocop:disable Metrics/ModuleLength
        if kafka_config.respond_to?(:bulk_import_id_column) # consumer
          klass.config.merge!(
            bulk_import_id_column: kafka_config.bulk_import_id_column,
-           replace_associations: kafka_config.replace_associations
+           replace_associations: if kafka_config.replace_associations.nil?
+                                   Deimos.config.consumers.replace_associations
+                                 else
+                                   kafka_config.replace_associations
+                                 end,
+           bulk_import_id_generator: kafka_config.bulk_import_id_generator ||
+                                     Deimos.config.consumers.bulk_import_id_generator
          )
        end
      end
    end
+   # rubocop:enable Metrics/PerceivedComplexity, Metrics/AbcSize

    define_settings do

@@ -242,6 +250,15 @@ module Deimos # rubocop:disable Metrics/ModuleLength
      # Not needed if reraise_errors is set to true.
      # @return [Block]
      setting(:fatal_error, proc { false })
+
+     # The default function to generate a bulk ID for bulk consumers
+     # @return [Block]
+     setting(:bulk_import_id_generator, proc { SecureRandom.uuid })
+
+     # If true, multi-table consumers will blow away associations rather than appending to them.
+     # Applies to all consumers unless specified otherwise
+     # @return [Boolean]
+     setting :replace_associations, true
    end

    setting :producers do
@@ -445,7 +462,13 @@ module Deimos # rubocop:disable Metrics/ModuleLength
      setting :bulk_import_id_column, :bulk_import_id
      # If true, multi-table consumers will blow away associations rather than appending to them.
      # @return [Boolean]
-     setting :replace_associations, true
+     setting :replace_associations, nil
+
+     # The default function to generate a bulk ID for this consumer.
+     # Uses the proc defined in the `consumers` config by default unless
+     # specified for individual consumers.
+     # @return [Block]
+     setting :bulk_import_id_generator, nil

      # These are the phobos "listener" configs. See CONFIGURATION.md for more
      # info.
@@ -64,7 +64,7 @@ module Deimos
        ))
        Deimos.config.metrics&.increment(
          'handler',
-         by: metadata['batch_size'],
+         by: metadata[:batch_size],
          tags: %W(
            status:received
            topic:#{metadata[:topic]}
@@ -115,7 +115,7 @@ module Deimos
        ))
        Deimos.config.metrics&.increment(
          'handler',
-         by: metadata['batch_size'],
+         by: metadata[:batch_size],
          tags: %W(
            status:success
            topic:#{metadata[:topic]}
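The handler-metric fix above boils down to hash key types: the metadata hash uses symbol keys, so the old string lookup silently returned nil and the increment was dropped. A minimal illustration (the hash contents are hypothetical):

```ruby
# metadata is keyed by symbols, so a string lookup silently misses:
metadata = { batch_size: 5, topic: 'my-topic' }

metadata['batch_size']  # => nil (the old, broken lookup)
metadata[:batch_size]   # => 5   (the fixed lookup)
```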
@@ -28,6 +28,7 @@ module Deimos
        deimos_config.kafka.seed_brokers ||= ['test_broker']
        deimos_config.schema.backend = Deimos.schema_backend_class.mock_backend
        deimos_config.producers.backend = :test
+       deimos_config.tracer = Deimos::Tracing::Mock.new
      end
    end

@@ -15,11 +15,7 @@ module Deimos

      # :nodoc:
      def start(span_name, options={})
-       span = if ::Datadog.respond_to?(:tracer)
-                ::Datadog.tracer.trace(span_name)
-              else
-                ::Datadog::Tracing.trace(span_name)
-              end
+       span = tracer.trace(span_name)
        span.service = @service
        span.resource = options[:resource]
        span
@@ -30,9 +26,14 @@ module Deimos
        span.finish
      end

+     # :nodoc:
+     def tracer
+       @tracer ||= ::Datadog.respond_to?(:tracer) ? ::Datadog.tracer : ::Datadog::Tracing
+     end
+
      # :nodoc:
      def active_span
-       ::Datadog.tracer.active_span
+       tracer.active_span
      end

      # :nodoc:
@@ -45,6 +46,11 @@ module Deimos
        (span || active_span).set_tag(tag, value)
      end

+     # :nodoc:
+     def get_tag(tag)
+       active_span.get_tag(tag)
+     end
+
    end
  end
end
@@ -10,6 +10,7 @@ module Deimos
      def initialize(logger=nil)
        @logger = logger || Logger.new(STDOUT)
        @logger.info('MockTracingProvider initialized')
+       @active_span = MockSpan.new
      end

      # @param span_name [String]
@@ -32,12 +33,22 @@ module Deimos

      # :nodoc:
      def active_span
-       nil
+       @active_span ||= MockSpan.new
      end

      # :nodoc:
      def set_tag(tag, value, span=nil)
-       nil
+       if span
+         span.set_tag(tag, value)
+       else
+         active_span.set_tag(tag, value)
+       end
+     end
+
+     # Get a tag from a span with the specified tag.
+     # @param tag [String]
+     def get_tag(tag)
+       @span.get_tag(tag)
      end

@@ -47,5 +58,23 @@ module Deimos
        @logger.info("Mock span '#{name}' set an error: #{exception}")
      end
    end
+
+   # Mock Span class
+   class MockSpan
+     # :nodoc:
+     def initialize
+       @span = {}
+     end
+
+     # :nodoc:
+     def set_tag(tag, value)
+       @span[tag] = value
+     end
+
+     # :nodoc:
+     def get_tag(tag)
+       @span[tag]
+     end
+   end
  end
end
@@ -42,6 +42,12 @@ module Deimos
        raise NotImplementedError
      end

+     # Get a tag from a span with the specified tag.
+     # @param tag [String]
+     def get_tag(tag)
+       raise NotImplementedError
+     end
+
    end
  end
end
@@ -25,13 +25,21 @@ module Deimos
      def instance(payload, schema, namespace='')
        return payload if payload.is_a?(Deimos::SchemaClass::Base)

-       constants = modules_for(namespace) + [schema.underscore.camelize.singularize]
-       klass = constants.join('::').safe_constantize
+       klass = klass(schema, namespace)
        return payload if klass.nil? || payload.nil?

        klass.new(**payload.symbolize_keys)
      end

+     # Determine and return the SchemaClass with the provided schema and namespace
+     # @param schema [String]
+     # @param namespace [String]
+     # @return [Deimos::SchemaClass]
+     def klass(schema, namespace)
+       constants = modules_for(namespace) + [schema.underscore.camelize.singularize]
+       constants.join('::').safe_constantize
+     end
+
      # @param config [Hash] Producer or Consumer config
      # @return [Boolean]
      def use?(config)
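The extracted `klass` helper just builds a constant name from the namespace modules plus the camelized schema name and resolves it, returning nil when no generated class exists. A simplified stand-in without ActiveSupport, where the `Schemas` root module and class names are hypothetical:

```ruby
# Hypothetical generated schema class:
module Schemas
  module Com
    class MySchema; end
  end
end

# Simplified sketch of the lookup, using const_get with a NameError
# rescue instead of ActiveSupport's safe_constantize:
def schema_class(schema, namespace)
  constants = namespace.split('.').map(&:capitalize) + [schema]
  Object.const_get((['Schemas'] + constants).join('::'))
rescue NameError
  nil
end

schema_class('MySchema', 'com')  # => Schemas::Com::MySchema
schema_class('Missing', 'com')   # => nil
```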
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Deimos
-   VERSION = '1.22.5'
+   VERSION = '1.23.0'
  end
data/lib/deimos.rb CHANGED
@@ -57,7 +57,20 @@ module Deimos
    # @param namespace [String]
    # @return [Deimos::SchemaBackends::Base]
    def schema_backend(schema:, namespace:)
-     schema_backend_class.new(schema: schema, namespace: namespace)
+     if Utils::SchemaClass.use?(config.to_h)
+       # Initialize an instance of the provided schema;
+       # in the event the schema class is an override, the inherited
+       # schema and namespace will be applied
+       schema_class = Utils::SchemaClass.klass(schema, namespace)
+       if schema_class.nil?
+         schema_backend_class.new(schema: schema, namespace: namespace)
+       else
+         schema_instance = schema_class.new
+         schema_backend_class.new(schema: schema_instance.schema, namespace: schema_instance.namespace)
+       end
+     else
+       schema_backend_class.new(schema: schema, namespace: namespace)
+     end
    end

    # @param schema [String]
@@ -96,12 +96,17 @@ module ActiveRecordBatchConsumerTest # rubocop:disable Metrics/ModuleLength
      key_config plain: true
      record_class Widget

-     def should_consume?(record)
+     def should_consume?(record, associations)
        if self.should_consume_proc
-         return self.should_consume_proc.call(record)
+         case self.should_consume_proc.parameters.size
+         when 2
+           self.should_consume_proc.call(record, associations)
+         else
+           self.should_consume_proc.call(record)
+         end
+       else
+         true
        end
-
-       true
      end

      def record_attributes(payload, _key)
@@ -269,7 +274,7 @@ module ActiveRecordBatchConsumerTest # rubocop:disable Metrics/ModuleLength

    context 'with invalid models' do
      before(:each) do
-       consumer_class.should_consume_proc = proc { |record| record.some_int <= 10 }
+       consumer_class.should_consume_proc = proc { |record| record.some_int <= 10 }
      end

      it 'should only save valid models' do
@@ -280,5 +285,27 @@ module ActiveRecordBatchConsumerTest # rubocop:disable Metrics/ModuleLength
        expect(Widget.count).to eq(2)
      end
    end
+
+   context 'with invalid associations' do
+
+     before(:each) do
+       consumer_class.should_consume_proc = proc { |record, associations|
+         record.some_int <= 10 && associations['detail']['title'] != 'invalid'
+       }
+     end
+
+     it 'should only save valid associations' do
+       publish_batch([
+         { key: 2,
+           payload: { test_id: 'xyz', some_int: 5, title: 'valid' } },
+         { key: 3,
+           payload: { test_id: 'abc', some_int: 15, title: 'valid' } },
+         { key: 4,
+           payload: { test_id: 'abc', some_int: 9, title: 'invalid' } }
+       ])
+       expect(Widget.count).to eq(2)
+       expect(Widget.second.some_int).to eq(5)
+     end
+   end
  end
end