karafka 2.5.0.beta2 → 2.5.0.rc1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: bbcaf396d7f2eff35ec3b59c96ffe7dd880f1f07294aa28bb623e95ce328e3e9
-  data.tar.gz: 410b037a79abbbc82fbd4b540dcb26378dca396e408f68280325e354ae2e275b
+  metadata.gz: 2a66089d998c0dabb1070e4e8f1895a068e8f2aa8e752fb38ef9da1633b9704d
+  data.tar.gz: 188ea36894e0a32168303654510ef2e072d19f8ad39e5f0155547b6c96dbfdb2
 SHA512:
-  metadata.gz: dcd79f3bda653b74d95938d440ccfe93d8d2cdc214e02de31d9ad56f738b554699bd1991d83dd1c9c7090e1c3a63669f8837677fef77efc22df4804bc16f25a0
-  data.tar.gz: dbca53f433a0e13ec7582f9ee1c6da15bf38678a1019a3a36e361e7bbbae2c5e5777bf44e61330692b5f38c4fa011dd9e3950027925767908f617df02e1e8219
+  metadata.gz: d3ee8f86dd3b26dea69f9e03972ac3aced8b76d8156a9c359cb2b50114f3156306281a7f147935fa4d566e2d398f0a343205255a96f9560528b9dc2d21ca166c
+  data.tar.gz: 843e553470b78b107080df06ae7f7bd716d5d08363fb0d381709522aed397bb5da6d46c0d1aee06a72c6ba063b112118a669ff3f079fef4c53f672cf08ed1ee0
@@ -1,4 +1,4 @@
-name: ci
+name: CI
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
@@ -31,7 +31,7 @@ jobs:
           fetch-depth: 0
 
       - name: Set up Ruby
-        uses: ruby/setup-ruby@bb0f760b6c925183520ee0bcc9c4a432a7c8c3c6 # v1.241.0
+        uses: ruby/setup-ruby@13e7a03dc3ac6c3798f4570bfead2aed4d96abfb # v1.244.0
         with:
           ruby-version: 3.4
           bundler-cache: true
@@ -118,7 +118,7 @@ jobs:
         run: rm -f Gemfile.lock
 
       - name: Set up Ruby
-        uses: ruby/setup-ruby@bb0f760b6c925183520ee0bcc9c4a432a7c8c3c6 # v1.241.0
+        uses: ruby/setup-ruby@13e7a03dc3ac6c3798f4570bfead2aed4d96abfb # v1.244.0
         with:
           ruby-version: ${{matrix.ruby}}
           bundler-cache: true
@@ -164,7 +164,7 @@ jobs:
           docker compose up -d || (sleep 5 && docker compose up -d)
 
       - name: Set up Ruby
-        uses: ruby/setup-ruby@bb0f760b6c925183520ee0bcc9c4a432a7c8c3c6 # v1.241.0
+        uses: ruby/setup-ruby@13e7a03dc3ac6c3798f4570bfead2aed4d96abfb # v1.244.0
         with:
           # Do not use cache here as we run bundle install also later in some of the integration
           # tests and we need to be able to run it without cache
@@ -228,7 +228,7 @@ jobs:
           docker compose up -d || (sleep 5 && docker compose up -d)
 
       - name: Set up Ruby
-        uses: ruby/setup-ruby@bb0f760b6c925183520ee0bcc9c4a432a7c8c3c6 # v1.241.0
+        uses: ruby/setup-ruby@13e7a03dc3ac6c3798f4570bfead2aed4d96abfb # v1.244.0
         with:
           ruby-version: ${{matrix.ruby}}
           bundler: 'latest'
@@ -24,7 +24,7 @@ jobs:
           fetch-depth: 0
 
       - name: Set up Ruby
-        uses: ruby/setup-ruby@bb0f760b6c925183520ee0bcc9c4a432a7c8c3c6 # v1.241.0
+        uses: ruby/setup-ruby@13e7a03dc3ac6c3798f4570bfead2aed4d96abfb # v1.244.0
         with:
           bundler-cache: false
 
@@ -32,5 +32,4 @@ jobs:
         run: |
           bundle install --jobs 4 --retry 3
 
-      # Release
-      - uses: rubygems/release-gem@9e85cb11501bebc2ae661c1500176316d3987059 # v1
+      - uses: rubygems/release-gem@a25424ba2ba8b387abc8ef40807c2c85b96cbe32 # v1.1.1
data/CHANGELOG.md CHANGED
@@ -1,6 +1,8 @@
 # Karafka Framework Changelog
 
 ## 2.5.0 (Unreleased)
+- **[Breaking]** Change how consistency of DLQ dispatches works in Pro (`partition_key` vs. direct partition id mapping).
+- **[Breaking]** Remove the headers `source_key` from the Pro DLQ dispatched messages as the original key is now fully preserved.
 - **[Breaking]** Use DLQ and Piping prefix `source_` instead of `original_` to align with naming convention of Kafka Streams and Apache Flink for future usage.
 - **[Breaking]** Rename scheduled jobs topics names in their config (Pro).
 - **[Feature]** Parallel Segments for concurrent processing of the same partition with more than partition count of processes (Pro).
@@ -35,12 +37,17 @@
 - [Enhancement] Execute the help CLI command when no command provided (similar to Rails) to improve DX.
 - [Enhancement] Remove backtrace from the CLI error for incorrect commands (similar to Rails) to improve DX.
 - [Enhancement] Provide `karafka topics help` sub-help due to nesting of Declarative Topics actions.
+- [Enhancement] Use independent keys for different states of reporting in scheduled messages.
+- [Enhancement] Enrich scheduled messages state reporter with debug data.
+- [Enhancement] Introduce a new state called `stopped` to the scheduled messages.
+- [Enhancement] Do not overwrite the `key` in the Pro DLQ dispatched messages for routing reasons.
+- [Enhancement] Introduce `errors_tracker.trace_id` for distributed error details correlation with the Web UI.
 - [Refactor] Introduce a `bin/verify_kafka_warnings` script to clean Kafka from temporary test-suite topics.
 - [Refactor] Introduce a `bin/verify_topics_naming` script to ensure proper test topics naming convention.
 - [Refactor] Make sure all temporary topics have a `it-` prefix in their name.
 - [Refactor] Improve CI specs parallelization.
 - [Maintenance] Lower the `Karafka::Admin` `poll_timeout` to 50 ms to improve responsiveness of admin operations.
-- [Maintenance] Require `karafka-rdkafka` `>=` `0.19.2` due to usage of `#rd_kafka_global_init`, KIP-82 and the new producer caching engine.
+- [Maintenance] Require `karafka-rdkafka` `>=` `0.19.5` due to usage of `#rd_kafka_global_init`, KIP-82, new producer caching engine and improvements to the `partition_key` assignments.
 - [Maintenance] Add Deimos routing patch into integration suite not to break it in the future.
 - [Maintenance] Remove Rails `7.0` specs due to upcoming EOL.
 - [Fix] Fix Recurring Tasks and Scheduled Messages not working with Swarm (using closed producer).
@@ -58,6 +65,7 @@
 - [Fix] `karafka` cannot be required without Bundler.
 - [Fix] Scheduled Messages re-seek moves to `latest` on inheritance of initial offset when `0` offset is compacted.
 - [Fix] Seek to `:latest` without `topic_partition_position` (-1) will not seek at all.
+- [Fix] Extremely high turn over of scheduled messages can cause them not to reach EOF/Loaded state.
 - [Change] Move to trusted-publishers and remove signing since no longer needed.
 
 ## 2.4.18 (2025-04-09)
data/Gemfile.lock CHANGED
@@ -1,10 +1,10 @@
 PATH
   remote: .
   specs:
-    karafka (2.5.0.beta2)
+    karafka (2.5.0.rc1)
       base64 (~> 0.2)
       karafka-core (>= 2.5.0, < 2.6.0)
-      karafka-rdkafka (>= 0.19.2)
+      karafka-rdkafka (>= 0.19.5)
       waterdrop (>= 2.8.3, < 3.0.0)
       zeitwerk (~> 2.3)
 
@@ -28,7 +28,7 @@ GEM
       tzinfo (~> 2.0, >= 2.0.5)
       uri (>= 0.13.1)
     base64 (0.2.0)
-    benchmark (0.4.0)
+    benchmark (0.4.1)
     bigdecimal (3.1.9)
     byebug (12.0.0)
     concurrent-ruby (1.3.5)
@@ -39,7 +39,7 @@ GEM
     erubi (1.13.1)
     et-orbi (1.2.11)
       tzinfo
-    factory_bot (6.5.1)
+    factory_bot (6.5.2)
       activesupport (>= 6.1.0)
     ffi (1.17.2)
     ffi (1.17.2-aarch64-linux-gnu)
@@ -62,11 +62,11 @@ GEM
     karafka-core (2.5.1)
       karafka-rdkafka (>= 0.19.2, < 0.21.0)
       logger (>= 1.6.0)
-    karafka-rdkafka (0.19.4)
+    karafka-rdkafka (0.19.5)
       ffi (~> 1.15)
       mini_portile2 (~> 2.6)
       rake (> 12)
-    karafka-testing (2.5.0)
+    karafka-testing (2.5.1)
       karafka (>= 2.5.0.beta1, < 2.6.0)
       waterdrop (>= 2.8.0)
     karafka-web (0.11.0.beta3)
@@ -81,22 +81,22 @@ GEM
     ostruct (0.6.1)
     raabro (1.4.0)
     rack (3.1.15)
-    rake (13.2.1)
+    rake (13.3.0)
     roda (3.92.0)
       rack
-    rspec (3.13.0)
+    rspec (3.13.1)
       rspec-core (~> 3.13.0)
       rspec-expectations (~> 3.13.0)
       rspec-mocks (~> 3.13.0)
-    rspec-core (3.13.3)
+    rspec-core (3.13.4)
       rspec-support (~> 3.13.0)
-    rspec-expectations (3.13.4)
+    rspec-expectations (3.13.5)
       diff-lcs (>= 1.2.0, < 2.0)
       rspec-support (~> 3.13.0)
-    rspec-mocks (3.13.4)
+    rspec-mocks (3.13.5)
       diff-lcs (>= 1.2.0, < 2.0)
       rspec-support (~> 3.13.0)
-    rspec-support (3.13.3)
+    rspec-support (3.13.4)
     securerandom (0.4.1)
     simplecov (0.22.0)
       docile (~> 1.1)
data/karafka.gemspec CHANGED
@@ -23,7 +23,7 @@ Gem::Specification.new do |spec|
 
   spec.add_dependency 'base64', '~> 0.2'
   spec.add_dependency 'karafka-core', '>= 2.5.0', '< 2.6.0'
-  spec.add_dependency 'karafka-rdkafka', '>= 0.19.2'
+  spec.add_dependency 'karafka-rdkafka', '>= 0.19.5'
   spec.add_dependency 'waterdrop', '>= 2.8.3', '< 3.0.0'
   spec.add_dependency 'zeitwerk', '~> 2.3'
 
@@ -427,6 +427,15 @@ module Karafka
       @wrapped_kafka.committed(tpl)
     end
 
+    # Reads watermark offsets for given topic
+    #
+    # @param topic [String] topic name
+    # @param partition [Integer] partition number
+    # @return [Array<Integer, Integer>] watermark offsets (low, high)
+    def query_watermark_offsets(topic, partition)
+      @wrapped_kafka.query_watermark_offsets(topic, partition)
+    end
+
     private
 
     # When we cannot store an offset, it means we no longer own the partition
@@ -44,7 +44,7 @@ module Karafka
     # clusters can handle our requests.
     #
     # @param topic [String] topic name
-    # @param partition [Partition]
+    # @param partition [Integer] partition number
     # @return [Array<Integer, Integer>] watermark offsets
     def query_watermark_offsets(topic, partition)
       l_config = @config.query_watermark_offsets
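For context on the two watermark hunks above: `query_watermark_offsets` returns the low and high watermarks of a partition, where the high watermark is the offset the next produced message will receive. A minimal sketch of what that tuple enables; the values and the `client` receiver are illustrative, not from this diff:

```ruby
# Made-up values; `client` stands for any object exposing the delegated
# method above (e.g. the connection layer wrapping rdkafka)
low, high = 0, 1024 # as client.query_watermark_offsets('events', 0) might return

# `high` is the offset the next produced message will get, so the number of
# currently readable messages is the distance between the two marks
backlog = high - low # => 1024

# Consumer lag is the distance from a committed position to the high mark
committed = 1_000
lag = high - committed # => 24
```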
@@ -30,6 +30,14 @@ module Karafka
       @block.call
     end
 
+    # Runs the requested code bypassing any time frequencies
+    # Useful when we have certain actions that usually need to run periodically but in some
+    # cases need to run asap
+    def call!
+      @last_called_at = monotonic_now
+      @block.call
+    end
+
     # Resets the runner, so next `#call` will run the underlying code
     def reset
       @last_called_at = monotonic_now - @interval
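The contract that `#call!` extends is easy to state in a few lines. The sketch below is a simplified stand-in, not the gem's implementation, assuming a monotonic millisecond clock:

```ruby
# Simplified stand-in (not the gem's code) for the IntervalRunner contract:
# `#call` runs the block at most once per interval, `#call!` always runs it
# and restarts the interval window
class SketchIntervalRunner
  def initialize(interval_ms, &block)
    @interval = interval_ms
    @block = block
    @last_called_at = monotonic_now - @interval
  end

  # Throttled entry point: a no-op until the interval has elapsed
  def call
    return if monotonic_now - @last_called_at < @interval

    @last_called_at = monotonic_now
    @block.call
  end

  # Forced entry point: used for must-not-wait reporting such as the final
  # `stopped` state flush on shutdown appearing later in this diff
  def call!
    @last_called_at = monotonic_now
    @block.call
  end

  private

  def monotonic_now
    Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
  end
end

reporter = SketchIntervalRunner.new(5_000) { puts 'state published' }
reporter.call  # runs (the window starts expired)
reporter.call  # skipped, 5 seconds have not elapsed
reporter.call! # runs regardless
```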
@@ -22,6 +22,9 @@ module Karafka
       # @return [Hash]
       attr_reader :counts
 
+      # @return [String]
+      attr_reader :trace_id
+
       # Max errors we keep in memory.
       # We do not want to keep more because for DLQ-less this would cause memory-leaks.
       # We do however count per class for granular error counting
@@ -41,6 +44,7 @@ module Karafka
         @topic = topic
         @partition = partition
         @limit = limit
+        @trace_id = SecureRandom.uuid
       end
 
       # Clears all the errors
@@ -54,6 +58,7 @@ module Karafka
         @errors.shift if @errors.size >= @limit
         @errors << error
         @counts[error.class] += 1
+        @trace_id = SecureRandom.uuid
       end
 
       # @return [Boolean] is the error tracker empty
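Taken together, the three hunks above mean every recorded error rotates the tracker's `trace_id`, so the exposed id always correlates with the latest failure. A reduced sketch of that behavior (assumed shape, not the full Pro class):

```ruby
require 'securerandom'

# Reduced sketch: a UUID at construction and a fresh one per recorded error
class SketchErrorsTracker
  attr_reader :trace_id

  def initialize
    @errors = []
    @trace_id = SecureRandom.uuid
  end

  def <<(error)
    @errors << error
    @trace_id = SecureRandom.uuid
  end
end

tracker = SketchErrorsTracker.new
tracker << StandardError.new('boom')
# This value is what ends up in the 'source_trace_id' DLQ header shown below
tracker.trace_id
```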
@@ -149,15 +149,16 @@ module Karafka
 
           dlq_message = {
             topic: @_dispatch_to_dlq_topic || topic.dead_letter_queue.topic,
-            key: source_partition,
+            key: skippable_message.raw_key,
+            partition_key: source_partition,
             payload: skippable_message.raw_payload,
             headers: skippable_message.raw_headers.merge(
               'source_topic' => topic.name,
               'source_partition' => source_partition,
               'source_offset' => skippable_message.offset.to_s,
               'source_consumer_group' => topic.consumer_group.id,
-              'source_key' => skippable_message.raw_key.to_s,
-              'source_attempts' => attempt.to_s
+              'source_attempts' => attempt.to_s,
+              'source_trace_id' => errors_tracker.trace_id
             )
           }
 
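To illustrate the reshaped dispatch above: the original message key now survives into the DLQ record, while `partition_key` keeps records from one source partition co-located, since the producer hashes it to select the target partition. An illustrative record with made-up values:

```ruby
# Illustrative only; every value is made up. This is the shape of the DLQ
# record built by the hunk above for a failed message with key 'user-42'
# coming from partition 3 of 'orders'.
dlq_message = {
  topic: 'dead_letters',
  # The original key survives, so key-based consumers of the DLQ keep working
  key: 'user-42',
  # Records from one source partition still land together: the producer
  # hashes partition_key to pick the target partition instead of the key
  # being overwritten with the partition id as before
  partition_key: '3',
  payload: '{"order":"..."}',
  headers: {
    'source_topic' => 'orders',
    'source_partition' => '3',
    'source_offset' => '1024',
    'source_consumer_group' => 'app.consumers',
    'source_attempts' => '2',
    # Correlates this dispatch with errors_tracker.trace_id (see above)
    'source_trace_id' => '6f1c2c9e-...'
  }
}
```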
@@ -12,13 +12,23 @@ module Karafka
         dispatcher_class: %i[scheduled_messages dispatcher_class]
       )
 
+      # In case there is an extremely high turnover of messages, EOF may never kick in,
+      # effectively not changing status from loading to loaded. We use the time consumer instance
+      # was created + a buffer time to detect such a case (loading + messages from the time it
+      # was already running) to switch the state despite no EOF
+      # This is in seconds
+      GRACE_PERIOD = 15
+
+      private_constant :GRACE_PERIOD
+
       # Prepares the initial state of all stateful components
       def initialized
         clear!
         # Max epoch is always moving forward with the time. Never backwards, hence we do not
         # reset it at all.
         @max_epoch = MaxEpoch.new
-        @state = State.new(nil)
+        @state = State.new
+        @reloads = 0
       end
 
       # Processes messages and runs dispatch (via tick) if needed
@@ -27,11 +37,25 @@
 
         messages.each do |message|
           SchemaValidator.call(message)
+
+          # We always track offsets of messages, even if they would be later on skipped or
+          # ignored for any reason. That way we have debug info that is useful once in a while.
+          @tracker.offsets(message)
+
           process_message(message)
         end
 
         @states_reporter.call
 
+        recent_timestamp = messages.last.timestamp.to_i
+        post_started_timestamp = @tracker.started_at + GRACE_PERIOD
+
+        # If we started getting messages that are beyond the current time, it means we have
+        # loaded enough to start scheduling. The upcoming messages are from the future looking
+        # from perspective of the current consumer start. We add a bit of grace period not to
+        # deal with edge cases
+        loaded! if @state.loading? && recent_timestamp > post_started_timestamp
+
         eofed if eofed?
 
         # Unless given day data is fully loaded we should not dispatch any notifications nor
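The grace-period check above reads naturally with concrete numbers; the values below are made up:

```ruby
# Made-up values for the check above
started_at   = 1_700_000_000 # @tracker.started_at, consumer boot time (epoch seconds)
grace_period = 15            # GRACE_PERIOD

# A message stamped past boot + grace period cannot belong to the historical
# backlog being replayed from the beginning of the partition; it is live
# traffic, so the backlog must already be loaded even if EOF never arrived
recent_timestamp = 1_700_000_020

recent_timestamp > started_at + grace_period # => true, so loaded! fires
```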
@@ -55,11 +79,7 @@ module Karafka
         return if reload!
 
         # If end of the partition is reached, it always means all data is loaded
-        @state.loaded!
-
-        tags.add(:state, @state.to_s)
-
-        @states_reporter.call
+        loaded!
       end
 
       # Performs periodic operations when no new data is provided to the topic partition
@@ -90,6 +110,12 @@ module Karafka
         @states_reporter.call
       end
 
+      # Move the state to shutdown and publish immediately
+      def shutdown
+        @state.stopped!
+        @states_reporter.call!
+      end
+
       private
 
       # Takes each message and adds it to the daily accumulator if needed or performs other
@@ -104,7 +130,7 @@ module Karafka
         time = message.headers['schedule_target_epoch']
 
         # Do not track historical below today as those will be reflected in the daily buffer
-        @tracker.track(message) if time >= @today.starts_at
+        @tracker.future(message) if time >= @today.starts_at
 
         if time > @today.ends_at || time < @max_epoch.to_i
           # Clean the message immediately when not needed (won't be scheduled) to preserve
@@ -132,6 +158,7 @@
         # If this is a new assignment we always need to seek from beginning to load the data
         if @state.fresh?
           clear!
+          @reloads += 1
           seek(:earliest)
 
           return true
@@ -143,6 +170,7 @@
         # If day has ended we reload and start new day with new schedules
         if @today.ended?
           clear!
+          @reloads += 1
           seek(:earliest)
 
           return true
@@ -151,6 +179,13 @@
         false
       end
 
+      # Moves the state to loaded and publishes the state update
+      def loaded!
+        @state.loaded!
+        tags.add(:state, @state.to_s)
+        @states_reporter.call!
+      end
+
       # Resets all buffers and states so we can start a new day with a clean slate
       # We can fully recreate the dispatcher because any undispatched messages will be dispatched
       # with the new day dispatcher after it is reloaded.
@@ -158,11 +193,13 @@
         @daily_buffer = DailyBuffer.new
         @today = Day.new
         @tracker = Tracker.new
-        @state = State.new(false)
+        @state = State.new
+        @state.loading!
         @dispatcher = dispatcher_class.new(topic.name, partition)
        @states_reporter = Helpers::IntervalRunner.new do
          @tracker.today = @daily_buffer.size
          @tracker.state = @state.to_s
+          @tracker.reloads = @reloads
 
          @dispatcher.state(@tracker)
        end
@@ -70,7 +70,8 @@ module Karafka
         config.producer.produce_async(
           topic: "#{@topic}#{config.states_postfix}",
           payload: @serializer.state(tracker),
-          key: 'state',
+          # We use the state as a key, so we always have one state transition data available
+          key: "#{tracker.state}_state",
           partition: @partition,
           headers: { 'zlib' => 'true' }
         )
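With the per-state keys above, the latest report for each state remains readable on the states topic, assuming it is compacted as the key-based design suggests, since compaction keeps the newest record per key. Previously a single shared `'state'` key meant every transition overwrote the last one. Illustrative key set for one partition:

```ruby
# Illustrative: the report keys now produced, given the four states that the
# State class introduces later in this diff
%w[fresh loading loaded stopped].map { |state| "#{state}_state" }
# => ["fresh_state", "loading_state", "loaded_state", "stopped_state"]
```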
@@ -16,10 +16,8 @@ module Karafka
       def state(tracker)
         data = {
           schema_version: ScheduledMessages::STATES_SCHEMA_VERSION,
-          dispatched_at: float_now,
-          state: tracker.state,
-          daily: tracker.daily
-        }
+          dispatched_at: float_now
+        }.merge(tracker.to_h)
 
         compress(
           serialize(data)
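After this change the serialized payload is the static envelope merged with `Tracker#to_h`, which appears further down in this diff. An illustrative shape with made-up values:

```ruby
# Illustrative only: envelope + Tracker#to_h
state_payload = {
  schema_version: '1.0.0', # assumed value of STATES_SCHEMA_VERSION
  dispatched_at: 1_700_000_123.456,
  # Everything below comes from Tracker#to_h
  state: 'loaded',
  offsets: { low: 0, high: 1024 },
  daily: { '2025-06-01' => 10 },
  started_at: 1_700_000_000,
  reloads: 1
}
```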
@@ -15,38 +15,35 @@ module Karafka
       #   - loaded - state in which we finished loading all the schedules and we can dispatch
       #     messages when the time comes and we can process real-time incoming schedules and
       #     changes to schedules as they appear in the stream.
+      #   - shutdown - the states are no longer available as the consumer has shut down
       class State
-        # @param loaded [nil, false, true] is the state loaded or not yet. `nil` indicates, it is
-        #   a fresh, pre-seek state.
-        def initialize(loaded = nil)
-          @loaded = loaded
-        end
+        # Available states scheduling of messages may be in
+        STATES = %w[
+          fresh
+          loading
+          loaded
+          stopped
+        ].freeze
 
-        # @return [Boolean] are we in a fresh, pre-bootstrap state
-        def fresh?
-          @loaded.nil?
-        end
+        private_constant :STATES
 
-        # Marks the current state as fully loaded
-        def loaded!
-          @loaded = true
+        def initialize
+          @state = 'fresh'
         end
 
-        # @return [Boolean] are we in a loaded state
-        def loaded?
-          @loaded == true
+        STATES.each do |state|
+          define_method :"#{state}!" do
+            @state = state
+          end
+
+          define_method :"#{state}?" do
+            @state == state
+          end
         end
 
         # @return [String] current state string representation
         def to_s
-          case @loaded
-          when nil
-            'fresh'
-          when false
-            'loading'
-          when true
-            'loaded'
-          end
+          @state
         end
       end
     end
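The rewritten `State` above generates a bang mutator and a predicate for every entry of `STATES` via `define_method`. A usage sketch:

```ruby
state = State.new # the class from the hunk above

state.fresh?   # => true ('fresh' is the initial value)
state.loading! # generated mutator
state.loading? # => true
state.to_s     # => "loading"
state.stopped! # the new state used on consumer shutdown
```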
@@ -10,25 +10,40 @@ module Karafka
       #
       # It provides accurate today dispatch taken from daily buffer and estimates for future days
       class Tracker
-        # @return [Hash<String, Integer>]
-        attr_reader :daily
-
         # @return [String] current state
         attr_accessor :state
 
+        attr_writer :reloads
+
+        # @return [Integer] time epoch when this tracker was started
+        attr_reader :started_at
+
         def initialize
           @daily = Hash.new { |h, k| h[k] = 0 }
-          @created_at = Time.now.to_i
+          @started_at = Time.now.to_i
+          @offsets = { low: -1, high: -1 }
+          @state = 'fresh'
+          @reloads = 0
         end
 
-        # Accurate (because coming from daily buffer) number of things to schedule
+        # Tracks offsets of visited messages
+        #
+        # @param message [Karafka::Messages::Message]
+        def offsets(message)
+          message_offset = message.offset
+
+          @offsets[:low] = message_offset if @offsets[:low].negative?
+          @offsets[:high] = message.offset
+        end
+
+        # Accurate (because coming from daily buffer) number of things to schedule daily
         #
         # @param sum [Integer]
         def today=(sum)
-          @daily[epoch_to_date(@created_at)] = sum
+          @daily[epoch_to_date(@started_at)] = sum
         end
 
-        # Tracks message dispatch
+        # Tracks future message dispatch
         #
         # It is only relevant for future days as for today we use accurate metrics from the daily
         # buffer
@@ -37,12 +52,23 @@
         # tombstone message. Tombstone messages cancellations are not tracked because it would
         # drastically increase complexity. For given day we use the accurate counter and for
         # future days we use estimates.
-        def track(message)
+        def future(message)
           epoch = message.headers['schedule_target_epoch']
 
           @daily[epoch_to_date(epoch)] += 1
         end
 
+        # @return [Hash] hash with details that we want to expose
+        def to_h
+          {
+            state: @state,
+            offsets: @offsets,
+            daily: @daily,
+            started_at: @started_at,
+            reloads: @reloads
+          }.freeze
+        end
+
         private
 
         # @param epoch [Integer] epoch time
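The `#offsets` method above latches `low` onto the first offset seen and lets `high` follow the newest, yielding the consumed range that `#to_h` exposes for debugging. A self-contained walk-through of that update rule, with a stand-in message struct:

```ruby
# Stand-in for Karafka::Messages::Message, which responds to #offset
Msg = Struct.new(:offset)

offsets = { low: -1, high: -1 }

[100, 101, 150].map { |o| Msg.new(o) }.each do |message|
  # The first message observed defines the low end of the visited range
  offsets[:low] = message.offset if offsets[:low].negative?
  # Every message moves the high end forward
  offsets[:high] = message.offset
end

offsets # => { low: 100, high: 150 }
```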
@@ -3,5 +3,5 @@
 # Main module namespace
 module Karafka
   # Current Karafka version
-  VERSION = '2.5.0.beta2'
+  VERSION = '2.5.0.rc1'
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: karafka
 version: !ruby/object:Gem::Version
-  version: 2.5.0.beta2
+  version: 2.5.0.rc1
 platform: ruby
 authors:
 - Maciej Mensfeld
@@ -49,14 +49,14 @@ dependencies:
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 0.19.2
+      version: 0.19.5
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: 0.19.2
+        version: 0.19.5
 - !ruby/object:Gem::Dependency
   name: waterdrop
   requirement: !ruby/object:Gem::Requirement
@@ -146,10 +146,11 @@ files:
 - config/locales/errors.yml
 - config/locales/pro_errors.yml
 - docker-compose.yml
-- examples/payloads/json/enrollment_event.json
-- examples/payloads/json/ingestion_event.json
-- examples/payloads/json/transaction_event.json
-- examples/payloads/json/user_event.json
+- examples/payloads/avro/.gitkeep
+- examples/payloads/json/sample_set_01/enrollment_event.json
+- examples/payloads/json/sample_set_01/ingestion_event.json
+- examples/payloads/json/sample_set_01/transaction_event.json
+- examples/payloads/json/sample_set_01/user_event.json
 - karafka.gemspec
 - lib/active_job/karafka.rb
 - lib/active_job/queue_adapters/karafka_adapter.rb