karafka-web 0.6.0 → 0.6.2

Files changed (64)
  1. checksums.yaml +4 -4
  2. checksums.yaml.gz.sig +0 -0
  3. data/CHANGELOG.md +22 -1
  4. data/Gemfile.lock +1 -1
  5. data/lib/karafka/web/config.rb +2 -0
  6. data/lib/karafka/web/tracking/consumers/contracts/report.rb +7 -3
  7. data/lib/karafka/web/tracking/consumers/reporter.rb +5 -3
  8. data/lib/karafka/web/tracking/consumers/sampler.rb +2 -1
  9. data/lib/karafka/web/tracking/sampler.rb +5 -0
  10. data/lib/karafka/web/ui/base.rb +6 -2
  11. data/lib/karafka/web/ui/controllers/base.rb +17 -0
  12. data/lib/karafka/web/ui/controllers/cluster.rb +5 -2
  13. data/lib/karafka/web/ui/controllers/consumers.rb +3 -1
  14. data/lib/karafka/web/ui/controllers/errors.rb +19 -6
  15. data/lib/karafka/web/ui/controllers/jobs.rb +3 -1
  16. data/lib/karafka/web/ui/controllers/requests/params.rb +10 -0
  17. data/lib/karafka/web/ui/lib/paginations/base.rb +61 -0
  18. data/lib/karafka/web/ui/lib/paginations/offset_based.rb +96 -0
  19. data/lib/karafka/web/ui/lib/paginations/page_based.rb +70 -0
  20. data/lib/karafka/web/ui/lib/paginations/paginators/arrays.rb +33 -0
  21. data/lib/karafka/web/ui/lib/paginations/paginators/base.rb +23 -0
  22. data/lib/karafka/web/ui/lib/paginations/paginators/partitions.rb +52 -0
  23. data/lib/karafka/web/ui/lib/paginations/paginators/sets.rb +85 -0
  24. data/lib/karafka/web/ui/lib/ttl_cache.rb +74 -0
  25. data/lib/karafka/web/ui/models/cluster_info.rb +59 -0
  26. data/lib/karafka/web/ui/models/message.rb +114 -38
  27. data/lib/karafka/web/ui/models/status.rb +34 -8
  28. data/lib/karafka/web/ui/pro/app.rb +11 -3
  29. data/lib/karafka/web/ui/pro/controllers/consumers.rb +3 -1
  30. data/lib/karafka/web/ui/pro/controllers/dlq.rb +1 -2
  31. data/lib/karafka/web/ui/pro/controllers/errors.rb +43 -10
  32. data/lib/karafka/web/ui/pro/controllers/explorer.rb +52 -7
  33. data/lib/karafka/web/ui/pro/views/consumers/consumer/_metrics.erb +6 -1
  34. data/lib/karafka/web/ui/pro/views/errors/_breadcrumbs.erb +8 -6
  35. data/lib/karafka/web/ui/pro/views/errors/_error.erb +1 -1
  36. data/lib/karafka/web/ui/pro/views/errors/_partition_option.erb +1 -1
  37. data/lib/karafka/web/ui/pro/views/errors/_table.erb +21 -0
  38. data/lib/karafka/web/ui/pro/views/errors/_title_with_select.erb +31 -0
  39. data/lib/karafka/web/ui/pro/views/errors/index.erb +9 -56
  40. data/lib/karafka/web/ui/pro/views/errors/partition.erb +17 -0
  41. data/lib/karafka/web/ui/pro/views/explorer/_breadcrumbs.erb +1 -1
  42. data/lib/karafka/web/ui/pro/views/explorer/_message.erb +8 -2
  43. data/lib/karafka/web/ui/pro/views/explorer/_partition_option.erb +1 -1
  44. data/lib/karafka/web/ui/pro/views/explorer/_topic.erb +1 -1
  45. data/lib/karafka/web/ui/pro/views/explorer/partition/_messages.erb +1 -0
  46. data/lib/karafka/web/ui/pro/views/explorer/partition.erb +1 -1
  47. data/lib/karafka/web/ui/pro/views/explorer/topic/_empty.erb +3 -0
  48. data/lib/karafka/web/ui/pro/views/explorer/topic/_limited.erb +4 -0
  49. data/lib/karafka/web/ui/pro/views/explorer/topic/_partitions.erb +11 -0
  50. data/lib/karafka/web/ui/pro/views/explorer/topic.erb +49 -0
  51. data/lib/karafka/web/ui/pro/views/shared/_navigation.erb +1 -1
  52. data/lib/karafka/web/ui/views/cluster/_partition.erb +1 -1
  53. data/lib/karafka/web/ui/views/errors/_error.erb +1 -1
  54. data/lib/karafka/web/ui/views/shared/_pagination.erb +16 -12
  55. data/lib/karafka/web/ui/views/status/failures/_initial_state.erb +1 -10
  56. data/lib/karafka/web/ui/views/status/info/_components.erb +6 -1
  57. data/lib/karafka/web/ui/views/status/show.erb +6 -1
  58. data/lib/karafka/web/ui/views/status/successes/_connection.erb +1 -0
  59. data/lib/karafka/web/ui/views/status/warnings/_connection.erb +11 -0
  60. data/lib/karafka/web/version.rb +1 -1
  61. data.tar.gz.sig +0 -0
  62. metadata +20 -3
  63. metadata.gz.sig +0 -0
  64. data/lib/karafka/web/ui/lib/paginate_array.rb +0 -38
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 1f2269efe1b2e14f38c5265d3a8ffd1e7c1bfb1775b11382b27d2cb7a119de03
- data.tar.gz: e2656bacf8540ea3854eb0ace8a4e5bf83d3aa7c9e5a7a06d1c36bea50083790
+ metadata.gz: d52e64643f448374a2f4efcb5e27089d5f36c8b8cde0eac5e2434c59b07d371c
+ data.tar.gz: '068a2ee3c3d82eeccdde1e001e6788c71980769c58013dd5290ad47002eac47e'
  SHA512:
- metadata.gz: 665fdedafab36bb818a64c6aca5ea17355baca597878c0d5819c56c266228d1c8c14135c2dd27a8fb21d81c08d6e7958e9b89c8b21e2b8eac03a93519d8c0b77
- data.tar.gz: 6845812a25677375a2d6931785a0ceb8e1d5ac1a454245d924e271aa8993af733b67697aecc055b65feb799c12a337c1e4969024b903b2130d2f5903c6edef3b
+ metadata.gz: 0ef49085501fafc176d09c6813d3bafc7cb5d56231e2b655462ed12da02effe623a46c1934a8f17e9913dccf8da6405877ddede9a0f519b350613d0fbf4a66a1
+ data.tar.gz: 8251fe9ac27bab1b0992cb8f9baae68fe15100b48814e245589c6a38cc608ee5fa2d65d62faff0fdc8712dcff7685d82a0e87677d56bcc73fd35b07766f02af8
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,6 +1,27 @@
  # Karafka Web changelog
 
- ## 0.6.0 (Unreleased)
+ ## 0.7.0 (Unreleased)
+ - **[Feature]** Introduce per-topic data exploration in the Explorer.
+ - [Improvement] Introduce in-memory cluster state cached to improve performance.
+ - [Improvement] Switch to offset based pagination instead of per-page pagination.
+ - [Improvement] Avoid double-reading of watermark offsets for explorer and errors display.
+ - [Improvement] When no params needed for a page, do not include empty params.
+ - [Improvement] Do not include page when page is 1 in the url.
+ - [Refactor] Reorganize pagination engine to support offset based pagination.
+
+ ## 0.6.2 (2023-07-22)
+ - [Fix] Fix extensive CPU usage when using HPET clock instead of TSC due to interrupt frequency.
+
+ ## 0.6.1 (2023-06-25)
+ - [Improvement] Include the karafka-web version in the status page tags.
+ - [Improvement] Report `karafka-web` version that is running in particular processes.
+ - [Improvement] Display `karafka-web` version in the per-process view.
+ - [Improvement] Report in the web-ui a scenario, where getting cluster info takes more than 500ms as a warning to make people realize, that operating with Kafka with extensive latencies is not recommended.
+ - [Improvement] Continue the status assessment flow on warnings.
+ - [Fix] Do not recommend running a server as a way to bootstrap the initial state.
+ - [Fix] Ensure in the report contract, that `karafka-core`, `karafka-web`, `rdkafka` and `librdkafka` are validated.
+
+ ## 0.6.0 (2023-06-13)
  - **[Feature]** Introduce producers errors tracking.
  - [Improvement] Display the error origin as a badge to align with consumers view topic assignments.
  - [Improvement] Collect more job metrics for future usage.
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- karafka-web (0.6.0)
+ karafka-web (0.6.2)
  erubi (~> 1.4)
  karafka (>= 2.1.4, < 3.0.0)
  karafka-core (>= 2.0.13, < 3.0.0)
data/lib/karafka/web/config.rb CHANGED
@@ -80,6 +80,8 @@ module Karafka
  end
 
  setting :ui do
+ setting :cache, default: Ui::Lib::TtlCache.new(60_000 * 5)
+
  # Should the payload be decrypted for the Pro Web UI. Default to `false` due to security
  # reasons
  setting :decrypt, default: false
data/lib/karafka/web/tracking/consumers/contracts/report.rb CHANGED
@@ -12,7 +12,7 @@ module Karafka
  class Report < Tracking::Contracts::Base
  configure
 
- required(:schema_version) { |val| val.is_a?(String) }
+ required(:schema_version) { |val| val.is_a?(String) && !val.empty? }
  required(:dispatched_at) { |val| val.is_a?(Numeric) && val.positive? }
  # We have consumers and producer reports and need to ensure that each is handled
  # in an expected fashion
@@ -24,7 +24,7 @@ module Karafka
  required(:memory_usage) { |val| val.is_a?(Integer) && val >= 0 }
  required(:memory_total_usage) { |val| val.is_a?(Integer) && val >= 0 }
  required(:memory_size) { |val| val.is_a?(Integer) && val >= 0 }
- required(:status) { |val| ::Karafka::Status::STATES.key?(val.to_sym) }
+ required(:status) { |val| ::Karafka::Status::STATES.key?(val.to_s.to_sym) }
  required(:listeners) { |val| val.is_a?(Integer) && val >= 0 }
  required(:concurrency) { |val| val.is_a?(Integer) && val.positive? }
  required(:tags) { |val| val.is_a?(Karafka::Core::Taggable::Tags) }
@@ -38,9 +38,13 @@ module Karafka
  end
 
  nested(:versions) do
+ required(:ruby) { |val| val.is_a?(String) && !val.empty? }
  required(:karafka) { |val| val.is_a?(String) && !val.empty? }
+ required(:karafka_core) { |val| val.is_a?(String) && !val.empty? }
+ required(:karafka_web) { |val| val.is_a?(String) && !val.empty? }
  required(:waterdrop) { |val| val.is_a?(String) && !val.empty? }
- required(:ruby) { |val| val.is_a?(String) && !val.empty? }
+ required(:rdkafka) { |val| val.is_a?(String) && !val.empty? }
+ required(:librdkafka) { |val| val.is_a?(String) && !val.empty? }
  end
 
  nested(:stats) do
data/lib/karafka/web/tracking/consumers/reporter.rb CHANGED
@@ -93,12 +93,14 @@ module Karafka
  def call
  @running = true
 
+ # We won't track more often anyhow but want to try frequently not to miss a window
+ # We need to convert the sleep interval into seconds for sleep
+ sleep_time = ::Karafka::Web.config.tracking.interval.to_f / 1_000 / 10
+
  loop do
  report
 
- # We won't track more often anyhow but want to try frequently not to miss a window
- # We need to convert the sleep interval into seconds for sleep
- sleep(::Karafka::Web.config.tracking.interval / 1_000 / 10)
+ sleep(sleep_time)
  end
  end
 
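Besides hoisting the value out of the loop, the new code switches to float division via `.to_f`: with the old integer expression, any interval below 10,000 ms slept for 0 seconds, which lines up with the excessive CPU usage fix noted in the 0.6.2 changelog entry. A standalone illustration of the arithmetic (the 5,000 ms interval is an assumed example value, not taken from this diff):

interval_ms = 5_000              # assumed example value; the real default is not shown in this diff
interval_ms / 1_000 / 10         # => 0   (old integer math: a zero-second sleep, waking constantly)
interval_ms.to_f / 1_000 / 10    # => 0.5 (new float math: roughly 10 wake-ups per tracking interval)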
data/lib/karafka/web/tracking/consumers/sampler.rb CHANGED
@@ -75,8 +75,9 @@ module Karafka
  versions: {
  ruby: ruby_version,
  karafka: karafka_version,
- waterdrop: waterdrop_version,
  karafka_core: karafka_core_version,
+ karafka_web: karafka_web_version,
+ waterdrop: waterdrop_version,
  rdkafka: rdkafka_version,
  librdkafka: librdkafka_version
  },
data/lib/karafka/web/tracking/sampler.rb CHANGED
@@ -32,6 +32,11 @@ module Karafka
  ::Karafka::VERSION
  end
 
+ # @return [String] Karafka Web UI version
+ def karafka_web_version
+ ::Karafka::Web::VERSION
+ end
+
  # @return [String] Karafka::Core version
  def karafka_core_version
  ::Karafka::Core::VERSION
data/lib/karafka/web/ui/base.rb CHANGED
@@ -68,8 +68,12 @@ module Karafka
  # Allows us to build current path with additional params
  # @param query_data [Hash] query params we want to add to the current path
  path :current do |query_data = {}|
- q = query_data.map { |k, v| "#{k}=#{CGI.escape(v.to_s)}" }.join('&')
- "#{request.path}?#{q}"
+ q = query_data
+ .select { |_, v| v }
+ .map { |k, v| "#{k}=#{CGI.escape(v.to_s)}" }
+ .join('&')
+
+ [request.path, q].compact.delete_if(&:empty?).join('?')
  end
 
  # Sets appropriate template variables based on the response object and renders the
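The reworked `:current` path builder drops nil params and no longer emits a dangling `?` when there is no query string. A self-contained sketch of the same logic, with `current_path` and `request_path` as illustrative stand-ins for the route block and `request.path`:

require 'cgi'

# Illustrative helper mirroring the :current path block above.
def current_path(request_path, query_data = {})
  q = query_data
      .select { |_, v| v }                           # skip nil/false params entirely
      .map { |k, v| "#{k}=#{CGI.escape(v.to_s)}" }
      .join('&')

  [request_path, q].compact.delete_if(&:empty?).join('?')
end

current_path('/errors')                # => "/errors"           (no trailing "?")
current_path('/errors', offset: 42)    # => "/errors?offset=42"
current_path('/errors', page: nil)     # => "/errors"           (empty params are not included)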
data/lib/karafka/web/ui/controllers/base.rb CHANGED
@@ -12,6 +12,8 @@ module Karafka
  @params = params
  end
 
+ private
+
  # Builds the respond data object with assigned attributes based on instance variables.
  #
  # @return [Responses::Data] data that should be used to render appropriate view
@@ -33,6 +35,21 @@ module Karafka
  attributes
  )
  end
+
+ # Initializes the expected pagination engine and assigns expected arguments
+ # @param args Any arguments accepted by the selected pagination engine
+ def paginate(*args)
+ engine = case args.count
+ when 2
+ Ui::Lib::Paginations::PageBased
+ when 4
+ Ui::Lib::Paginations::OffsetBased
+ else
+ raise ::Karafka::Errors::UnsupportedCaseError, args.count
+ end
+
+ @pagination = engine.new(*args)
+ end
  end
  end
  end
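The arity-based dispatch keeps controllers unaware of the engine classes: two arguments select page-based pagination, four select offset-based. A condensed, self-contained sketch of that dispatch (`engine_for` and the returned symbols are illustrative only):

# Illustrative stand-in for the case statement in #paginate above.
def engine_for(args)
  case args.count
  when 2 then :page_based    # (current_page, show_next_page)
  when 4 then :offset_based  # (previous_offset, current_offset, next_offset, visible_offsets)
  else raise ArgumentError, "unsupported pagination arity: #{args.count}"
  end
end

engine_for([3, true])          # => :page_based   (cluster, consumers and jobs controllers)
engine_for([160, 120, 80, []]) # => :offset_based (errors controller)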
data/lib/karafka/web/ui/controllers/cluster.rb CHANGED
@@ -8,7 +8,8 @@ module Karafka
  class Cluster < Base
  # List cluster info data
  def index
- @cluster_info = Karafka::Admin.cluster_info
+ # Make sure, that for the cluster view we always get the most recent cluster state
+ @cluster_info = Models::ClusterInfo.fetch(cached: false)
 
  partitions_total = []
 
@@ -18,11 +19,13 @@ module Karafka
  end
  end
 
- @partitions, @next_page = Ui::Lib::PaginateArray.new.call(
+ @partitions, last_page = Ui::Lib::Paginations::Paginators::Arrays.call(
  partitions_total,
  @params.current_page
  )
 
+ paginate(@params.current_page, !last_page)
+
  respond
  end
 
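Models::ClusterInfo is one of the new files (+59 lines, not included in this diff view) and fronts the `ui.cache` TTL cache configured earlier (`TtlCache.new(60_000 * 5)`, i.e. five minutes). Since its body is not shown here, the following is only a hedged sketch of a read-through pattern it could follow; the `read`/`write` cache methods are assumed names, not confirmed API:

# Hypothetical sketch only; not the gem's actual implementation.
module ClusterInfoSketch
  # @param cached [Boolean] when false (as in the cluster view above), bypass the cache
  def self.fetch(cached: true)
    cache = ::Karafka::Web.config.ui.cache

    if cached
      hit = cache.read(:cluster_info) # assumed TtlCache method name
      return hit if hit
    end

    ::Karafka::Admin.cluster_info.tap do |info|
      cache.write(:cluster_info, info) # assumed TtlCache method name
    end
  end
end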
data/lib/karafka/web/ui/controllers/consumers.rb CHANGED
@@ -11,11 +11,13 @@ module Karafka
  def index
  @current_state = Models::State.current!
  @counters = Models::Counters.new(@current_state)
- @processes, @next_page = Lib::PaginateArray.new.call(
+ @processes, last_page = Ui::Lib::Paginations::Paginators::Arrays.call(
  Models::Processes.active(@current_state),
  @params.current_page
  )
 
+ paginate(@params.current_page, !last_page)
+
  respond
  end
  end
data/lib/karafka/web/ui/controllers/errors.rb CHANGED
@@ -10,13 +10,15 @@ module Karafka
  class Errors < Base
  # Lists first page of the errors
  def index
- @previous_page, @error_messages, @next_page, = Models::Message.page(
- errors_topic,
- 0,
- @params.current_page
- )
-
  @watermark_offsets = Ui::Models::WatermarkOffsets.find(errors_topic, 0)
+ previous_offset, @error_messages, next_offset, = current_page_data
+
+ paginate(
+ previous_offset,
+ @params.current_offset,
+ next_offset,
+ @error_messages.map(&:offset)
+ )
 
  respond
  end
@@ -34,6 +36,17 @@
 
  private
 
+ # @return [Array] Array with requested messages as well as pagination details and other
+ # obtained metadata
+ def current_page_data
+ Models::Message.offset_page(
+ errors_topic,
+ 0,
+ @params.current_offset,
+ @watermark_offsets
+ )
+ end
+
  # @return [String] errors topic
  def errors_topic
  ::Karafka::Web.config.topics.errors
data/lib/karafka/web/ui/controllers/jobs.rb CHANGED
@@ -19,11 +19,13 @@ module Karafka
  end
  end
 
- @jobs, @next_page = Ui::Lib::PaginateArray.new.call(
+ @jobs, last_page = Ui::Lib::Paginations::Paginators::Arrays.call(
  jobs_total,
  @params.current_page
  )
 
+ paginate(@params.current_page, !last_page)
+
  respond
  end
  end
data/lib/karafka/web/ui/controllers/requests/params.rb CHANGED
@@ -22,6 +22,16 @@ module Karafka
  page.positive? ? page : 1
  end
  end
+
+ # @return [Integer] offset from which we want to start. `-1` indicates, that we want
+ # to show the first page discovered based on the high watermark offset. If no offset
+ # is provided, we go with the high offset first page approach
+ def current_offset
+ @current_offset ||= begin
+ offset = @request_params.fetch('offset', -1).to_i
+ offset < -1 ? -1 : offset
+ end
+ end
  end
  end
  end
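The clamp keeps `-1` as the sentinel for "start from the page at the high watermark". A standalone sketch of the same logic (`current_offset_for` is an illustrative helper, not part of the gem):

# Illustrative stand-in for the current_offset accessor above.
def current_offset_for(request_params)
  offset = request_params.fetch('offset', -1).to_i
  offset < -1 ? -1 : offset
end

current_offset_for('offset' => '42') # => 42  (explicit offset from the query string)
current_offset_for('offset' => '-7') # => -1  (anything below -1 is clamped to the sentinel)
current_offset_for({})               # => -1  (no offset given: show the newest page)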
data/lib/karafka/web/ui/lib/paginations/base.rb ADDED
@@ -0,0 +1,61 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Web
+ module Ui
+ module Lib
+ # Namespace for all the types of pagination engines we want to support
+ module Paginations
+ # Abstraction on top of pagination, so we can alter pagination key and other things
+ # for non-standard pagination views (non page based, etc)
+ #
+ # @note We do not use `_page` explicitly to indicate, that the page scope may not operate
+ # on numerable pages (1,2,3,4) but can operate on offsets or times, etc. `_offset` is
+ # more general and may refer to many types of pagination.
+ class Base
+ attr_reader :previous_offset, :current_offset, :next_offset
+
+ # @return [Boolean] Should we show pagination at all
+ def paginate?
+ raise NotImplementedError, 'Implement in a subclass'
+ end
+
+ # @return [Boolean] Should first offset link be active. If false, the first offset link
+ # will be disabled
+ def first_offset?
+ raise NotImplementedError, 'Implement in a subclass'
+ end
+
+ # @return [String] first offset url value
+ def first_offset
+ raise NotImplementedError, 'Implement in a subclass'
+ end
+
+ # @return [Boolean] Should previous offset link be active. If false, the previous
+ # offset link will be disabled
+ def previous_offset?
+ raise NotImplementedError, 'Implement in a subclass'
+ end
+
+ # @return [Boolean] Should we show current offset. If false, the current offset link
+ # will not be visible at all. Useful for non-linear pagination.
+ def current_offset?
+ raise NotImplementedError, 'Implement in a subclass'
+ end
+
+ # @return [Boolean] Should we show next offset pagination. If false, next offset link
+ # will be disabled.
+ def next_offset?
+ raise NotImplementedError, 'Implement in a subclass'
+ end
+
+ # @return [String] the url offset key
+ def offset_key
+ raise NotImplementedError, 'Implement in a subclass'
+ end
+ end
+ end
+ end
+ end
+ end
+ end
data/lib/karafka/web/ui/lib/paginations/offset_based.rb ADDED
@@ -0,0 +1,96 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Web
+ module Ui
+ module Lib
+ module Paginations
+ # Kafka offset based pagination backend
+ #
+ # Allows us to support paginating over offsets
+ class OffsetBased < Base
+ # @param previous_offset [Integer, false] previous offset or false if should not be
+ # presented
+ # @param current_offset [Integer] current offset
+ # @param next_offset [Integer, Boolean] should we show next offset page button. If
+ # false it will not be presented.
+ # @param visible_offsets [Array<Integer>] offsets that are visible in the paginated
+ # view. It is needed for the current page label
+ def initialize(
+ previous_offset,
+ current_offset,
+ next_offset,
+ visible_offsets
+ )
+ @previous_offset = previous_offset
+ @current_offset = current_offset
+ @next_offset = next_offset
+ @visible_offsets = visible_offsets
+ super()
+ end
+
+ # Show pagination only when there is more than one page of results to be presented
+ #
+ # @return [Boolean]
+ def paginate?
+ @current_offset && (!!@previous_offset || !!@next_offset)
+ end
+
+ # @return [Boolean] active only when we are not on the first page. First page is always
+ # indicated by the current offset being -1. If there is someone that sets up the
+ # current offset to a value equal to the last message in the topic partition, we do
+ # not consider it as a first page and we allow to "reset" to -1 via the first page
+ # button
+ def first_offset?
+ @current_offset != -1
+ end
+
+ # @return [Boolean] first page offset is always nothing because we use the default -1
+ # for the offset.
+ def first_offset
+ false
+ end
+
+ # @return [Boolean] Active previous page link when it is not the first page
+ def previous_offset?
+ !!@previous_offset
+ end
+
+ # @return [Boolean] We show current label with offsets that are present on the given
+ # page
+ def current_offset?
+ true
+ end
+
+ # @return [Boolean] move to the next page if not false. False indicates, that there is
+ # no next page to move to
+ def next_offset?
+ !!@next_offset
+ end
+
+ # If there is no next offset, we point to 0 as there should be no smaller offset than
+ # that in Kafka ever
+ # @return [Integer]
+ def next_offset
+ next_offset? ? @next_offset : 0
+ end
+
+ # @return [String] label of the current page. It is combined out of the first and
+ # the last offsets to show the range where we are. It will be empty if no offsets
+ # but this is not a problem as then we should not display pagination at all
+ def current_label
+ first = @visible_offsets.first
+ last = @visible_offsets.last
+ [first, last].compact.uniq.join(' - ').to_s
+ end
+
+ # @return [String] for offset based pagination we use the offset param name
+ def offset_key
+ 'offset'
+ end
+ end
+ end
+ end
+ end
+ end
+ end
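A rough usage sketch of OffsetBased, mirroring the four-argument `paginate` call in the errors controller above. All offset values are illustrative; the real previous/next offsets come from `Models::Message.offset_page`, which is not shown in this diff:

# Illustrative values only: a page showing offsets 120..139 of a topic partition.
pagination = Karafka::Web::Ui::Lib::Paginations::OffsetBased.new(
  160,              # previous_offset (false would disable the "previous" link)
  120,              # current_offset, as taken from the `offset` query param
  80,               # next_offset (false would disable the "next" link)
  (120..139).to_a   # visible_offsets, used only for the current page label
)

pagination.paginate?      # => true  (a previous and/or next page exists)
pagination.first_offset?  # => true  (current offset is not the -1 "latest" sentinel)
pagination.current_label  # => "120 - 139"
pagination.offset_key     # => "offset"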
data/lib/karafka/web/ui/lib/paginations/page_based.rb ADDED
@@ -0,0 +1,70 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Web
+ module Ui
+ module Lib
+ module Paginations
+ # Regular page-based pagination engine
+ class PageBased < Base
+ # @param current_offset [Integer] current page
+ # @param show_next_offset [Boolean] should we show next page
+ # (value is computed automatically)
+ def initialize(
+ current_offset,
+ show_next_offset
+ )
+ @previous_offset = current_offset - 1
+ @current_offset = current_offset
+ @next_offset = show_next_offset ? current_offset + 1 : false
+ super()
+ end
+
+ # Show pagination only when there is more than one page
+ # @return [Boolean]
+ def paginate?
+ @current_offset && (@current_offset > 1 || !!@next_offset)
+ end
+
+ # @return [Boolean] active the first page link when we are not on the first page
+ def first_offset?
+ @current_offset > 1
+ end
+
+ # @return [Boolean] first page for page based pagination is always empty as it moves us
+ # to the initial page so we do not include any page info
+ def first_offset
+ false
+ end
+
+ # @return [Boolean] Active previous page link when it is not the first page
+ def previous_offset?
+ @current_offset > 1
+ end
+
+ # @return [Boolean] always show current offset pagination value
+ def current_offset?
+ true
+ end
+
+ # @return [String] label of the current page
+ def current_label
+ @current_offset.to_s
+ end
+
+ # @return [Boolean] move to the next page if not false. False indicates, that there is
+ # no next page to move to
+ def next_offset?
+ @next_offset
+ end
+
+ # @return [String] for page pages pagination, always use page as the url value
+ def offset_key
+ 'page'
+ end
+ end
+ end
+ end
+ end
+ end
+ end
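The page-based counterpart, matching the two-argument `paginate(@params.current_page, !last_page)` calls in the cluster, consumers and jobs controllers; page 3 of a listing with more results remaining is an illustrative input:

pagination = Karafka::Web::Ui::Lib::Paginations::PageBased.new(3, true)

pagination.paginate?       # => true
pagination.previous_offset # => 2      (attr_reader from the Base class)
pagination.next_offset     # => 4      (false would mean no further page)
pagination.current_label   # => "3"
pagination.offset_key      # => "page"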
data/lib/karafka/web/ui/lib/paginations/paginators/arrays.rb ADDED
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Web
+ module Ui
+ module Lib
+ module Paginations
+ # Namespace for commands that build paginated resources based on the provided page
+ module Paginators
+ # A simple wrapper for paginating array related data structures
+ # We call this with plural (same with `Sets`) to avoid confusion with Ruby classes
+ class Arrays < Base
+ class << self
+ # @param array [Array] array we want to paginate
+ # @param current_page [Integer] page we want to be on
+ # @return [Array<Array, Boolean>] Array with two elements: first is the array with
+ # data of the given page and second is a boolean flag with info if the elements we got
+ # are from the last page
+ def call(array, current_page)
+ slices = array.each_slice(per_page).to_a
+ current_data = slices[current_page - 1] || []
+ last_page = !(slices.count >= current_page - 1 && current_data.size >= per_page)
+
+ [current_data, last_page]
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ end
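A standalone walk-through of the slicing above, with `per_page` assumed to be 25 (the real value comes from `::Karafka::Web.config.ui.per_page` via the base paginator below and is not part of this diff):

# Illustrative re-implementation of Arrays.call with per_page passed explicitly.
def page(array, current_page, per_page)
  slices       = array.each_slice(per_page).to_a
  current_data = slices[current_page - 1] || []
  last_page    = !(slices.count >= current_page - 1 && current_data.size >= per_page)

  [current_data, last_page]
end

page((1..60).to_a, 1, 25).last # => false (a full page of 25, more pages follow)
page((1..60).to_a, 3, 25).last # => true  (only 10 elements left: this is the last page)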
data/lib/karafka/web/ui/lib/paginations/paginators/base.rb ADDED
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Web
+ module Ui
+ module Lib
+ module Paginations
+ module Paginators
+ # Base paginator
+ class Base
+ class << self
+ # @return [Integer] number of elements per page
+ def per_page
+ ::Karafka::Web.config.ui.per_page
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ end
data/lib/karafka/web/ui/lib/paginations/paginators/partitions.rb ADDED
@@ -0,0 +1,52 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Web
+ module Ui
+ module Lib
+ module Paginations
+ module Paginators
+ # Paginator for selecting proper range of partitions for each page
+ # For topics with a lot of partitions we cannot get all the data efficiently, that
+ # is why we limit number of partitions per page and reduce the operations
+ # that way. This allows us to effectively display more while not having to fetch
+ # more partitions then the number of messages per page.
+ # In cases like this we distribute partitions evenly part of partitions on each of
+ # the pages. This may become unreliable for partitions that are not evenly
+ # distributed but this allows us to display data for as many partitions as we want
+ # without overloading the system
+ class Partitions < Base
+ class << self
+ # Computers the partitions slice, materialized page and the limitations status
+ # for a given page
+ # @param partitions_count [Integer] number of partitions for a given topic
+ # @param current_page [Integer] current page
+ # @return [Array<Array<Integer>, Integer, Boolean>] list of partitions that should
+ # be active on a given page, materialized page for them and info if we had to
+ # limit the partitions number on a given page
+ def call(partitions_count, current_page)
+ # How many "chunks" of partitions we will have
+ slices_count = (partitions_count / per_page.to_f).ceil
+ # How many partitions in a single slice should we have
+ in_slice = (partitions_count / slices_count.to_f).ceil
+ # Which "chunked" page do we want to get
+ materialized_page = (current_page / slices_count.to_f).ceil
+ # Which slice is the one we are operating on
+ active_slice_index = (current_page - 1) % slices_count
+ # All available slices so we can pick one that is active
+ partitions_slices = (0...partitions_count).each_slice(in_slice).to_a
+ # Select active partitions only
+ active_partitions = partitions_slices[active_slice_index]
+ # Are we limiting ourselves because of partition count
+ limited = slices_count > 1
+
+ [active_partitions, materialized_page, limited]
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ end
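A worked example of the distribution described above: with an assumed `per_page` of 25 and a 60-partition topic, the partitions split into 3 slices of 20 that rotate across UI pages, while `materialized_page` tracks which page of messages inside the active slice is requested:

# Illustrative re-implementation of Partitions.call with per_page passed explicitly.
def partitions_page(partitions_count, current_page, per_page)
  slices_count       = (partitions_count / per_page.to_f).ceil   # 60 / 25.0 => 3 slices
  in_slice           = (partitions_count / slices_count.to_f).ceil # 60 / 3.0 => 20 partitions each
  materialized_page  = (current_page / slices_count.to_f).ceil
  active_slice_index = (current_page - 1) % slices_count
  partitions_slices  = (0...partitions_count).each_slice(in_slice).to_a

  [partitions_slices[active_slice_index], materialized_page, slices_count > 1]
end

partitions_page(60, 1, 25) # => [[0, ..., 19],  1, true]
partitions_page(60, 2, 25) # => [[20, ..., 39], 1, true]
partitions_page(60, 4, 25) # => [[0, ..., 19],  2, true] (second page within the first slice)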