logstash-filter-kafka_time_machine 2.0.0 → 3.0.0.pre

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a0ab8be37285b43b8785aea650cc6eb6d456d701ba36aeb0d5c2cbcb020d470d
- data.tar.gz: d6ccd08fa8024d7cced6c529708ad5cd5f15ef683c043206966c58b989d28f42
+ metadata.gz: 051f813a378da7788b0731000d62645d1475d70672c1845244504f55a0d4b7a9
+ data.tar.gz: c0259a2da821f3da532e97db28a14d7f5e02da5afc21448adb4e50683ff9c305
  SHA512:
- metadata.gz: 73b806dbef6c52765e674dc7acec647c90427ff40270c9665eaf07c29818ecca558c564cb812aa9cd81601a9b3009ccdd5b7d465ef18352a4f146c195f9daf16
- data.tar.gz: 3f318fbafd7bd1283599ac68b4d7b29ea9187f7772d8353235d0d4d2cc06be34afc2bf440d226600ba125ad4a9a34d058d77f05c87c94c45d9f3339cbdc79e31
+ metadata.gz: 32a0b27107ac983b22cc863d555ce9076f065d5faa01fbf64c83cf30083009cb613c8dfe989d2338863f4d52d83851b747ab76d745e768f322ffdab88258d0f4
+ data.tar.gz: 21ce90deb6265a9af0d33dfdb630f3562a2840d097a1d62b4b414b9fd6298ec943fabaa91f66a847fa681ea47bc3b64b19607aaf109ed482611761e2faee9339
data/README.md CHANGED
@@ -1,4 +1,5 @@
  [![Gem Version](https://badge.fury.io/rb/logstash-filter-kafka_time_machine.svg)](https://badge.fury.io/rb/logstash-filter-kafka_time_machine)
+ [![Build Status](https://drone.lma.wbx2.com/api/badges/post-deployment/logstash-filter-kafka_time_machine/status.svg)](https://drone.lma.wbx2.com/post-deployment/logstash-filter-kafka_time_machine)
 
  # Logstash Plugin: logstash-filter-kafka_time_machine
 
@@ -23,20 +24,20 @@ The filter leverages metadata inserted into the log event on both `logstash_ship
  When the `kafka_time_machine` executes it will return an [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_tutorial/) formatted metric, i.e.:
 
  ```
- ktm,datacenter=kafka_datacenter_shipper-test,lag_type=total,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000
+ ktm,datacenter=kafka_datacenter_shipper-test,es_cluster=some_cluster_name,es_cluster_index=some_cluster_index_name,lag_type=total,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000
  ```
 
  The plugin will also emit a metric if an error was encountered, i.e.:
 
  ```
- ktm_error,datacenter=kafka_datacenter_shipper-test,owner=ktm_test@cisco.com,source=shipper count=1i 1634662795000000000
+ ktm_error,datacenter=kafka_datacenter_shipper-test,es_cluster=some_cluster_name,es_cluster_index=some_cluster_index_name,owner=ktm_test@cisco.com,source=shipper count=1i 1634662795000000000
  ```
 
  To ensure a logstash `output{}` block can properly route this metric, the new events are tagged with a `[@metadata][ktm_tags][ktm_metric]` field, i.e.:
 
  ```
  {
- "ktm_metric" => "ktm,datacenter=kafka_datacenter_shipper-test,lag_type=total,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000",
+ "ktm_metric" => "ktm,datacenter=kafka_datacenter_shipper-test,lag_type=total,es_cluster=some_cluster_name,es_cluster_index=some_cluster_index_name,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000",
  "@timestamp" => 2021-10-20T23:46:24.704Z,
  "@metadata" => {
  "ktm_tags" => {
@@ -59,6 +60,8 @@ In the case of `ktm` the metric breakdown is:
  | Line Protocol Element | Line Protocol Type | Description |
  | --------------------- | ------------------ | ------------------------------------------- |
  | datacenter | tag | Echo of `kafka_datacenter_shipper` |
+ | es_cluster | tag | Echo of `elasticsearch_cluster` |
+ | es_cluster_index | tag | Echo of `elasticsearch_cluster_index` |
  | lag_type | tag | Calculated lag type |
  | owner | tag | Echo of `event_owner` |
  | lag_ms | field | Calculated lag in milliseconds |
@@ -72,12 +75,14 @@ Meaning of `lag_type`:
 
  In the case of `ktm_error` the metric breakdown is:
 
- | Line Protocol Element | Line Protocol Type | Description |
- | --------------------- | ------------------ | ------------------------------------ |
- | datacenter | tag | Echo of `kafka_datacenter_shipper` |
- | source | tag | Source of the error metric |
- | owner | tag | Echo of `event_owner` |
- | count | field | Count to track error; not cumulative |
+ | Line Protocol Element | Line Protocol Type | Description |
+ | --------------------- | ------------------ | ------------------------------------- |
+ | datacenter | tag | Echo of `kafka_datacenter_shipper` |
+ | es_cluster | tag | Echo of `elasticsearch_cluster` |
+ | es_cluster_index | tag | Echo of `elasticsearch_cluster_index` |
+ | source | tag | Source of the error metric |
+ | owner | tag | Echo of `event_owner` |
+ | count | field | Count to track error; not cumulative |
 
  Meaning of `source`:
 
@@ -113,6 +118,8 @@ This plugin requires the following configurations:
  | [logstash_kafka_read_time_indexer](#logstash_kafka_read_time_indexer) | string | Yes |
  | [event_owner](#event_owner) | string | Yes |
  | [event_time_ms](#event_time_ms) | string | Yes |
+ | [elasticsearch_cluster](#elasticsearch_cluster) | string | Yes |
+ | [elasticsearch_cluster_index](#elasticsearch_cluster_index) | string | Yes |
 
  > Why are all settings required?
  >
@@ -371,4 +378,50 @@ filter {
  event_time_ms => "%{[dynamic_field]}"
  }
  }
- ```
+ ```
+
+ ### elasticsearch_cluster
+
+ - Value type is [string](https://www.elastic.co/guide/en/logstash/7.13/configuration-file-structure.html#string)
+ - There is no default value for this setting.
+
+ Provide an identifier for the Elasticsearch cluster the log was sent to; represents the owner of the log. Field values can be static or dynamic:
+
+ ```
+ filter {
+ kafka_time_machine {
+ elasticsearch_cluster => "static_field"
+ }
+ }
+ ```
+
+ ```
+ filter {
+ kafka_time_machine {
+ elasticsearch_cluster => "%{[dynamic_field]}"
+ }
+ }
+ ```
+
+ ### elasticsearch_cluster_index
+
+ - Value type is [string](https://www.elastic.co/guide/en/logstash/7.13/configuration-file-structure.html#string)
+ - There is no default value for this setting.
+
+ Provide an identifier for the Elasticsearch cluster index the log will be indexed in; represents the owner of the log. Field values can be static or dynamic:
+
+ ```
+ filter {
+ kafka_time_machine {
+ elasticsearch_cluster_index => "static_field"
+ }
+ }
+ ```
+
+ ```
+ filter {
+ kafka_time_machine {
+ elasticsearch_cluster_index => "%{[dynamic_field]}"
+ }
+ }
+ ```
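
Pulling the required settings together, a complete filter block might look like the sketch below. The right-hand field references are hypothetical, and the shipper-side option names follow the shipper/indexer naming pattern used throughout this README:

```
filter {
  kafka_time_machine {
    kafka_datacenter_shipper         => "%{[kafka][datacenter]}"
    kafka_topic_shipper              => "%{[kafka][topic]}"
    kafka_consumer_group_shipper     => "%{[kafka][consumer_group]}"
    kafka_append_time_shipper        => "%{[kafka][append_time]}"
    logstash_kafka_read_time_shipper => "%{[kafka][read_time]}"
    kafka_topic_indexer              => "%{[kafka_indexer][topic]}"
    kafka_consumer_group_indexer     => "%{[kafka_indexer][consumer_group]}"
    kafka_append_time_indexer        => "%{[kafka_indexer][append_time]}"
    logstash_kafka_read_time_indexer => "%{[kafka_indexer][read_time]}"
    event_owner                      => "%{[owner]}"
    event_time_ms                    => "%{[time_ms]}"
    elasticsearch_cluster            => "some_cluster_name"
    elasticsearch_cluster_index      => "some_cluster_index_name"
  }
}
```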
data/lib/logstash/filters/kafka_time_machine.rb CHANGED
@@ -2,7 +2,7 @@
  require "logstash/filters/base"
  require "logstash/namespace"
  require "logstash/event"
- require "influxdb-client"
+ require "json"
 
  class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
 
@@ -38,9 +38,15 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  # Owner of the event currently being processed.
  config :event_owner, :validate => :string, :required => true
 
- # Current time since EPOCH in ms that should be set in the influxdb generated metric
+ # Current time since EPOCH in ms that should be set in the generated metric
  config :event_time_ms, :validate => :string, :required => true
 
+ # Elasticsearch cluster the log event was sent to
+ config :elasticsearch_cluster, :validate => :string, :required => true
+
+ # Elasticsearch cluster index the log event will be indexed in
+ config :elasticsearch_cluster_index, :validate => :string, :required => true
+
  public
  def register
 
@@ -61,6 +67,8 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  shipper_kafka_consumer_group = event.sprintf(@kafka_consumer_group_shipper)
  indexer_kafka_topic = event.sprintf(@kafka_topic_indexer)
  indexer_kafka_consumer_group = event.sprintf(@kafka_consumer_group_indexer)
+ elasticsearch_cluster = event.sprintf(@elasticsearch_cluster)
+ elasticsearch_cluster_index = event.sprintf(@elasticsearch_cluster_index)
 
  # Extract all the "time" related values to local variables. These need special handling due to the Float() operation.
  #
@@ -72,10 +80,11 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  indexer_logstash_kafka_read_time = get_numeric(event.sprintf(@logstash_kafka_read_time_indexer))
 
  # Validate the shipper data
- shipper_kafka_array = Array[shipper_kafka_datacenter, shipper_kafka_topic, shipper_kafka_consumer_group, shipper_kafka_append_time, shipper_logstash_kafka_read_time, event_owner, event_time_ms]
+ shipper_kafka_array = Array[shipper_kafka_datacenter, shipper_kafka_topic, shipper_kafka_consumer_group, shipper_kafka_append_time, shipper_logstash_kafka_read_time, event_owner, event_time_ms, elasticsearch_cluster, elasticsearch_cluster_index]
  if (shipper_kafka_array.any? { |text| text.nil? || text.to_s.empty? })
  @logger.debug("shipper_kafka_array invalid: Found null")
  error_string_shipper = sprintf("Error in shipper data: %s", shipper_kafka_array)
+ @logger.debug(error_string_shipper)
  shipper_valid = false
  else
  @logger.debug("shipper_kafka_array valid")
@@ -86,10 +95,11 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  end
 
  # Validate the indexer data
- indexer_kafka_array = Array[shipper_kafka_datacenter, indexer_kafka_topic, indexer_kafka_consumer_group, indexer_kafka_append_time, indexer_logstash_kafka_read_time, event_owner, event_time_ms]
+ indexer_kafka_array = Array[shipper_kafka_datacenter, indexer_kafka_topic, indexer_kafka_consumer_group, indexer_kafka_append_time, indexer_logstash_kafka_read_time, event_owner, event_time_ms, elasticsearch_cluster, elasticsearch_cluster_index]
  if (indexer_kafka_array.any? { |text| text.nil? || text.to_s.empty? })
  @logger.debug("indexer_kafka_array invalid: Found null")
  error_string_indexer = sprintf("Error in indexer data: %s", indexer_kafka_array)
+ @logger.debug(error_string_indexer)
  indexer_valid = false
  else
  @logger.debug("indexer_kafka_array valid")
@@ -100,12 +110,12 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  end
 
  # Add in the size of the payload field
- payload_bytesize = 0
+ payload_size_bytes = 0
  if event.get("[payload]")
- payload_bytesize = event.get("[payload]").bytesize
+ payload_size_bytes = event.get("[payload]").bytesize
  end
 
- # Set time (nanoseconds) for influxdb line protocol
+ # Set time (nanoseconds) for event that is generated
  epoch_time_ns = nil
  if (event_time_ms != nil )
  epoch_time_ns = event_time_ms * 1000000
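
As a quick sanity check on the millisecond-to-nanosecond conversion above, using the timestamp from the README example metric:

```ruby
event_time_ms = 1634662795000           # ms since epoch, after get_numeric()
epoch_time_ns = event_time_ms * 1000000 # => 1634662795000000000, the timestamp in the example metric
```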
@@ -118,35 +128,35 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  if (shipper_valid == true && indexer_valid == true && epoch_time_ns != nil)
  total_kafka_lag_ms = indexer_logstash_kafka_read_time - shipper_kafka_append_time
 
- point_influxdb = create_influxdb_point_ktm(shipper_kafka_datacenter, event_owner, payload_bytesize, "total", total_kafka_lag_ms, epoch_time_ns)
- ktm_metric_event_array.push point_influxdb
+ point_ktm = create_point_ktm(shipper_kafka_datacenter, event_owner, payload_size_bytes, "total", total_kafka_lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+ ktm_metric_event_array.push point_ktm
 
  elsif (shipper_valid == true && indexer_valid == false && epoch_time_ns != nil)
- point_influxdb = create_influxdb_point_ktm(shipper_kafka_datacenter, event_owner, payload_bytesize, "shipper", shipper_kafka_lag_ms, epoch_time_ns)
- ktm_metric_event_array.push point_influxdb
+ point_ktm = create_point_ktm(shipper_kafka_datacenter, event_owner, payload_size_bytes, "shipper", shipper_kafka_lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+ ktm_metric_event_array.push point_ktm
 
- point_influxdb = create_influxdb_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "indexer")
- ktm_metric_event_array.push point_influxdb
+ point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "indexer", elasticsearch_cluster, elasticsearch_cluster_index)
+ ktm_metric_event_array.push point_ktm
 
  elsif (indexer_valid == true && shipper_valid == false && epoch_time_ns != nil)
- point_influxdb = create_influxdb_point_ktm(shipper_kafka_datacenter, event_owner, payload_bytesize, "indexer", indexer_kafka_lag_ms, epoch_time_ns)
- ktm_metric_event_array.push point_influxdb
+ point_ktm = create_point_ktm(shipper_kafka_datacenter, event_owner, payload_size_bytes, "indexer", indexer_kafka_lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+ ktm_metric_event_array.push point_ktm
 
- point_influxdb = create_influxdb_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "shipper")
- ktm_metric_event_array.push point_influxdb
+ point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "shipper", elasticsearch_cluster, elasticsearch_cluster_index)
+ ktm_metric_event_array.push point_ktm
 
  elsif (indexer_valid == false && shipper_valid == false)
 
- point_influxdb = create_influxdb_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "insufficient_data")
- ktm_metric_event_array.push point_influxdb
+ point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "insufficient_data", elasticsearch_cluster, elasticsearch_cluster_index)
+ ktm_metric_event_array.push point_ktm
 
  error_string = sprintf("Error kafka_time_machine: Could not build valid response --> %s, %s", error_string_shipper, error_string_indexer)
  @logger.debug(error_string)
 
  else
 
- point_influxdb = create_influxdb_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "unknown")
- ktm_metric_event_array.push point_influxdb
+ point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "unknown", elasticsearch_cluster, elasticsearch_cluster_index)
+ ktm_metric_event_array.push point_ktm
 
  error_string = "Unknown error encountered"
  @logger.debug(error_string)
@@ -157,9 +167,7 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  ktm_metric_event_array.each do |metric_event|
 
  # Create new event for KTM metric
- event_ktm = LogStash::Event.new
-
- event_ktm.set("ktm_metric", metric_event)
+ event_ktm = LogStash::Event.new(metric_event)
  event_ktm.set("[@metadata][ktm_tags][ktm_metric]", "true")
 
  filter_matched(event_ktm)
@@ -169,23 +177,34 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
 
  end # def filter
 
- # Creates an Influx DB line-protocol data point to return
+ # Creates a hash with the ktm data point to return
  public
- def create_influxdb_point_ktm(datacenter, event_owner, payload_size_bytes, lag_type, lag_ms, epoch_time_ns)
+ def create_point_ktm(datacenter, event_owner, payload_size_bytes, lag_type, lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+
+ point = Hash.new
+
+ # Name of point and time created
+ point["name"] = "ktm"
+ point["epoch_time_ns"] = epoch_time_ns
 
- point = InfluxDB2::Point.new( name: "ktm",
- tags: {datacenter: datacenter, owner: event_owner, lag_type: lag_type},
- fields: {payload_size_bytes: payload_size_bytes, lag_ms: lag_ms},
- time: epoch_time_ns)
+ # tags
+ point["datacenter"] = datacenter
+ point["owner"] = event_owner
+ point["lag_type"] = lag_type
+ point["es_cluster"] = elasticsearch_cluster
+ point["es_cluster_index"] = elasticsearch_cluster_index
 
- point_influxdb = point.to_line_protocol
- return point_influxdb
+ # fields
+ point["payload_size_bytes"] = payload_size_bytes
+ point["lag_ms"] = lag_ms
 
- end # def create_influxdb_point
+ return point
 
- # Creates an Influx DB line-protocol data point to return
+ end # def create_point_ktm
+
+ # Creates a hash with the ktm error data point to return
  public
- def create_influxdb_point_ktm_error(datacenter, event_owner, epoch_time_ns, type)
+ def create_point_ktm_error(datacenter, event_owner, epoch_time_ns, type, elasticsearch_cluster, elasticsearch_cluster_index)
 
  # Check for nil values
  if (nil == datacenter)
@@ -201,15 +220,25 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
  epoch_time_ns = ((Time.now.to_f * 1000).to_i)*1000000
  end
 
- point = InfluxDB2::Point.new( name: "ktm_error",
- tags: {datacenter: datacenter, owner: event_owner, source: type},
- fields: {count: 1},
- time: epoch_time_ns)
+ point = Hash.new
 
- point_influxdb = point.to_line_protocol
- return point_influxdb
+ # Name of point and time created
+ point["name"] = "ktm_error"
+ point["epoch_time_ns"] = epoch_time_ns
+
+ # tags
+ point["datacenter"] = datacenter
+ point["owner"] = event_owner
+ point["source"] = type
+ point["es_cluster"] = elasticsearch_cluster
+ point["es_cluster_index"] = elasticsearch_cluster_index
+
+ # fields
+ point["count"] = 1
+
+ return point
 
- end # def create_influxdb_point
+ end # def create_point_ktm_error
 
  # Ensures the provided value is numeric; if not returns 'nil'
  public
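
Net effect of the refactor: instead of serializing a point to InfluxDB line protocol, each point is now a plain hash whose keys seed the new Logstash event directly. A minimal Ruby sketch of the resulting flow, with values borrowed from the README example metric (runs only in a Logstash-loaded Ruby context):

```ruby
# Shape of the hash create_point_ktm now returns, and how the filter consumes it.
point = {
  "name"               => "ktm",
  "epoch_time_ns"      => 1634662795000000000,
  "datacenter"         => "kafka_datacenter_shipper-test",
  "owner"              => "ktm_test@cisco.com",
  "lag_type"           => "total",
  "es_cluster"         => "some_cluster_name",
  "es_cluster_index"   => "some_cluster_index_name",
  "payload_size_bytes" => 40,
  "lag_ms"             => 300,
}

event_ktm = LogStash::Event.new(point)                     # each hash key becomes an event field
event_ktm.set("[@metadata][ktm_tags][ktm_metric]", "true") # tag consulted by output{} routing
```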
data/logstash-filter-kafka_time_machine.gemspec CHANGED
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-filter-kafka_time_machine'
- s.version = '2.0.0'
+ s.version = '3.0.0.pre'
  s.licenses = ['Apache-2.0']
  s.summary = "Calculate total time of logstash event that traversed 2 Kafka queues from a shipper site to an indexer site"
  s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -20,6 +20,5 @@ Gem::Specification.new do |s|
 
  # Gem dependencies
  s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
- s.add_runtime_dependency "influxdb-client", "~> 2.0.0"
  s.add_development_dependency 'logstash-devutils', '~> 0'
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-filter-kafka_time_machine
  version: !ruby/object:Gem::Version
- version: 2.0.0
+ version: 3.0.0.pre
  platform: ruby
  authors:
  - Chris Foster
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-10-21 00:00:00.000000000 Z
+ date: 2022-11-10 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: logstash-core-plugin-api
@@ -30,20 +30,6 @@ dependencies:
  - - "<="
  - !ruby/object:Gem::Version
  version: '2.99'
- - !ruby/object:Gem::Dependency
- name: influxdb-client
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 2.0.0
- type: :runtime
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 2.0.0
  - !ruby/object:Gem::Dependency
  name: logstash-devutils
  requirement: !ruby/object:Gem::Requirement
@@ -87,11 +73,11 @@ required_ruby_version: !ruby/object:Gem::Requirement
  version: '0'
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
- - - ">="
+ - - ">"
  - !ruby/object:Gem::Version
- version: '0'
+ version: 1.3.1
  requirements: []
- rubygems_version: 3.0.3
+ rubygems_version: 3.0.3.1
  signing_key:
  specification_version: 4
  summary: Calculate total time of logstash event that traversed 2 Kafka queues from