logstash-filter-kafka_time_machine 2.0.1 → 3.0.0.pre
- checksums.yaml +4 -4
- data/README.md +63 -10
- data/lib/logstash/filters/kafka_time_machine.rb +61 -42
- data/logstash-filter-kafka_time_machine.gemspec +1 -2
- metadata +5 -19
checksums.yaml
CHANGED
```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 051f813a378da7788b0731000d62645d1475d70672c1845244504f55a0d4b7a9
+  data.tar.gz: c0259a2da821f3da532e97db28a14d7f5e02da5afc21448adb4e50683ff9c305
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 32a0b27107ac983b22cc863d555ce9076f065d5faa01fbf64c83cf30083009cb613c8dfe989d2338863f4d52d83851b747ab76d745e768f322ffdab88258d0f4
+  data.tar.gz: 21ce90deb6265a9af0d33dfdb630f3562a2840d097a1d62b4b414b9fd6298ec943fabaa91f66a847fa681ea47bc3b64b19607aaf109ed482611761e2faee9339
```
data/README.md
CHANGED
```diff
@@ -1,4 +1,5 @@
 [![Gem Version](https://badge.fury.io/rb/logstash-filter-kafka_time_machine.svg)](https://badge.fury.io/rb/logstash-filter-kafka_time_machine)
+[![Build Status](https://drone.lma.wbx2.com/api/badges/post-deployment/logstash-filter-kafka_time_machine/status.svg)](https://drone.lma.wbx2.com/post-deployment/logstash-filter-kafka_time_machine)
 
 # Logstash Plugin: logstash-filter-kafka_time_machine
 
```
````diff
@@ -23,20 +24,20 @@ The filter leverages metadata inserted into the log event on both `logstash_ship
 When the `kafka_time_machine` executes it will return a [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_tutorial/) formatted metric, i.e.:
 
 ```
-ktm,datacenter=kafka_datacenter_shipper-test,lag_type=total,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000
+ktm,datacenter=kafka_datacenter_shipper-test,es_cluster=some_cluster_name,es_cluster_index=some_cluster_index_name,lag_type=total,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000
 ```
 
 The plugin will also emit a metric if an error was encountered, i.e.:
 
 ```
-ktm_error,datacenter=kafka_datacenter_shipper-test,owner=ktm_test@cisco.com,source=shipper count=1i 1634662795000000000
+ktm_error,datacenter=kafka_datacenter_shipper-test,es_cluster=some_cluster_name,es_cluster_index=some_cluster_index_name,owner=ktm_test@cisco.com,source=shipper count=1i 1634662795000000000
 ```
 
 To ensure a logstash `output{}` block can properly route this metric, the new event are tagged with a `[@metadata][ktm_tag][ktm_metric]` field, i.e.:
 
 ```
 {
-    "ktm_metric" => "ktm,datacenter=kafka_datacenter_shipper-test,lag_type=total,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000",
+    "ktm_metric" => "ktm,datacenter=kafka_datacenter_shipper-test,lag_type=total,es_cluster=some_cluster_name,es_cluster_index=some_cluster_index_name,owner=ktm_test@cisco.com lag_ms=300i,payload_size_bytes=40i 1634662795000000000",
     "@timestamp" => 2021-10-20T23:46:24.704Z,
     "@metadata" => {
        "ktm_tags" => {
````
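For reference, the `ktm` metric above follows InfluxDB line protocol: measurement name, comma-joined tags, space, comma-joined fields (integers carry an `i` suffix), space, nanosecond timestamp. A minimal illustrative sketch (hypothetical helper, not code from this plugin) that reproduces the documented example:

```ruby
# Illustrative sketch: serialize a ktm-style tag/field hash into
# InfluxDB line protocol (measurement,tags fields timestamp_ns).
def to_line_protocol(name, tags, fields, epoch_time_ns)
  tag_str   = tags.map   { |k, v| "#{k}=#{v}" }.join(",")
  # Integer field values take an "i" suffix in line protocol
  field_str = fields.map { |k, v| "#{k}=#{v}i" }.join(",")
  "#{name},#{tag_str} #{field_str} #{epoch_time_ns}"
end

line = to_line_protocol(
  "ktm",
  { "datacenter" => "kafka_datacenter_shipper-test",
    "es_cluster" => "some_cluster_name",
    "es_cluster_index" => "some_cluster_index_name",
    "lag_type" => "total",
    "owner" => "ktm_test@cisco.com" },
  { "lag_ms" => 300, "payload_size_bytes" => 40 },
  1634662795000000000
)
puts line  # matches the ktm example metric above
```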
```diff
@@ -59,6 +60,8 @@ In the case of `ktm` the metric breakdown is:
 | Line Protocol Element | Line Protocol Type | Description                                 |
 | --------------------- | ------------------ | ------------------------------------------- |
 | datacenter            | tag                | Echo of `kafka_datacenter_shipper`          |
+| es_cluster            | tag                | Echo of `elasticsearch_cluster`             |
+| es_cluster_index      | tag                | Echo of `elasticsearch_cluster_index`       |
 | lag_type              | tag                | Calculated lag type                         |
 | owner                 | tag                | Echo of `event_owner`                       |
 | lag_ms                | field              | Calculated lag in milliseconds              |
```
```diff
@@ -72,12 +75,14 @@ Meaning of `lag_type`:
 
 In the case of `ktm_error` the metric breakdown is:
 
-| Line Protocol Element | Line Protocol Type | Description
-| --------------------- | ------------------ |
-| datacenter            | tag                | Echo of `kafka_datacenter_shipper`
-
-
-
+| Line Protocol Element | Line Protocol Type | Description                           |
+| --------------------- | ------------------ | ------------------------------------- |
+| datacenter            | tag                | Echo of `kafka_datacenter_shipper`    |
+| es_cluster            | tag                | Echo of `elasticsearch_cluster`       |
+| es_cluster_index      | tag                | Echo of `elasticsearch_cluster_index` |
+| source                | tag                | Source of the error metric            |
+| owner                 | tag                | Echo of `event_owner`                 |
+| count                 | field              | Count to track error; not cumulative  |
 
 Meaning of `source`:
 
```
```diff
@@ -113,6 +118,8 @@ This plugin requires the following configurations:
 | [logstash_kafka_read_time_indexer](#logstash_kafka_read_time_indexer) | string | Yes |
 | [event_owner](#event_owner)                                           | string | Yes |
 | [event_time_ms](#event_time_ms)                                       | string | Yes |
+| [elasticsearch_cluster](#elasticsearch_cluster)                       | string | Yes |
+| [elasticsearch_cluster_index](#elasticsearch_cluster_index)           | string | Yes |
 
 > Why are all settings required?
 >
```
````diff
@@ -371,4 +378,50 @@ filter {
     event_time_ms => "%{[dynamic_field]}"
   }
 }
-```
+```
+
+### elasticsearch_cluster
+
+- Value type is [string](https://www.elastic.co/guide/en/logstash/7.13/configuration-file-structure.html#string)
+- There is no default value for this setting.
+
+Provide identifier for ElasticSearch cluster log was sent to; represents the owner of the log. Field values can be static or dynamic:
+
+```
+filter {
+  kafka_time_machine {
+    elasticsearch_cluster => "static_field"
+  }
+}
+```
+
+```
+filter {
+  kafka_time_machine {
+    elasticsearch_cluster => "%{[dynamic_field]}"
+  }
+}
+```
+
+### elasticsearch_cluster_index
+
+- Value type is [string](https://www.elastic.co/guide/en/logstash/7.13/configuration-file-structure.html#string)
+- There is no default value for this setting.
+
+Provide identifier for ElasticSearch cluster index log will be indexed in; represents the owner of the log. Field values can be static or dynamic:
+
+```
+filter {
+  kafka_time_machine {
+    elasticsearch_cluster_index => "static_field"
+  }
+}
+```
+
+```
+filter {
+  kafka_time_machine {
+    elasticsearch_cluster_index => "%{[dynamic_field]}"
+  }
+}
+```
````
data/lib/logstash/filters/kafka_time_machine.rb
CHANGED

```diff
@@ -2,7 +2,7 @@
 require "logstash/filters/base"
 require "logstash/namespace"
 require "logstash/event"
-require "
+require "json"
 
 class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
 
```
```diff
@@ -38,13 +38,13 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
   # Owner of the event currenty being process.
   config :event_owner, :validate => :string, :required => true
 
-  # Current time since EPOCH in ms that should be set in the
+  # Current time since EPOCH in ms that should be set in the generated metric
   config :event_time_ms, :validate => :string, :required => true
 
-  # Current time since EPOCH in ms that should be set in the
+  # Identifier for the ElasticSearch cluster the log was sent to
   config :elasticsearch_cluster, :validate => :string, :required => true
 
-  # Current time since EPOCH in ms that should be set in the
+  # Identifier for the ElasticSearch index the log will be indexed into
   config :elasticsearch_cluster_index, :validate => :string, :required => true
 
   public
```
```diff
@@ -110,12 +110,12 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
     end
 
     # Add in the size of the payload field
-
+    payload_size_bytes = 0
     if event.get("[payload]")
-
+      payload_size_bytes = event.get("[payload]").bytesize
     end
 
-    # Set time (nanoseconds) for
+    # Set time (nanoseconds) for event that is generated
     epoch_time_ns = nil
     if (event_time_ms != nil )
       epoch_time_ns = event_time_ms * 1000000
```
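The two restored calculations are easy to check in isolation: `String#bytesize` counts bytes (not characters), and the millisecond event time is scaled by one million to get nanoseconds. A standalone sketch with sample values (the payload string here is hypothetical, not plugin data):

```ruby
# Sketch of the two calculations restored in this hunk, with sample inputs.
payload = '{"msg":"hello"}'               # hypothetical event payload
payload_size_bytes = payload.bytesize     # byte length of the payload string

event_time_ms = 1_634_662_795_000         # sample epoch time in milliseconds
epoch_time_ns = event_time_ms * 1_000_000 # line protocol expects nanoseconds

puts payload_size_bytes  # 15
puts epoch_time_ns       # 1634662795000000000
```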
```diff
@@ -128,35 +128,35 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
     if (shipper_valid == true && indexer_valid == true && epoch_time_ns != nil)
       total_kafka_lag_ms = indexer_logstash_kafka_read_time - shipper_kafka_append_time
 
-
-      ktm_metric_event_array.push
+      point_ktm = create_point_ktm(shipper_kafka_datacenter, event_owner, payload_size_bytes, "total", total_kafka_lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+      ktm_metric_event_array.push point_ktm
 
     elsif (shipper_valid == true && indexer_valid == false && epoch_time_ns != nil)
-
-      ktm_metric_event_array.push
+      point_ktm = create_point_ktm(shipper_kafka_datacenter, event_owner, payload_size_bytes, "shipper", shipper_kafka_lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+      ktm_metric_event_array.push point_ktm
 
-
-      ktm_metric_event_array.push
+      point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "indexer", elasticsearch_cluster, elasticsearch_cluster_index)
+      ktm_metric_event_array.push point_ktm
 
     elsif (indexer_valid == true && shipper_valid == false && epoch_time_ns != nil)
-
-      ktm_metric_event_array.push
+      point_ktm = create_point_ktm(shipper_kafka_datacenter, event_owner, payload_size_bytes, "indexer", indexer_kafka_lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+      ktm_metric_event_array.push point_ktm
 
-
-      ktm_metric_event_array.push
+      point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "shipper", elasticsearch_cluster, elasticsearch_cluster_index)
+      ktm_metric_event_array.push point_ktm
 
     elsif (indexer_valid == false && shipper_valid == false)
 
-
-      ktm_metric_event_array.push
+      point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "insufficient_data", elasticsearch_cluster, elasticsearch_cluster_index)
+      ktm_metric_event_array.push point_ktm
 
       error_string = sprintf("Error kafka_time_machine: Could not build valid response --> %s, %s", error_string_shipper, error_string_indexer)
       @logger.debug(error_string)
 
     else
 
-
-      ktm_metric_event_array.push
+      point_ktm = create_point_ktm_error(shipper_kafka_datacenter, event_owner, epoch_time_ns, "unknown", elasticsearch_cluster, elasticsearch_cluster_index)
+      ktm_metric_event_array.push point_ktm
 
       error_string = "Unknown error encountered"
       @logger.debug(error_string)
```
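The branch ladder reduces to a small decision table over the two validity flags: both valid yields a `total` lag point, one valid yields a partial lag point plus an error point naming the missing side, and neither yields only an error point. A condensed sketch (hypothetical helper, illustrative only; the real filter also threads lag values, payload size, and the elasticsearch tags through `create_point_ktm`/`create_point_ktm_error`):

```ruby
# Decision table behind the branch ladder: what each combination of
# (shipper_valid, indexer_valid) produces.
def ktm_outcome(shipper_valid, indexer_valid)
  case [shipper_valid, indexer_valid]
  when [true, true]  then { metric: "ktm", lag_type: "total" }
  when [true, false] then { metric: "ktm+ktm_error", lag_type: "shipper", error_source: "indexer" }
  when [false, true] then { metric: "ktm+ktm_error", lag_type: "indexer", error_source: "shipper" }
  else                    { metric: "ktm_error", error_source: "insufficient_data" }
  end
end

p ktm_outcome(true, false)  # shipper-side lag point plus an error point
```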
```diff
@@ -167,9 +167,7 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
     ktm_metric_event_array.each do |metric_event|
 
       # Create new event for KTM metric
-      event_ktm = LogStash::Event.new
-
-      event_ktm.set("ktm_metric", metric_event)
+      event_ktm = LogStash::Event.new(metric_event)
       event_ktm.set("[@metadata][ktm_tags][ktm_metric]", "true")
 
       filter_matched(event_ktm)
```
```diff
@@ -179,23 +177,34 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
 
   end # def filter
 
-  # Creates
+  # Creates hash with ktm data point to return
   public
-  def
+  def create_point_ktm(datacenter, event_owner, payload_size_bytes, lag_type, lag_ms, epoch_time_ns, elasticsearch_cluster, elasticsearch_cluster_index)
+
+    point = Hash.new
+
+    # Name of point and time created
+    point["name"] = "ktm"
+    point["epotch_time_ns"] = epoch_time_ns
+
+    # tags
+    point["datacenter"] = datacenter
+    point["owner"] = event_owner
+    point["lag_type"] = lag_type
+    point["es_cluster"] = elasticsearch_cluster
+    point["es_cluster_index"] = elasticsearch_cluster_index
 
-
-
-
-      time: epoch_time_ns)
+    # fields
+    point["payload_size_bytes"] = payload_size_bytes
+    point["lag_ms"] = lag_ms
 
-
-    return point_influxdb
+    return point
 
-  end # def
+  end # def create_point_ktm
 
-  # Creates
+  # Creates hash with ktm_error data point to return
   public
-  def
+  def create_point_ktm_error(datacenter, event_owner, epoch_time_ns, type, elasticsearch_cluster, elasticsearch_cluster_index)
 
     # Check for nil values
     if (nil == datacenter)
```
```diff
@@ -211,15 +220,25 @@ class LogStash::Filters::KafkaTimeMachine < LogStash::Filters::Base
       epoch_time_ns = ((Time.now.to_f * 1000).to_i)*1000000
     end
 
-    point =
-      tags: {datacenter: datacenter, owner: event_owner, source: type, es_cluster: elasticsearch_cluster, es_cluster_index: elasticsearch_cluster_index},
-      fields: {count: 1},
-      time: epoch_time_ns)
+    point = Hash.new
 
-
-
+    # Name of point and time created
+    point["name"] = "ktm_error"
+    point["epotch_time_ns"] = epoch_time_ns
+
+    # tags
+    point["datacenter"] = datacenter
+    point["owner"] = event_owner
+    point["source"] = type
+    point["es_cluster"] = elasticsearch_cluster
+    point["es_cluster_index"] = elasticsearch_cluster_index
+
+    # fields
+    point["count"] = 1
+
+    return point
 
-  end # def
+  end # def create_point_ktm_error
 
   # Ensures the provided value is numeric; if not returns 'nil'
   public
```
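The fallback timestamp in the context line above, `((Time.now.to_f * 1000).to_i)*1000000`, first truncates wall-clock time to whole milliseconds and then scales to nanoseconds, so the emitted time always has millisecond granularity. A standalone sketch with a sample timestamp in place of `Time.now.to_f`:

```ruby
# Sketch of the fallback-time calculation used when epoch_time_ns is nil.
now_s = 1634662795.123456                       # sample wall-clock seconds
epoch_time_ns = ((now_s * 1000).to_i) * 1000000 # truncate to ms, scale to ns

puts epoch_time_ns  # 1634662795123000000 (ns, millisecond precision)
```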
data/logstash-filter-kafka_time_machine.gemspec
CHANGED

```diff
@@ -1,6 +1,6 @@
 Gem::Specification.new do |s|
   s.name = 'logstash-filter-kafka_time_machine'
-  s.version = '
+  s.version = '3.0.0.pre'
   s.licenses = ['Apache-2.0']
   s.summary = "Calculate total time of logstash event that traversed 2 Kafka queues from a shipper site to an indexer site"
   s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -20,6 +20,5 @@ Gem::Specification.new do |s|
 
   # Gem dependencies
   s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
-  s.add_runtime_dependency "influxdb-client", "~> 2.0.0"
   s.add_development_dependency 'logstash-devutils', '~> 0'
 end
```
metadata
CHANGED
```diff
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: logstash-filter-kafka_time_machine
 version: !ruby/object:Gem::Version
-  version:
+  version: 3.0.0.pre
 platform: ruby
 authors:
 - Chris Foster
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2022-11-10 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: logstash-core-plugin-api
```
```diff
@@ -30,20 +30,6 @@ dependencies:
   - - "<="
   - !ruby/object:Gem::Version
     version: '2.99'
-- !ruby/object:Gem::Dependency
-  name: influxdb-client
-  requirement: !ruby/object:Gem::Requirement
-    requirements:
-    - - "~>"
-    - !ruby/object:Gem::Version
-      version: 2.0.0
-  type: :runtime
-  prerelease: false
-  version_requirements: !ruby/object:Gem::Requirement
-    requirements:
-    - - "~>"
-    - !ruby/object:Gem::Version
-      version: 2.0.0
 - !ruby/object:Gem::Dependency
   name: logstash-devutils
   requirement: !ruby/object:Gem::Requirement
```
```diff
@@ -87,11 +73,11 @@ required_ruby_version: !ruby/object:Gem::Requirement
     version: '0'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - "
+  - - ">"
   - !ruby/object:Gem::Version
-    version:
+    version: 1.3.1
 requirements: []
-rubygems_version: 3.0.3
+rubygems_version: 3.0.3.1
 signing_key:
 specification_version: 4
 summary: Calculate total time of logstash event that traversed 2 Kafka queues from
```