logstash-filter-dissect 1.0.12 → 1.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
-   metadata.gz: edba4758d6e759bae681c335eec23281c0bc8505
-   data.tar.gz: 7cb793e36f94954a21540f6b5e2f584826ee4223
+ SHA256:
+   metadata.gz: c56d7ab014406bf2a21c22cb3f3d40bd09dba45fd3b5d26db58fed30461b56c4
+   data.tar.gz: 1985e43ac6547d4493fb1493ba3f7065a314b769a3de5a8d7a6b99d60453d059
  SHA512:
-   metadata.gz: e4fa09ae61de294335c0eda1c88cc07bb41e672ff50a81dbd69d8175354f9b225c1bbe5a79ad3bd7a35757c158daae51c639d0a3041a2470561fdac245d8c17e
-   data.tar.gz: 9c9dfe6585d8b6a20114d9ba73c49a0d7df252f0fff297dc39f1808fb00ef51e60b3d67738b24fbbe211bc57d584f5143b4ee1b929c075323d293e1a9b8642d8
+   metadata.gz: ae8a49d90ef97ff73010debef4334404fa2b5c994f81388eb8f4de5f9d4849d7bd6afee722dba8afddc3440c3f034176498a15163d092aeb799a1f9ecf294960
+   data.tar.gz: 2d8d5e3db0c3131866b54123a64f64b441e5c5433f398a19bae4a757fda45bec60b2dd487a6a9e7eca91ce08851234771ac14afab1134c0f09e550c0cc900ce7
@@ -1,3 +1,8 @@
+ ## 1.1.1
+ - Fix for "Missing field values cause dissected fields to be out of position" issue. See updated documentation.
+ - Fix for "Check empty fields" issue, empty fields handled better.
+ - Fix for "Integer conversion does not handle big integers".
+
  ## 1.0.12
  - Fix some documentation issues
  
data/Gemfile CHANGED
@@ -2,7 +2,7 @@ source 'https://rubygems.org'
  
  gemspec
  
- logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash"
+ logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash" # <- is where travis has the source
  use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1"
  
  if Dir.exist?(logstash_path) && use_logstash_source
data/VERSION CHANGED
@@ -1 +1 @@
- 1.0.12
+ 1.1.1
@@ -20,10 +20,12 @@ include::{include_path}/plugin_header.asciidoc[]
  
  ==== Description
  
- The Dissect filter is a kind of split operation. Unlike a regular split operation where one delimiter is applied to the whole string, this operation applies a set of delimiters # to a string value. +
+ The Dissect filter is a kind of split operation. Unlike a regular split operation where one delimiter is applied to
+ the whole string, this operation applies a set of delimiters to a string value. +
  Dissect does not use regular expressions and is very fast. +
  However, if the structure of your text varies from line to line then Grok is more suitable. +
- There is a hybrid case where Dissect can be used to de-structure the section of the line that is reliably repeated and then Grok can be used on the remaining field values with # more regex predictability and less overall work to do. +
+ There is a hybrid case where Dissect can be used to de-structure the section of the line that is reliably repeated and
+ then Grok can be used on the remaining field values with more regex predictability and less overall work to do. +
  
  A set of fields and delimiters is called a *dissection*.
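As a sketch of the hybrid case described above: Dissect splits off the reliably repeated prefix, and Grok then works only on the free-form remainder (the field names and Grok patterns here are illustrative, not from the plugin docs):
....
filter {
  dissect {
    mapping => {
      # the leading timestamp and level always have the same shape
      "message" => "[%{occurred_at}] %{level} %{rest}"
    }
  }
  grok {
    # only the variable remainder needs a regular expression
    match => { "rest" => "user %{USERNAME:user} from %{IP:client}" }
  }
}
....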
  
@@ -34,10 +36,10 @@ The dissection is described using a set of `%{}` sections:
  
  A *field* is the text from `%` to `}` inclusive.
  
- A *delimiter* is the text between `}` and `%` characters.
+ A *delimiter* is the text between a `}` and the next `%{` characters.
  
  [NOTE]
- delimiters can't contain these `}{%` characters.
+ Any set of characters that does not fit the `%{`, `'not }'`, `}` pattern is a delimiter.
  
  The config might look like this:
  ....
@@ -49,7 +51,8 @@ The config might look like this:
    }
  }
  ....
- When dissecting a string from left to right, text is captured upto the first delimiter - this captured text is stored in the first field. This is repeated for each field/# delimiter pair thereafter until the last delimiter is reached, then *the remaining text is stored in the last field*. +
+ When dissecting a string from left to right, text is captured up to the first delimiter - this captured text is stored in the first field.
+ This is repeated for each field/delimiter pair thereafter until the last delimiter is reached, then *the remaining text is stored in the last field*. +
  
  *The Key:* +
  The key is the text between the `%{` and `}`, exclusive of the ?, +, & prefixes and the ordinal suffix. +
@@ -57,7 +60,7 @@ The key is the text between the `%{` and `}`, exclusive of the ?, +, & prefixes
  `%{+bbb/3}` - key is `bbb` +
  `%{&ccc}` - key is `ccc` +
  
- *Normal field notation:* +
+ ===== Normal field notation
  The found value is added to the Event using the key. +
  `%{some_field}` - a normal field has no prefix or suffix
  
@@ -69,7 +72,7 @@ The key, if supplied, is prefixed with a `?`.
  
  `%{?foo}` is a named skip field.
  
- *Append field notation:* +
+ ===== Append field notation
  The value is appended to another value or stored if it's the first field seen. +
  The key is prefixed with a `+`. +
  The final value is stored in the Event using the key. +
@@ -88,7 +91,7 @@ e.g. for a text of `1 2 3 go`, this `%{+a/2} %{+a/1} %{+a/4} %{+a/3}` will build
  Append fields without an order modifier will append in declared order. +
  e.g. for a text of `1 2 3 go`, this `%{a} %{b} %{+a}` will build two key/values of `a => 1 3 go, b => 2` +
  
- *Indirect field notation:* +
+ ===== Indirect field notation
  The found value is added to the Event using the found value of another field as the key. +
  The key is prefixed with a `&`. +
  `%{&some_field}` - an indirect field where the key is indirectly sourced from the value of `some_field`. +
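As a sketch of how the skip and indirect notations combine to capture key/value pairs (the `k1`/`k2` names are illustrative):
....
filter {
  dissect {
    mapping => {
      # %{?kN} skips the found text but remembers it;
      # %{&kN} stores the next value under that remembered text as the key
      "message" => "%{?k1}=%{&k1} %{?k2}=%{&k2}"
    }
  }
}
....
For an input of `env=prod region=eu-west`, this would create `env => prod` and `region => eu-west`.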
@@ -109,20 +112,87 @@ append and indirect cannot be combined and will fail validation. +
  `%{&+something}` will add a value to the `+something` key, again probably unintended. +
  ===============================
  
- *Delimiter repetition:* +
- In the source text if a field has variable width padded with delimiters, the padding will be ignored. +
- e.g. for texts of:
+ ==== Multiple Consecutive Delimiter Handling
+
+ [IMPORTANT]
+ ===============================
+ Starting from version 1.1.1 of this plugin, the handling of multiple consecutive delimiters has changed.
+ Multiple consecutive delimiters are now seen as missing fields by default, not as padding.
+ If you are already using Dissect and your source text has fields padded with extra delimiters,
+ you will need to change your config. Please read the section below.
+ ===============================
+
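As a sketch of the kind of config change this implies (field names illustrative): a mapping that previously relied on padding being silently ignored should now mark the field to the left of the padding with the `->` suffix described below:
....
filter {
  dissect {
    mapping => {
      # before 1.1.1: "message" => "%{a} %{b} %{c}" tolerated extra delimiters
      # from 1.1.1 on, tell Dissect to skip the padding explicitly:
      "message" => "%{a->} %{b} %{c}"
    }
  }
}
....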
+ ===== Empty data between delimiters
+ Given this text as the sample used to create a dissection:
+ ....
+ John Smith,Big Oaks,Wood Lane,Hambledown,Canterbury,CB34RY
+ ....
+ The created dissection, with 6 fields, is:
+ ....
+ %{name},%{addr1},%{addr2},%{addr3},%{city},%{zip}
+ ....
+ When a line like this is processed:
  ....
- 00000043 ViewReceiver I
- 000000b3 Peer I
+ Jane Doe,4321 Fifth Avenue,,,New York,87432
+ ....
+ Dissect will create an event with empty fields for `addr2` and `addr3`, like so:
+ ....
+ {
+   "name": "Jane Doe",
+   "addr1": "4321 Fifth Avenue",
+   "addr2": "",
+   "addr3": "",
+   "city": "New York",
+   "zip": "87432"
+ }
  ....
- with a dissection of `%{a} %{b} %{c}`; the padding is ignored, `event.get([c]) -> "I"`
  
- [NOTE]
- ====
+ ===== Delimiters used as padding to visually align fields
+ *Padding to the right hand side*
+
+ Given these texts as the samples used to create a dissection:
+ ....
+ 00000043 ViewReceive machine-321
+ f3000a3b Calc machine-123
+ ....
+ The dissection, with 3 fields, is:
+ ....
+ %{id} %{function->} %{server}
+ ....
+ Note, above, the second field has a `->` suffix, which tells Dissect to ignore padding to its right. +
+ Dissect will create these events:
+ ....
+ {
+   "id": "00000043",
+   "function": "ViewReceive",
+   "server": "machine-321"
+ }
+ {
+   "id": "f3000a3b",
+   "function": "Calc",
+   "server": "machine-123"
+ }
+ ....
+ [IMPORTANT]
+ Always add the `->` suffix to the field on the left of the padding.
+
+ *Padding to the left hand side (to the human eye)*
+
+ Given these texts as the samples used to create a dissection:
+ ....
+ 00000043 ViewReceive machine-321
+ f3000a3b Calc machine-123
+ ....
+ The dissection, with 3 fields, is now:
+ ....
+ %{id->} %{function} %{server}
+ ....
+ Here the `->` suffix moves to the `id` field because Dissect sees the padding as being to the right of the `id` field. +
+
+ ==== Conditional processing
+
  You probably want to use this filter inside an `if` block. +
  This ensures that the event contains a field value with a suitable structure for the dissection.
- ====
  
  For example...
  ....
@@ -156,7 +226,7 @@ filter plugins.
  &nbsp;
  
  [id="plugins-{type}s-{plugin}-convert_datatype"]
- ===== `convert_datatype`
+ ===== `convert_datatype`
  
  * Value type is <<hash,hash>>
  * Default value is `{}`
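A minimal sketch of a `convert_datatype` config (field names illustrative). Note that `int` and `float` are the supported target types; an unsupported name such as `integer` raises an error at register time:
....
filter {
  dissect {
    mapping => {
      "message" => "%{name} %{count} %{ratio}"
    }
    convert_datatype => {
      "count" => "int"
      "ratio" => "float"
    }
  }
}
....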
@@ -177,7 +247,7 @@ filter {
  }
  
  [id="plugins-{type}s-{plugin}-mapping"]
- ===== `mapping`
+ ===== `mapping`
  
  * Value type is <<hash,hash>>
  * Default value is `{}`
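As an illustration of a `mapping` entry using the append notation described earlier (field names are hypothetical):
....
filter {
  dissect {
    mapping => {
      # joins the first three space-delimited tokens into one "ts" value
      "message" => "%{ts} %{+ts} %{+ts} %{src} %{msg}"
    }
  }
}
....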
@@ -200,7 +270,7 @@ This is useful if you want to keep the field `description` but also
  dissect it some more.
  
  [id="plugins-{type}s-{plugin}-tag_on_failure"]
- ===== `tag_on_failure`
+ ===== `tag_on_failure`
  
  * Value type is <<array,array>>
  * Default value is `["_dissectfailure"]`
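A sketch of overriding the default failure tag (the tag value is illustrative):
....
filter {
  dissect {
    mapping => {
      "message" => "%{a} %{b}"
    }
    # events that do not match the mapping get this tag instead of "_dissectfailure"
    tag_on_failure => ["_my_dissect_failure"]
  }
}
....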
@@ -210,4 +280,4 @@ Append values to the `tags` field when dissection fails
  
  
  [id="plugins-{type}s-{plugin}-common-options"]
- include::{include_path}/{type}.asciidoc[]
+ include::{include_path}/{type}.asciidoc[]
@@ -1,4 +1,4 @@
  # AUTOGENERATED BY THE GRADLE SCRIPT. DO NOT EDIT.
  
  require 'jar_dependencies'
- require_jar('org.logstash.dissect', 'jruby-dissect-library', '1.0.12')
+ require_jar('org.logstash.dissect', 'jruby-dissect-library', '1.1.1')
@@ -168,22 +168,35 @@ module LogStash module Filters class Dissect < LogStash::Filters::Base
    public
  
    def register
-     @dissector = LogStash::Dissector.new(@mapping)
+     needs_decoration = @add_field.size + @add_tag.size + @remove_field.size + @remove_tag.size > 0
+     @dissector = LogStash::Dissector.new(@mapping, self, @convert_datatype, needs_decoration)
    end
  
    def filter(event)
      # all plugin functions happen in the JRuby extension:
      # debug, warn and error logging, filter_matched, tagging etc.
-     @dissector.dissect(event, self)
+     @dissector.dissect(event)
    end
  
    def multi_filter(events)
      LogStash::Util.set_thread_plugin(self)
-     @dissector.dissect_multi(events, self)
+     @dissector.dissect_multi(events)
      events
    end
  
+   # this method is stubbed during testing
+   # a reference to it in the JRuby Extension `initialize` may not be valid
    def metric_increment(metric_name)
      metric.increment(metric_name)
    end
+
+   # the JRuby Extension `initialize` method stores a DynamicMethod reference to this method
+   def increment_matches_metric
+     metric_increment(:matches)
+   end
+
+   # the JRuby Extension `initialize` method stores a DynamicMethod reference to this method
+   def increment_failures_metric
+     metric_increment(:failures)
+   end
  end end end
@@ -3,59 +3,50 @@ require 'spec_helper'
  require "logstash/filters/dissect"
  
  describe LogStash::Filters::Dissect do
-   class LoggerMock
-     attr_reader :msgs, :hashes
-     def initialize()
-       @msgs = []
-       @hashes = []
-     end
-
-     def error(*msg)
-       @msgs.push(msg[0])
-       @hashes.push(msg[1])
-     end
-
-     def warn(*msg)
-       @msgs.push(msg[0])
-       @hashes.push(msg[1])
-     end
-
-     def debug?() true; end
  
-     def debug(*msg)
-       @msgs.push(msg[0])
-       @hashes.push(msg[1])
-     end
-
-     def fatal(*msg)
-       @msgs.push(msg[0])
-       @hashes.push(msg[1])
+   describe "Basic dissection" do
+     let(:config) do <<-CONFIG
+       filter {
+         dissect {
+           mapping => {
+             message => "[%{occurred_at}] %{code} %{service->} %{ic} %{svc_message}"
+           }
+         }
+       }
+     CONFIG
      end
  
-     def trace(*msg)
-       @msgs.push(msg[0])
-       @hashes.push(msg[1])
+     sample("message" => "[25/05/16 09:10:38:425 BST] 00000001 SystemOut O java.lang:type=MemoryPool,name=class storage") do
+       expect(subject.get("occurred_at")).to eq("25/05/16 09:10:38:425 BST")
+       expect(subject.get("code")).to eq("00000001")
+       expect(subject.get("service")).to eq("SystemOut")
+       expect(subject.get("ic")).to eq("O")
+       expect(subject.get("svc_message")).to eq("java.lang:type=MemoryPool,name=class storage")
+       expect(subject.get("tags")).to be_nil
      end
    end
  
-   describe "Basic dissection" do
+   describe "Basic dissection, like CSV with missing fields" do
      let(:config) do <<-CONFIG
        filter {
          dissect {
            mapping => {
-             message => "[%{occurred_at}] %{code} %{service} %{ic} %{svc_message}"
+             message => '[%{occurred_at}] %{code} %{service} values: "%{v1}","%{v2}","%{v3}"%{rest}'
            }
          }
        }
      CONFIG
      end
  
-     sample("message" => "[25/05/16 09:10:38:425 BST] 00000001 SystemOut O java.lang:type=MemoryPool,name=class storage") do
+     sample("message" => '[25/05/16 09:10:38:425 BST] 00000001 SystemOut values: "f1","","f3"') do
        expect(subject.get("occurred_at")).to eq("25/05/16 09:10:38:425 BST")
        expect(subject.get("code")).to eq("00000001")
        expect(subject.get("service")).to eq("SystemOut")
-       expect(subject.get("ic")).to eq("O")
-       expect(subject.get("svc_message")).to eq("java.lang:type=MemoryPool,name=class storage")
+       expect(subject.get("v1")).to eq("f1")
+       expect(subject.get("v2")).to eq("")
+       expect(subject.get("v3")).to eq("f3")
+       expect(subject.get("rest")).to eq("")
+       expect(subject.get("tags")).to be_nil
      end
    end
  
@@ -118,32 +109,82 @@ describe LogStash::Filters::Dissect do
        "mapping" => {"message" => "[%{occurred_at}] %{code} %{service} %{?ic}=%{&ic}% %{svc_message}"},
        "convert_datatype" => {
          "ccu" => "float", # ccu field -> nil
-         "code" => "integer", # only int is supported
          "other" => "int" # other field -> hash - not coercible
        }
      }
    end
    let(:event) { LogStash::Event.new("message" => message, "other" => {}) }
-   let(:loggr) { LoggerMock.new }
-
-   before(:each) do
-     filter.class.instance_variable_set("@logger", loggr)
-   end
  
    it "tags and log messages are created" do
      filter.register
      filter.filter(event)
      expect(event.get("code")).to eq("00000001")
-     expect(event.get("tags")).to eq(["_dataconversionnullvalue_ccu_float", "_dataconversionmissing_code_integer", "_dataconversionuncoercible_other_int"])
-     expect(loggr.msgs).to eq(
-       [
-         "Event before dissection",
-         "Dissector datatype conversion, value cannot be coerced, key: ccu, value: null",
-         "Dissector datatype conversion, datatype not supported: integer",
-         "Dissector datatype conversion, value cannot be coerced, key: other, value: {}",
-         "Event after dissection"
-       ]
-     )
+     tags = event.get("tags")
+     expect(tags).to include("_dataconversionnullvalue_ccu_float")
+     expect(tags).to include("_dataconversionuncoercible_other_int")
+     # Logging moved to java can't mock ruby logger anymore
+   end
+ end
+
+ describe "Basic dissection when the source field does not exist" do
+   let(:event) { LogStash::Event.new("message" => "foo", "other" => {}) }
+   subject(:filter) { LogStash::Filters::Dissect.new(config) }
+   let(:config) do
+     {
+       "mapping" => {"msg" => "[%{occurred_at}] %{code} %{service} %{?ic}=%{&ic}% %{svc_message}"},
+     }
+   end
+   it "does not raise an error" do
+     filter.register
+     expect(subject).to receive(:metric_increment).once.with(:failures)
+     # it should log a warning, but we can't test that
+     expect { filter.filter(event) }.not_to raise_exception
+   end
+ end
+
+ describe "Invalid datatype conversion specified integer instead of int" do
+   subject(:filter) { LogStash::Filters::Dissect.new(config) }
+   let(:config) do
+     {
+       "convert_datatype" => {
+         "code" => "integer", # only int is supported
+       }
+     }
+   end
+   it "raises an error" do
+     expect { filter.register }.to raise_exception(LogStash::ConvertDatatypeFormatError)
+   end
+ end
+
+ describe "Integer datatype conversion, handle large integers" do
+   let(:config) do <<-CONFIG
+     filter {
+       dissect {
+         convert_datatype => {
+           "big_number" => "int"
+         }
+       }
+     }
+   CONFIG
+   end
+   sample("big_number" => "43947404257507186289") do
+     expect(subject.get("big_number")).to eq(43947404257507186289)
+   end
+ end
+
+ describe "Float datatype conversion, handle large floats" do
+   let(:config) do <<-CONFIG
+     filter {
+       dissect {
+         convert_datatype => {
+           "big_number" => "float"
+         }
+       }
+     }
+   CONFIG
+   end
+   sample("big_number" => "43947404257507186289.345324") do
+     expect(subject.get("big_number")).to eq(BigDecimal.new("43947404257507186289.345324"))
    end
  end
  
@@ -177,11 +218,6 @@ describe LogStash::Filters::Dissect do
    let(:message) { "very random message :-)" }
    let(:config) { {"mapping" => {"blah-di-blah" => "%{timestamp} %{+timestamp}"}} }
    let(:event) { LogStash::Event.new("message" => message) }
-   let(:loggr) { LoggerMock.new }
-
-   before(:each) do
-     filter.class.instance_variable_set("@logger", loggr)
-   end
  
    it "does not raise any exceptions" do
      expect{filter.register}.not_to raise_exception
@@ -189,8 +225,8 @@
  
    it "dissect failure key missing is logged" do
      filter.register
-     filter.filter(event)
-     expect(loggr.msgs).to eq(["Event before dissection", "Dissector mapping, key not found in event", "Event after dissection"])
+     expect{filter.filter(event)}.not_to raise_exception
+     # Logging moved to java, can't Mock Logger
    end
  end
  
@@ -209,7 +245,7 @@
    context "when field is defined as Append and Indirect (+&)" do
      let(:config) { {"mapping" => {"message" => "%{+&timestamp}"}}}
      it "raises an error in register" do
-       msg = "org.logstash.dissect.fields.InvalidFieldException: Field cannot prefix with both Append and Indirect Prefix (+&): +&timestamp"
+       msg = /\Aorg\.logstash\.dissect\.fields\.InvalidFieldException: Field cannot prefix with both Append and Indirect Prefix .+/
        expect{filter.register}.to raise_exception(LogStash::FieldFormatError, msg)
      end
    end
@@ -217,7 +253,7 @@
    context "when field is defined as Indirect and Append (&+)" do
      let(:config) { {"mapping" => {"message" => "%{&+timestamp}"}}}
      it "raises an error in register" do
-       msg = "org.logstash.dissect.fields.InvalidFieldException: Field cannot prefix with both Append and Indirect Prefix (&+): &+timestamp"
+       msg = /\Aorg\.logstash\.dissect\.fields\.InvalidFieldException: Field cannot prefix with both Append and Indirect Prefix .+/
        expect{filter.register}.to raise_exception(LogStash::FieldFormatError, msg)
      end
    end
@@ -249,6 +285,22 @@ describe LogStash::Filters::Dissect do
      end
    end
  
+   describe "When the delimiters contain '{' and '}'" do
+     let(:options) { { "mapping" => { "message" => "{%{a}}{%{b}}%{rest}" } } }
+     subject { described_class.new(options) }
+     let(:event) { LogStash::Event.new({ "message" => "{foo}{bar}" }) }
+     before(:each) do
+       subject.register
+       subject.filter(event)
+     end
+     it "should dissect properly and not add tags to the event" do
+       expect(event.get("a")).to eq("foo")
+       expect(event.get("b")).to eq("bar")
+       expect(event.get("rest")).to eq("")
+       expect(event.get("tags")).to be_nil
+     end
+   end
+
    describe "Basic dissection" do
  
      let(:options) { { "mapping" => { "message" => "%{a} %{b}" } } }
@@ -270,7 +322,9 @@
    context "when field is empty" do
      let(:event_data) { { "message" => "" } }
      it "should add tags to the event" do
-       expect(event.get("tags")).to include("_dissectfailure")
+       tags = event.get("tags")
+       expect(tags).not_to be_nil
+       expect(tags).to include("_dissectfailure")
      end
    end
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-filter-dissect
  version: !ruby/object:Gem::Version
-   version: 1.0.12
+   version: 1.1.1
  platform: ruby
  authors:
  - Elastic
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2017-08-15 00:00:00.000000000 Z
+ date: 2017-11-02 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
@@ -72,7 +72,9 @@ dependencies:
      - - ">="
        - !ruby/object:Gem::Version
          version: '0'
- description: This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program
+ description: This gem is a Logstash plugin required to be installed on top of the
+   Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This
+   gem is not a stand-alone program
  email: info@elastic.co
  executables: []
  extensions: []
@@ -93,7 +95,7 @@ files:
  - logstash-filter-dissect.gemspec
  - spec/filters/dissect_spec.rb
  - spec/spec_helper.rb
- - vendor/jars/org/logstash/dissect/jruby-dissect-library/1.0.12/jruby-dissect-library-1.0.12.jar
+ - vendor/jars/org/logstash/dissect/jruby-dissect-library/1.1.1/jruby-dissect-library-1.1.1.jar
  homepage: http://www.elastic.co/guide/en/logstash/current/index.html
  licenses:
  - Apache License (2.0)
@@ -117,7 +119,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
        version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.4.8
+ rubygems_version: 2.6.13
  signing_key:
  specification_version: 4
  summary: This dissect filter will de-structure text into multiple fields.