logstash-output-elasticsearch 11.0.0-java → 11.0.4-java

Files changed (40)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +13 -0
  3. data/docs/index.asciidoc +13 -13
  4. data/lib/logstash/outputs/elasticsearch/data_stream_support.rb +1 -1
  5. data/lib/logstash/outputs/elasticsearch/http_client/pool.rb +2 -28
  6. data/lib/logstash/outputs/elasticsearch/ilm.rb +2 -33
  7. data/lib/logstash/outputs/elasticsearch/license_checker.rb +12 -6
  8. data/lib/logstash/outputs/elasticsearch/template_manager.rb +1 -1
  9. data/lib/logstash/outputs/elasticsearch/templates/ecs-v1/elasticsearch-8x.json +1 -0
  10. data/lib/logstash/outputs/elasticsearch.rb +18 -15
  11. data/lib/logstash/plugin_mixins/elasticsearch/common.rb +2 -1
  12. data/logstash-output-elasticsearch.gemspec +2 -2
  13. data/spec/es_spec_helper.rb +4 -6
  14. data/spec/fixtures/_nodes/{5x_6x.json → 6x.json} +5 -5
  15. data/spec/integration/outputs/compressed_indexing_spec.rb +47 -46
  16. data/spec/integration/outputs/delete_spec.rb +49 -51
  17. data/spec/integration/outputs/ilm_spec.rb +230 -246
  18. data/spec/integration/outputs/index_spec.rb +5 -2
  19. data/spec/integration/outputs/index_version_spec.rb +78 -82
  20. data/spec/integration/outputs/ingest_pipeline_spec.rb +58 -60
  21. data/spec/integration/outputs/painless_update_spec.rb +74 -164
  22. data/spec/integration/outputs/parent_spec.rb +67 -75
  23. data/spec/integration/outputs/retry_spec.rb +2 -2
  24. data/spec/integration/outputs/sniffer_spec.rb +15 -53
  25. data/spec/integration/outputs/templates_spec.rb +79 -81
  26. data/spec/integration/outputs/update_spec.rb +99 -101
  27. data/spec/spec_helper.rb +1 -5
  28. data/spec/unit/outputs/elasticsearch/data_stream_support_spec.rb +0 -14
  29. data/spec/unit/outputs/elasticsearch/http_client/pool_spec.rb +30 -37
  30. data/spec/unit/outputs/elasticsearch/template_manager_spec.rb +9 -9
  31. data/spec/unit/outputs/elasticsearch_spec.rb +54 -18
  32. metadata +8 -22
  33. data/lib/logstash/outputs/elasticsearch/templates/ecs-disabled/elasticsearch-2x.json +0 -95
  34. data/lib/logstash/outputs/elasticsearch/templates/ecs-disabled/elasticsearch-5x.json +0 -46
  35. data/spec/fixtures/_nodes/2x_1x.json +0 -27
  36. data/spec/fixtures/scripts/groovy/scripted_update.groovy +0 -2
  37. data/spec/fixtures/scripts/groovy/scripted_update_nested.groovy +0 -2
  38. data/spec/fixtures/scripts/groovy/scripted_upsert.groovy +0 -2
  39. data/spec/integration/outputs/groovy_update_spec.rb +0 -150
  40. data/spec/integration/outputs/templates_5x_spec.rb +0 -98
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: dd0e368ed484aa214da94fcdc6978f919b3139cf90f8db462aba17f9c1e86670
- data.tar.gz: 67b475fdd703d50d7bbb806adebd46c3b6657ead3019e76c7517e3c2428335be
+ metadata.gz: f054da12af7761adfdd86d912602ad164e407d89a3965637763f51f341389185
+ data.tar.gz: b82ed842fa924dff112cc3b755f4bf08645f6e2ee39f820f39d5ce5f4700f6ed
  SHA512:
- metadata.gz: f1e85fea62c9173d0ffdbf739487d8bcbbcfb41f304893e37333d22a8f42b0f527506d926c9f983480ef1c76eee43869ded32b84df98e19fdd999b9d5a26baa9
- data.tar.gz: 0d1df95d541fa6e1d1161763051a3b4dc8fed10f62d878e7b4aa556d00c17bf6daf44a2fdc0c356c14c9c5431457112da601942dcf34dbf5386b05235933146e
+ metadata.gz: ea4d348ab10fc9caf3df58a9b13adc8b234d4e13834137006845507d74050f22499f6d957d2559436718ce76ced97ae4e11637d5f2da988f48ae7b8218db1118
+ data.tar.gz: ba9983c8a214de9793d30b3845c0f9a536ea2e67b909788d821262f7217d5127343d6dd0fb22a675895cc04aad2df9c723874f40248e87734a10204c8aec81d6
data/CHANGELOG.md CHANGED
@@ -1,3 +1,16 @@
+ ## 11.0.4
+ - [DOC] Clarify that `http_compression` applies to _requests_, and remove noise about _response_ decompression [#1000](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1000)
+
+ ## 11.0.3
+ - Fixed SSL handshake hanging indefinitely with proxy setup [#1032](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1032)
+
+ ## 11.0.2
+ - Validate that required functionality in Elasticsearch is available upon initial connection [#1015](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1015)
+
+ ## 11.0.1
+ - Fix: DLQ regression shipped in 11.0.0 [#1012](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1012)
+ - [DOC] Fixed broken link in list item [#1011](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1011)
+
  ## 11.0.0
  - Feat: Data stream support [#988](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/988)
  - Refactor: reviewed logging format + restored ES (initial) setup error logging
data/docs/index.asciidoc CHANGED
@@ -71,7 +71,7 @@ as logs, events, and metrics) and non-time series data in Elasticsearch.
  The data stream options are recommended for indexing time series datasets (such
  as logs, metrics, and events) into {es}:

- * <<plugins-{type}s-{plugin}-data_stream>> |<<string,string>>
+ * <<plugins-{type}s-{plugin}-data_stream>>
  * <<plugins-{type}s-{plugin}-data_stream_auto_routing>>
  * <<plugins-{type}s-{plugin}-data_stream_dataset>>
  * <<plugins-{type}s-{plugin}-data_stream_namespace>>
@@ -278,16 +278,11 @@ not reevaluate its DNS value while the keepalive is in effect.

  ==== HTTP Compression

- This plugin supports request and response compression. Response compression is
- enabled by default for HTTP and for Elasticsearch versions 5.0 and later.
+ This plugin supports request compression, and handles compressed responses
+ from Elasticsearch.

- You don't have to set any configs in Elasticsearch for it to send back a
- compressed response. For versions before 5.0, or if HTTPS is enabled,
- `http.compression` must be set to `true` {ref}/modules-http.html#modules-http[in
- Elasticsearch] to take advantage of response compression when using this plugin.
-
- For requests compression, regardless of the Elasticsearch version, enable the
- `http_compression` setting in the Logstash config file.
+ To enable request compression, use the <<plugins-{type}s-{plugin}-http_compression>>
+ setting on this plugin.

  ==== Authentication

@@ -514,7 +509,7 @@ overwritten with a warning.
  * Default value is `logs`.

  The data stream type used to construct the data stream at index time.
- Currently, only `logs` and `metrics`are supported.
+ Currently, only `logs`, `metrics` and `synthetics` are supported.

  [id="plugins-{type}s-{plugin}-doc_as_upsert"]
  ===== `doc_as_upsert`
@@ -640,8 +635,13 @@ Any special characters present in the URLs here MUST be URL escaped! This means
  * Value type is <<boolean,boolean>>
  * Default value is `false`

- Enable gzip compression on requests. Note that response compression is on by
- default for Elasticsearch v5.0 and beyond
+ Enable gzip compression on requests.
+
+ This setting allows you to reduce this plugin's outbound network traffic by
+ compressing each bulk _request_ to {es}.
+
+ NOTE: This output plugin reads compressed _responses_ from {es} regardless
+ of the value of this setting.

  [id="plugins-{type}s-{plugin}-ilm_enabled"]
  ===== `ilm_enabled`
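The `http_compression` option documented above is set on the elasticsearch output block of a Logstash pipeline; a minimal sketch (host and index values are placeholders):

```
output {
  elasticsearch {
    hosts => ["https://es.example.org:9200"]
    index => "my-index"
    http_compression => true   # gzip each bulk request body
  }
}
```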
data/lib/logstash/outputs/elasticsearch/data_stream_support.rb CHANGED
@@ -146,7 +146,7 @@ module LogStash module Outputs class ElasticSearch
  def data_stream_event_action_tuple(event)
  event_data = event.to_hash
  data_stream_event_sync(event_data) if data_stream_sync_fields
- ['create', common_event_params(event), event_data] # action always 'create'
+ EventActionTuple.new('create', common_event_params(event), event, event_data)
  end

  DATA_STREAM_SYNC_FIELDS = [ 'type', 'dataset', 'namespace' ].freeze
data/lib/logstash/outputs/elasticsearch/http_client/pool.rb CHANGED
@@ -176,15 +176,7 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  @logger.warn("Sniff returned no nodes! Will not update hosts.")
  return nil
  else
- case major_version(url_meta[:version])
- when 5, 6, 7, 8
- sniff_5x_and_above(nodes)
- when 2, 1
- sniff_2x_1x(nodes)
- else
- @logger.warn("Could not determine version for nodes in ES cluster!")
- return nil
- end
+ sniff(nodes)
  end
  end

@@ -192,7 +184,7 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  version_string.split('.').first.to_i
  end

- def sniff_5x_and_above(nodes)
+ def sniff(nodes)
  nodes.map do |id,info|
  # Skip master-only nodes
  next if info["roles"] && info["roles"] == ["master"]
@@ -208,24 +200,6 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  end
  end

- def sniff_2x_1x(nodes)
- nodes.map do |id,info|
- # TODO Make sure this works with shield. Does that listed
- # stuff as 'https_address?'
-
- addr_str = info['http_address'].to_s
- next unless addr_str # Skip hosts with HTTP disabled
-
- # Only connect to nodes that serve data
- # this will skip connecting to client, tribe, and master only nodes
- # Note that if 'attributes' is NOT set, then that's just a regular node
- # with master + data + client enabled, so we allow that
- attributes = info['attributes']
- next if attributes && attributes['data'] == 'false'
- address_str_to_uri(addr_str)
- end.compact
- end
-
  def stop_sniffer
  @sniffer.join if @sniffer
  end
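The unified `sniff` above keeps only the node-filtering rule that applies to all supported versions: master-only nodes are skipped, everything else is kept. A condensed, standalone sketch of that rule (the method name and sample node data are hypothetical, not from the plugin):

```ruby
# Collect sniffable publish addresses, skipping master-only nodes.
# A node with no "roles" key, or with data/ingest roles, is kept.
def sniffable_nodes(nodes)
  nodes.map do |id, info|
    next if info["roles"] && info["roles"] == ["master"] # master-only: skip
    info.dig("http", "publish_address")
  end.compact
end

nodes = {
  "node1" => { "roles" => ["master"], "http" => { "publish_address" => "127.0.0.1:9200" } },
  "node2" => { "roles" => ["data"],   "http" => { "publish_address" => "127.0.0.1:9201" } },
}
sniffable_nodes(nodes) # => ["127.0.0.1:9201"]
```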
data/lib/logstash/outputs/elasticsearch/ilm.rb CHANGED
@@ -16,20 +16,12 @@ module LogStash; module Outputs; class ElasticSearch
  begin
  if @ilm_enabled == 'auto'
  if ilm_on_by_default?
- ilm_ready, error = ilm_ready?
- if !ilm_ready
- @logger.info("Index Lifecycle Management is set to 'auto', but will be disabled - #{error}")
- false
- else
- ilm_alias_set?
- end
+ ilm_alias_set?
  else
  @logger.info("Index Lifecycle Management is set to 'auto', but will be disabled - Your Elasticsearch cluster is before 7.0.0, which is the minimum version required to automatically run Index Lifecycle Management")
  false
  end
  elsif @ilm_enabled.to_s == 'true'
- ilm_ready, error = ilm_ready?
- raise LogStash::ConfigurationError,"Index Lifecycle Management is set to enabled in Logstash, but cannot be used - #{error}" unless ilm_ready
  ilm_alias_set?
  else
  false
@@ -47,29 +39,6 @@ module LogStash; module Outputs; class ElasticSearch
  maximum_seen_major_version >= 7
  end

- def ilm_ready?
- # Check the Elasticsearch instance for ILM readiness - this means that the version has to be a non-OSS release, with ILM feature
- # available and enabled.
- begin
- xpack = client.get_xpack_info
- features = xpack.nil? || xpack.empty? ? nil : xpack["features"]
- ilm = features.nil? ? nil : features["ilm"]
- return false, "Index Lifecycle management is not installed on your Elasticsearch cluster" if features.nil? || ilm.nil?
- return false, "Index Lifecycle management is not available in your Elasticsearch cluster" unless ilm['available']
- return false, "Index Lifecycle management is not enabled in your Elasticsearch cluster" unless ilm['enabled']
- return true, nil
- rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError => e
- # Check xpack endpoint: If no xpack endpoint, then this version of Elasticsearch is not compatible
- if e.response_code == 404
- return false, "Index Lifecycle management is not installed on your Elasticsearch cluster"
- elsif e.response_code == 400
- return false, "Index Lifecycle management is not installed on your Elasticsearch cluster"
- else
- raise e
- end
- end
- end
-
  def default_index?(index)
  index == @default_index
  end
@@ -113,4 +82,4 @@ module LogStash; module Outputs; class ElasticSearch
  LogStash::Json.load(::IO.read(policy_path))
  end
  end
- end; end; end
+ end; end; end
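With `ilm_ready?` removed above, the decision tree collapses to: `auto` enables ILM only on ES 7+, `true` forces it, anything else disables it. A condensed sketch of that resolution (hypothetical function, with the alias check reduced to a boolean parameter):

```ruby
# Simplified resolution of the ilm_enabled setting after this change.
# alias_set stands in for the plugin's ilm_alias_set? check.
def ilm_enabled?(setting, es_major_version, alias_set: true)
  case setting.to_s
  when 'auto' then es_major_version >= 7 && alias_set
  when 'true' then alias_set
  else false
  end
end

ilm_enabled?('auto', 6)  # => false (pre-7.0 cluster)
ilm_enabled?('auto', 7)  # => true
ilm_enabled?('false', 7) # => false
```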
data/lib/logstash/outputs/elasticsearch/license_checker.rb CHANGED
@@ -11,7 +11,7 @@ module LogStash; module Outputs; class ElasticSearch
  # @param url [LogStash::Util::SafeURI] ES node URL
  # @return [Boolean] true if provided license is deemed appropriate
  def appropriate_license?(pool, url)
- license = pool.get_license(url)
+ license = extract_license(pool.get_license(url))
  case license_status(license)
  when 'active'
  true
@@ -24,20 +24,26 @@ module LogStash; module Outputs; class ElasticSearch
  end
  end

+ NO_LICENSE = {}.freeze
+ private_constant :NO_LICENSE
+
+ def extract_license(license)
+ license.fetch("license", NO_LICENSE)
+ end
+
  def license_status(license)
- license.fetch("license", {}).fetch("status", nil)
+ license.fetch("status", nil)
  end

  private

  def warn_no_license(url)
- @logger.error("Connecting to an OSS distribution of Elasticsearch is no longer supported, " +
- "please upgrade to the default distribution of Elasticsearch", url: url.sanitized.to_s)
+ @logger.error("Could not connect to a compatible version of Elasticsearch", url: url.sanitized.to_s)
  end

  def warn_invalid_license(url, license)
- @logger.warn("WARNING: Current Elasticsearch license is not active, " +
- "please check Elasticsearch's licensing information", url: url.sanitized.to_s, license: license)
+ @logger.warn("Elasticsearch license is not active, please check Elasticsearch's licensing information",
+ url: url.sanitized.to_s, license: license)
  end

  end
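The refactor above unwraps the `"license"` envelope once in `extract_license`, with a frozen empty hash standing in when the response carries no license; `license_status` then reads the already-unwrapped hash. The two helpers in isolation:

```ruby
# Shared empty sentinel: avoids allocating a new hash per missing license.
NO_LICENSE = {}.freeze

# Unwrap the {"license" => {...}} envelope of an ES license API response.
def extract_license(response)
  response.fetch("license", NO_LICENSE)
end

# Read the status from an already-unwrapped license hash.
def license_status(license)
  license.fetch("status", nil)
end

license_status(extract_license({ "license" => { "status" => "active" } })) # => "active"
license_status(extract_license({}))                                        # => nil
```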
data/lib/logstash/outputs/elasticsearch/template_manager.rb CHANGED
@@ -53,7 +53,7 @@ module LogStash; module Outputs; class ElasticSearch
  end

  def self.default_template_path(es_major_version, ecs_compatibility=:disabled)
- template_version = es_major_version == 1 ? 2 : es_major_version
+ template_version = es_major_version
  default_template_name = "templates/ecs-#{ecs_compatibility}/elasticsearch-#{template_version}x.json"
  ::File.expand_path(default_template_name, ::File.dirname(__FILE__))
  end
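With the ES 1.x special case dropped above, the major version maps directly onto the bundled template file name. A sketch of the resulting lookup (relative name only, without the plugin's `File.expand_path` step):

```ruby
# Build the relative name of the bundled template for a given ES major
# version and ECS compatibility mode (:disabled or :v1).
def default_template_name(es_major_version, ecs_compatibility = :disabled)
  "templates/ecs-#{ecs_compatibility}/elasticsearch-#{es_major_version}x.json"
end

default_template_name(7)      # => "templates/ecs-disabled/elasticsearch-7x.json"
default_template_name(8, :v1) # => "templates/ecs-v1/elasticsearch-8x.json"
```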
data/lib/logstash/outputs/elasticsearch/templates/ecs-v1/elasticsearch-8x.json CHANGED
@@ -0,0 +1 @@
+ lib/logstash/outputs/elasticsearch/templates/ecs-v1/Users/yaauie/src/elastic/logstash-plugins/logsta
data/lib/logstash/outputs/elasticsearch.rb CHANGED
@@ -392,7 +392,21 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  params[:version] = event.sprintf(@version) if @version
  params[:version_type] = event.sprintf(@version_type) if @version_type

- [action, params, event.to_hash]
+ EventActionTuple.new(action, params, event)
+ end
+
+ class EventActionTuple < Array # TODO: acting as an array for compatibility
+
+ def initialize(action, params, event, event_data = nil)
+ super(3)
+ self[0] = action
+ self[1] = params
+ self[2] = event_data || event.to_hash
+ @event = event
+ end
+
+ attr_reader :event
+
  end

  # @return Hash (initial) parameters for given event
@@ -429,7 +443,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  end

  def routing_field_name
- maximum_seen_major_version >= 6 ? :routing : :_routing
+ :routing
  end

  # Determine the correct value for the 'type' field for the given event
@@ -442,9 +456,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  event.sprintf(@document_type)
  else
  major_version = maximum_seen_major_version
- if major_version < 6
- es5_event_type(event)
- elsif major_version == 6
+ if major_version == 6
  DEFAULT_EVENT_TYPE_ES6
  elsif major_version == 7
  DEFAULT_EVENT_TYPE_ES7
@@ -456,15 +468,6 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  type.to_s
  end

- def es5_event_type(event)
- type = event.get('type')
- return DEFAULT_EVENT_TYPE_ES6 unless type
- if !type.is_a?(String) && !type.is_a?(Numeric)
- @logger.warn("Bad event type (non-string/integer type value set)", :type_class => type.class, :type_value => type, :event => event.to_hash)
- end
- type
- end
-
  ##
  # WARNING: This method is overridden in a subclass in Logstash Core 7.7-7.8's monitoring,
  # where a `client` argument is both required and ignored. In later versions of
@@ -473,7 +476,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  # @param noop_required_client [nil]: required `nil` for legacy reasons.
  # @return [Boolean]
  def use_event_type?(noop_required_client)
- # always set type for ES <= 6
+ # always set type for ES 6
  # for ES 7 only set it if the user defined it
  (maximum_seen_major_version < 7) || (maximum_seen_major_version == 7 && @document_type)
  end
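The `EventActionTuple` introduced above subclasses `Array` so that existing code which destructures `[action, params, data]` keeps working, while the originating event object stays reachable via an attribute (which the 11.0.1 DLQ fix relies on). A standalone sketch of the pattern, with a plain symbol standing in for a real `LogStash::Event`:

```ruby
# An Array subclass that behaves as the legacy [action, params, data] triple
# but also carries the original event object.
class EventActionTuple < Array
  attr_reader :event

  def initialize(action, params, event, event_data = nil)
    super(3)                          # three-slot array
    self[0] = action
    self[1] = params
    self[2] = event_data || event     # simplified; the plugin calls event.to_hash here
    @event = event
  end
end

tuple = EventActionTuple.new('index', { routing: 'r1' }, :event_object, { 'message' => 'hello' })
action, params, data = tuple          # old destructuring still works
# action => 'index', data => { 'message' => 'hello' }, tuple.event => :event_object
```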
data/lib/logstash/plugin_mixins/elasticsearch/common.rb CHANGED
@@ -200,8 +200,9 @@ module LogStash; module PluginMixins; module ElasticSearch
  def handle_dlq_status(message, action, status, response)
  # To support bwc, we check if DLQ exists. otherwise we log and drop event (previous behavior)
  if @dlq_writer
+ event, action = action.event, [action[0], action[1], action[2]]
  # TODO: Change this to send a map with { :status => status, :action => action } in the future
- @dlq_writer.write(action[2], "#{message} status: #{status}, action: #{action}, response: #{response}")
+ @dlq_writer.write(event, "#{message} status: #{status}, action: #{action}, response: #{response}")
  else
  if dig_value(response, 'index', 'error', 'type') == 'invalid_index_name_exception'
  level = :error
data/logstash-output-elasticsearch.gemspec CHANGED
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-output-elasticsearch'
- s.version = '11.0.0'
+ s.version = '11.0.4'

  s.licenses = ['apache-2.0']
  s.summary = "Stores logs in Elasticsearch"
@@ -21,7 +21,7 @@ Gem::Specification.new do |s|
  # Special flag to let us know this is actually a logstash plugin
  s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }

- s.add_runtime_dependency "manticore", '>= 0.5.4', '< 1.0.0'
+ s.add_runtime_dependency "manticore", '>= 0.7.1', '< 1.0.0'
  s.add_runtime_dependency 'stud', ['>= 0.0.17', '~> 0.0']
  s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
  s.add_runtime_dependency 'logstash-mixin-ecs_compatibility_support', '~>1.0'
data/spec/es_spec_helper.rb CHANGED
@@ -20,7 +20,9 @@ module ESHelper
  end

  def get_client
- Elasticsearch::Client.new(:hosts => [get_host_port])
+ Elasticsearch::Client.new(:hosts => [get_host_port]).tap do |client|
+ allow(client).to receive(:verify_elasticsearch).and_return(true) # bypass client side version checking
+ end
  end

  def doc_type
@@ -53,11 +55,7 @@ module ESHelper
  end

  def routing_field_name
- if ESHelper.es_version_satisfies?(">=6")
- :routing
- else
- :_routing
- end
+ :routing
  end

  def self.es_version
data/spec/fixtures/_nodes/{5x_6x.json → 6x.json} RENAMED
@@ -11,7 +11,7 @@
  "transport_address" : "http://localhost:9200",
  "host" : "localhost",
  "ip" : "127.0.0.1",
- "version" : "5.5.1",
+ "version" : "6.8.10",
  "build_hash" : "19c13d0",
  "roles" : [
  "master"
@@ -29,7 +29,7 @@
  "transport_address" : "http://localhost:9201",
  "host" : "localhost",
  "ip" : "127.0.0.1",
- "version" : "5.5.1",
+ "version" : "6.8.10",
  "build_hash" : "19c13d0",
  "roles" : [
  "data"
@@ -47,7 +47,7 @@
  "transport_address" : "http://localhost:9202",
  "host" : "localhost",
  "ip" : "127.0.0.1",
- "version" : "5.5.1",
+ "version" : "6.8.10",
  "build_hash" : "19c13d0",
  "roles" : [
  "data",
@@ -66,7 +66,7 @@
  "transport_address" : "http://localhost:9203",
  "host" : "localhost",
  "ip" : "127.0.0.1",
- "version" : "5.5.1",
+ "version" : "6.8.10",
  "build_hash" : "19c13d0",
  "roles" : [ ],
  "http" : {
@@ -78,4 +78,4 @@
  }
  }
  }
- }
+ }
data/spec/integration/outputs/compressed_indexing_spec.rb CHANGED
@@ -8,62 +8,63 @@ RSpec::Matchers.define :a_valid_gzip_encoded_string do
  }
  end

- if ESHelper.es_version_satisfies?(">= 5")
- describe "indexing with http_compression turned on", :integration => true do
- let(:event) { LogStash::Event.new("message" => "Hello World!", "type" => type) }
- let(:index) { 10.times.collect { rand(10).to_s }.join("") }
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
- let(:event_count) { 10000 + rand(500) }
- let(:events) { event_count.times.map { event }.to_a }
- let(:config) {
- {
- "hosts" => get_host_port,
- "index" => index,
- "http_compression" => true
- }
+ describe "indexing with http_compression turned on", :integration => true do
+ let(:event) { LogStash::Event.new("message" => "Hello World!", "type" => type) }
+ let(:index) { 10.times.collect { rand(10).to_s }.join("") }
+ let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:event_count) { 10000 + rand(500) }
+ let(:events) { event_count.times.map { event }.to_a }
+ let(:config) {
+ {
+ "hosts" => get_host_port,
+ "index" => index,
+ "http_compression" => true
  }
- subject { LogStash::Outputs::ElasticSearch.new(config) }
+ }
+ subject { LogStash::Outputs::ElasticSearch.new(config) }

- let(:es_url) { "http://#{get_host_port}" }
- let(:index_url) {"#{es_url}/#{index}"}
- let(:http_client_options) { {} }
- let(:http_client) do
- Manticore::Client.new(http_client_options)
- end
+ let(:es_url) { "http://#{get_host_port}" }
+ let(:index_url) {"#{es_url}/#{index}"}
+ let(:http_client_options) { {} }
+ let(:http_client) do
+ Manticore::Client.new(http_client_options)
+ end

- before do
- subject.register
- subject.multi_receive([])
- end
+ before do
+ subject.register
+ subject.multi_receive([])
+ end

- shared_examples "an indexer" do
- it "ships events" do
- subject.multi_receive(events)
+ shared_examples "an indexer" do
+ it "ships events" do
+ subject.multi_receive(events)

- http_client.post("#{es_url}/_refresh").call
+ http_client.post("#{es_url}/_refresh").call

- response = http_client.get("#{index_url}/_count?q=*")
- result = LogStash::Json.load(response.body)
- cur_count = result["count"]
- expect(cur_count).to eq(event_count)
+ response = http_client.get("#{index_url}/_count?q=*")
+ result = LogStash::Json.load(response.body)
+ cur_count = result["count"]
+ expect(cur_count).to eq(event_count)

- response = http_client.get("#{index_url}/_search?q=*&size=1000")
- result = LogStash::Json.load(response.body)
- result["hits"]["hits"].each do |doc|
- expect(doc["_type"]).to eq(type) if ESHelper.es_version_satisfies?(">= 6", "< 8")
- expect(doc).not_to include("_type") if ESHelper.es_version_satisfies?(">= 8")
- expect(doc["_index"]).to eq(index)
+ response = http_client.get("#{index_url}/_search?q=*&size=1000")
+ result = LogStash::Json.load(response.body)
+ result["hits"]["hits"].each do |doc|
+ if ESHelper.es_version_satisfies?("< 8")
+ expect(doc["_type"]).to eq(type)
+ else
+ expect(doc).not_to include("_type")
  end
+ expect(doc["_index"]).to eq(index)
  end
  end
+ end

- it "sets the correct content-encoding header and body is compressed" do
- expect(subject.client.pool.adapter.client).to receive(:send).
- with(anything, anything, {:headers=>{"Content-Encoding"=>"gzip", "Content-Type"=>"application/json"}, :body => a_valid_gzip_encoded_string}).
- and_call_original
- subject.multi_receive(events)
- end
-
- it_behaves_like("an indexer")
+ it "sets the correct content-encoding header and body is compressed" do
+ expect(subject.client.pool.adapter.client).to receive(:send).
+ with(anything, anything, {:headers=>{"Content-Encoding"=>"gzip", "Content-Type"=>"application/json"}, :body => a_valid_gzip_encoded_string}).
+ and_call_original
+ subject.multi_receive(events)
  end
+
+ it_behaves_like("an indexer")
  end
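The spec's `a_valid_gzip_encoded_string` matcher proves the request body is real gzip by decompressing it; the same round-trip check in plain Ruby stdlib (the JSON fragment is an arbitrary stand-in for a bulk request body):

```ruby
require 'zlib'

# Compress a stand-in bulk request body, then prove it decompresses back.
body = Zlib.gzip('{"index":{}}')
Zlib.gunzip(body) # => '{"index":{}}'
```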