logstash-output-elasticsearch 11.15.9-java → 11.17.0-java

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: c9e537b9f31644ce80834b295b99d22566863f666ab319efc34f641c15018d74
- data.tar.gz: a99f63dd55f4b0a12e597e812db124dd9a7fe82ce1a5e7af4057992f903eac65
+ metadata.gz: 37124c3a166313a2fb9f3831273def114770178158439a44ebfa1bc1d32f9d0f
+ data.tar.gz: a09fd3ce2c54908fedc14dc2d780bb3ec48d7091d05e2e8fcb5789e4ec1e30b9
  SHA512:
- metadata.gz: 12fa3b203130210b5d274364ff97e31bfb01aaedd23ac22fc53fea1626cad628d3f33e952dcf12555fc4860d7577235684e255550dfc7668d9dc93d7e6bf55ff
- data.tar.gz: 50ca989af2afc85f439995c6dde9c7eeda56924c9d9729ef91426b34dc99146fadecf3e290dc7e122113bb2cbf50bdc1eeac22f8448d2d4373b0d251660fb6a7
+ metadata.gz: caac996badd1bbdeb231fad3f40f96a50386baf78ee356587d5fc5d2b4a095f1073bee417a99aab081c13cb1a18802785abed618dc85db504f56145c620c46b6
+ data.tar.gz: 697a89b810998154a44338e8e73f02351b72afa00fc35e1fc115a22ce5ecfeddd27d29bd8fdf4fb779a3b17fbaa2f4f361a2c30b68c0b5ce0cd49a6edeba1a1d
data/CHANGELOG.md CHANGED
@@ -1,3 +1,9 @@
+ ## 11.17.0
+ - Added support to http compression level. Deprecated `http_compression` in favour of `compression_level` and enabled compression level 1 by default. [#1148](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1148)
+
+ ## 11.16.0
+ - Added support to Serverless Elasticsearch [#1145](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1145)
+
  ## 11.15.9
  - allow dlq_ settings when using data streams [#1144](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1144)
 
data/docs/index.asciidoc CHANGED
@@ -277,9 +277,9 @@ not reevaluate its DNS value while the keepalive is in effect.
  ==== HTTP Compression
 
  This plugin always reads compressed responses from {es}.
- It _can be configured_ to send compressed bulk requests to {es}.
+ By default, it sends compressed bulk requests to {es}.
 
- If you are concerned about bandwidth, you can enable <<plugins-{type}s-{plugin}-http_compression>> to trade a small amount of CPU capacity for a significant reduction in network IO.
+ If you are concerned about bandwidth, you can set a higher <<plugins-{type}s-{plugin}-compression_level>> to trade CPU capacity for a reduction in network IO.
 
  ==== Authentication
 
@@ -310,6 +310,7 @@ This plugin supports the following configuration options plus the
  | <<plugins-{type}s-{plugin}-ca_trusted_fingerprint>> |<<string,string>>|No
  | <<plugins-{type}s-{plugin}-cloud_auth>> |<<password,password>>|No
  | <<plugins-{type}s-{plugin}-cloud_id>> |<<string,string>>|No
+ | <<plugins-{type}s-{plugin}-compression_level>> |<<number,number>>, one of `[0 ~ 9]`|No
  | <<plugins-{type}s-{plugin}-custom_headers>> |<<hash,hash>>|No
  | <<plugins-{type}s-{plugin}-data_stream>> |<<string,string>>, one of `["true", "false", "auto"]`|No
  | <<plugins-{type}s-{plugin}-data_stream_auto_routing>> |<<boolean,boolean>>|No
@@ -459,6 +460,17 @@ Cloud ID, from the Elastic Cloud web console. If set `hosts` should not be used.
  For more details, check out the
  {logstash-ref}/connecting-to-cloud.html[Logstash-to-Cloud documentation].
 
+ [id="plugins-{type}s-{plugin}-compression_level"]
+ ===== `compression_level`
+
+ * Value can be any of: `0`, `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`, `9`
+ * Default value is `1`
+
+ The gzip compression level. Setting this value to `0` disables compression.
+ The compression level must be in the range of `1` (best speed) to `9` (best compression).
+
+ Increasing the compression level will reduce the network usage but will increase the CPU usage.
+
  [id="plugins-{type}s-{plugin}-data_stream"]
  ===== `data_stream`
 
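For orientation, the documented option slots into a pipeline definition like any other output setting; a hypothetical snippet (the host and index values are placeholders, not taken from the diff):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs-example"
    compression_level => 6   # 0 disables; 1 = best speed, 9 = best compression
  }
}
```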
@@ -618,7 +630,7 @@ NOTE: Deprecated, refer to <<plugins-{type}s-{plugin}-silence_errors_in_log>>.
  Pass a set of key value pairs as the headers sent in each request to
  an elasticsearch node. The headers will be used for any kind of request
  (_bulk request, template installation, health checks and sniffing).
- These custom headers will be overridden by settings like `http_compression`.
+ These custom headers will be overridden by settings like `compression_level`.
 
  [id="plugins-{type}s-{plugin}-healthcheck_path"]
  ===== `healthcheck_path`
@@ -659,11 +671,12 @@ Any special characters present in the URLs here MUST be URL escaped! This means
 
  [id="plugins-{type}s-{plugin}-http_compression"]
  ===== `http_compression`
+ deprecated[11.17.0, Replaced by <<plugins-{type}s-{plugin}-compression_level>>]
 
  * Value type is <<boolean,boolean>>
  * Default value is `false`
 
- Enable gzip compression on requests.
+ Setting `true` enables gzip compression level 1 on requests.
 
  This setting allows you to reduce this plugin's outbound network traffic by
  compressing each bulk _request_ to {es}.
@@ -48,6 +48,8 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  :sniffer_delay => 10,
  }.freeze
 
+ BUILD_FLAVOUR_SERVERLESS = 'serverless'.freeze
+
  def initialize(logger, adapter, initial_urls=[], options={})
  @logger = logger
  @adapter = adapter
@@ -75,6 +77,7 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  @license_checker = options[:license_checker] || LogStash::PluginMixins::ElasticSearch::NoopLicenseChecker::INSTANCE
 
  @last_es_version = Concurrent::AtomicReference.new
+ @build_flavour = Concurrent::AtomicReference.new
  end
 
  def start
@@ -250,7 +253,10 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  # If no exception was raised it must have succeeded!
  logger.warn("Restored connection to ES instance", url: url.sanitized.to_s)
  # We reconnected to this node, check its ES version
- es_version = get_es_version(url)
+ version_info = get_es_version(url)
+ es_version = version_info.fetch('number', nil)
+ build_flavour = version_info.fetch('build_flavor', nil)
+
  if es_version.nil?
  logger.warn("Failed to retrieve Elasticsearch version data from connected endpoint, connection aborted", :url => url.sanitized.to_s)
  next
@@ -258,6 +264,7 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  @state_mutex.synchronize do
  meta[:version] = es_version
  set_last_es_version(es_version, url)
+ set_build_flavour(build_flavour)
 
  alive = @license_checker.appropriate_license?(self, url)
  meta[:state] = alive ? :alive : :dead
@@ -475,7 +482,7 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
 
  response = LogStash::Json.load(response.body)
 
- response.fetch('version', {}).fetch('number', nil)
+ response.fetch('version', {})
  end
 
  def last_es_version
@@ -486,6 +493,10 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  @state_mutex.synchronize { @maximum_seen_major_version }
  end
 
+ def serverless?
+ @build_flavour.get == BUILD_FLAVOUR_SERVERLESS
+ end
+
  private
 
  # @private executing within @state_mutex
@@ -515,5 +526,9 @@ module LogStash; module Outputs; class ElasticSearch; class HttpClient;
  previous_major: @maximum_seen_major_version, new_major: major, node_url: url.sanitized.to_s)
  end
 
+ def set_build_flavour(flavour)
+ @build_flavour.set(flavour)
+ end
+
  end
  end; end; end; end;
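The pool changes above boil down to: the root-endpoint `version` hash now yields both the version number and the build flavor, and `serverless?` compares the recorded flavour against the frozen constant. A minimal standalone sketch (a plain attribute stands in for `Concurrent::AtomicReference`, and `FlavourTracker` is a hypothetical name, not the plugin's class):

```ruby
BUILD_FLAVOUR_SERVERLESS = 'serverless'.freeze

class FlavourTracker
  def initialize
    @build_flavour = nil # stands in for Concurrent::AtomicReference
  end

  # version_info mimics the hash fetched from the root endpoint response,
  # e.g. {"number" => "8.9.0", "build_flavor" => "serverless"}
  def record(version_info)
    @build_flavour = version_info.fetch('build_flavor', nil)
    version_info.fetch('number', nil) # the es_version the caller still needs
  end

  def serverless?
    @build_flavour == BUILD_FLAVOUR_SERVERLESS
  end
end

tracker = FlavourTracker.new
tracker.record({"number" => "8.9.0", "build_flavor" => "serverless"})
puts tracker.serverless?  # => true
```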
@@ -93,6 +93,10 @@ module LogStash; module Outputs; class ElasticSearch;
  @pool.maximum_seen_major_version
  end
 
+ def serverless?
+ @pool.serverless?
+ end
+
  def alive_urls_count
  @pool.alive_urls_count
  end
@@ -114,7 +118,7 @@ module LogStash; module Outputs; class ElasticSearch;
  end
 
  body_stream = StringIO.new
- if http_compression
+ if compression_level?
  body_stream.set_encoding "BINARY"
  stream_writer = gzip_writer(body_stream)
  else
@@ -137,14 +141,14 @@ module LogStash; module Outputs; class ElasticSearch;
  :batch_offset => (index + 1 - batch_actions.size))
  bulk_responses << bulk_send(body_stream, batch_actions)
  body_stream.truncate(0) && body_stream.seek(0)
- stream_writer = gzip_writer(body_stream) if http_compression
+ stream_writer = gzip_writer(body_stream) if compression_level?
  batch_actions.clear
  end
  stream_writer.write(as_json)
  batch_actions << action
  end
 
- stream_writer.close if http_compression
+ stream_writer.close if compression_level?
 
  logger.debug("Sending final bulk request for batch.",
  :action_count => batch_actions.size,
@@ -153,7 +157,7 @@ module LogStash; module Outputs; class ElasticSearch;
  :batch_offset => (actions.size - batch_actions.size))
  bulk_responses << bulk_send(body_stream, batch_actions) if body_stream.size > 0
 
- body_stream.close if !http_compression
+ body_stream.close unless compression_level?
  join_bulk_responses(bulk_responses)
  end
 
@@ -161,7 +165,7 @@ module LogStash; module Outputs; class ElasticSearch;
  fail(ArgumentError, "Cannot create gzip writer on IO with unread bytes") unless io.eof?
  fail(ArgumentError, "Cannot create gzip writer on non-empty IO") unless io.pos == 0
 
- Zlib::GzipWriter.new(io, Zlib::DEFAULT_COMPRESSION, Zlib::DEFAULT_STRATEGY)
+ Zlib::GzipWriter.new(io, client_settings.fetch(:compression_level), Zlib::DEFAULT_STRATEGY)
  end
 
  def join_bulk_responses(bulk_responses)
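The writer change above swaps `Zlib::DEFAULT_COMPRESSION` for the explicit configured level. A self-contained sketch of the same setup, with a local `level` argument standing in for `client_settings.fetch(:compression_level)`:

```ruby
require 'zlib'
require 'stringio'

# Compress a bulk payload at an explicit gzip level
# (0 disables in the plugin; 1 = best speed ... 9 = best compression).
def gzip_payload(payload, level)
  io = StringIO.new
  io.set_encoding "BINARY"
  writer = Zlib::GzipWriter.new(io, level, Zlib::DEFAULT_STRATEGY)
  writer.write(payload)
  writer.close # flushes the gzip trailer into io
  io.string
end

payload = '{"index":{}}' * 1000
compressed = gzip_payload(payload, 1)
# Round-trip must restore the original bulk body
puts Zlib.gunzip(compressed) == payload  # => true
```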
@@ -172,7 +176,7 @@ module LogStash; module Outputs; class ElasticSearch;
  end
 
  def bulk_send(body_stream, batch_actions)
- params = http_compression ? {:headers => {"Content-Encoding" => "gzip"}} : {}
+ params = compression_level? ? {:headers => {"Content-Encoding" => "gzip"}} : {}
  response = @pool.post(@bulk_path, params, body_stream.string)
 
  @bulk_response_metrics.increment(response.code.to_s)
@@ -294,8 +298,10 @@ module LogStash; module Outputs; class ElasticSearch;
  @_ssl_options ||= client_settings.fetch(:ssl, {})
  end
 
- def http_compression
- client_settings.fetch(:http_compression, false)
+ # return true if compression_level is [1..9]
+ # return false if it is 0
+ def compression_level?
+ client_settings.fetch(:compression_level) > 0
  end
 
  def build_adapter(options)
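In other words, the old boolean reader becomes a predicate over the numeric level. A sketch with a plain hash standing in for the client's internal settings:

```ruby
# True for levels 1..9, false for 0 (compression disabled).
def compression_level?(client_settings)
  client_settings.fetch(:compression_level) > 0
end

puts compression_level?({:compression_level => 1})  # => true
puts compression_level?({:compression_level => 0})  # => false
```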
@@ -8,7 +8,7 @@ module LogStash; module Outputs; class ElasticSearch;
  :pool_max => params["pool_max"],
  :pool_max_per_route => params["pool_max_per_route"],
  :check_connection_timeout => params["validate_after_inactivity"],
- :http_compression => params["http_compression"],
+ :compression_level => params["compression_level"],
  :headers => params["custom_headers"] || {}
  }
 
@@ -14,11 +14,18 @@ module LogStash; module Outputs; class ElasticSearch
  return @ilm_actually_enabled if defined?(@ilm_actually_enabled)
  @ilm_actually_enabled =
  begin
- if @ilm_enabled == 'auto'
+ if serverless?
+ raise LogStash::ConfigurationError, "Invalid ILM configuration `ilm_enabled => true`. " +
+ "Serverless Elasticsearch cluster does not support Index Lifecycle Management." if @ilm_enabled.to_s == 'true'
+ @logger.info("ILM auto configuration (`ilm_enabled => auto` or unset) resolved to `false`. "\
+ "Serverless Elasticsearch cluster does not support Index Lifecycle Management.") if @ilm_enabled == 'auto'
+ false
+ elsif @ilm_enabled == 'auto'
  if ilm_on_by_default?
  ilm_alias_set?
  else
- @logger.info("Index Lifecycle Management is set to 'auto', but will be disabled - Your Elasticsearch cluster is before 7.0.0, which is the minimum version required to automatically run Index Lifecycle Management")
+ @logger.info("ILM auto configuration (`ilm_enabled => auto` or unset) resolved to `false`."\
+ " Elasticsearch cluster is before 7.0.0, which is the minimum version required to automatically run Index Lifecycle Management")
  false
  end
  elsif @ilm_enabled.to_s == 'true'
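The serverless branch above short-circuits ILM resolution: explicit `true` is a hard error, while `auto`/unset quietly resolves to disabled. A simplified decision sketch (hypothetical stand-ins for the plugin's instance state; the real code raises `LogStash::ConfigurationError` and the `auto` path also consults the cluster version and alias):

```ruby
def resolve_ilm(serverless:, ilm_enabled:)
  if serverless
    # explicit `true` is a configuration error on serverless
    raise ArgumentError, "Serverless Elasticsearch cluster does not support ILM" if ilm_enabled.to_s == 'true'
    false # `auto`, unset, and `false` all resolve to disabled
  else
    # simplified: the real `auto` branch also checks version >= 7 and the alias
    ilm_enabled.to_s == 'true' || ilm_enabled == 'auto'
  end
end

puts resolve_ilm(serverless: true, ilm_enabled: 'auto')  # => false
puts resolve_ilm(serverless: false, ilm_enabled: true)   # => true
```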
@@ -11,6 +11,8 @@ module LogStash; module Outputs; class ElasticSearch
  # @param url [LogStash::Util::SafeURI] ES node URL
  # @return [Boolean] true if provided license is deemed appropriate
  def appropriate_license?(pool, url)
+ return true if pool.serverless?
+
  license = extract_license(pool.get_license(url))
  case license_status(license)
  when 'active'
@@ -13,8 +13,11 @@ module LogStash; module Outputs; class ElasticSearch
  "We recommend either setting `template_api => legacy` to continue providing legacy-style templates, " +
  "or migrating your template to the composable style and setting `template_api => composable`. " +
  "The legacy template API is slated for removal in Elasticsearch 9.")
+ elsif plugin.template_api == 'legacy' && plugin.serverless?
+ raise LogStash::ConfigurationError, "Invalid template configuration `template_api => legacy`. Serverless Elasticsearch does not support legacy template API."
  end
 
+
  if plugin.template
  plugin.logger.info("Using mapping template from", :path => plugin.template)
  template = read_template_file(plugin.template)
@@ -61,11 +64,13 @@ module LogStash; module Outputs; class ElasticSearch
  plugin.logger.trace("Resolving ILM template settings: under 'settings' key", :template => template, :template_api => plugin.template_api, :es_version => plugin.maximum_seen_major_version)
  legacy_index_template_settings(template)
  else
- template_endpoint = template_endpoint(plugin)
- plugin.logger.trace("Resolving ILM template settings: template doesn't have 'settings' or 'template' fields, falling back to auto detection", :template => template, :template_api => plugin.template_api, :es_version => plugin.maximum_seen_major_version, :template_endpoint => template_endpoint)
- template_endpoint == INDEX_TEMPLATE_ENDPOINT ?
- composable_index_template_settings(template) :
+ use_index_template_api = index_template_api?(plugin)
+ plugin.logger.trace("Resolving ILM template settings: template doesn't have 'settings' or 'template' fields, falling back to auto detection", :template => template, :template_api => plugin.template_api, :es_version => plugin.maximum_seen_major_version, :index_template_api => use_index_template_api)
+ if use_index_template_api
+ composable_index_template_settings(template)
+ else
  legacy_index_template_settings(template)
+ end
  end
  end
 
@@ -100,12 +105,25 @@ module LogStash; module Outputs; class ElasticSearch
  end
 
  def self.template_endpoint(plugin)
- if plugin.template_api == 'auto'
- plugin.maximum_seen_major_version < 8 ? LEGACY_TEMPLATE_ENDPOINT : INDEX_TEMPLATE_ENDPOINT
- elsif plugin.template_api.to_s == 'legacy'
- LEGACY_TEMPLATE_ENDPOINT
+ index_template_api?(plugin) ? INDEX_TEMPLATE_ENDPOINT : LEGACY_TEMPLATE_ENDPOINT
+ end
+
+ def self.index_template_api?(plugin)
+ case plugin.serverless?
+ when true
+ true
  else
- INDEX_TEMPLATE_ENDPOINT
+ case plugin.template_api
+ when 'auto'
+ plugin.maximum_seen_major_version >= 8
+ when 'composable'
+ true
+ when 'legacy'
+ false
+ else
+ plugin.logger.warn("Invalid template_api value #{plugin.template_api}")
+ true
+ end
  end
  end
 
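The refactored `index_template_api?` is a small decision table: serverless always uses the composable index-template API, otherwise `template_api` decides, with `auto` keying off the major version. A sketch with a hypothetical `Plugin` struct exposing just the three attributes the logic reads (the real code also logs a warning for unknown values):

```ruby
Plugin = Struct.new(:serverless, :template_api, :maximum_seen_major_version) do
  def serverless?; serverless; end
end

def index_template_api?(plugin)
  return true if plugin.serverless? # serverless only supports composable templates
  case plugin.template_api
  when 'auto'       then plugin.maximum_seen_major_version >= 8
  when 'composable' then true
  when 'legacy'     then false
  else true # unknown values fall back to the composable endpoint
  end
end

puts index_template_api?(Plugin.new(true, 'legacy', 7))  # => true (serverless wins)
puts index_template_api?(Plugin.new(false, 'auto', 7))   # => false
puts index_template_api?(Plugin.new(false, 'auto', 8))   # => true
```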
@@ -276,6 +276,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  super
  setup_ecs_compatibility_related_defaults
  setup_ssl_params!
+ setup_compression_level!
  end
 
  def register
@@ -368,6 +369,7 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  params['proxy'] = proxy # do not do resolving again
  end
  end
+
  super(params)
  end
 
@@ -669,6 +671,20 @@ class LogStash::Outputs::ElasticSearch < LogStash::Outputs::Base
  params['ssl_verification_mode'] = @ssl_verification_mode unless @ssl_verification_mode.nil?
  end
 
+ def setup_compression_level!
+ @compression_level = normalize_config(:compression_level) do |normalize|
+ normalize.with_deprecated_mapping(:http_compression) do |http_compression|
+ if http_compression == true
+ DEFAULT_ZIP_LEVEL
+ else
+ 0
+ end
+ end
+ end
+
+ params['compression_level'] = @compression_level unless @compression_level.nil?
+ end
+
  # To be overridden by the -java version
  VALID_HTTP_ACTIONS = ["index", "delete", "create", "update"]
  def valid_actions
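The mapping in `setup_compression_level!` translates a user-supplied deprecated `http_compression` boolean into a level (`true` becomes `DEFAULT_ZIP_LEVEL`, `false` becomes `0`), while an explicit `compression_level` is used as-is. A stand-in sketch mirroring that outcome only (`normalize_config`/`with_deprecated_mapping` are the plugin mixin's helpers; conflict handling when both options are set is omitted here):

```ruby
DEFAULT_ZIP_LEVEL = 1

def resolve_compression_level(params)
  return params['compression_level'] if params.key?('compression_level')
  return DEFAULT_ZIP_LEVEL unless params.key?('http_compression') # new default
  params['http_compression'] ? DEFAULT_ZIP_LEVEL : 0
end

puts resolve_compression_level({})                             # => 1
puts resolve_compression_level({'http_compression' => true})   # => 1
puts resolve_compression_level({'http_compression' => false})  # => 0
puts resolve_compression_level({'compression_level' => 6})     # => 6
```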
@@ -7,6 +7,7 @@ module LogStash; module PluginMixins; module ElasticSearch
  # This module defines common options that can be reused by alternate elasticsearch output plugins such as the elasticsearch_data_streams output.
 
  DEFAULT_HOST = ::LogStash::Util::SafeURI.new("//127.0.0.1")
+ DEFAULT_ZIP_LEVEL = 1
 
  CONFIG_PARAMS = {
  # Username to authenticate to a secure Elasticsearch cluster
@@ -186,7 +187,14 @@ module LogStash; module PluginMixins; module ElasticSearch
  :validate_after_inactivity => { :validate => :number, :default => 10000 },
 
  # Enable gzip compression on requests. Note that response compression is on by default for Elasticsearch v5.0 and beyond
- :http_compression => { :validate => :boolean, :default => false },
+ # Set `true` to enable compression with level 1
+ # Set `false` to disable compression with level 0
+ :http_compression => { :validate => :boolean, :default => true, :deprecated => "Set 'compression_level' instead." },
+
+ # Number `1` ~ `9` are the gzip compression level
+ # Set `0` to disable compression
+ # Set `1` (best speed) to `9` (best compression) to use compression
+ :compression_level => { :validate => [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], :default => DEFAULT_ZIP_LEVEL },
 
  # Custom Headers to send on each request to elasticsearch nodes
  :custom_headers => { :validate => :hash, :default => {} },
@@ -145,6 +145,10 @@ module LogStash; module PluginMixins; module ElasticSearch
  client.maximum_seen_major_version
  end
 
+ def serverless?
+ client.serverless?
+ end
+
  def alive_urls_count
  client.alive_urls_count
  end
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
  s.name = 'logstash-output-elasticsearch'
- s.version = '11.15.9'
+ s.version = '11.17.0'
  s.licenses = ['apache-2.0']
  s.summary = "Stores logs in Elasticsearch"
  s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -59,11 +59,14 @@ module ESHelper
  end
 
  def self.es_version
- [
- nilify(RSpec.configuration.filter[:es_version]),
- nilify(ENV['ES_VERSION']),
- nilify(ENV['ELASTIC_STACK_VERSION']),
- ].compact.first
+ {
+ "number" => [
+ nilify(RSpec.configuration.filter[:es_version]),
+ nilify(ENV['ES_VERSION']),
+ nilify(ENV['ELASTIC_STACK_VERSION']),
+ ].compact.first,
+ "build_flavor" => 'default'
+ }
  end
 
  RSpec::Matchers.define :have_hits do |expected|
@@ -0,0 +1,16 @@
+ {
+ "license": {
+ "status": "active",
+ "uid": "d85d2c6a-b96d-3cc6-96db-5571a789b156",
+ "type": "enterprise",
+ "issue_date": "1970-01-01T00:00:00.000Z",
+ "issue_date_in_millis": 0,
+ "expiry_date": "2100-01-01T00:00:00.000Z",
+ "expiry_date_in_millis": 4102444800000,
+ "max_nodes": null,
+ "max_resource_units": 100000,
+ "issued_to": "Elastic Cloud",
+ "issuer": "API",
+ "start_date_in_millis": 0
+ }
+ }
@@ -0,0 +1,5 @@
+ {
+ "license": {
+ "status": "inactive"
+ }
+ }
@@ -8,63 +8,64 @@ RSpec::Matchers.define :a_valid_gzip_encoded_string do
  }
  end
 
- describe "indexing with http_compression turned on", :integration => true do
- let(:event) { LogStash::Event.new("message" => "Hello World!", "type" => type) }
- let(:index) { 10.times.collect { rand(10).to_s }.join("") }
- let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
- let(:event_count) { 10000 + rand(500) }
- let(:events) { event_count.times.map { event }.to_a }
- let(:config) {
- {
- "hosts" => get_host_port,
- "index" => index,
- "http_compression" => true
+ [ {"http_compression" => true}, {"compression_level" => 1} ].each do |compression_config|
+ describe "indexing with http_compression turned on", :integration => true do
+ let(:event) { LogStash::Event.new("message" => "Hello World!", "type" => type) }
+ let(:index) { 10.times.collect { rand(10).to_s }.join("") }
+ let(:type) { ESHelper.es_version_satisfies?("< 7") ? "doc" : "_doc" }
+ let(:event_count) { 10000 + rand(500) }
+ let(:events) { event_count.times.map { event }.to_a }
+ let(:config) {
+ {
+ "hosts" => get_host_port,
+ "index" => index
+ }
  }
- }
- subject { LogStash::Outputs::ElasticSearch.new(config) }
+ subject { LogStash::Outputs::ElasticSearch.new(config.merge(compression_config)) }
 
- let(:es_url) { "http://#{get_host_port}" }
- let(:index_url) {"#{es_url}/#{index}"}
- let(:http_client_options) { {} }
- let(:http_client) do
- Manticore::Client.new(http_client_options)
- end
+ let(:es_url) { "http://#{get_host_port}" }
+ let(:index_url) {"#{es_url}/#{index}"}
+ let(:http_client_options) { {} }
+ let(:http_client) do
+ Manticore::Client.new(http_client_options)
+ end
 
- before do
- subject.register
- subject.multi_receive([])
- end
+ before do
+ subject.register
+ subject.multi_receive([])
+ end
 
- shared_examples "an indexer" do
- it "ships events" do
- subject.multi_receive(events)
+ shared_examples "an indexer" do
+ it "ships events" do
+ subject.multi_receive(events)
 
- http_client.post("#{es_url}/_refresh").call
+ http_client.post("#{es_url}/_refresh").call
 
- response = http_client.get("#{index_url}/_count?q=*")
- result = LogStash::Json.load(response.body)
- cur_count = result["count"]
- expect(cur_count).to eq(event_count)
+ response = http_client.get("#{index_url}/_count?q=*")
+ result = LogStash::Json.load(response.body)
+ cur_count = result["count"]
+ expect(cur_count).to eq(event_count)
 
- response = http_client.get("#{index_url}/_search?q=*&size=1000")
- result = LogStash::Json.load(response.body)
- result["hits"]["hits"].each do |doc|
- if ESHelper.es_version_satisfies?("< 8")
- expect(doc["_type"]).to eq(type)
- else
- expect(doc).not_to include("_type")
+ response = http_client.get("#{index_url}/_search?q=*&size=1000")
+ result = LogStash::Json.load(response.body)
+ result["hits"]["hits"].each do |doc|
+ if ESHelper.es_version_satisfies?("< 8")
+ expect(doc["_type"]).to eq(type)
+ else
+ expect(doc).not_to include("_type")
+ end
+ expect(doc["_index"]).to eq(index)
  end
- expect(doc["_index"]).to eq(index)
  end
  end
- end
 
- it "sets the correct content-encoding header and body is compressed" do
- expect(subject.client.pool.adapter.client).to receive(:send).
- with(anything, anything, {:headers=>{"Content-Encoding"=>"gzip", "Content-Type"=>"application/json"}, :body => a_valid_gzip_encoded_string}).
- and_call_original
- subject.multi_receive(events)
- end
+ it "sets the correct content-encoding header and body is compressed" do
+ expect(subject.client.pool.adapter.client).to receive(:send).
+ with(anything, anything, {:headers=>{"Content-Encoding"=>"gzip", "Content-Type"=>"application/json"}, :body => a_valid_gzip_encoded_string}).
+ and_call_original
+ subject.multi_receive(events)
+ end
 
- it_behaves_like("an indexer")
- end
+ it_behaves_like("an indexer")
+ end
+ end
@@ -262,7 +262,8 @@ describe "indexing" do
  let(:config) {
  {
  "hosts" => get_host_port,
- "index" => index
+ "index" => index,
+ "http_compression" => false
  }
  }
  it_behaves_like("an indexer")
@@ -273,7 +274,8 @@ describe "indexing" do
  let(:config) {
  {
  "hosts" => get_host_port,
- "index" => index
+ "index" => index,
+ "http_compression" => false
  }
  }
  it_behaves_like("an indexer")
@@ -291,7 +293,8 @@ describe "indexing" do
  "password" => password,
  "ssl_enabled" => true,
  "ssl_certificate_authorities" => cacert,
- "index" => index
+ "index" => index,
+ "http_compression" => false
  }
  end
 
@@ -351,7 +354,8 @@ describe "indexing" do
  "hosts" => ["https://#{CGI.escape(user)}:#{CGI.escape(password)}@elasticsearch:9200"],
  "ssl_enabled" => true,
  "ssl_certificate_authorities" => "spec/fixtures/test_certs/test.crt",
- "index" => index
+ "index" => index,
+ "http_compression" => false
  }
  end
 
@@ -6,8 +6,8 @@ describe LogStash::Outputs::ElasticSearch::HttpClient::Pool do
  let(:logger) { Cabin::Channel.get }
  let(:adapter) { LogStash::Outputs::ElasticSearch::HttpClient::ManticoreAdapter.new(logger, {}) }
  let(:initial_urls) { [::LogStash::Util::SafeURI.new("http://localhost:9200")] }
- let(:options) { {:resurrect_delay => 2, :url_normalizer => proc {|u| u}} } # Shorten the delay a bit to speed up tests
- let(:es_node_versions) { [ "0.0.0" ] }
+ let(:options) { {:resurrect_delay => 3, :url_normalizer => proc {|u| u}} } # Shorten the delay a bit to speed up tests
+ let(:es_version_info) { [ { "number" => '0.0.0', "build_flavor" => 'default'} ] }
  let(:license_status) { 'active' }
 
  subject { described_class.new(logger, adapter, initial_urls, options) }
@@ -22,7 +22,7 @@ describe LogStash::Outputs::ElasticSearch::HttpClient::Pool do
 
  allow(::Manticore::Client).to receive(:new).and_return(manticore_double)
 
- allow(subject).to receive(:get_es_version).with(any_args).and_return(*es_node_versions)
+ allow(subject).to receive(:get_es_version).with(any_args).and_return(*es_version_info)
  allow(subject.license_checker).to receive(:license_status).and_return(license_status)
  end
 
@@ -267,13 +267,37 @@ describe LogStash::Outputs::ElasticSearch::HttpClient::Pool do
  end
 
  context "if there are nodes with multiple major versions" do
- let(:es_node_versions) { [ "0.0.0", "6.0.0" ] }
+ let(:es_version_info) { [ { "number" => '0.0.0', "build_flavor" => 'default'}, { "number" => '6.0.0', "build_flavor" => 'default'} ] }
  it "picks the largest major version" do
  expect(subject.maximum_seen_major_version).to eq(6)
  end
  end
  end
 
+
+ describe "build flavour tracking" do
+ let(:initial_urls) { [::LogStash::Util::SafeURI.new("http://somehost:9200")] }
+
+ let(:es_version_info) { [ { "number" => '8.9.0', "build_flavor" => "serverless" } ] }
+
+ let(:valid_response) { MockResponse.new(200,
+ {"tagline" => "You Know, for Search",
+ "version" => {
+ "number" => '8.9.0',
+ "build_flavor" => LogStash::Outputs::ElasticSearch::HttpClient::Pool::BUILD_FLAVOUR_SERVERLESS} },
+ { "X-Elastic-Product" => "Elasticsearch" }
+ ) }
+
+ before(:each) do
+ allow(subject).to receive(:perform_request_to_url).and_return(valid_response)
+ subject.start
+ end
+
+ it "picks the build flavour" do
+ expect(subject.serverless?).to be_truthy
+ end
+ end
+
  describe "license checking" do
  before(:each) do
  allow(subject).to receive(:health_check_request)
@@ -364,7 +388,7 @@ describe "#elasticsearch?" do
  let(:adapter) { double("Manticore Adapter") }
  let(:initial_urls) { [::LogStash::Util::SafeURI.new("http://localhost:9200")] }
  let(:options) { {:resurrect_delay => 2, :url_normalizer => proc {|u| u}} } # Shorten the delay a bit to speed up tests
- let(:es_node_versions) { [ "0.0.0" ] }
+ let(:es_version_info) { [{ "number" => '0.0.0', "build_flavor" => 'default'}] }
  let(:license_status) { 'active' }
 
  subject { LogStash::Outputs::ElasticSearch::HttpClient::Pool.new(logger, adapter, initial_urls, options) }
@@ -183,6 +183,25 @@ describe LogStash::Outputs::ElasticSearch::HttpClient do
  end
  end
 
+ describe "compression_level?" do
+ subject { described_class.new(base_options) }
+ let(:base_options) { super().merge(:client_settings => {:compression_level => compression_level}) }
+
+ context "with client_settings `compression_level => 1`" do
+ let(:compression_level) { 1 }
+ it "gives true" do
+ expect(subject.compression_level?).to be_truthy
+ end
+ end
+
+ context "with client_settings `compression_level => 0`" do
+ let(:compression_level) { 0 }
+ it "gives false" do
+ expect(subject.compression_level?).to be_falsey
+ end
+ end
+ end
+
  describe "#bulk" do
  subject(:http_client) { described_class.new(base_options) }
 
@@ -192,13 +211,14 @@ describe LogStash::Outputs::ElasticSearch::HttpClient do
  ["index", {:_id=>nil, :_index=>"logstash"}, {"message"=> message}],
  ]}
 
- [true,false].each do |http_compression_enabled|
- context "with `http_compression => #{http_compression_enabled}`" do
+ [0, 9].each do |compression_level|
+ context "with `compression_level => #{compression_level}`" do
 
- let(:base_options) { super().merge(:client_settings => {:http_compression => http_compression_enabled}) }
+ let(:base_options) { super().merge(:client_settings => {:compression_level => compression_level}) }
+ let(:compression_level_enabled) { compression_level > 0 }
 
  before(:each) do
- if http_compression_enabled
+ if compression_level_enabled
  expect(http_client).to receive(:gzip_writer).at_least(:once).and_call_original
  else
  expect(http_client).to_not receive(:gzip_writer)
@@ -212,7 +232,7 @@ describe LogStash::Outputs::ElasticSearch::HttpClient do
212
232
  it "should be handled properly" do
213
233
  allow(subject).to receive(:join_bulk_responses)
214
234
  expect(subject).to receive(:bulk_send).once do |data|
215
- if !http_compression_enabled
235
+ if !compression_level_enabled
216
236
  expect(data.size).to be > target_bulk_bytes
217
237
  else
218
238
  expect(Zlib::gunzip(data.string).size).to be > target_bulk_bytes
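The specs above gate on whether `gzip_writer` is used and then `Zlib::gunzip` the bulk payload. A minimal standalone sketch of what a compression level means here, using only Ruby's stdlib (`gzip_payload` is a hypothetical helper, not the plugin's `gzip_writer`):

```ruby
require 'zlib'
require 'stringio'

# Build a gzip payload at an explicit compression level (0 disables,
# 1 is fastest, 9 is smallest), mirroring what Zlib::GzipWriter accepts.
def gzip_payload(data, level)
  buffer = StringIO.new
  gz = Zlib::GzipWriter.new(buffer, level)
  gz.write(data)
  gz.close
  buffer.string
end

data = "a" * 10_000
fast = gzip_payload(data, 1)  # level 1: cheapest CPU, larger output
best = gzip_payload(data, 9)  # level 9: most CPU, smallest output

puts fast.bytesize
puts best.bytesize
puts Zlib.gunzip(best) == data
```

Either payload round-trips through `Zlib.gunzip`, which is exactly how the spec asserts the compressed bulk body still exceeds `target_bulk_bytes` once decompressed.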
@@ -73,6 +73,7 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
       let(:template_api) { "composable" }
 
       it 'resolves composable index template API compatible setting' do
+        expect(plugin).to receive(:serverless?).and_return(false)
         expect(plugin).to receive(:maximum_seen_major_version).at_least(:once).and_return(8) # required to log
         template = {}
         described_class.resolve_template_settings(plugin, template)
@@ -84,6 +85,7 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
       let(:template_api) { "legacy" }
 
       it 'resolves legacy index template API compatible setting' do
+        expect(plugin).to receive(:serverless?).and_return(false)
         expect(plugin).to receive(:maximum_seen_major_version).at_least(:once).and_return(7) # required to log
         template = {}
         described_class.resolve_template_settings(plugin, template)
@@ -97,6 +99,7 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
       describe "with ES < 8 versions" do
 
         it 'resolves legacy index template API compatible setting' do
+          expect(plugin).to receive(:serverless?).and_return(false)
           expect(plugin).to receive(:maximum_seen_major_version).at_least(:once).and_return(7)
           template = {}
           described_class.resolve_template_settings(plugin, template)
@@ -106,6 +109,7 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
 
       describe "with ES >= 8 versions" do
         it 'resolves composable index template API compatible setting' do
+          expect(plugin).to receive(:serverless?).and_return(false)
           expect(plugin).to receive(:maximum_seen_major_version).at_least(:once).and_return(8)
           template = {}
           described_class.resolve_template_settings(plugin, template)
@@ -123,6 +127,7 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
 
     describe "in version 8+" do
       it "should use index template API" do
+        expect(plugin).to receive(:serverless?).and_return(false)
         expect(plugin).to receive(:maximum_seen_major_version).at_least(:once).and_return(8)
         endpoint = described_class.template_endpoint(plugin)
         expect(endpoint).to be_equal(LogStash::Outputs::ElasticSearch::TemplateManager::INDEX_TEMPLATE_ENDPOINT)
@@ -131,6 +136,7 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
 
     describe "in version < 8" do
       it "should use legacy template API" do
+        expect(plugin).to receive(:serverless?).and_return(false)
         expect(plugin).to receive(:maximum_seen_major_version).at_least(:once).and_return(7)
         endpoint = described_class.template_endpoint(plugin)
         expect(endpoint).to be_equal(LogStash::Outputs::ElasticSearch::TemplateManager::LEGACY_TEMPLATE_ENDPOINT)
@@ -144,6 +150,7 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
 
     describe "in version 8+" do
       it "should use legacy template API" do
+        expect(plugin).to receive(:serverless?).and_return(false)
         expect(plugin).to receive(:maximum_seen_major_version).never
         endpoint = described_class.template_endpoint(plugin)
         expect(endpoint).to be_equal(LogStash::Outputs::ElasticSearch::TemplateManager::LEGACY_TEMPLATE_ENDPOINT)
@@ -157,11 +164,26 @@ describe LogStash::Outputs::ElasticSearch::TemplateManager do
 
     describe "in version 8+" do
       it "should use legacy template API" do
+        expect(plugin).to receive(:serverless?).and_return(false)
         expect(plugin).to receive(:maximum_seen_major_version).never
         endpoint = described_class.template_endpoint(plugin)
-        expect(endpoint).to be_equal(LogStash::Outputs::ElasticSearch::TemplateManager:: INDEX_TEMPLATE_ENDPOINT)
+        expect(endpoint).to be_equal(LogStash::Outputs::ElasticSearch::TemplateManager::INDEX_TEMPLATE_ENDPOINT)
       end
     end
   end
+
+  describe "in serverless" do
+    [:auto, :composable, :legacy].each do |api|
+      let(:plugin_settings) { {"manage_template" => true, "template_api" => api.to_s} }
+      let(:plugin) { LogStash::Outputs::ElasticSearch.new(plugin_settings) }
+
+      it "use index template API when template_api set to #{api}" do
+        expect(plugin).to receive(:serverless?).and_return(true)
+        endpoint = described_class.template_endpoint(plugin)
+        expect(endpoint).to be_equal(LogStash::Outputs::ElasticSearch::TemplateManager::INDEX_TEMPLATE_ENDPOINT)
+      end
+    end
+
+  end
   end
 end
@@ -474,7 +474,7 @@ describe LogStash::Outputs::ElasticSearch do
 
   context "unexpected bulk response" do
     let(:options) do
-      { "hosts" => "127.0.0.1:9999", "index" => "%{foo}", "manage_template" => false }
+      { "hosts" => "127.0.0.1:9999", "index" => "%{foo}", "manage_template" => false, "http_compression" => false }
     end
 
     let(:events) { [ ::LogStash::Event.new("foo" => "bar1"), ::LogStash::Event.new("foo" => "bar2") ] }
@@ -624,6 +624,7 @@ describe LogStash::Outputs::ElasticSearch do
   end
 
   context '413 errors' do
+    let(:options) { super().merge("http_compression" => "false") }
     let(:payload_size) { LogStash::Outputs::ElasticSearch::TARGET_BULK_BYTES + 1024 }
     let(:event) { ::LogStash::Event.new("message" => ("a" * payload_size ) ) }
 
@@ -1557,6 +1558,37 @@ describe LogStash::Outputs::ElasticSearch do
     end
   end
 
+  describe "http compression" do
+    describe "initialize setting" do
+      context "with `http_compression` => true" do
+        let(:options) { super().merge('http_compression' => true) }
+        it "set compression level to 1" do
+          subject.register
+          expect(subject.instance_variable_get(:@compression_level)).to eq(1)
+        end
+      end
+
+      context "with `http_compression` => false" do
+        let(:options) { super().merge('http_compression' => false) }
+        it "set compression level to 0" do
+          subject.register
+          expect(subject.instance_variable_get(:@compression_level)).to eq(0)
+        end
+      end
+
+      [0, 9].each do |config|
+        context "with `compression_level` => #{config}" do
+          let(:options) { super().merge('compression_level' => config) }
+          it "keeps the setting" do
+            subject.register
+            expect(subject.instance_variable_get(:@compression_level)).to eq(config)
+          end
+        end
+      end
+    end
+
+  end
+
   @private
 
   def stub_manticore_client!(manticore_double = nil)
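Per the 11.17.0 changelog, `http_compression` is deprecated in favour of `compression_level`, with level 1 as the new default. The specs above pin the resulting values; a hypothetical sketch of that mapping (this is an illustration, not the plugin's `register` code):

```ruby
# Sketch of the boolean-to-level mapping the specs exercise:
# an explicit compression_level wins, http_compression => true maps
# to level 1, http_compression => false maps to level 0, and the
# default (per the 11.17.0 changelog) is level 1.
def effective_compression_level(http_compression: nil, compression_level: nil)
  return compression_level unless compression_level.nil?
  return 0 if http_compression == false
  1
end

puts effective_compression_level(http_compression: true)   # 1
puts effective_compression_level(http_compression: false)  # 0
puts effective_compression_level(compression_level: 9)     # 9
```

This is why the 413-error and unexpected-bulk-response specs now force `http_compression => false`: with compression on by default, payload sizes on the wire would otherwise change.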
@@ -21,5 +21,37 @@ describe LogStash::Outputs::ElasticSearch::LicenseChecker do
       expect(LogStash::Outputs::ElasticSearch::HttpClient::Pool.instance_methods).to include(:get_license)
     end
   end
+
+  context "appropriate license" do
+    let(:logger) { double("logger") }
+    let(:url) { LogStash::Util::SafeURI.new("https://cloud.elastic.co") }
+    let(:pool) { double("pool") }
+    subject { described_class.new(logger) }
+
+    it "is true when connect to serverless" do
+      allow(pool).to receive(:serverless?).and_return(true)
+      expect(subject.appropriate_license?(pool, url)).to eq true
+    end
+
+    it "is true when license status is active" do
+      allow(pool).to receive(:serverless?).and_return(false)
+      allow(pool).to receive(:get_license).with(url).and_return(LogStash::Json.load File.read("spec/fixtures/license_check/active.json"))
+      expect(subject.appropriate_license?(pool, url)).to eq true
+    end
+
+    it "is true when license status is inactive" do
+      allow(logger).to receive(:warn).with(instance_of(String), anything)
+      allow(pool).to receive(:serverless?).and_return(false)
+      allow(pool).to receive(:get_license).with(url).and_return(LogStash::Json.load File.read("spec/fixtures/license_check/inactive.json"))
+      expect(subject.appropriate_license?(pool, url)).to eq true
+    end
+
+    it "is false when no license return" do
+      allow(logger).to receive(:error).with(instance_of(String), anything)
+      allow(pool).to receive(:serverless?).and_return(false)
+      allow(pool).to receive(:get_license).with(url).and_return(LogStash::Json.load('{}'))
+      expect(subject.appropriate_license?(pool, url)).to eq false
+    end
+  end
 end
 
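The LicenseChecker specs above encode a small decision table: serverless connections skip the license check entirely, an active license passes, an inactive license passes with a warning, and a missing license fails with an error. A self-contained sketch of that table (hypothetical helper, not the plugin's `appropriate_license?` implementation):

```ruby
# Decision table implied by the specs: serverless => always licensed;
# otherwise "active" passes, "inactive" passes with a warning,
# anything else (e.g. no license returned) fails with an error.
def license_ok?(serverless:, license_status:, logger: nil)
  return true if serverless  # serverless Elasticsearch needs no license check
  case license_status
  when "active"
    true
  when "inactive"
    logger&.call(:warn, "connecting with an inactive license")
    true
  else
    logger&.call(:error, "no license information returned")
    false
  end
end

messages = []
log = ->(level, msg) { messages << [level, msg] }

puts license_ok?(serverless: true,  license_status: nil, logger: log)         # true
puts license_ok?(serverless: false, license_status: "inactive", logger: log)  # true
puts license_ok?(serverless: false, license_status: nil, logger: log)         # false
```

Note the serverless branch short-circuits before any license fetch, matching the spec that never stubs `get_license` for the serverless case.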
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: logstash-output-elasticsearch
 version: !ruby/object:Gem::Version
-  version: 11.15.9
+  version: 11.17.0
 platform: java
 authors:
 - Elastic
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2023-07-18 00:00:00.000000000 Z
+date: 2023-09-14 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
@@ -279,6 +279,8 @@ files:
 - spec/fixtures/_nodes/6x.json
 - spec/fixtures/_nodes/7x.json
 - spec/fixtures/htpasswd
+- spec/fixtures/license_check/active.json
+- spec/fixtures/license_check/inactive.json
 - spec/fixtures/nginx_reverse_proxy.conf
 - spec/fixtures/scripts/painless/scripted_update.painless
 - spec/fixtures/scripts/painless/scripted_update_nested.painless
@@ -365,6 +367,8 @@ test_files:
 - spec/fixtures/_nodes/6x.json
 - spec/fixtures/_nodes/7x.json
 - spec/fixtures/htpasswd
+- spec/fixtures/license_check/active.json
+- spec/fixtures/license_check/inactive.json
 - spec/fixtures/nginx_reverse_proxy.conf
 - spec/fixtures/scripts/painless/scripted_update.painless
 - spec/fixtures/scripts/painless/scripted_update_nested.painless