logstash-input-elasticsearch 5.1.0 → 5.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: dc85b0081373116cbedc717e9da3e383c8ec17288ae6fbd57cb0ed3878d5e954
- data.tar.gz: 33feb6083ba4c7ce074517f366f2ad079d40ab25238841559759fcadae9f8e04
+ metadata.gz: 718adf02c14b980691bd1572ac7e46b14f0f7850d82cd267fa52ddbde8289892
+ data.tar.gz: e10f582747a7ae11d707c4268ffce6485ed35453afbe140d999857485e022cda
  SHA512:
- metadata.gz: acde0d0c551d2f91f8dea194499dedec6e3285ea4149a0a15111484e1a95d13e97a38fdc97cbe36d57b554aa7092e4fdc6e3214cf901f44315a6855356a25c67
- data.tar.gz: 18d066e72ff514e0c2ba0777a6f5f755424b2873015b0f0417100dd18124d0caaf4ef7e8ca72edc89548c159628358620ff75a2f90be1673ae00516c69490caa
+ metadata.gz: e88b12e47cfad23b4a1128ab05c1510c1f89bd76d20511064ada02999b6fa694d118a8e37ec2fede700d2008beaef045255b3e345d21eadcfa5b492a250d01dd
+ data.tar.gz: f533570ba4268088ddbe73572ec1e49924fe7febf3f6dad1c6e957b6eff5d414410970749521b0543b6bd03264776d4ac6ea2082589704cf1e7ea5faf36c1f07
data/CHANGELOG.md CHANGED
@@ -1,3 +1,6 @@
+ ## 5.2.0
+ - ES|QL support [#233](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/233)
+
  ## 5.1.0
  - Add "cursor"-like index tracking [#205](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/205)
 
data/docs/index.asciidoc CHANGED
@@ -230,6 +230,110 @@ The next scheduled run:
  * uses {ref}/point-in-time-api.html#point-in-time-api[Point in time (PIT)] + {ref}/paginate-search-results.html#search-after[Search after] to paginate through all the data, and
  * updates the value of the field at the end of the pagination.
 
+ [id="plugins-{type}s-{plugin}-esql"]
+ ==== {esql} support
+
+ .Technical Preview
+ ****
+ The {esql} feature that allows using ES|QL queries with this plugin is in Technical Preview.
+ Configuration options and implementation details are subject to change in minor releases without being preceded by deprecation warnings.
+ ****
+
+ {es} Query Language ({esql}) provides a SQL-like interface for querying your {es} data.
+
+ To use {esql}, this plugin needs to be installed on {ls} 8.17.4 or newer, and must be connected to {es} 8.11 or newer.
+
+ To configure an {esql} query in the plugin, set `query_type` to `esql` and provide your {esql} query in the `query` parameter.
+
+ IMPORTANT: {esql} is evolving and may still have limitations with regard to result size or supported field types. We recommend understanding https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-limitations.html[ES|QL current limitations] before using it in production environments.
+
+ The following is a basic scheduled {esql} query that runs hourly:
+ [source, ruby]
+     input {
+       elasticsearch {
+         id => hourly_cron_job
+         hosts => [ 'https://..']
+         api_key => '....'
+         query_type => 'esql'
+         query => '
+           FROM food-index
+           | WHERE spicy_level = "hot" AND @timestamp > NOW() - 1 hour
+           | LIMIT 500
+         '
+         schedule => '0 * * * *' # every hour at min 0
+       }
+     }
+
+ Set `config.support_escapes: true` in `logstash.yml` if you need to escape special characters in the query.
+
+ NOTE: With an {esql} query, {ls} doesn't generate `event.original`.
+
+ [id="plugins-{type}s-{plugin}-esql-event-mapping"]
+ ===== Mapping {esql} result to {ls} event
+ {esql} returns query results in a structured tabular format, where data is organized into _columns_ (fields) and _values_ (entries).
+ The plugin maps each value entry to an event, populating the corresponding fields.
+ For example, a query might produce a table like:
+
+ [cols="2,1,1,1,2",options="header"]
+ |===
+ |`timestamp` |`user_id` | `action` | `status.code` | `status.desc`
+
+ |2025-04-10T12:00:00 |123 |login |200 | Success
+ |2025-04-10T12:05:00 |456 |purchase |403 | Forbidden (unauthorized user)
+ |===
+
+ For this case, the plugin emits two events that look like:
+ [source, json]
+     [
+       {
+         "timestamp": "2025-04-10T12:00:00",
+         "user_id": 123,
+         "action": "login",
+         "status": {
+           "code": 200,
+           "desc": "Success"
+         }
+       },
+       {
+         "timestamp": "2025-04-10T12:05:00",
+         "user_id": 456,
+         "action": "purchase",
+         "status": {
+           "code": 403,
+           "desc": "Forbidden (unauthorized user)"
+         }
+       }
+     ]
+
+ NOTE: If your index has a mapping with sub-objects where `status.code` and `status.desc` are actually dotted fields, they appear in {ls} events as a nested structure.
+
+ [id="plugins-{type}s-{plugin}-esql-multifields"]
+ ===== Conflict on multi-fields
+
+ An {esql} query fetches all parent and sub-fields if your {es} index has https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/multi-fields[multi-fields] or https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/subobjects[subobjects].
+ Since {ls} events cannot contain a parent field's concrete value together with sub-field values, the plugin ignores the sub-fields with a warning and includes the parent.
+ We recommend using the `RENAME` (or `DROP`, to avoid warnings) keyword in your {esql} query to explicitly rename the fields if you want sub-fields included in the event.
+
+ This is a common occurrence if your template or mapping follows the pattern of always indexing strings as "text" (`field`) + "keyword" (`field.keyword`) multi-fields.
+ In this case it's recommended to do `KEEP field` if the strings are identical and there is only one sub-field, as the engine will optimize and retrieve the keyword; otherwise you can do `KEEP field.keyword | RENAME field.keyword AS field`.
+
+ To illustrate the situation with an example, assume your mapping has a `time` field with `time.min` and `time.max` sub-fields, as follows:
+ [source, ruby]
+     "properties": {
+       "time": { "type": "long" },
+       "time.min": { "type": "long" },
+       "time.max": { "type": "long" }
+     }
+
+ The {esql} result will contain all three fields but the plugin cannot map them into a {ls} event.
+ To avoid this, you can use the `RENAME` keyword to rename the `time` parent field so that all three fields have unique names.
+ [source, ruby]
+     ...
+     query => 'FROM my-index | RENAME time AS time.current'
+     ...
+
+ For a comprehensive {esql} syntax reference and best practices, see the https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-syntax.html[{esql} documentation].
+
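The tabular-to-event mapping described above can be sketched in plain Ruby. This is a simplified illustration over hashes, not the plugin's actual implementation (which builds `LogStash::Event` objects and handles multi-field conflicts); the `columns`/`values` shapes mirror the ES|QL JSON response:

```ruby
# Sketch: map an ES|QL tabular response (columns + values) to per-row
# event hashes, expanding dotted column names into nested structures.
def esql_rows_to_events(columns, values)
  values.map do |row|
    event = {}
    columns.zip(row).each do |column, value|
      next if value.nil? # honor drop_null_columns on a per-row basis
      keys = column['name'].split('.')
      leaf = keys.pop
      node = keys.reduce(event) { |h, k| h[k] ||= {} }
      node[leaf] = value
    end
    event
  end
end

columns = [{ 'name' => 'user_id' }, { 'name' => 'status.code' }]
events  = esql_rows_to_events(columns, [[123, 200], [456, 403]])
# events[0] => { "user_id" => 123, "status" => { "code" => 200 } }
```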
  [id="plugins-{type}s-{plugin}-options"]
  ==== Elasticsearch Input configuration options
 
@@ -257,6 +361,7 @@ Please check out <<plugins-{type}s-{plugin}-obsolete-options>> for details.
  | <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No
  | <<plugins-{type}s-{plugin}-proxy>> |<<uri,uri>>|No
  | <<plugins-{type}s-{plugin}-query>> |<<string,string>>|No
+ | <<plugins-{type}s-{plugin}-query_type>> |<<string,string>>, one of `["dsl","esql"]`|No
  | <<plugins-{type}s-{plugin}-response_type>> |<<string,string>>, one of `["hits","aggregations"]`|No
  | <<plugins-{type}s-{plugin}-request_timeout_seconds>> | <<number,number>>|No
  | <<plugins-{type}s-{plugin}-schedule>> |<<string,string>>|No
@@ -498,22 +603,35 @@ environment variables e.g. `proxy => '${LS_PROXY:}'`.
  * Value type is <<string,string>>
  * Default value is `'{ "sort": [ "_doc" ] }'`
 
- The query to be executed. Read the {ref}/query-dsl.html[Elasticsearch query DSL
- documentation] for more information.
+ The query to be executed.
+ The accepted query shape is DSL or {esql} (when `query_type => 'esql'`).
+ Read the {ref}/query-dsl.html[{es} query DSL documentation] or {ref}/esql.html[{esql} documentation] for more information.
 
  When <<plugins-{type}s-{plugin}-search_api>> resolves to `search_after` and the query does not specify `sort`,
  the default sort `'{ "sort": { "_shard_doc": "asc" } }'` will be added to the query. Please refer to the {ref}/paginate-search-results.html#search-after[Elasticsearch search_after] parameter to know more.
 
+ [id="plugins-{type}s-{plugin}-query_type"]
+ ===== `query_type`
+
+ * Value can be `dsl` or `esql`
+ * Default value is `dsl`
+
+ Defines the <<plugins-{type}s-{plugin}-query>> shape.
+ When `dsl`, the query must be a valid {es} JSON-style string.
+ When `esql`, the query must be a valid {esql} string, and the `index`, `size`, `slices`, `search_api`, `docinfo`, `docinfo_target`, `docinfo_fields`, `response_type`, and `tracking_field` parameters are not allowed.
+
  [id="plugins-{type}s-{plugin}-response_type"]
  ===== `response_type`
 
- * Value can be any of: `hits`, `aggregations`
+ * Value can be any of: `hits`, `aggregations`, `esql`
  * Default value is `hits`
 
  Which part of the result to transform into Logstash events when processing the
  response from the query.
+
  The default `hits` will generate one event per returned document (i.e. "hit").
-
- When set to `aggregations`, a single Logstash event will be generated with the
+
+ When set to `aggregations`, a single {ls} event will be generated with the
  contents of the `aggregations` object of the query's response. In this case the
  `hits` object will be ignored. The parameter `size` will be always be set to
  0 regardless of the default or user-defined value set in this plugin.
@@ -0,0 +1,153 @@
+ require 'logstash/helpers/loggable_try'
+
+ module LogStash
+   module Inputs
+     class Elasticsearch
+       class Esql
+         include LogStash::Util::Loggable
+
+         ESQL_JOB = "ES|QL job"
+
+         ESQL_PARSERS_BY_TYPE = Hash.new(lambda { |x| x }).merge(
+           'date' => ->(value) { value && LogStash::Timestamp.new(value) },
+         )
+
+         # Initialize the ESQL query executor
+         # @param client [Elasticsearch::Client] The Elasticsearch client instance
+         # @param plugin [LogStash::Inputs::Elasticsearch] The parent plugin instance
+         def initialize(client, plugin)
+           @client = client
+           @event_decorator = plugin.method(:decorate_event)
+           @retries = plugin.params["retries"]
+
+           target_field = plugin.params["target"]
+           if target_field
+             def self.apply_target(path); "[#{target_field}][#{path}]"; end
+           else
+             def self.apply_target(path); path; end
+           end
+
+           @query = plugin.params["query"]
+           unless @query.include?('METADATA')
+             logger.info("`METADATA` not found in the query. `_id`, `_version` and `_index` will not be available in the result", {:query => @query})
+           end
+           logger.debug("ES|QL executor initialized with", {:query => @query})
+         end
+
+         # Execute the ESQL query and process results
+         # @param output_queue [Queue] The queue to push processed events to
+         # @param query A query (to obey the interface definition)
+         def do_run(output_queue, query)
+           logger.info("ES|QL executor has started")
+           response = retryable(ESQL_JOB) do
+             @client.esql.query({ body: { query: @query }, format: 'json', drop_null_columns: true })
+           end
+           # retryable already printed the error details
+           return if response == false
+
+           if response&.headers&.dig("warning")
+             logger.warn("ES|QL executor received warning", {:warning_message => response.headers["warning"]})
+           end
+           columns = response['columns']&.freeze
+           values = response['values']&.freeze
+           logger.debug("ES|QL query response size: #{values&.size}")
+
+           process_response(columns, values, output_queue) if columns && values
+         end
+
+         # Execute a retryable operation with proper error handling
+         # @param job_name [String] Name of the job for logging purposes
+         # @yield The block to execute
+         # @return [Boolean] true if successful, false otherwise
+         def retryable(job_name, &block)
+           stud_try = ::LogStash::Helpers::LoggableTry.new(logger, job_name)
+           stud_try.try((@retries + 1).times) { yield }
+         rescue => e
+           error_details = {:message => e.message, :cause => e.cause}
+           error_details[:backtrace] = e.backtrace if logger.debug?
+           logger.error("#{job_name} failed with ", error_details)
+           false
+         end
+
+         private
+
+         # Process the ESQL response and push events to the output queue
+         # @param columns [Array[Hash]] The ESQL query response columns
+         # @param values [Array[Array]] The ESQL query response hits
+         # @param output_queue [Queue] The queue to push processed events to
+         def process_response(columns, values, output_queue)
+           column_specs = columns.map { |column| ColumnSpec.new(column) }
+           sub_element_mark_map = mark_sub_elements(column_specs)
+           multi_fields = sub_element_mark_map.filter_map { |key, val| key.name if val == true }
+           logger.warn("Multi-fields found in ES|QL result and they will not be available in the event. Please use the `RENAME` command if you want to include them.", { :detected_multi_fields => multi_fields }) if multi_fields.any?
+
+           values.each do |row|
+             event = column_specs.zip(row).each_with_object(LogStash::Event.new) do |(column, value), event|
+               # `drop_null_columns` only drops columns that are `nil` in every row; a kept column
+               # may still be `nil` for this row, so we continuously filter out `nil` values to
+               # achieve full `drop_null_columns` on each individual row (the ideal `LIMIT 1` result).
+               # We also exclude sub-elements of the main field.
+               if value && sub_element_mark_map[column] == false
+                 field_reference = apply_target(column.field_reference)
+                 event.set(field_reference, ESQL_PARSERS_BY_TYPE[column.type].call(value))
+               end
+             end
+             @event_decorator.call(event)
+             output_queue << event
+           rescue => e
+             # if event creation fails for whatever reason, inform the user, tag with failure and return the entry as it is
+             logger.warn("Event creation error, ", message: e.message, exception: e.class, data: { "columns" => columns, "values" => [row] })
+             failed_event = LogStash::Event.new("columns" => columns, "values" => [row], "tags" => ['_elasticsearch_input_failure'])
+             output_queue << failed_event
+           end
+         end
+
+         # Determines whether each column in a collection is a nested sub-element (example "user.age")
+         # of another column in the same collection (example "user").
+         #
+         # @param columns [Array<ColumnSpec>] An array of objects with a `name` attribute representing field paths.
+         # @return [Hash<ColumnSpec, Boolean>] A hash mapping each column to `true` if it is a sub-element of another field, `false` otherwise.
+         # Time complexity: O(N*logN + N*K) where K is the conflict depth;
+         # without memoization (`prefix_set`), it would be O(N^2)
+         def mark_sub_elements(columns)
+           # Sort columns by name length (ascending)
+           sorted_columns = columns.sort_by { |c| c.name.length }
+           prefix_set = Set.new # memoization set
+
+           sorted_columns.each_with_object({}) do |column, memo|
+             # Split the column name into parts (e.g., "user.profile.age" → ["user", "profile", "age"])
+             parts = column.name.split('.')
+
+             # Generate all possible parent prefixes (e.g., "user", "user.profile")
+             # and check if any parent prefix exists in the set
+             parent_prefixes = (0...parts.size - 1).map { |i| parts[0..i].join('.') }
+             memo[column] = parent_prefixes.any? { |prefix| prefix_set.include?(prefix) }
+             prefix_set.add(column.name)
+           end
+         end
+       end
+
+       # Class representing a column specification in the ESQL response['columns']
+       # The class's main purpose is to provide a structure for the event key
+       # columns is an array of `name` and `type` pairs (example: `{"name"=>"@timestamp", "type"=>"date"}`)
+       # @attr_reader :name [String] The name of the column
+       # @attr_reader :type [String] The type of the column
+       class ColumnSpec
+         attr_reader :name, :type
+
+         def initialize(spec)
+           @name = isolate(spec.fetch('name'))
+           @type = isolate(spec.fetch('type'))
+         end
+
+         def field_reference
+           @_field_reference ||= '[' + name.gsub('.', '][') + ']'
+         end
+
+         private
+         def isolate(value)
+           value.frozen? ? value : value.clone.freeze
+         end
+       end
+     end
+   end
+ end
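The parent/sub-field detection in `mark_sub_elements` above can be reproduced standalone. This sketch is simplified to plain string names instead of `ColumnSpec` objects, but uses the same sort-then-memoize prefix check:

```ruby
require 'set'

# Sketch: flag each dotted column name that is a sub-element of another
# column in the same result (e.g. "time.min" when "time" is also present).
def mark_sub_elements(names)
  prefix_set = Set.new # memoization set, filled shortest-name-first
  names.sort_by(&:length).each_with_object({}) do |name, memo|
    parts = name.split('.')
    # all parent prefixes of "a.b.c" are "a" and "a.b"
    parent_prefixes = (0...parts.size - 1).map { |i| parts[0..i].join('.') }
    memo[name] = parent_prefixes.any? { |prefix| prefix_set.include?(prefix) }
    prefix_set.add(name)
  end
end

mark_sub_elements(%w[time time.min time.max user.id])
# => {"time"=>false, "user.id"=>false, "time.min"=>true, "time.max"=>true}
```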
@@ -74,6 +74,7 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  require 'logstash/inputs/elasticsearch/paginated_search'
  require 'logstash/inputs/elasticsearch/aggregation'
  require 'logstash/inputs/elasticsearch/cursor_tracker'
+ require 'logstash/inputs/elasticsearch/esql'
 
  include LogStash::PluginMixins::ECSCompatibilitySupport(:disabled, :v1, :v8 => :v1)
  include LogStash::PluginMixins::ECSCompatibilitySupport::TargetCheck
@@ -96,15 +97,21 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  # The index or alias to search.
  config :index, :validate => :string, :default => "logstash-*"
 
- # The query to be executed. Read the Elasticsearch query DSL documentation
- # for more info
- # https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html
+ # The type of Elasticsearch query, provided by @query. This will validate the query shape and other params.
+ config :query_type, :validate => %w[dsl esql], :default => 'dsl'
+
+ # The query to be executed. A DSL or ES|QL (when `query_type => 'esql'`) query shape is accepted.
+ # Read the following documentation for more info
+ # Query DSL: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html
+ # ES|QL: https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html
  config :query, :validate => :string, :default => '{ "sort": [ "_doc" ] }'
 
- # This allows you to speccify the response type: either hits or aggregations
- # where hits: normal search request
- # aggregations: aggregation request
- config :response_type, :validate => ['hits', 'aggregations'], :default => 'hits'
+ # This allows you to specify the DSL response type: one of [hits, aggregations]
+ # where
+ #   hits: normal search request
+ #   aggregations: aggregation request
+ # Note that this param is invalid when `query_type => 'esql'`; the ES|QL response shape is always a tabular format
+ config :response_type, :validate => %w[hits aggregations], :default => 'hits'
 
  # This allows you to set the maximum number of hits returned per scroll.
  config :size, :validate => :number, :default => 1000
@@ -286,6 +293,9 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
  DEFAULT_EAV_HEADER = { "Elastic-Api-Version" => "2023-10-31" }.freeze
  INTERNAL_ORIGIN_HEADER = { 'x-elastic-product-origin' => 'logstash-input-elasticsearch'}.freeze
 
+ LS_ESQL_SUPPORT_VERSION = "8.17.4" # the version that started using elasticsearch-ruby v8
+ ES_ESQL_SUPPORT_VERSION = "8.11.0"
+
  def initialize(params={})
    super(params)
 
@@ -302,10 +312,17 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
    fill_hosts_from_cloud_id
    setup_ssl_params!
 
-   @base_query = LogStash::Json.load(@query)
-   if @slices
-     @base_query.include?('slice') && fail(LogStash::ConfigurationError, "Elasticsearch Input Plugin's `query` option cannot specify specific `slice` when configured to manage parallel slices with `slices` option")
-     @slices < 1 && fail(LogStash::ConfigurationError, "Elasticsearch Input Plugin's `slices` option must be greater than zero, got `#{@slices}`")
+   if @query_type == 'esql'
+     validate_ls_version_for_esql_support!
+     validate_esql_query!
+     not_allowed_options = original_params.keys & %w(index size slices search_api docinfo docinfo_target docinfo_fields response_type tracking_field)
+     raise(LogStash::ConfigurationError, "Configured #{not_allowed_options} params are not allowed while using ES|QL query") if not_allowed_options&.size > 1
+   else
+     @base_query = LogStash::Json.load(@query)
+     if @slices
+       @base_query.include?('slice') && fail(LogStash::ConfigurationError, "Elasticsearch Input Plugin's `query` option cannot specify specific `slice` when configured to manage parallel slices with `slices` option")
+       @slices < 1 && fail(LogStash::ConfigurationError, "Elasticsearch Input Plugin's `slices` option must be greater than zero, got `#{@slices}`")
+     end
    end
 
    @retries < 0 && fail(LogStash::ConfigurationError, "Elasticsearch Input Plugin's `retries` option must be equal or greater than zero, got `#{@retries}`")
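The disallowed-option guard in the `esql` branch above boils down to an array intersection against the user-supplied parameters. A standalone sketch (the `user_params` hash is hypothetical; the plugin uses `original_params`):

```ruby
# Sketch: detect user-set options that are incompatible with ES|QL mode,
# mirroring the `original_params.keys & %w(...)` intersection above.
NOT_ALLOWED_WITH_ESQL = %w[index size slices search_api docinfo
                           docinfo_target docinfo_fields response_type
                           tracking_field].freeze

def disallowed_esql_options(user_params)
  # Array#& keeps the receiver's order and drops anything not in the argument
  user_params.keys & NOT_ALLOWED_WITH_ESQL
end

disallowed_esql_options({ "query" => "FROM idx", "size" => 500, "docinfo" => true })
# => ["size", "docinfo"]
```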
@@ -341,11 +358,13 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
 
    test_connection!
 
+   validate_es_for_esql_support!
+
    setup_serverless
 
    setup_search_api
 
-   setup_query_executor
+   @query_executor = create_query_executor
 
    setup_cursor_tracker
 
@@ -363,16 +382,6 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
    end
  end
 
- def get_query_object
-   if @cursor_tracker
-     query = @cursor_tracker.inject_cursor(@query)
-     @logger.debug("new query is #{query}")
-   else
-     query = @query
-   end
-   LogStash::Json.load(query)
- end
-
  ##
  # This can be called externally from the query_executor
  public
@@ -383,6 +392,23 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
    record_last_value(event)
  end
 
+ def decorate_event(event)
+   decorate(event)
+ end
+
+ private
+
+ def get_query_object
+   return @query if @query_type == 'esql'
+   if @cursor_tracker
+     query = @cursor_tracker.inject_cursor(@query)
+     @logger.debug("new query is #{query}")
+   else
+     query = @query
+   end
+   LogStash::Json.load(query)
+ end
+
  def record_last_value(event)
    @cursor_tracker.record_last_value(event) if @tracking_field
  end
@@ -414,8 +440,6 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
    event.set(@docinfo_target, docinfo_target)
  end
 
- private
-
  def hosts_default?(hosts)
    hosts.nil? || ( hosts.is_a?(Array) && hosts.empty? )
  end
@@ -664,18 +688,16 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
 
  end
 
- def setup_query_executor
-   @query_executor = case @response_type
-                     when 'hits'
-                       if @resolved_search_api == "search_after"
-                         LogStash::Inputs::Elasticsearch::SearchAfter.new(@client, self)
-                       else
-                         logger.warn("scroll API is no longer recommended for pagination. Consider using search_after instead.") if es_major_version >= 8
-                         LogStash::Inputs::Elasticsearch::Scroll.new(@client, self)
-                       end
-                     when 'aggregations'
-                       LogStash::Inputs::Elasticsearch::Aggregation.new(@client, self)
-                     end
+ def create_query_executor
+   return LogStash::Inputs::Elasticsearch::Esql.new(@client, self) if @query_type == 'esql'
+
+   # DSL query executor
+   return LogStash::Inputs::Elasticsearch::Aggregation.new(@client, self) if @response_type == 'aggregations'
+   # response_type is hits; the executor can be of search_after or scroll type
+   return LogStash::Inputs::Elasticsearch::SearchAfter.new(@client, self) if @resolved_search_api == "search_after"
+
+   logger.warn("scroll API is no longer recommended for pagination. Consider using search_after instead.") if es_major_version >= 8
+   LogStash::Inputs::Elasticsearch::Scroll.new(@client, self)
  end
 
  def setup_cursor_tracker
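The executor selection in `create_query_executor` above reduces to a small decision table. A standalone sketch that returns symbols instead of executor instances (the function name is illustrative, not part of the plugin):

```ruby
# Sketch: mirror create_query_executor's dispatch order as a pure function:
# esql wins, then aggregations, then search_after vs scroll.
def pick_executor(query_type:, response_type:, search_api:)
  return :esql        if query_type == 'esql'
  return :aggregation if response_type == 'aggregations'
  search_api == 'search_after' ? :search_after : :scroll
end

pick_executor(query_type: 'esql', response_type: 'hits', search_api: 'scroll')
# => :esql
pick_executor(query_type: 'dsl', response_type: 'hits', search_api: 'search_after')
# => :search_after
```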
@@ -714,6 +736,26 @@ class LogStash::Inputs::Elasticsearch < LogStash::Inputs::Base
    ::Elastic::Transport::Transport::HTTP::Manticore
  end
 
+ def validate_ls_version_for_esql_support!
+   if Gem::Version.create(LOGSTASH_VERSION) < Gem::Version.create(LS_ESQL_SUPPORT_VERSION)
+     fail("The current version of Logstash does not include an Elasticsearch client which supports ES|QL. Please upgrade Logstash to at least #{LS_ESQL_SUPPORT_VERSION}")
+   end
+ end
+
+ def validate_esql_query!
+   fail(LogStash::ConfigurationError, "`query` cannot be empty") if @query.strip.empty?
+   source_commands = %w[FROM ROW SHOW]
+   contains_source_command = source_commands.any? { |source_command| @query.strip.start_with?(source_command) }
+   fail(LogStash::ConfigurationError, "`query` needs to start with any of #{source_commands}") unless contains_source_command
+ end
+
+ def validate_es_for_esql_support!
+   return unless @query_type == 'esql'
+   # make sure the connected ES supports ES|QL (8.11+)
+   es_supports_esql = Gem::Version.create(es_version) >= Gem::Version.create(ES_ESQL_SUPPORT_VERSION)
+   fail("The connected Elasticsearch version #{es_version} does not support ES|QL. The ES|QL feature requires at least Elasticsearch #{ES_ESQL_SUPPORT_VERSION}.") unless es_supports_esql
+ end
+
  module URIOrEmptyValidator
    ##
    # @override to provide :uri_or_empty validator
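The version gates above rely on `Gem::Version` ordering, which compares version segments numerically rather than lexicographically (as a plain string comparison would). A minimal sketch (`es_supports_esql?` is an illustrative name, not the plugin's method):

```ruby
# Sketch: Gem::Version compares versions segment by segment, numerically,
# so "8.9.0" < "8.11.0" even though the strings compare the other way around.
def es_supports_esql?(es_version, min_version = "8.11.0")
  Gem::Version.create(es_version) >= Gem::Version.create(min_version)
end

es_supports_esql?("8.11.0")  # => true
es_supports_esql?("8.9.0")   # => false
es_supports_esql?("9.0.0")   # => true
```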
@@ -1,7 +1,7 @@
  Gem::Specification.new do |s|
 
    s.name = 'logstash-input-elasticsearch'
-   s.version = '5.1.0'
+   s.version = '5.2.0'
    s.licenses = ['Apache License (2.0)']
    s.summary = "Reads query results from an Elasticsearch cluster"
    s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -0,0 +1,180 @@
1
+ # encoding: utf-8
2
+ require "logstash/devutils/rspec/spec_helper"
3
+ require "logstash/inputs/elasticsearch"
4
+ require "elasticsearch"
5
+
6
+ describe LogStash::Inputs::Elasticsearch::Esql do
7
+ let(:client) { instance_double(Elasticsearch::Client) }
8
+ let(:esql_client) { double("esql-client") }
9
+
10
+ let(:plugin) { instance_double(LogStash::Inputs::Elasticsearch, params: plugin_config, decorate_event: nil) }
11
+ let(:plugin_config) do
12
+ {
13
+ "query" => "FROM test-index | STATS count() BY field",
14
+ "retries" => 3
15
+ }
16
+ end
17
+ let(:esql_executor) { described_class.new(client, plugin) }
18
+
19
+ describe "#initialization" do
20
+ it "sets up the ESQL client with correct parameters" do
21
+ expect(esql_executor.instance_variable_get(:@query)).to eq(plugin_config["query"])
22
+ expect(esql_executor.instance_variable_get(:@retries)).to eq(plugin_config["retries"])
23
+ expect(esql_executor.instance_variable_get(:@target_field)).to eq(nil)
24
+ end
25
+ end
26
+
27
+ describe "#execution" do
28
+ let(:output_queue) { Queue.new }
29
+
30
+ context "when faces error while retrying" do
31
+ it "retries the given block the specified number of times" do
32
+ attempts = 0
33
+ result = esql_executor.retryable("Test Job") do
34
+ attempts += 1
35
+ raise StandardError if attempts < 3
36
+ "success"
37
+ end
38
+ expect(attempts).to eq(3)
39
+ expect(result).to eq("success")
40
+ end
41
+
42
+ it "returns false if the block fails all attempts" do
43
+ result = esql_executor.retryable("Test Job") do
44
+ raise StandardError
45
+ end
46
+ expect(result).to eq(false)
47
+ end
48
+ end
49
+
50
+ context "when executing chain of processes" do
51
+ let(:response) { { 'values' => [%w[foo bar]], 'columns' => [{ 'name' => 'a.b.1.d', 'type' => 'keyword' },
52
+ { 'name' => 'h_g.k$l.m.0', 'type' => 'keyword' }] } }
53
+
54
+ before do
55
+ allow(esql_executor).to receive(:retryable).and_yield
56
+ allow(client).to receive_message_chain(:esql, :query).and_return(response)
57
+ end
58
+
59
+ it "executes the ESQL query and processes the results" do
60
+ allow(response).to receive(:headers).and_return({})
61
+ esql_executor.do_run(output_queue, plugin_config["query"])
62
+ expect(output_queue.size).to eq(1)
63
+
64
+ event = output_queue.pop
65
+ expect(event.get('[a][b][1][d]')).to eq('foo')
66
+ expect(event.get('[h_g][k$l][m][0]')).to eq('bar')
67
+ end
68
+
69
+ it "logs a warning if the response contains a warning header" do
70
+ allow(response).to receive(:headers).and_return({ "warning" => "some warning" })
71
+ expect(esql_executor.logger).to receive(:warn).with("ES|QL executor received warning", { :warning_message => "some warning" })
72
+ esql_executor.do_run(output_queue, plugin_config["query"])
73
+ end
74
+
75
+ it "does not log a warning if the response does not contain a warning header" do
76
+ allow(response).to receive(:headers).and_return({})
77
+ expect(esql_executor.logger).not_to receive(:warn)
78
+ esql_executor.do_run(output_queue, plugin_config["query"])
79
+ end
80
+ end
81
+
82
+ describe "multiple rows in the result" do
83
+ let(:response) { { 'values' => rows, 'columns' => [{ 'name' => 'key.1', 'type' => 'keyword' },
84
+ { 'name' => 'key.2', 'type' => 'keyword' }] } }
85
+
86
+ before do
87
+ allow(esql_executor).to receive(:retryable).and_yield
88
+ allow(client).to receive_message_chain(:esql, :query).and_return(response)
89
+ allow(response).to receive(:headers).and_return({})
90
+ end
91
+
92
+ context "when mapping" do
93
+ let(:rows) { [%w[foo bar], %w[hello world]] }
+
+ it "1:1 maps rows to events" do
+ esql_executor.do_run(output_queue, plugin_config["query"])
+ expect(output_queue.size).to eq(2)
+
+ event_1 = output_queue.pop
+ expect(event_1.get('[key][1]')).to eq('foo')
+ expect(event_1.get('[key][2]')).to eq('bar')
+
+ event_2 = output_queue.pop
+ expect(event_2.get('[key][1]')).to eq('hello')
+ expect(event_2.get('[key][2]')).to eq('world')
+ end
+ end
+
+ context "when partial nil values appear" do
+ let(:rows) { [[nil, "bar"], ["hello", nil]] }
+
+ it "ignores the nil values" do
+ esql_executor.do_run(output_queue, plugin_config["query"])
+ expect(output_queue.size).to eq(2)
+
+ event_1 = output_queue.pop
+ expect(event_1.get('[key][1]')).to eq(nil)
+ expect(event_1.get('[key][2]')).to eq('bar')
+
+ event_2 = output_queue.pop
+ expect(event_2.get('[key][1]')).to eq('hello')
+ expect(event_2.get('[key][2]')).to eq(nil)
+ end
+ end
+ end
+
+ context "when sub-elements occur in the result" do
+ let(:response) { {
+ 'values' => [[50, 1, 100], [50, 0, 1000], [50, 9, 99999]],
+ 'columns' =>
+ [
+ { 'name' => 'time', 'type' => 'long' },
+ { 'name' => 'time.min', 'type' => 'long' },
+ { 'name' => 'time.max', 'type' => 'long' },
+ ]
+ } }
+
+ before do
+ allow(esql_executor).to receive(:retryable).and_yield
+ allow(client).to receive_message_chain(:esql, :query).and_return(response)
+ allow(response).to receive(:headers).and_return({})
+ end
+
+ it "includes 1st depth elements into event" do
+ esql_executor.do_run(output_queue, plugin_config["query"])
+
+ expect(output_queue.size).to eq(3)
+ 3.times do
+ event = output_queue.pop
+ expect(event.get('time')).to eq(50)
+ expect(event.get('[time][min]')).to eq(nil)
+ expect(event.get('[time][max]')).to eq(nil)
+ end
+ end
+ end
+ end
+
+ describe "#column spec" do
+ let(:valid_spec) { { 'name' => 'field.name', 'type' => 'keyword' } }
+ let(:column_spec) { LogStash::Inputs::Elasticsearch::ColumnSpec.new(valid_spec) }
+
+ context "when initializes" do
+ it "sets the name and type attributes" do
+ expect(column_spec.name).to eq("field.name")
+ expect(column_spec.type).to eq("keyword")
+ end
+
+ it "freezes the name and type attributes" do
+ expect(column_spec.name).to be_frozen
+ expect(column_spec.type).to be_frozen
+ end
+ end
+
+ context "when calls the field reference" do
+ it "returns the correct field reference format" do
+ expect(column_spec.field_reference).to eq("[field][name]")
+ end
+ end
+ end
+ end if LOGSTASH_VERSION >= LogStash::Inputs::Elasticsearch::LS_ESQL_SUPPORT_VERSION
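The unit specs above assert two behaviors of the ES|QL row-to-event mapping: nil cells are dropped, and a dotted column name such as `time.min` is not nested under an already-scalar `time` field. A minimal sketch of that mapping (this is NOT the plugin's actual `Esql` class, just an illustration of the behavior the specs pin down):

```ruby
# Convert one ES|QL result row into a nested hash, mirroring the spec
# expectations: nil values are skipped, dotted names become nested keys,
# and sub-fields whose parent already holds a scalar are dropped.
def row_to_hash(columns, row)
  event = {}
  columns.each_with_index do |col, i|
    value = row[i]
    next if value.nil?                    # specs: nil cells are ignored
    keys = col.fetch('name').split('.')   # "time.min" -> ["time", "min"]
    last = keys.pop
    target = event
    conflict = false
    keys.each do |k|
      target[k] = {} unless target.key?(k)
      unless target[k].is_a?(Hash)        # parent is a scalar (e.g. time = 50)
        conflict = true
        break
      end
      target = target[k]
    end
    next if conflict                      # specs: [time][min] stays nil
    target[last] = value
  end
  event
end

columns = [{ 'name' => 'time' }, { 'name' => 'time.min' }, { 'name' => 'time.max' }]
# Only the top-level "time" value survives; "time.min"/"time.max" are dropped.
p row_to_hash(columns, [50, 1, 100])
```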
@@ -1370,4 +1370,129 @@ describe LogStash::Inputs::Elasticsearch, :ecs_compatibility_support do
  client.transport.respond_to?(:transport) ? client.transport.transport : client.transport
  end
 
+ describe "#ESQL" do
+ let(:config) do
+ {
+ "query" => "FROM test-index | STATS count() BY field",
+ "query_type" => "esql",
+ "retries" => 3
+ }
+ end
+ let(:es_version) { LogStash::Inputs::Elasticsearch::ES_ESQL_SUPPORT_VERSION }
+ let(:ls_version) { LogStash::Inputs::Elasticsearch::LS_ESQL_SUPPORT_VERSION }
+
+ before(:each) do
+ stub_const("LOGSTASH_VERSION", ls_version)
+ end
+
+ describe "#initialize" do
+ it "sets up the ESQL client with correct parameters" do
+ expect(plugin.instance_variable_get(:@query_type)).to eq(config["query_type"])
+ expect(plugin.instance_variable_get(:@query)).to eq(config["query"])
+ expect(plugin.instance_variable_get(:@retries)).to eq(config["retries"])
+ end
+ end
+
+ describe "#register" do
+ before(:each) do
+ Elasticsearch::Client.send(:define_method, :ping) { }
+ allow_any_instance_of(Elasticsearch::Client).to receive(:info).and_return(cluster_info)
+ end
+ it "creates ES|QL executor" do
+ plugin.register
+ expect(plugin.instance_variable_get(:@query_executor)).to be_an_instance_of(LogStash::Inputs::Elasticsearch::Esql)
+ end
+ end
+
+ describe "#validation" do
+
+ describe "LS version" do
+ context "when compatible" do
+
+ it "does not raise an error" do
+ expect { plugin.send(:validate_ls_version_for_esql_support!) }.not_to raise_error
+ end
+ end
+
+ context "when incompatible" do
+ before(:each) do
+ stub_const("LOGSTASH_VERSION", "8.10.0")
+ end
+
+ it "raises a runtime error" do
+ expect { plugin.send(:validate_ls_version_for_esql_support!) }
+ .to raise_error(RuntimeError, /Current version of Logstash does not include Elasticsearch client which supports ES|QL. Please upgrade Logstash to at least #{ls_version}/)
+ end
+ end
+ end
+
+ describe "ES version" do
+ before(:each) do
+ allow(plugin).to receive(:es_version).and_return("8.10.5")
+ end
+
+ context "when incompatible" do
+ it "raises a runtime error" do
+ expect { plugin.send(:validate_es_for_esql_support!) }
+ .to raise_error(RuntimeError, /Connected Elasticsearch 8.10.5 version does not supports ES|QL. ES|QL feature requires at least Elasticsearch #{es_version} version./)
+ end
+ end
+ end
+
+ context "ES|QL query and DSL params used together" do
+ let(:config) {
+ super().merge({
+ "index" => "my-index",
+ "size" => 1,
+ "slices" => 1,
+ "search_api" => "auto",
+ "docinfo" => true,
+ "docinfo_target" => "[@metadata][docinfo]",
+ "docinfo_fields" => ["_index"],
+ "response_type" => "hits",
+ "tracking_field" => "[@metadata][tracking]"
+ })}
+
+ it "raises a config error" do
+ mixed_fields = %w[index size slices docinfo_fields response_type tracking_field]
+ expect { plugin.register }.to raise_error(LogStash::ConfigurationError, /Configured #{mixed_fields} params are not allowed while using ES|QL query/)
+ end
+ end
+
+ describe "ES|QL query" do
+ context "when query is valid" do
+ it "does not raise an error" do
+ expect { plugin.send(:validate_esql_query!) }.not_to raise_error
+ end
+ end
+
+ context "when query is empty" do
+ let(:config) do
+ {
+ "query" => " "
+ }
+ end
+
+ it "raises a configuration error" do
+ expect { plugin.send(:validate_esql_query!) }
+ .to raise_error(LogStash::ConfigurationError, /`query` cannot be empty/)
+ end
+ end
+
+ context "when query doesn't align with ES syntax" do
+ let(:config) do
+ {
+ "query" => "RANDOM query"
+ }
+ end
+
+ it "raises a configuration error" do
+ source_commands = %w[FROM ROW SHOW]
+ expect { plugin.send(:validate_esql_query!) }
+ .to raise_error(LogStash::ConfigurationError, "`query` needs to start with any of #{source_commands}")
+ end
+ end
+ end
+ end
+ end
  end
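The validation specs above pin down the query checks: a blank query is rejected, and a query must begin with an ES|QL source command (`FROM`, `ROW`, or `SHOW`). A minimal sketch of such a check (a hypothetical helper, not the plugin's actual `validate_esql_query!` implementation, which may also handle case and leading pipes differently):

```ruby
# Source commands accepted as the start of an ES|QL query, per the specs above.
SOURCE_COMMANDS = %w[FROM ROW SHOW].freeze

# Returns true when the query is non-blank and starts with a source command.
def valid_esql_query?(query)
  return false if query.nil? || query.strip.empty?
  SOURCE_COMMANDS.any? { |cmd| query.strip.start_with?(cmd) }
end

puts valid_esql_query?("FROM test-index | STATS count() BY field") # true
puts valid_esql_query?("RANDOM query")                             # false
```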
data/spec/inputs/integration/elasticsearch_esql_spec.rb ADDED
@@ -0,0 +1,150 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/inputs/elasticsearch"
+ require "elasticsearch"
+ require_relative "../../../spec/es_helper"
+
+ describe LogStash::Inputs::Elasticsearch, integration: true do
+
+ SECURE_INTEGRATION = ENV['SECURE_INTEGRATION'].eql? 'true'
+ ES_HOSTS = ["http#{SECURE_INTEGRATION ? 's' : nil}://#{ESHelper.get_host_port}"]
+
+ let(:plugin) { described_class.new(config) }
+ let(:es_index) { "logstash-esql-integration-#{rand(1000)}" }
+ let(:test_documents) do
+ [
+ { "message" => "test message 1", "type" => "a", "count" => 1 },
+ { "message" => "test message 2", "type" => "a", "count" => 2 },
+ { "message" => "test message 3", "type" => "b", "count" => 3 },
+ { "message" => "test message 4", "type" => "b", "count" => 4 },
+ { "message" => "test message 5", "type" => "c", "count" => 5 }
+ ]
+ end
+ let(:config) do
+ {
+ "hosts" => ES_HOSTS,
+ "query_type" => "esql"
+ }
+ end
+ let(:es_client) do
+ Elasticsearch::Client.new(hosts: ES_HOSTS)
+ end
+
+ before(:all) do
+ is_ls_with_esql_supported_client = Gem::Version.create(LOGSTASH_VERSION) >= Gem::Version.create(LogStash::Inputs::Elasticsearch::LS_ESQL_SUPPORT_VERSION)
+ skip "LS version does not have ES client which supports ES|QL" unless is_ls_with_esql_supported_client
+
+ # Skip tests if ES version doesn't support ES|QL
+ es_client = Elasticsearch::Client.new(hosts: ES_HOSTS) # need to separately create since let isn't allowed in before(:context)
+ es_version_info = es_client.info["version"]
+ es_gem_version = Gem::Version.create(es_version_info["number"])
+ skip "ES version does not support ES|QL" if es_gem_version.nil? || es_gem_version < Gem::Version.create(LogStash::Inputs::Elasticsearch::ES_ESQL_SUPPORT_VERSION)
+ end
+
+ before(:each) do
+ # Create index with test documents
+ es_client.indices.create(index: es_index, body: {}) unless es_client.indices.exists?(index: es_index)
+
+ test_documents.each do |doc|
+ es_client.index(index: es_index, body: doc, refresh: true)
+ end
+ end
+
+ after(:each) do
+ es_client.indices.delete(index: es_index) if es_client.indices.exists?(index: es_index)
+ end
+
+ context "#run ES|QL queries" do
+
+ before do
+ stub_const("LOGSTASH_VERSION", LogStash::Inputs::Elasticsearch::LS_ESQL_SUPPORT_VERSION)
+ allow_any_instance_of(LogStash::Inputs::Elasticsearch).to receive(:exit_plugin?).and_return false, true
+ end
+
+ before(:each) do
+ plugin.register
+ end
+
+ shared_examples "ESQL query execution" do |expected_count|
+ it "correctly retrieves documents" do
+ queue = Queue.new
+ plugin.run(queue)
+
+ event_count = 0
+ expected_count.times do |i|
+ event = queue.pop
+ expect(event).to be_a(LogStash::Event)
+ event_count += 1
+ end
+ expect(event_count).to eq(expected_count)
+ end
+ end
+
+ context "#FROM query" do
+ let(:config) do
+ super().merge("query" => "FROM #{es_index} | SORT count")
+ end
+
+ include_examples "ESQL query execution", 5
+ end
+
+ context "#FROM query and WHERE clause" do
+ let(:config) do
+ super().merge("query" => "FROM #{es_index} | WHERE type == \"a\" | SORT count")
+ end
+
+ include_examples "ESQL query execution", 2
+ end
+
+ context "#STATS aggregation" do
+ let(:config) do
+ super().merge("query" => "FROM #{es_index} | STATS avg(count) BY type")
+ end
+
+ it "retrieves aggregated stats" do
+ queue = Queue.new
+ plugin.run(queue)
+ results = []
+ 3.times do
+ event = queue.pop
+ expect(event).to be_a(LogStash::Event)
+ results << event.get("avg(count)")
+ end
+
+ expected_averages = [1.5, 3.5, 5.0]
+ expect(results.sort).to eq(expected_averages)
+ end
+ end
+
+ context "#METADATA" do
+ let(:config) do
+ super().merge("query" => "FROM #{es_index} METADATA _index, _id, _version | DROP message.keyword, type.keyword | SORT count")
+ end
+
+ it "includes document metadata" do
+ queue = Queue.new
+ plugin.run(queue)
+
+ 5.times do
+ event = queue.pop
+ expect(event).to be_a(LogStash::Event)
+ expect(event.get("_index")).not_to be_nil
+ expect(event.get("_id")).not_to be_nil
+ expect(event.get("_version")).not_to be_nil
+ end
+ end
+ end
+
+ context "#invalid ES|QL query" do
+ let(:config) do
+ super().merge("query" => "FROM undefined index | LIMIT 1")
+ end
+
+ it "doesn't produce events" do
+ queue = Queue.new
+ plugin.run(queue)
+ expect(queue.empty?).to eq(true)
+ end
+ end
+ end
+ end
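The integration specs exercise the new ES|QL mode end to end through the plugin's configuration. As a sketch, an equivalent Logstash pipeline configuration would look roughly like this (the option names `hosts`, `query`, `query_type`, and `retries` are taken from the spec fixtures above; the host and index are placeholders):

```text
input {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    query_type => "esql"
    query => 'FROM my-index | WHERE type == "a" | SORT count'
    retries => 3
  }
}
```

Note that DSL-oriented options such as `index`, `size`, `slices`, `docinfo_*`, `response_type`, and `tracking_field` are rejected when `query_type => "esql"` is set, as the mixed-params spec above asserts.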
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-input-elasticsearch
  version: !ruby/object:Gem::Version
- version: 5.1.0
+ version: 5.2.0
  platform: ruby
  authors:
  - Elastic
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2025-04-07 00:00:00.000000000 Z
+ date: 2025-06-06 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
@@ -279,6 +279,7 @@ files:
  - lib/logstash/inputs/elasticsearch.rb
  - lib/logstash/inputs/elasticsearch/aggregation.rb
  - lib/logstash/inputs/elasticsearch/cursor_tracker.rb
+ - lib/logstash/inputs/elasticsearch/esql.rb
  - lib/logstash/inputs/elasticsearch/paginated_search.rb
  - lib/logstash/inputs/elasticsearch/patches/_elasticsearch_transport_connections_selector.rb
  - lib/logstash/inputs/elasticsearch/patches/_elasticsearch_transport_http_manticore.rb
@@ -293,8 +294,10 @@ files:
  - spec/fixtures/test_certs/es.key
  - spec/fixtures/test_certs/renew.sh
  - spec/inputs/cursor_tracker_spec.rb
+ - spec/inputs/elasticsearch_esql_spec.rb
  - spec/inputs/elasticsearch_spec.rb
  - spec/inputs/elasticsearch_ssl_spec.rb
+ - spec/inputs/integration/elasticsearch_esql_spec.rb
  - spec/inputs/integration/elasticsearch_spec.rb
  - spec/inputs/paginated_search_spec.rb
  homepage: https://elastic.co/logstash
@@ -333,7 +336,9 @@ test_files:
  - spec/fixtures/test_certs/es.key
  - spec/fixtures/test_certs/renew.sh
  - spec/inputs/cursor_tracker_spec.rb
+ - spec/inputs/elasticsearch_esql_spec.rb
  - spec/inputs/elasticsearch_spec.rb
  - spec/inputs/elasticsearch_ssl_spec.rb
+ - spec/inputs/integration/elasticsearch_esql_spec.rb
  - spec/inputs/integration/elasticsearch_spec.rb
  - spec/inputs/paginated_search_spec.rb