logstash-input-elasticsearch 4.23.0 → 5.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 330a1fd55cb3fa00918a73dcd41b66e63ce81d6fc79dc68f4209385429e588d4
4
- data.tar.gz: 5ba0377bcaaa9b428a4a848e32fff5019353ca7f6b3c8bb77944156d05230d6f
3
+ metadata.gz: b34b6c6d814152e88f320525ea0bb80bbf1e63ff962e022aaac0a2385dd087b6
4
+ data.tar.gz: d142df9148ad69bf838d62badeec71382118741938db61e6aad0676bdb918a37
5
5
  SHA512:
6
- metadata.gz: 8be2dc35edde5b3b83c2c5711941c58c9aa3e45330e3785fef21269af134e13d031e10cdc324cf31e35b6c42a48f215f9ff2e8d58bc3a77fcc0c5a31c2084998
7
- data.tar.gz: b0456a0a04f365a34e35d6b8f8040e75a4d9a0f73718a2958f3c87823e5e41a5b04308b8f1710b8d5aa57091015aba273f308c75bd79c61494fecea4baf00d8e
6
+ metadata.gz: 19b2b1325ded83b5b93966365f855f104ba1881f2c991ffdbe92216e08d12d18a7b3ddd4a14d755f6d55c85c98e00d12ca566188c63706d6db1f0aa5b085048b
7
+ data.tar.gz: ff5de17e75281d8ddd0be70167f2c4dee0a90eef328c7e486b704e79fe10db7b7108b733f77438386a7abb18d504efbef5aaf7b0f34a6c8edd62791640514b7b
data/CHANGELOG.md CHANGED
@@ -1,18 +1,10 @@
1
- ## 4.23.0
2
- - ES|QL support [#235](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/235)
3
-
4
- ## 4.22.0
5
- - Add "cursor"-like index tracking [#205](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/205)
6
-
7
- ## 4.21.2
8
- - Add elastic-transport client support used in elasticsearch-ruby 8.x [#225](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/225)
9
-
10
- ## 4.21.1
11
- - Fix: prevent plugin crash when hits contain illegal structure [#183](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/183)
12
- - When a hit cannot be converted to an event, the input now emits an event tagged with `_elasticsearch_input_failure` with an `[event][original]` containing a JSON-encoded string representation of the entire hit.
13
-
14
- ## 4.21.0
15
- - Add support for custom headers [#217](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/217)
1
+ ## 5.0.0
2
+ - SSL settings that were marked deprecated in version `4.17.0` are now marked obsolete, and will prevent the plugin from starting.
3
+ - These settings are:
4
+ - `ssl`, which should be replaced by `ssl_enabled`
5
+ - `ca_file`, which should be replaced by `ssl_certificate_authorities`
6
+ - `ssl_certificate_verification`, which should be replaced by `ssl_verification_mode`
7
+ - [#213](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/213)
16
8
 
17
9
  ## 4.20.5
18
10
  - Add `x-elastic-product-origin` header to Elasticsearch requests [#211](https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/211)
data/docs/index.asciidoc CHANGED
@@ -48,7 +48,7 @@ This would create an Elasticsearch query with the following format:
48
48
  "sort": [ "_doc" ]
49
49
  }'
50
50
 
51
- [id="plugins-{type}s-{plugin}-scheduling"]
51
+
52
52
  ==== Scheduling
53
53
 
54
54
  Input from this plugin can be scheduled to run periodically according to a specific
@@ -93,251 +93,16 @@ The plugin logs a warning when ECS is enabled and `target` isn't set.
93
93
 
94
94
  TIP: Set the `target` option to avoid potential schema conflicts.
95
95
 
96
- [id="plugins-{type}s-{plugin}-failure-handling"]
97
- ==== Failure handling
98
-
99
- When this input plugin cannot create a structured `Event` from a hit result, it will instead create an `Event` that is tagged with `_elasticsearch_input_failure` whose `[event][original]` is a JSON-encoded string representation of the entire hit.
100
-
101
- Common causes are:
102
-
103
- - When the hit result contains top-level fields that are {logstash-ref}/processing.html#reserved-fields[reserved in Logstash] but do not have the expected shape. Use the <<plugins-{type}s-{plugin}-target>> directive to avoid conflicts with the top-level namespace.
104
- - When <<plugins-{type}s-{plugin}-docinfo>> is enabled and the docinfo fields cannot be merged into the hit result. Combine <<plugins-{type}s-{plugin}-target>> and <<plugins-{type}s-{plugin}-docinfo_target>> to avoid conflict.
105
-
106
- [id="plugins-{type}s-{plugin}-cursor"]
107
- ==== Tracking a field's value across runs
108
-
109
- .Technical Preview: Tracking a field's value
110
- ****
111
- The feature that allows tracking a field's value across runs is in _Technical Preview_.
112
- Configuration options and implementation details are subject to change in minor releases without being preceded by deprecation warnings.
113
- ****
114
-
115
- Some use cases require tracking the value of a particular field between two jobs.
116
- Examples include:
117
-
118
- * avoiding the need to re-process the entire result set of a long query after an unplanned restart
119
- * grabbing only new data from an index instead of processing the entire set on each job.
120
-
121
- The Elasticsearch input plugin provides the <<plugins-{type}s-{plugin}-tracking_field>> and <<plugins-{type}s-{plugin}-tracking_field_seed>> options.
122
- When <<plugins-{type}s-{plugin}-tracking_field>> is set, the plugin records the value of that field for the last document retrieved in a run into
123
- a file.
124
- (The file location defaults to <<plugins-{type}s-{plugin}-last_run_metadata_path>>.)
125
-
126
- You can then inject this value in the query using the placeholder `:last_value`.
127
- The value will be injected into the query before execution, and then updated after the query completes if new data was found.
128
-
129
- This feature works best when:
130
-
131
- * the query sorts by the tracking field,
132
- * the timestamp field is added by {es}, and
133
- * the field type has enough resolution so that two events are unlikely to have the same value.
134
-
135
- Consider using a tracking field whose type is https://www.elastic.co/guide/en/elasticsearch/reference/current/date_nanos.html[date nanoseconds].
136
- If the tracking field is of this data type, you can use an extra placeholder called `:present` to inject the nano-second based value of "now-30s".
137
- This placeholder is useful as the right-hand side of a range filter, allowing the collection of
138
- new data but leaving partially-searchable bulk request data to the next scheduled job.
139
-
140
- [id="plugins-{type}s-{plugin}-tracking-sample"]
141
- ===== Sample configuration: Track field value across runs
142
-
143
- This section contains a series of steps to help you set up the "tailing" of data being written to a set of indices, using a date nanosecond field added by an Elasticsearch ingest pipeline and the `tracking_field` capability of this plugin.
144
-
145
- . Create an ingest pipeline that adds Elasticsearch's `_ingest.timestamp` field to the documents as `event.ingested`:
146
- +
147
- [source, json]
148
- PUT _ingest/pipeline/my-pipeline
149
- {
150
- "processors": [
151
- {
152
- "script": {
153
- "lang": "painless",
154
- "source": "ctx.putIfAbsent(\"event\", [:]); ctx.event.ingested = metadata().now.format(DateTimeFormatter.ISO_INSTANT);"
155
- }
156
- }
157
- ]
158
- }
159
-
160
- [start=2]
161
- . Create an index mapping where the tracking field is of date nanosecond type and invokes the defined pipeline:
162
- +
163
- [source, json]
164
- PUT /_template/my_template
165
- {
166
- "index_patterns": ["test-*"],
167
- "settings": {
168
- "index.default_pipeline": "my-pipeline",
169
- },
170
- "mappings": {
171
- "properties": {
172
- "event": {
173
- "properties": {
174
- "ingested": {
175
- "type": "date_nanos",
176
- "format": "strict_date_optional_time_nanos"
177
- }
178
- }
179
- }
180
- }
181
- }
182
- }
183
-
184
- [start=3]
185
- . Define a query that looks at all data of the indices, sorted by the tracking field, and with a range filter since the last value seen until present:
186
- +
187
- [source,json]
188
- {
189
- "query": {
190
- "range": {
191
- "event.ingested": {
192
- "gt": ":last_value",
193
- "lt": ":present"
194
- }
195
- }
196
- },
197
- "sort": [
198
- {
199
- "event.ingested": {
200
- "order": "asc",
201
- "format": "strict_date_optional_time_nanos",
202
- "numeric_type": "date_nanos"
203
- }
204
- }
205
- ]
206
- }
207
-
208
- [start=4]
209
- . Configure the Elasticsearch input to query the indices with the query defined above, every minute, and track the `event.ingested` field:
210
- +
211
- [source, ruby]
212
- input {
213
- elasticsearch {
214
- id => tail_test_index
215
- hosts => [ 'https://..']
216
- api_key => '....'
217
- index => 'test-*'
218
- query => '{ "query": { "range": { "event.ingested": { "gt": ":last_value", "lt": ":present"}}}, "sort": [ { "event.ingested": {"order": "asc", "format": "strict_date_optional_time_nanos", "numeric_type" : "date_nanos" } } ] }'
219
- tracking_field => "[event][ingested]"
220
- slices => 5 # optional use of slices to speed data processing, should be equal to or less than number of primary shards
221
- schedule => '* * * * *' # every minute
222
- schedule_overlap => false # don't accumulate jobs if one takes longer than 1 minute
223
- }
224
- }
225
-
226
- With this sample setup, new documents are indexed into a `test-*` index.
227
- The next scheduled run:
228
-
229
- * selects all new documents since the last observed value of the tracking field,
230
- * uses {ref}/point-in-time-api.html#point-in-time-api[Point in time (PIT)] + {ref}/paginate-search-results.html#search-after[Search after] to paginate through all the data, and
231
- * updates the value of the field at the end of the pagination.
232
-
233
- [id="plugins-{type}s-{plugin}-esql"]
234
- ==== {esql} support
235
-
236
- .Technical Preview
237
- ****
238
- The {esql} feature that allows using ES|QL queries with this plugin is in Technical Preview.
239
- Configuration options and implementation details are subject to change in minor releases without being preceded by deprecation warnings.
240
- ****
241
-
242
- {es} Query Language ({esql}) provides a SQL-like interface for querying your {es} data.
243
-
244
- To use {esql}, this plugin needs to be installed in {ls} 8.17.4 or newer, and must be connected to {es} 8.11 or newer.
245
-
246
- To configure {esql} query in the plugin, set the `query_type` to `esql` and provide your {esql} query in the `query` parameter.
247
-
248
- IMPORTANT: {esql} is evolving and may still have limitations with regard to result size or supported field types. We recommend understanding https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-limitations.html[ES|QL current limitations] before using it in production environments.
249
-
250
- The following is a basic scheduled {esql} query that runs hourly:
251
- [source, ruby]
252
- input {
253
- elasticsearch {
254
- id => hourly_cron_job
255
- hosts => [ 'https://..']
256
- api_key => '....'
257
- query_type => 'esql'
258
- query => '
259
- FROM food-index
260
- | WHERE spicy_level = "hot" AND @timestamp > NOW() - 1 hour
261
- | LIMIT 500
262
- '
263
- schedule => '0 * * * *' # every hour at min 0
264
- }
265
- }
266
-
267
- Set `config.support_escapes: true` in `logstash.yml` if you need to escape special chars in the query.
268
-
269
- NOTE: With {esql} query, {ls} doesn't generate `event.original`.
270
-
271
- [id="plugins-{type}s-{plugin}-esql-event-mapping"]
272
- ===== Mapping {esql} result to {ls} event
273
- {esql} returns query results in a structured tabular format, where data is organized into _columns_ (fields) and _values_ (entries).
274
- The plugin maps each value entry to an event, populating corresponding fields.
275
- For example, a query might produce a table like:
276
-
277
- [cols="2,1,1,1,2",options="header"]
278
- |===
279
- |`timestamp` |`user_id` | `action` | `status.code` | `status.desc`
280
-
281
- |2025-04-10T12:00:00 |123 |login |200 | Success
282
- |2025-04-10T12:05:00 |456 |purchase |403 | Forbidden (unauthorized user)
283
- |===
284
-
285
- For this case, the plugin emits two events that look like:
286
- [source, json]
287
- [
288
- {
289
- "timestamp": "2025-04-10T12:00:00",
290
- "user_id": 123,
291
- "action": "login",
292
- "status": {
293
- "code": 200,
294
- "desc": "Success"
295
- }
296
- },
297
- {
298
- "timestamp": "2025-04-10T12:05:00",
299
- "user_id": 456,
300
- "action": "purchase",
301
- "status": {
302
- "code": 403,
303
- "desc": "Forbidden (unauthorized user)"
304
- }
305
- }
306
- ]
307
-
308
- NOTE: If your index has a mapping with sub-objects where `status.code` and `status.desc` are actually dotted fields, they appear in {ls} events as a nested structure.
309
-
310
- [id="plugins-{type}s-{plugin}-esql-multifields"]
311
- ===== Conflict on multi-fields
312
-
313
- An {esql} query fetches all parent and sub-fields if your {es} index has https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/multi-fields[multi-fields] or https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/subobjects[subobjects].
314
- Since {ls} events cannot contain a parent field's concrete value and sub-field values together, the plugin ignores the sub-fields with a warning and includes the parent.
315
- We recommend using the `RENAME` (or `DROP`, to avoid warnings) keyword in your {esql} query to explicitly rename the fields if you want sub-fields included in the event.
316
-
317
- This is a common occurrence if your template or mapping follows the pattern of always indexing strings as "text" (`field`) + "keyword" (`field.keyword`) multi-fields.
318
- In this case it's recommended to do `KEEP field` if the string is identical and there is only one subfield, as the engine will optimize and retrieve the keyword; otherwise you can do `KEEP field.keyword | RENAME field.keyword as field`.
319
-
320
- To illustrate the situation with an example, assume your mapping has a `time` field with `time.min` and `time.max` sub-fields as follows:
321
- [source, ruby]
322
- "properties": {
323
- "time": { "type": "long" },
324
- "time.min": { "type": "long" },
325
- "time.max": { "type": "long" }
326
- }
327
-
328
- The {esql} result will contain all three fields, but the plugin cannot map them into a {ls} event.
329
- To avoid this, you can use the `RENAME` keyword to rename the `time` parent field so that all three fields have unique names.
330
- [source, ruby]
331
- ...
332
- query => 'FROM my-index | RENAME time AS time.current'
333
- ...
334
-
335
- For comprehensive {esql} syntax reference and best practices, see the https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-syntax.html[{esql} documentation].
336
-
337
96
  [id="plugins-{type}s-{plugin}-options"]
338
97
  ==== Elasticsearch Input configuration options
339
98
 
340
- This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> and the <<plugins-{type}s-{plugin}-deprecated-options>> described later.
99
+ This plugin supports these configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.
100
+
101
+ NOTE: As of version `5.0.0` of this plugin, a number of previously deprecated settings related to SSL have been removed.
102
+ Please check out <<plugins-{type}s-{plugin}-obsolete-options>> for details.
103
+
341
106
 
342
107
  [cols="<,<,<",options="header",]
343
108
  |=======================================================================
@@ -354,15 +119,12 @@ This plugin supports the following configuration options plus the <<plugins-{typ
354
119
  | <<plugins-{type}s-{plugin}-ecs_compatibility>> |<<string,string>>|No
355
120
  | <<plugins-{type}s-{plugin}-hosts>> |<<array,array>>|No
356
121
  | <<plugins-{type}s-{plugin}-index>> |<<string,string>>|No
357
- | <<plugins-{type}s-{plugin}-last_run_metadata_path>> |<<string,string>>|No
358
122
  | <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No
359
123
  | <<plugins-{type}s-{plugin}-proxy>> |<<uri,uri>>|No
360
124
  | <<plugins-{type}s-{plugin}-query>> |<<string,string>>|No
361
- | <<plugins-{type}s-{plugin}-query_type>> |<<string,string>>, one of `["dsl","esql"]`|No
362
125
  | <<plugins-{type}s-{plugin}-response_type>> |<<string,string>>, one of `["hits","aggregations"]`|No
363
126
  | <<plugins-{type}s-{plugin}-request_timeout_seconds>> | <<number,number>>|No
364
127
  | <<plugins-{type}s-{plugin}-schedule>> |<<string,string>>|No
365
- | <<plugins-{type}s-{plugin}-schedule_overlap>> |<<boolean,boolean>>|No
366
128
  | <<plugins-{type}s-{plugin}-scroll>> |<<string,string>>|No
367
129
  | <<plugins-{type}s-{plugin}-search_api>> |<<string,string>>, one of `["auto", "search_after", "scroll"]`|No
368
130
  | <<plugins-{type}s-{plugin}-size>> |<<number,number>>|No
@@ -382,8 +144,6 @@ This plugin supports the following configuration options plus the <<plugins-{typ
382
144
  | <<plugins-{type}s-{plugin}-ssl_verification_mode>> |<<string,string>>, one of `["full", "none"]`|No
383
145
  | <<plugins-{type}s-{plugin}-socket_timeout_seconds>> | <<number,number>>|No
384
146
  | <<plugins-{type}s-{plugin}-target>> | {logstash-ref}/field-references-deepdive.html[field reference] | No
385
- | <<plugins-{type}s-{plugin}-tracking_field>> |<<string,string>>|No
386
- | <<plugins-{type}s-{plugin}-tracking_field_seed>> |<<string,string>>|No
387
147
  | <<plugins-{type}s-{plugin}-retries>> | <<number,number>>|No
388
148
  | <<plugins-{type}s-{plugin}-user>> |<<string,string>>|No
389
149
  |=======================================================================
@@ -563,17 +323,6 @@ Check out {ref}/api-conventions.html#api-multi-index[Multi Indices
563
323
  documentation] in the Elasticsearch documentation for info on
564
324
  referencing multiple indices.
565
325
 
566
- [id="plugins-{type}s-{plugin}-last_run_metadata_path"]
567
- ===== `last_run_metadata_path`
568
-
569
- * Value type is <<string,string>>
570
- * There is no default value for this setting.
571
-
572
- The path to store the last observed value of the tracking field, when used.
573
- By default this file is stored as `<path.data>/plugins/inputs/elasticsearch/<pipeline_id>/last_run_value`.
574
-
575
- This setting should point to file, not a directory, and Logstash must have read+write access to this file.
576
-
577
326
  [id="plugins-{type}s-{plugin}-password"]
578
327
  ===== `password`
579
328
 
@@ -600,35 +349,22 @@ environment variables e.g. `proxy => '${LS_PROXY:}'`.
600
349
  * Value type is <<string,string>>
601
350
  * Default value is `'{ "sort": [ "_doc" ] }'`
602
351
 
603
- The query to be executed.
604
- Accepted query shape is DSL or {esql} (when `query_type => 'esql'`).
605
- Read the {ref}/query-dsl.html[{es} query DSL documentation] or {ref}/esql.html[{esql} documentation] for more information.
352
+ The query to be executed. Read the {ref}/query-dsl.html[Elasticsearch query DSL
353
+ documentation] for more information.
606
354
 
607
355
  When <<plugins-{type}s-{plugin}-search_api>> resolves to `search_after` and the query does not specify `sort`,
608
356
  the default sort `'{ "sort": { "_shard_doc": "asc" } }'` will be added to the query. Please refer to the {ref}/paginate-search-results.html#search-after[Elasticsearch search_after] parameter to know more.
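
For example, to control the ordering yourself (a minimal sketch; the host, index, and `@timestamp` sort field are placeholders, not taken from this plugin's defaults), provide an explicit `sort` so the `search_after` default is not injected:

[source, ruby]
input {
  elasticsearch {
    hosts => [ "https://localhost:9200" ]
    index => "logs-*"
    # Explicit sort: with search_after pagination the plugin only adds
    # '{ "sort": { "_shard_doc": "asc" } }' when the query has no sort of its own.
    query => '{ "query": { "match_all": {} }, "sort": [ { "@timestamp": "asc" } ] }'
  }
}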
609
357
 
610
- [id="plugins-{type}s-{plugin}-query_type"]
611
- ===== `query_type`
612
-
613
- * Value can be `dsl` or `esql`
614
- * Default value is `dsl`
615
-
616
- Defines the <<plugins-{type}s-{plugin}-query>> shape.
617
- When `dsl`, the query shape must be valid {es} JSON-style string.
618
- When `esql`, the query shape must be a valid {esql} string and `index`, `size`, `slices`, `search_api`, `docinfo`, `docinfo_target`, `docinfo_fields`, `response_type` and `tracking_field` parameters are not allowed.
619
-
620
358
  [id="plugins-{type}s-{plugin}-response_type"]
621
359
  ===== `response_type`
622
360
 
623
- * Value can be any of: `hits`, `aggregations`, `esql`
361
+ * Value can be any of: `hits`, `aggregations`
624
362
  * Default value is `hits`
625
363
 
626
364
  Which part of the result to transform into Logstash events when processing the
627
365
  response from the query.
628
-
629
366
  The default `hits` will generate one event per returned document (i.e. "hit").
630
-
631
- When set to `aggregations`, a single {ls} event will be generated with the
367
+ When set to `aggregations`, a single Logstash event will be generated with the
632
368
  contents of the `aggregations` object of the query's response. In this case the
633
369
 `hits` object will be ignored. The parameter `size` will always be set to
634
370
  0 regardless of the default or user-defined value set in this plugin.
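
For illustration, a minimal sketch of an aggregations-only run (the index name, aggregation field, and schedule are placeholder values) might look like:

[source, ruby]
input {
  elasticsearch {
    hosts => [ "https://localhost:9200" ]
    index => "logs-*"
    # A single event is built from the 'aggregations' object of the response;
    # 'size' is forced to 0, so no per-document events are produced.
    response_type => "aggregations"
    query => '{ "aggs": { "hosts": { "terms": { "field": "host.keyword" } } } }'
    schedule => "*/10 * * * *"
  }
}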
@@ -667,19 +403,6 @@ for example: "* * * * *" (execute query every minute, on the minute)
667
403
  There is no schedule by default. If no schedule is given, then the statement is run
668
404
  exactly once.
669
405
 
670
- [id="plugins-{type}s-{plugin}-schedule_overlap"]
671
- ===== `schedule_overlap`
672
-
673
- * Value type is <<boolean,boolean>>
674
- * Default value is `true`
675
-
676
- Whether to allow queuing of a scheduled run if a run is occurring.
677
- While this is ideal for ensuring a new run happens immediately after the previous one finishes when there
678
- is a lot of work to do, the queue is unbounded, so it may lead to an out-of-memory condition over long periods of time
679
- if the queue grows continuously.
680
-
681
- When in doubt, set `schedule_overlap` to false (it may become the default value in the future).
682
-
683
406
  [id="plugins-{type}s-{plugin}-scroll"]
684
407
  ===== `scroll`
685
408
 
@@ -772,6 +495,8 @@ Enable SSL/TLS secured communication to Elasticsearch cluster.
772
495
  Leaving this unspecified will use whatever scheme is specified in the URLs listed in <<plugins-{type}s-{plugin}-hosts>> or extracted from the <<plugins-{type}s-{plugin}-cloud_id>>.
773
496
  If no explicit protocol is specified plain HTTP will be used.
774
497
 
498
+ When not explicitly set, SSL will be automatically enabled if any of the specified hosts use HTTPS.
499
+
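
For example, in the following sketch (placeholder host and credentials), TLS is used even though `ssl_enabled` is not set, because the host URL uses the `https` scheme:

[source, ruby]
input {
  elasticsearch {
    # No ssl_enabled here: the https:// scheme enables SSL automatically.
    hosts => [ "https://es.example.com:9200" ]
    user => "logstash_reader"
    password => "changeme"
  }
}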
775
500
  [id="plugins-{type}s-{plugin}-ssl_key"]
776
501
  ===== `ssl_key`
777
502
  * Value type is <<path,path>>
@@ -890,28 +615,6 @@ When the `target` is set to a field reference, the `_source` of the hit is place
890
615
  This option can be useful to avoid populating unknown fields when a downstream schema such as ECS is enforced.
891
616
  It is also possible to target an entry in the event's metadata, which will be available during event processing but not exported to your outputs (e.g., `target \=> "[@metadata][_source]"`).
892
617
 
893
- [id="plugins-{type}s-{plugin}-tracking_field"]
894
- ===== `tracking_field`
895
-
896
- * Value type is <<string,string>>
897
- * There is no default value for this setting.
898
-
899
- Which field from the last event of a previous run will be used as a cursor value for the following run.
900
- The value of this field is injected into each query if the query uses the placeholder `:last_value`.
901
- For the first query after a pipeline is started, the value used is either read from <<plugins-{type}s-{plugin}-last_run_metadata_path>> file,
902
- or taken from <<plugins-{type}s-{plugin}-tracking_field_seed>> setting.
903
-
904
- Note: The tracking value is updated after each page is read and at the end of each Point in Time. In case of a crash the last saved value will be used so some duplication of data can occur. For this reason the use of unique document IDs for each event is recommended in the downstream destination.
905
-
906
- [id="plugins-{type}s-{plugin}-tracking_field_seed"]
907
- ===== `tracking_field_seed`
908
-
909
- * Value type is <<string,string>>
910
- * Default value is `"1970-01-01T00:00:00.000000000Z"`
911
-
912
- The starting value for the <<plugins-{type}s-{plugin}-tracking_field>> if there is no <<plugins-{type}s-{plugin}-last_run_metadata_path>> already.
913
- This field defaults to the nanosecond precision ISO8601 representation of `epoch`, or "1970-01-01T00:00:00.000000000Z", given nano-second precision timestamps are the
914
- most reliable data format to use for this feature.
915
618
 
916
619
  [id="plugins-{type}s-{plugin}-user"]
917
620
  ===== `user`
@@ -924,56 +627,21 @@ option when authenticating to the Elasticsearch server. If set to an
924
627
  empty string authentication will be disabled.
925
628
 
926
629
 
927
- [id="plugins-{type}s-{plugin}-deprecated-options"]
928
- ==== Elasticsearch Input deprecated configuration options
630
+ [id="plugins-{type}s-{plugin}-obsolete-options"]
631
+ ==== Elasticsearch Input Obsolete Configuration Options
929
632
 
930
- This plugin supports the following deprecated configurations.
633
+ WARNING: As of version `5.0.0` of this plugin, some configuration options have been replaced.
634
+ The plugin will fail to start if the configuration contains any of these obsolete options (a migration sketch follows the table below).
931
635
 
932
- WARNING: Deprecated options are subject to removal in future releases.
933
636
 
934
- [cols="<,<,<",options="header",]
637
+ [cols="<,<",options="header",]
935
638
  |=======================================================================
936
- |Setting|Input type|Replaced by
937
- | <<plugins-{type}s-{plugin}-ca_file>> |a valid filesystem path|<<plugins-{type}s-{plugin}-ssl_certificate_authorities>>
938
- | <<plugins-{type}s-{plugin}-ssl>> |<<boolean,boolean>>|<<plugins-{type}s-{plugin}-ssl_enabled>>
939
- | <<plugins-{type}s-{plugin}-ssl_certificate_verification>> |<<boolean,boolean>>|<<plugins-{type}s-{plugin}-ssl_verification_mode>>
639
+ |Setting|Replaced by
640
+ | ca_file | <<plugins-{type}s-{plugin}-ssl_certificate_authorities>>
641
+ | ssl | <<plugins-{type}s-{plugin}-ssl_enabled>>
642
+ | ssl_certificate_verification | <<plugins-{type}s-{plugin}-ssl_verification_mode>>
940
643
  |=======================================================================
941
644
 
942
- [id="plugins-{type}s-{plugin}-ca_file"]
943
- ===== `ca_file`
944
- deprecated[4.17.0, Replaced by <<plugins-{type}s-{plugin}-ssl_certificate_authorities>>]
945
-
946
- * Value type is <<path,path>>
947
- * There is no default value for this setting.
948
-
949
- SSL Certificate Authority file in PEM encoded format, must also include any chain certificates as necessary.
950
-
951
- [id="plugins-{type}s-{plugin}-ssl"]
952
- ===== `ssl`
953
- deprecated[4.17.0, Replaced by <<plugins-{type}s-{plugin}-ssl_enabled>>]
954
-
955
- * Value type is <<boolean,boolean>>
956
- * Default value is `false`
957
-
958
- If enabled, SSL will be used when communicating with the Elasticsearch
959
- server (i.e. HTTPS will be used instead of plain HTTP).
960
-
961
-
962
- [id="plugins-{type}s-{plugin}-ssl_certificate_verification"]
963
- ===== `ssl_certificate_verification`
964
- deprecated[4.17.0, Replaced by <<plugins-{type}s-{plugin}-ssl_verification_mode>>]
965
-
966
- * Value type is <<boolean,boolean>>
967
- * Default value is `true`
968
-
969
- Option to validate the server's certificate. Disabling this severely compromises security.
970
- When certificate validation is disabled, this plugin implicitly trusts the machine
971
- resolved at the given address without validating its proof-of-identity.
972
- In this scenario, the plugin can transmit credentials to or process data from an untrustworthy
973
- man-in-the-middle or other compromised infrastructure.
974
- More information on the importance of certificate verification:
975
- **https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf**.
976
-
977
645
  [id="plugins-{type}s-{plugin}-common-options"]
978
646
  include::{include_path}/{type}.asciidoc[]
979
647
 
@@ -12,9 +12,14 @@ module LogStash
12
12
  @client = client
13
13
  @plugin_params = plugin.params
14
14
 
15
- @index = @plugin_params["index"]
16
15
  @size = @plugin_params["size"]
16
+ @query = @plugin_params["query"]
17
17
  @retries = @plugin_params["retries"]
18
+ @agg_options = {
19
+ :index => @plugin_params["index"],
20
+ :size => 0
21
+ }.merge(:body => @query)
22
+
18
23
  @plugin = plugin
19
24
  end
20
25
 
@@ -28,18 +33,10 @@ module LogStash
28
33
  false
29
34
  end
30
35
 
31
- def aggregation_options(query_object)
32
- {
33
- :index => @index,
34
- :size => 0,
35
- :body => query_object
36
- }
37
- end
38
-
39
- def do_run(output_queue, query_object)
36
+ def do_run(output_queue)
40
37
  logger.info("Aggregation starting")
41
38
  r = retryable(AGGREGATION_JOB) do
42
- @client.search(aggregation_options(query_object))
39
+ @client.search(@agg_options)
43
40
  end
44
41
  @plugin.push_hit(r, output_queue, 'aggregations') if r
45
42
  end
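
In short, the aggregation runner now builds its search options once in the constructor, and `do_run` takes only the output queue. A standalone, hypothetical illustration of the option-building step (the index pattern and query are placeholder values, not taken from this changeset):

[source, ruby]
plugin_params = {
  "index" => "logs-*",   # placeholder index pattern
  "query" => '{ "aggs": { "hosts": { "terms": { "field": "host.keyword" } } } }'
}

# Assembled up front, as in the new initializer: aggregations only, so :size is 0
# and the raw query string becomes the request body.
agg_options = {
  :index => plugin_params["index"],
  :size  => 0
}.merge(:body => plugin_params["query"])

# do_run(output_queue) then simply calls @client.search(agg_options) inside the
# retry wrapper and pushes the response's 'aggregations' section as one event.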
@@ -21,10 +21,9 @@ module LogStash
21
21
  @pipeline_id = plugin.pipeline_id
22
22
  end
23
23
 
24
- def do_run(output_queue, query)
25
- @query = query
26
-
24
+ def do_run(output_queue)
27
25
  return retryable_search(output_queue) if @slices.nil? || @slices <= 1
26
+
28
27
  retryable_slice_search(output_queue)
29
28
  end
30
29
 
@@ -123,13 +122,6 @@ module LogStash
123
122
  PIT_JOB = "create point in time (PIT)"
124
123
  SEARCH_AFTER_JOB = "search_after paginated search"
125
124
 
126
- attr_accessor :cursor_tracker
127
-
128
- def do_run(output_queue, query)
129
- super(output_queue, query)
130
- @cursor_tracker.checkpoint_cursor(intermediate: false) if @cursor_tracker
131
- end
132
-
133
125
  def pit?(id)
134
126
  !!id&.is_a?(String)
135
127
  end
@@ -200,8 +192,6 @@ module LogStash
200
192
  end
201
193
  end
202
194
 
203
- @cursor_tracker.checkpoint_cursor(intermediate: true) if @cursor_tracker
204
-
205
195
  logger.info("Query completed", log_details)
206
196
  end
207
197