fluent-plugin-test 0.0.17 → 0.0.24
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/LICENSE.txt +22 -201
- data/README.md +347 -1496
- data/fluent-plugin-cloudwatch-logs.gemspec +29 -0
- data/lib/fluent/plugin/cloudwatch/logs/version.rb +9 -0
- data/lib/fluent/plugin/cloudwatch/logs.rb +11 -0
- data/lib/fluent/plugin/in_cloudwatch_logs.rb +374 -0
- data/lib/fluent/plugin/out_cloudwatch_logs.rb +574 -0
- metadata +48 -127
- data/lib/fluent/log-ext.rb +0 -64
- data/lib/fluent/plugin/filter_opensearch_genid.rb +0 -103
- data/lib/fluent/plugin/in_opensearch.rb +0 -441
- data/lib/fluent/plugin/oj_serializer.rb +0 -48
- data/lib/fluent/plugin/opensearch_constants.rb +0 -39
- data/lib/fluent/plugin/opensearch_error.rb +0 -31
- data/lib/fluent/plugin/opensearch_error_handler.rb +0 -182
- data/lib/fluent/plugin/opensearch_fallback_selector.rb +0 -36
- data/lib/fluent/plugin/opensearch_index_template.rb +0 -155
- data/lib/fluent/plugin/opensearch_simple_sniffer.rb +0 -36
- data/lib/fluent/plugin/opensearch_tls.rb +0 -96
- data/lib/fluent/plugin/out_opensearch.rb +0 -1162
- data/lib/fluent/plugin/out_opensearch_data_stream.rb +0 -231
data/README.md
CHANGED
````diff
@@ -1,1524 +1,379 @@
-#
-
-[](http://badge.fury.io/rb/fluent-plugin-opensearch)
-
-
-
-[](https://coveralls.io/github/fluent/fluent-plugin-opensearch?branch=main)
-
-Send your logs to OpenSearch (and search them with OpenSearch Dashboards maybe?)
-
-* [Installation](#installation)
-* [Usage](#usage)
-  + [Index templates](#index-templates)
-* [Configuration](#configuration)
-  + [host](#host)
-  + [port](#port)
-  + [emit_error_for_missing_id](#emit_error_for_missing_id)
-  + [hosts](#hosts)
-  + [user, password, path, scheme, ssl_verify](#user-password-path-scheme-ssl_verify)
-  + [logstash_format](#logstash_format)
-  + [logstash_prefix](#logstash_prefix)
-  + [logstash_prefix_separator](#logstash_prefix_separator)
-  + [logstash_dateformat](#logstash_dateformat)
-  + [pipeline](#pipeline)
-  + [time_key_format](#time_key_format)
-  + [time_precision](#time_precision)
-  + [time_key](#time_key)
-  + [time_key_exclude_timestamp](#time_key_exclude_timestamp)
-  + [include_timestamp](#include_timestamp)
-  + [utc_index](#utc_index)
-  + [suppress_type_name](#suppress_type_name)
-  + [target_index_key](#target_index_key)
-  + [target_type_key](#target_type_key)
-  + [target_index_affinity](#target_index_affinity)
-  + [template_name](#template_name)
-  + [template_file](#template_file)
-  + [template_overwrite](#template_overwrite)
-  + [customize_template](#customize_template)
-  + [index_date_pattern](#index_date_pattern)
-  + [application_name](#application_name)
-  + [index_prefix](#index_prefix)
-  + [templates](#templates)
-  + [max_retry_putting_template](#max_retry_putting_template)
-  + [fail_on_putting_template_retry_exceed](#fail_on_putting_template_retry_exceed)
-  + [fail_on_detecting_os_version_retry_exceed](#fail_on_detecting_os_version_retry_exceed)
-  + [max_retry_get_os_version](#max_retry_get_os_version)
-  + [request_timeout](#request_timeout)
-  + [reload_connections](#reload_connections)
-  + [reload_on_failure](#reload_on_failure)
-  + [resurrect_after](#resurrect_after)
-  + [include_tag_key, tag_key](#include_tag_key-tag_key)
-  + [id_key](#id_key)
-  + [parent_key](#parent_key)
-  + [routing_key](#routing_key)
-  + [remove_keys](#remove_keys)
-  + [remove_keys_on_update](#remove_keys_on_update)
-  + [remove_keys_on_update_key](#remove_keys_on_update_key)
-  + [retry_tag](#retry_tag)
-  + [write_operation](#write_operation)
-  + [time_parse_error_tag](#time_parse_error_tag)
-  + [reconnect_on_error](#reconnect_on_error)
-  + [with_transporter_log](#with_transporter_log)
-  + [content_type](#content_type)
-  + [include_index_in_url](#include_index_in_url)
-  + [http_backend](#http_backend)
-  + [http_backend_excon_nonblock](#http_backend_excon_nonblock)
-  + [prefer_oj_serializer](#prefer_oj_serializer)
-  + [compression_level](#compression_level)
-  + [Client/host certificate options](#clienthost-certificate-options)
-  + [Proxy Support](#proxy-support)
-  + [Buffer options](#buffer-options)
-  + [Hash flattening](#hash-flattening)
-  + [Generate Hash ID](#generate-hash-id)
-  + [sniffer_class_name](#sniffer-class-name)
-  + [selector_class_name](#selector-class-name)
-  + [reload_after](#reload-after)
-  + [validate_client_version](#validate-client-version)
-  + [unrecoverable_error_types](#unrecoverable-error-types)
-  + [unrecoverable_record_types](#unrecoverable-record-types)
-  + [emit_error_label_event](#emit-error-label-event)
-  + [verify os version at startup](#verify_os_version_at_startup)
-  + [default_opensearch_version](#default_opensearch_version)
-  + [custom_headers](#custom_headers)
-  + [api_key](#api_key)
-  + [Not seeing a config you need?](#not-seeing-a-config-you-need)
-  + [Dynamic configuration](#dynamic-configuration)
-  + [Placeholders](#placeholders)
-  + [Multi workers](#multi-workers)
-  + [log_os_400_reason](#log_os_400_reason)
-  + [suppress_doc_wrap](#suppress_doc_wrap)
-  + [ignore_exceptions](#ignore_exceptions)
-  + [exception_backup](#exception_backup)
-  + [bulk_message_request_threshold](#bulk_message_request_threshold)
-  + [truncate_caches_interval](#truncate_caches_interval)
-  + [use_legacy_template](#use_legacy_template)
-  + [metadata section](#metadata-section)
-  + [include_chunk_id](#include_chunk_id)
-  + [chunk_id_key](#chunk_id_key)
-* [Configuration - OpenSearch Input](#configuration---opensearch-input)
-* [Configuration - OpenSearch Filter GenID](#configuration---opensearch-filter-genid)
-* [Configuration - OpenSearch Output Data Stream](#configuration---opensearch-output-data-stream)
-* [Configuration - AWS OpenSearch Service](#configuration---aws-opensearch-service)
-* [Troubleshooting](#troubleshooting)
-* [Contact](#contact)
-* [Contributing](#contributing)
-* [Running tests](#running-tests)
+# fluent-plugin-cloudwatch-logs
 
-
-
-| fluent-plugin-opensearch | fluentd | ruby |
-|:----------------------------:|:-----------:|:------:|
-| >= 1.0.0 | >= v1.x | >= 2.4 |
-
-NOTE: Since fluent-plugin-opensearch 1.1.0, it requires faraday 2.0 or later.
-
-NOTE: This documentation is for fluent-plugin-opensearch 1.x or later.
-
-## Installation
-
-```sh
-$ gem install fluent-plugin-opensearch
-```
-
-## Usage
-
-In your Fluentd configuration, use `@type opensearch`. Additional configuration is optional, default values would look like this:
-
-```
-<match my.logs>
-  @type opensearch
-  host localhost
-  port 9200
-  index_name fluentd
-</match>
-```
-
-NOTE: `type_name` parameter is fixed value and cannot change and configure from `_doc` value for OpenSearch 1.
-
-### Index templates
-
-This plugin creates OpenSearch indices by merely writing to them. Consider using [Index Templates](https://opensearch.org/docs/latest/opensearch/index-templates/) to gain control of what get indexed and how.
-
-## Configuration
-
-### host
-
-```
-host user-custom-host.domain # default localhost
-```
-
-You can specify OpenSearch host by this parameter.
-
-To use IPv6 address on `host` parameter, you can use the following styles:
-
-#### string style
-
-To use string style, you must quote IPv6 address due to prevent to be interpreted as JSON:
-
-```
-host "[2404:7a80:d440:3000:192a:a292:bd7f:ca10]"
-```
-
-#### raw style
-
-You can also specify raw IPv6 address. This will be handled as `[specified IPv6 address]`:
-
-```
-host 2404:7a80:d440:3000:192a:a292:bd7f:ca10
-```
-
-### port
-
-```
-port 9201 # defaults to 9200
-```
-
-You can specify OpenSearch port by this parameter.
-
-### emit_error_for_missing_id
-
-```
-emit_error_for_missing_id true
-```
-When `write_operation` is configured to anything other then `index`, setting this value to `true` will
-cause the plugin to `emit_error_event` of any records which do not include an `_id` field. The default (`false`)
-behavior is to silently drop the records.
-
-### hosts
-
-```
-hosts host1:port1,host2:port2,host3:port3
-```
-
-You can specify multiple OpenSearch hosts with separator ",".
-
-If you specify multiple hosts, this plugin will load balance updates to OpenSearch. This is an [opensearch-ruby](https://github.com/opensearch-project/opensearch-ruby) feature, the default strategy is round-robin.
-
-If you specify `hosts` option, `host` and `port` options are ignored.
-
-```
-host user-custom-host.domain # ignored
-port 9200 # ignored
-hosts host1:port1,host2:port2,host3:port3
-```
-
-If you specify `hosts` option without port, `port` option is used.
-
-```
-port 9200
-hosts host1:port1,host2:port2,host3 # port3 is 9200
-```
-
-**Note:** If you will use scheme https, do not include "https://" in your hosts ie. host "https://domain", this will cause ES cluster to be unreachable and you will receive an error "Can not reach OpenSearch cluster"
-
-**Note:** Embedded the username/password in the URL syntax is not recommended to use because it was found to cause serious connection problems. Please do not use its style on your settings and use the `user` and `password` field (described below) instead.
-
````
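To make the two removed notes above concrete, here is a minimal Fluentd sketch of an HTTPS multi-host setup that follows that advice (hostnames and credentials are placeholders): the scheme is set separately, `hosts` carries no `https://` prefix, and credentials live in `user`/`password` rather than in the URL.

```aconf
<match my.logs>
  @type opensearch
  scheme https                                     # TLS is selected here, not in the host list
  hosts os1.example.com:9200,os2.example.com:9200  # no "https://" prefix on the hosts
  user fluentd                                     # placeholder credentials, kept out of the URL
  password secret
</match>
```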
````diff
-#### IPv6 addresses
-
-When you want to specify IPv6 addresses, you must specify schema together:
-
-```
-hosts http://[2404:7a80:d440:3000:de:7311:6329:2e6c]:port1,http://[2404:7a80:d440:3000:de:7311:6329:1e6c]:port2,http://[2404:7a80:d440:3000:de:6311:6329:2e6c]:port3
-```
-
-If you don't specify hosts with schema together, OpenSearch plugin complains Invalid URI for them.
-
-### user, password, path, scheme, ssl_verify
-
-```
-user demo
-password secret
-path /elastic_search/
-scheme https
-```
-
-You can specify user and password for HTTP Basic authentication.
-
-And this plugin will escape required URL encoded characters within `%{}` placeholders.
-
-```
-user %{demo+}
-password %{@secret}
-```
-
-Specify `ssl_verify false` to skip ssl verification (defaults to true)
-
-### logstash_format
-
-```
-logstash_format true # defaults to false
-```
-
-This is meant to make writing data into OpenSearch indices compatible to what [Logstash](https://www.elastic.co/products/logstash) calls them. By doing this, one could take advantage of [Kibana](https://www.elastic.co/products/kibana). See logstash\_prefix and logstash\_dateformat to customize this index name pattern. The index name will be `#{logstash_prefix}-#{formatted_date}`
-
-:warning: Setting this option to `true` will ignore the `index_name` setting. The default index name prefix is `logstash-`.
-
-### include_timestamp
-
-```
-include_timestamp true # defaults to false
-```
-
-Adds a `@timestamp` field to the log, following all settings `logstash_format` does, except without the restrictions on `index_name`. This allows one to log to an alias in OpenSearch and utilize the rollover API.
-
-### logstash_prefix
-
-```
-logstash_prefix mylogs # defaults to "logstash"
-```
-
-### logstash_prefix_separator
-
-```
-logstash_prefix_separator _ # defaults to "-"
-```
-
-### logstash_dateformat
-
-The strftime format to generate index target index name when `logstash_format` is set to true. By default, the records are inserted into index `logstash-YYYY.MM.DD`. This option, alongwith `logstash_prefix` lets us insert into specified index like `mylogs-YYYYMM` for a monthly index.
-
-```
-logstash_dateformat %Y.%m. # defaults to "%Y.%m.%d"
-```
-
-### pipeline
-
-This param is to set a pipeline id of your opensearch to be added into the request, you can configure ingest node.
-
-```
-pipeline pipeline_id
-```
-
-### time_key_format
-
-The format of the time stamp field (`@timestamp` or what you specify with [time_key](#time_key)). This parameter only has an effect when [logstash_format](#logstash_format) is true as it only affects the name of the index we write to. Please see [Time#strftime](http://ruby-doc.org/core-1.9.3/Time.html#method-i-strftime) for information about the value of this format.
-
-Setting this to a known format can vastly improve your log ingestion speed if all most of your logs are in the same format. If there is an error parsing this format the timestamp will default to the ingestion time. You can get a further performance improvement by installing the "strptime" gem: `fluent-gem install strptime`.
-
-For example to parse ISO8601 times with sub-second precision:
-
-```
-time_key_format %Y-%m-%dT%H:%M:%S.%N%z
-```
-
-### time_precision
-
-Should the record not include a `time_key`, define the degree of sub-second time precision to preserve from the `time` portion of the routed event.
-
-For example, should your input plugin not include a `time_key` in the record but it able to pass a `time` to the router when emitting the event (AWS CloudWatch events are an example of this), then this setting will allow you to preserve the sub-second time resolution of those events. This is the case for: [fluent-plugin-cloudwatch-ingest](https://github.com/sampointer/fluent-plugin-cloudwatch-ingest).
-
````
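The removed `time_precision` section above carries no snippet of its own; as a hedged illustration (the value is hypothetical), the parameter takes the number of sub-second digits to keep from the routed event's `time`:

```aconf
<match my.logs>
  @type opensearch
  logstash_format true
  time_precision 3   # hypothetical: keep millisecond resolution from the event time
</match>
```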
````diff
-### time_key
-
-By default, when inserting records in [Logstash](https://www.elastic.co/products/logstash) format, `@timestamp` is dynamically created with the time at log ingestion. If you'd like to use a custom time, include an `@timestamp` with your record.
-
-```
-{"@timestamp": "2014-04-07T000:00:00-00:00"}
-```
-
-You can specify an option `time_key` (like the option described in [tail Input Plugin](http://docs.fluentd.org/articles/in_tail)) to replace `@timestamp` key.
-
-Suppose you have settings
-
-```
-logstash_format true
-time_key vtm
-```
-
-Your input is:
-```
-{
-  "title": "developer",
-  "vtm": "2014-12-19T08:01:03Z"
-}
-```
-
-The output will be
-```
-{
-  "title": "developer",
-  "@timestamp": "2014-12-19T08:01:03Z",
-  "vtm": "2014-12-19T08:01:03Z"
-}
-```
-
-See `time_key_exclude_timestamp` to avoid adding `@timestamp`.
-
-### time_key_exclude_timestamp
-
-```
-time_key_exclude_timestamp false
-```
-
-By default, setting `time_key` will copy the value to an additional field `@timestamp`. When setting `time_key_exclude_timestamp true`, no additional field will be added.
-
-### utc_index
-
-```
-utc_index true
-```
-
-By default, the records inserted into index `logstash-YYMMDD` with UTC (Coordinated Universal Time). This option allows to use local time if you describe utc_index to false.
-
-
-### suppress_type_name
-
-If OpenSearch cluster complains types removal warnings, this can be suppressed with:
-
-```
-suppress_type_name true
-```
-
-### target_index_key
-
-Tell this plugin to find the index name to write to in the record under this key in preference to other mechanisms. Key can be specified as path to nested record using dot ('.') as a separator.
-
-If it is present in the record (and the value is non falsy) the value will be used as the index name to write to and then removed from the record before output; if it is not found then it will use logstash_format or index_name settings as configured.
-
-Suppose you have the following settings
-
-```
-target_index_key @target_index
-index_name fallback
-```
-
-If your input is:
-```
-{
-  "title": "developer",
-  "@timestamp": "2014-12-19T08:01:03Z",
-  "@target_index": "logstash-2014.12.19"
-}
-```
-
-The output would be
-
-```
-{
-  "title": "developer",
-  "@timestamp": "2014-12-19T08:01:03Z",
-}
-```
-
-and this record will be written to the specified index (`logstash-2014.12.19`) rather than `fallback`.
-
-### target_type_key
-
-Similar to `target_index_key` config, find the type name to write to in the record under this key (or nested record). If key not found in record - fallback to `type_name` (default "fluentd").
-
-### target_index_affinity
-
-Enable plugin to dynamically select logstash time based target index in update/upsert operations based on already indexed data rather than current time of indexing.
-
-```
-target_index_affinity true # defaults to false
-```
-
-By default plugin writes data of logstash format index based on current time. For example daily based index after mignight data is written to newly created index. This is normally ok when data is coming from single source and not updated after indexing.
-
-But if you have a use case where data is also updated after indexing and `id_key` is used to identify the document uniquely for updating. Logstash format is wanted to be used for easy data managing and retention. Updates are done right after indexing to complete the data (all data not available from single source) and no updates are done anymore later point on time. In this case problem happends at index rotation time where write to 2 indexes with same id_key value may happen.
-
-This setting will search existing data by using elastic search's [id query](https://www.elastic.co/guide/en/opensearch/reference/current/query-dsl-ids-query.html) using `id_key` value (with logstash_prefix and logstash_prefix_separator index pattarn e.g. `logstash-*`). The index of found data is used for update/upsert. When no data is found, data is written to current logstash index as normally.
-
-This setting requires following other settings:
-```
-logstash_format true
-id_key myId # Some field on your data to identify the data uniquely
-write_operation upsert # upsert or update
-```
-
-Suppose you have the following situation where you have 2 different match to consume data from 2 different Kafka topics independently but close in time with each other (order not known).
-
-```
-<match data1>
-  @type opensearch
-  ...
-  id_key myId
-  write_operation upsert
-  logstash_format true
-  logstash_dateformat %Y.%m.%d
-  logstash_prefix myindexprefix
-  target_index_affinity true
-  ...
-
-<match data2>
-  @type opensearch
-  ...
-  id_key myId
-  write_operation upsert
-  logstash_format true
-  logstash_dateformat %Y.%m.%d
-  logstash_prefix myindexprefix
-  target_index_affinity true
-  ...
-```
-
-If your first (data1) input is:
-```
-{
-  "myId": "myuniqueId1",
-  "datafield1": "some value",
-}
-```
-
-and your second (data2) input is:
-```
-{
-  "myId": "myuniqueId1",
-  "datafield99": "some important data from other source tightly related to id myuniqueId1 and wanted to be in same document.",
-}
-```
-
-Date today is 10.05.2021 so data is written to index `myindexprefix-2021.05.10` when both data1 and data2 is consumed during today.
-But when we are close to index rotation and data1 is consumed and indexed at `2021-05-10T23:59:55.59707672Z` and data2
-is consumed a bit later at `2021-05-11T00:00:58.222079Z` i.e. logstash index has been rotated and normally data2 would have been written
-to index `myindexprefix-2021.05.11`. But with target_index_affinity setting as value true, data2 is now written to index `myindexprefix-2021.05.10`
-into same document with data1 as wanted and duplicated document is avoided.
-
-### template_name
-
-The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless [template_overwrite](#template_overwrite) is set, in which case the template will be updated.
-
-This parameter along with template_file allow the plugin to behave similarly to Logstash (it installs a template at creation time) so that raw records are available. See [https://github.com/uken/fluent-plugin-elasticsearch/issues/33](https://github.com/uken/fluent-plugin-elasticsearch/issues/33).
-
-[template_file](#template_file) must also be specified.
-
-### template_file
-
-The path to the file containing the template to install.
-
-[template_name](#template_name) must also be specified.
-
-### templates
-
-Specify index templates in form of hash. Can contain multiple templates.
-
-```
-templates { "template_name_1": "path_to_template_1_file", "template_name_2": "path_to_template_2_file"}
-```
-
-**Note:** Before ES plugin v4.1.2, if `template_file` and `template_name` are set, then this parameter will be ignored. In 4.1.3 or later, `template_file` and `template_name` can work with `templates`.
-
-### customize_template
-
-Specify the string and its value to be replaced in form of hash. Can contain multiple key value pair that would be replaced in the specified template_file.
-
-```
-customize_template {"string_1": "subs_value_1", "string_2": "subs_value_2"}
-```
-
-If [template_file](#template_file) and [template_name](#template_name) are set, then this parameter will be in effect otherwise ignored.
-
-### index_date_pattern
-
-Specify this to override the index date pattern for creating a rollover index. The default is to use "now/d",
-for example: <logstash-default-{now/d}-000001>. Overriding this changes the rollover time period. Setting
-"now/w{xxxx.ww}" would create weekly rollover indexes instead of daily.
-
-```
-index_date_pattern "now/w{xxxx.ww}" # defaults to "now/d"
-```
-
-If empty string(`""`) is specified in `index_date_pattern`, index date pattern is not used.
-OpenSearch plugin just creates <`target_index`-`application_name`-000001> rollover index instead of <`target_index`-`application_name`-`{index_date_pattern}`-000001>.
-
-If [customize_template](#customize_template) is set, then this parameter will be in effect otherwise ignored.
-
-### index_prefix
-
-This parameter is marked as obsoleted.
-
-### application_name
-
-Specify the application name for the rollover index to be created.
-```
-application_name default # defaults to "default"
-```
-
-### template_overwrite
-
-Always update the template, even if it already exists.
-
-```
-template_overwrite true # defaults to false
-```
-
-One of [template_file](#template_file) or [templates](#templates) must also be specified if this is set.
-
-### max_retry_putting_template
-
-You can specify times of retry putting template.
-
-This is useful when OpenSearch plugin cannot connect OpenSearch to put template.
-Usually, booting up clustered OpenSearch containers are much slower than launching Fluentd container.
-
-```
-max_retry_putting_template 15 # defaults to 10
-```
-
-### fail_on_putting_template_retry_exceed
-
-Indicates whether to fail when `max_retry_putting_template` is exceeded.
-If you have multiple output plugin, you could use this property to do not fail on fluentd statup.
-
-```
-fail_on_putting_template_retry_exceed false # defaults to true
-```
-
-### fail_on_detecting_os_version_retry_exceed
-
-Indicates whether to fail when `max_retry_get_os_version` is exceeded.
-If you want to use fallback mechanism for obtaining OpenSearch version, you could use this property to do not fail on fluentd statup.
-
-```
-fail_on_detecting_os_version_retry_exceed false
-```
-
-And the following parameters should be working with:
-
-```
-verify_os_version_at_startup true
-max_retry_get_os_version 2 # greater than 0.
-default_opensearch_version 1 # This version is used when occurring fallback.
-```
-
-### max_retry_get_os_version
-
-You can specify times of retry obtaining OpenSearch version.
-
-This is useful when OpenSearch plugin cannot connect OpenSearch to obtain OpenSearch version.
-Usually, booting up clustered OpenSearch containers are much slower than launching Fluentd container.
-
-```
-max_retry_get_os_version 17 # defaults to 15
-```
-
-### request_timeout
-
-You can specify HTTP request timeout.
-
-This is useful when OpenSearch cannot return response for bulk request within the default of 5 seconds.
-
-```
-request_timeout 15s # defaults to 5s
-```
-
-### reload_connections
-
-You can tune how the opensearch-transport host reloading feature works. By default it off but if you set true it will reload the host list from the server every 10,000th request to spread the load. This can be an issue if your OpenSearch cluster is behind a Reverse Proxy, as Fluentd process may not have direct network access to the OpenSearch nodes.
-
-```
-reload_connections false # defaults to false
-```
-
-### reload_on_failure
-
-Indicates that the opensearch-transport will try to reload the nodes addresses if there is a failure while making the
-request, this can be useful to quickly remove a dead node from the list of addresses.
-
-```
-reload_on_failure true # defaults to false
-```
-
-### resurrect_after
-
-You can set in the opensearch-transport how often dead connections from the opensearch-transport's pool will be resurrected.
-
-```
-resurrect_after 5s # defaults to 60s
-```
-
-### include_tag_key, tag_key
-
-```
-include_tag_key true # defaults to false
-tag_key tag # defaults to tag
-```
-
-This will add the Fluentd tag in the JSON record. For instance, if you have a config like this:
-
-```
-<match my.logs>
-  @type opensearch
-  include_tag_key true
-  tag_key _key
-</match>
-```
-
-The record inserted into OpenSearch would be
-
-```
-{"_key": "my.logs", "name": "Johnny Doeie"}
-```
-
-### id_key
-
-```
-id_key request_id # use "request_id" field as a record id in ES
-```
-
-By default, all records inserted into OpenSearch get a random _id. This option allows to use a field in the record as an identifier.
-
-This following record `{"name": "Johnny", "request_id": "87d89af7daffad6"}` will trigger the following OpenSearch command
-
-```
-{ "index" : { "_index": "logstash-2013.01.01", "_type": "fluentd", "_id": "87d89af7daffad6" } }
-{ "name": "Johnny", "request_id": "87d89af7daffad6" }
-```
-
-Fluentd re-emits events that failed to be indexed/ingested in OpenSearch with a new and unique `_id` value, this means that congested OpenSearch clusters that reject events (due to command queue overflow, for example) will cause Fluentd to re-emit the event with a new `_id`, however OpenSearch may actually process both (or more) attempts (with some delay) and create duplicate events in the index (since each have a unique `_id` value), one possible workaround is to use the [fluent-plugin-genhashvalue](https://github.com/mtakemi/fluent-plugin-genhashvalue) plugin to generate a unique `_hash` key in the record of each event, this `_hash` record can be used as the `id_key` to prevent OpenSearch from creating duplicate events.
-
-```
-id_key _hash
-```
-
-Example configuration for [fluent-plugin-genhashvalue](https://github.com/mtakemi/fluent-plugin-genhashvalue) (review the documentation of the plugin for more details)
-```
-<filter logs.**>
-  @type genhashvalue
-  keys session_id,request_id
-  hash_type md5 # md5/sha1/sha256/sha512
-  base64_enc true
-  base91_enc false
-  set_key _hash
-  separator _
-  inc_time_as_key true
-  inc_tag_as_key true
-</filter>
-```
-
-:warning: In order to avoid hash-collisions and loosing data careful consideration is required when choosing the keys in the event record that should be used to calculate the hash
-
-#### Using nested key
-
-Nested key specifying syntax is also supported.
-
-With the following configuration
-
-```aconf
-id_key $.nested.request_id
-```
-
-and the following nested record
-
-```json
-{"nested":{"name": "Johnny", "request_id": "87d89af7daffad6"}}
-```
-
-will trigger the following OpenSearch command
-
-```
-{"index":{"_index":"fluentd","_type":"fluentd","_id":"87d89af7daffad6"}}
-{"nested":{"name":"Johnny","request_id":"87d89af7daffad6"}}
-```
-
-:warning: Note that [Hash flattening](#hash-flattening) may be conflict nested record feature.
-
-### parent_key
-
-```
-parent_key a_parent # use "a_parent" field value to set _parent in opensearch command
-```
-
-If your input is
-```
-{ "name": "Johnny", "a_parent": "my_parent" }
-```
-
-OpenSearch command would be
-
-```
-{ "index" : { "_index": "****", "_type": "****", "_id": "****", "_parent": "my_parent" } }
-{ "name": "Johnny", "a_parent": "my_parent" }
-```
-
-if `parent_key` is not configed or the `parent_key` is absent in input record, nothing will happen.
-
-#### Using nested key
-
-Nested key specifying syntax is also supported.
-
-With the following configuration
-
-```aconf
-parent_key $.nested.a_parent
-```
-
-and the following nested record
-
-```json
-{"nested":{ "name": "Johnny", "a_parent": "my_parent" }}
-```
-
-will trigger the following OpenSearch command
-
-```
-{"index":{"_index":"fluentd","_type":"fluentd","_parent":"my_parent"}}
-{"nested":{"name":"Johnny","a_parent":"my_parent"}}
-```
-
-:warning: Note that [Hash flattening](#hash-flattening) may be conflict nested record feature.
-
-### routing_key
-
-Similar to `parent_key` config, will add `_routing` into opensearch command if `routing_key` is set and the field does exist in input event.
-
-### remove_keys
-
-```
-parent_key a_parent
-routing_key a_routing
-remove_keys a_parent, a_routing # a_parent and a_routing fields won't be sent to opensearch
-```
-
-### remove_keys_on_update
-
-Remove keys on update will not update the configured keys in opensearch when a record is being updated.
-This setting only has any effect if the write operation is update or upsert.
-
-If the write setting is upsert then these keys are only removed if the record is being
-updated, if the record does not exist (by id) then all of the keys are indexed.
-
-```
-remove_keys_on_update foo,bar
-```
-
-### remove_keys_on_update_key
-
-This setting allows `remove_keys_on_update` to be configured with a key in each record, in much the same way as `target_index_key` works.
-The configured key is removed before indexing in opensearch. If both `remove_keys_on_update` and `remove_keys_on_update_key` is
-present in the record then the keys in record are used, if the `remove_keys_on_update_key` is not present then the value of
-`remove_keys_on_update` is used as a fallback.
-
-```
-remove_keys_on_update_key keys_to_skip
-```
-
-### retry_tag
-
-This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit
-failed records using the same tag that was provided. When set to a value other then `nil`, failed messages are emitted
-with the specified tag:
-
-```
-retry_tag 'retry_es'
-```
-**NOTE:** `retry_tag` is optional. If you would rather use labels to reroute retries, add a label (e.g '@label @SOMELABEL') to your fluent
-opensearch plugin configuration. Retry records are, by default, submitted for retry to the ROOT label, which means
-records will flow through your fluentd pipeline from the beginning. This may nor may not be a problem if the pipeline
-is idempotent - that is - you can process a record again with no changes. Use tagging or labeling to ensure your retry
-records are not processed again by your fluentd processing pipeline.
-
````
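As a sketch of the rerouting the removed note describes (tag names are hypothetical), a dedicated retry tag keeps failed chunks from re-entering upstream filters:

```aconf
<match my.logs>
  @type opensearch
  retry_tag retry_os       # failed bulk chunks are re-emitted under this tag
  # ...
</match>

# retries land here instead of flowing through the pipeline from the beginning
<match retry_os>
  @type opensearch
  # ...
</match>
```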
````diff
-### write_operation
-
-The write_operation can be any of:
-
-| Operation | Description |
-| ------------- | ----------- |
-| index (default) | new data is added while existing data (based on its id) is replaced (reindexed).|
-| create | adds new data - if the data already exists (based on its id), the op is skipped.|
-| update | updates existing data (based on its id). If no data is found, the op is skipped.|
-| upsert | known as merge or insert if the data does not exist, updates if the data exists (based on its id).|
-
-**Please note, id is required in create, update, and upsert scenario. Without id, the message will be dropped.**
-
````
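For example, an upsert setup consistent with the removed table must pair the operation with an `id_key` so every record carries an id (the field name here is hypothetical):

```aconf
<match my.logs>
  @type opensearch
  write_operation upsert   # create/update/upsert all need an id
  id_key request_id        # records missing this field would be dropped
</match>
```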
````diff
-### time_parse_error_tag
-
-With `logstash_format true`, opensearch plugin parses timestamp field for generating index name. If the record has invalid timestamp value, this plugin emits an error event to `@ERROR` label with `time_parse_error_tag` configured tag.
-
-Default value is `opensearch_plugin.output.time.error`. Note that this default values is quite different from Elasticsearch plugin.
-
````
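A hedged sketch of overriding the tag and observing the error events under the standard `@ERROR` label (the tag value is hypothetical):

```aconf
<match my.logs>
  @type opensearch
  logstash_format true
  time_parse_error_tag opensearch.time_error   # hypothetical tag
</match>

<label @ERROR>
  <match opensearch.time_error>
    @type stdout   # inspect records whose timestamp failed to parse
  </match>
</label>
```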
````diff
-### reconnect_on_error
-Indicates that the plugin should reset connection on any error (reconnect on next send).
-By default it will reconnect only on "host unreachable exceptions".
-We recommended to set this true in the presence of opensearch shield.
-```
-reconnect_on_error true # defaults to false
-```
-
-### with_transporter_log
-
-This is debugging purpose option to enable to obtain transporter layer log.
-Default value is `false` for backward compatibility.
-
-We recommend to set this true if you start to debug this plugin.
-
-```
-with_transporter_log true
-```
-
-### content_type
-
-With `content_type application/x-ndjson`, opensearch plugin adds `application/x-ndjson` as `Content-Type` in payload.
-
-Default value is `application/json` which is default Content-Type of OpenSearch requests.
-If you will not use template, it recommends to set `content_type application/x-ndjson`.
-
-```
-content_type application/x-ndjson
-```
-
-### include_index_in_url
-
-With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body).
-You can use this option to enforce an URL-based access control.
-
-```
-include_index_in_url true
-```
-
-### http_backend
-
-With `http_backend typhoeus`, opensearch plugin uses typhoeus faraday http backend.
-Typhoeus can handle HTTP keepalive.
-
-Default value is `excon` which is default http_backend of opensearch plugin.
-
-```
-http_backend typhoeus
-```
-
-### http_backend_excon_nonblock
-
-With `http_backend_excon_nonblock false`, opensearch plugin use excon with nonblock=false.
-If you use opensearch plugin with jRuby for https, you may need to consider to set `false` to avoid follwoing problems.
-- https://github.com/geemus/excon/issues/106
-- https://github.com/jruby/jruby-ossl/issues/19
-
-But for all other case, it strongly reccomend to set `true` to avoid process hangin problem reported in https://github.com/uken/fluent-plugin-elasticsearch/issues/732
-
-Default value is `true`.
-
-```
-http_backend_excon_nonblock false
-```
+[](http://badge.fury.io/rb/fluent-plugin-cloudwatch-logs)
 
-
-You can add gzip compression of output data. In this case `default_compression`, `best_compression` or `best speed` option should be chosen.
-By default there is no compression, default value for this option is `no_compression`
-```
-compression_level best_compression
-```
-
-### prefer_oj_serializer
-
-With default behavior, OpenSearch client uses `Yajl` as JSON encoder/decoder.
-`Oj` is the alternative high performance JSON encoder/decoder.
-When this parameter sets as `true`, OpenSearch client uses `Oj` as JSON encoder/decoder.
-
-Default value is `false`.
-
-```
-prefer_oj_serializer true
-```
-
-### Client/host certificate options
-
-Need to verify OpenSearch's certificate? You can use the following parameter to specify a CA instead of using an environment variable.
-```
-ca_file /path/to/your/ca/cert
-```
-
-Does your OpenSearch cluster want to verify client connections? You can specify the following parameters to use your client certificate, key, and key password for your connection.
-```
-client_cert /path/to/your/client/cert
-client_key /path/to/your/private/key
-client_key_pass password
-```
-
-If you want to configure SSL/TLS version, you can specify ssl\_version parameter.
-```
-ssl_version TLSv1_2 # or [SSLv23, TLSv1, TLSv1_1]
-```
-
-:warning: If SSL/TLS enabled, it might have to be required to set ssl\_version.
-
-In OpenSearch plugin v4.0.2 with Ruby 2.5 or later combination, OpenSearch plugin also support `ssl_max_version` and `ssl_min_version`.
-
-```
-ssl_max_version TLSv1_3
-ssl_min_version TLSv1_2
-```
-
-OpenSearch plugin will use TLSv1.2 as minimum ssl version and TLSv1.3 as maximum ssl version on transportation with TLS. Note that when they are used in Elastissearch plugin configuration, *`ssl_version` is not used* to set up TLS version.
-
-If they are *not* specified in the OpenSearch plugin configuration, `ssl_max_version` and `ssl_min_version` is set up with:
-
-In OpenSearch plugin v1.0.0 or later with Ruby 2.5 or later environment, `ssl_max_version` should be `TLSv1_3` and `ssl_min_version` should be `TLSv1_2`.
-
-### Proxy Support
-
-Starting with version 0.8.0, this gem uses excon, which supports proxy with environment variables - https://github.com/excon/excon#proxy-support
-
-### Buffer options
-
-`fluentd-plugin-opensearch` extends [Fluentd's builtin Output plugin](https://docs.fluentd.org/output#overview) and use `compat_parameters` plugin helper. It adds the following options:
-
-```
-buffer_type memory
-flush_interval 60s
-retry_limit 17
-retry_wait 1.0
-num_threads 1
-```
-
-The value for option `buffer_chunk_limit` should not exceed value `http.max_content_length` in your OpenSearch setup (by default it is 100mb).
-
-**Note**: If you use or evaluate Fluentd v0.14, you can use `<buffer>` directive to specify buffer configuration, too. In more detail, please refer to the [buffer configuration options for v0.14](https://docs.fluentd.org/v0.14/articles/buffer-plugin-overview#configuration-parameters)
-
-**Note**: If you use `disable_retry_limit` in v0.12 or `retry_forever` in v0.14 or later, please be careful to consume memory inexhaustibly.
-
````
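As a sketch of the `<buffer>` directive form that the removed note mentions, mirroring the flat options above with their modern buffer-section names (`retry_max_times` and `flush_thread_count` correspond to the legacy `retry_limit` and `num_threads`):

```aconf
<match my.logs>
  @type opensearch
  <buffer>
    @type memory
    flush_interval 60s
    retry_max_times 17
    retry_wait 1.0
    flush_thread_count 1
  </buffer>
</match>
```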
````diff
-### Hash flattening
-
-OpenSearch will complain if you send object and concrete values to the same field. For example, you might have logs that look this, from different places:
-
-{"people" => 100}
-{"people" => {"some" => "thing"}}
-
-The second log line will be rejected by the OpenSearch parser because objects and concrete values can't live in the same field. To combat this, you can enable hash flattening.
-
-```
-flatten_hashes true
-flatten_hashes_separator _
-```
-
-This will produce opensearch output that looks like this:
-{"people_some" => "thing"}
-
-Note that the flattener does not deal with arrays at this time.
-
-### Generate Hash ID
-
-By default, the fluentd opensearch plugin does not emit records with a \_id field, leaving it to OpenSearch to generate a unique \_id as the record is indexed. When an OpenSearch cluster is congested and begins to take longer to respond than the configured request_timeout, the fluentd opensearch plugin will re-send the same bulk request. Since OpenSearch can't tell its actually the same request, all documents in the request are indexed again resulting in duplicate data. In certain scenarios, this can result in essentially and infinite loop generating multiple copies of the same data.
-
-The bundled opensearch\_genid filter can generate a unique \_hash key for each record, this key may be passed to the id_key parameter in the opensearch plugin to communicate to OpenSearch the uniqueness of the requests so that duplicates will be rejected or simply replace the existing records.
-Here is a sample config:
-
-```
-<filter **>
-  @type opensearch_genid
-  hash_id_key _hash # storing generated hash id key (default is _hash)
-</filter>
-<match **>
-  @type opensearch
-  id_key _hash # specify same key name which is specified in hash_id_key
-  remove_keys _hash # OpenSearch doesn't like keys that start with _
-  # other settings are omitted.
-</match>
-```
-
-### Sniffer Class Name
-
-The default Sniffer used by the `OpenSearch::Transport` class works well when Fluentd has a direct connection
-to all of the OpenSearch servers and can make effective use of the `_nodes` API. This doesn't work well
-when Fluentd must connect through a load balancer or proxy. The parameter `sniffer_class_name` gives you the
-ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition,
-there is a new `Fluent::Plugin::OpenSearchSimpleSniffer` class which reuses the hosts given in the configuration, which
-is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause
-connections to `logging-os` to reload every 100 operations:
-
-```
-host logging-os
-port 9200
-reload_connections true
-sniffer_class_name Fluent::Plugin::OpenSearchSimpleSniffer
-reload_after 100
-```
-
-#### Tips
-
-The included sniffer class is not required `out_opensearch`.
-You should tell Fluentd where the sniffer class exists.
-
-If you use td-agent, you must put the following lines into `TD_AGENT_DEFAULT` file:
-
-```
-sniffer=$(td-agent-gem contents fluent-plugin-opensearch|grep opensearch_simple_sniffer.rb)
-TD_AGENT_OPTIONS="--use-v1-config -r $sniffer"
-```
-
-If you use Fluentd directly, you must pass the following lines as Fluentd command line option:
+[CloudWatch Logs](http://aws.amazon.com/blogs/aws/cloudwatch-log-service/) Plugin for Fluentd
 
-
-sniffer=$(td-agent-gem contents fluent-plugin-opensearch|grep opensearch_simple_sniffer.rb)
-$ fluentd -r $sniffer [AND YOUR OTHER OPTIONS]
-```
-
-### Selector Class Name
-
-The default selector used by the `OpenSearch::Transport` class works well when Fluentd should behave round robin and random selector cases. This doesn't work well when Fluentd should behave fallbacking from exhausted ES cluster to normal ES cluster.
-The parameter `selector_class_name` gives you the ability to provide your own Selector class to implement whatever selection nodes logic you require.
-
-The below configuration is using plugin built-in `OpenSearchFallbackSelector`:
-
-```
-hosts exhausted-host:9201,normal-host:9200
-selector_class_name "Fluent::Plugin::OpenSeartchFallbackSelector"
-```
-
-#### Tips
-
-The included selector class is required in `out_opensearch` by default.
-But, your custom selector class is not required in `out_opensearch`.
-You should tell Fluentd where the selector class exists.
-
-If you use td-agent, you must put the following lines into `TD_AGENT_DEFAULT` file:
-
-```
-selector=/path/to/your_awesome_selector.rb
-TD_AGENT_OPTIONS="--use-v1-config -r $selector"
-```
-
-If you use Fluentd directly, you must pass the following lines as Fluentd command line option:
-
-```
-selector=/path/to/your_awesome_selector.rb
-$ fluentd -r $selector [AND YOUR OTHER OPTIONS]
-```
+## Requirements
 
-
+|fluent-plugin-cloudwatch-logs| fluentd | ruby |
+|-----------------------------|------------------|--------|
+| >= 0.8.0 | >= 1.8.0 | >= 2.4 |
+| >= 0.5.0 && < 0.8.0 | >= 0.14.15 | >= 2.1 |
+| <= 0.4.5 | ~> 0.12.0 * | >= 1.9 |
 
-
-reload the connections. The default value is 10000.
+* May not support all future fluentd features
 
-
+## Installation
 
-
+### For Fluentd
 
+```sh
+gem install fluent-plugin-cloudwatch-logs
 ```
-validate_client_version true
-```
-
-### Unrecoverable Error Types
-
-Default `unrecoverable_error_types` parameter is set up strictly.
-Because `rejected_execution_exception` is caused by exceeding OpenSearch's thread pool capacity.
-Advanced users can increase its capacity, but normal users should follow default behavior.
 
````
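For advanced users who have raised the cluster's queue capacity as the removed text describes, a hypothetical override of the parameter might look like this (the full default list is not shown in this diff, so the value below is illustrative only):

```aconf
<match my.logs>
  @type opensearch
  # hypothetical: stop treating thread-pool queue rejections as unrecoverable
  unrecoverable_error_types ["out_of_memory_error"]
</match>
```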
````diff
-
+### For fluent-package
 
-
-
-
-```yaml
-thread_pool.bulk.queue_size: 1000
+```sh
+fluent-gem install fluent-plugin-cloudwatch-logs
 ```
 
-
+### For td-agent
 
-```
-
+```sh
+td-agent-gem install fluent-plugin-cloudwatch-logs
 ```
 
+## Preparation
 
-
+Create IAM user with a policy like the following:
 
-
-
-
-
-
-
-
-
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "logs:*",
+        "s3:GetObject"
+      ],
+      "Resource": [
+        "arn:aws:logs:us-east-1:*:*",
+        "arn:aws:s3:::*"
+      ]
+    }
+  ]
+}
 ```
 
-
+More restricted IAM policy for `out_cloudwatch_logs` is:
 
-
-
-
-
-
-
-
-
-
-
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Action": [
+        "logs:PutLogEvents",
+        "logs:CreateLogGroup",
+        "logs:PutRetentionPolicy",
+        "logs:CreateLogStream",
+        "logs:DescribeLogGroups",
+        "logs:DescribeLogStreams"
+      ],
+      "Effect": "Allow",
+      "Resource": "*"
+    }
+  ]
+}
 ```
 
-
-
-Because OpenSearch plugin will ought to change behavior each of OpenSearch major versions.
-
-For example, OpenSearch 1 requests to handle only `_doc` type_name in index.
+Also, more restricted IAM policy for `in_cloudwatch_logs` is:
 
-
-
-
-
-
-
-
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Action": [
+        "logs:GetLogEvents",
+        "logs:DescribeLogStreams"
+      ],
+      "Effect": "Allow",
+      "Resource": "*"
+    }
+  ]
+}
 ```
 
-
-
-### default_opensearch_version
-
-This parameter changes that OpenSearch plugin assumes the default OpenSearch version. The default value is `1`.
-
-### custom_headers
-
-This parameter adds additional headers to request. The default value is `{}`.
````
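A minimal hedged sketch of the removed `custom_headers` parameter (header name and value are hypothetical):

```aconf
<match my.logs>
  @type opensearch
  custom_headers {"X-Request-Source": "fluentd"}   # hypothetical header
</match>
```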
100
|
+
## Authentication
|
1164
101
|
|
1165
|
-
|
1166
|
-
|
1167
|
-
```
|
102
|
+
There are several methods to provide authentication credentials. Be aware that there are various tradeoffs for these methods,
|
103
|
+
although most of these tradeoffs are highly dependent on the specific environment.
|
1168
104
|
|
1169
|
-
###
|
105
|
+
### Environment
|
1170
106
|
|
1171
|
-
|
107
|
+
Set region and credentials via the environment:
|
1172
108
|
|
1173
|
-
|
1174
|
-
|
1175
|
-
|
1176
|
-
|
1177
|
-
@type forest
|
1178
|
-
subtype opensearch
|
1179
|
-
remove_prefix my.logs
|
1180
|
-
<template>
|
1181
|
-
logstash_prefix ${tag}
|
1182
|
-
# ...
|
1183
|
-
</template>
|
1184
|
-
</match>
|
109
|
+
```sh
|
110
|
+
export AWS_REGION=us-east-1
|
111
|
+
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
|
112
|
+
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
|
1185
113
|
```
|
1186
114
|
|
1187
|
-
|
1188
|
-
|
1189
|
-
**Note**: If you use or evaluate Fluentd v0.14, you can use builtin placeholders. In more detail, please refer to [Placeholders](#placeholders) section.
|
1190
|
-
|
1191
|
-
### Placeholders
|
115
|
+
Note: For this to work persistently the environment will need to be set in the startup scripts or docker variables.
|
1192
116
|
|
1193
|
-
|
117
|
+
### AWS Configuration
|
1194
118
|
|
1195
|
-
|
1196
|
-
|
119
|
+
The plugin will look for the `$HOME/.aws/config` and `$HOME/.aws/credentials` for configuration information. To setup, as the
|
120
|
+
fluentd user, run:
|
1197
121
|
|
1198
|
-
|
1199
|
-
|
1200
|
-
#### tag
|
1201
|
-
|
1202
|
-
```aconf
|
1203
|
-
<match my.logs>
|
1204
|
-
@type opensearch
|
1205
|
-
index_name elastic.${tag} #=> replaced with each event's tag. e.g.) elastic.test.tag
|
1206
|
-
<buffer tag>
|
1207
|
-
@type memory
|
1208
|
-
</buffer>
|
1209
|
-
# <snip>
|
1210
|
-
</match>
|
122
|
+
```sh
|
123
|
+
aws configure
|
1211
124
|
```
|
1212
125
|
|
1213
|
-
|
126
|
+
### Configuration Parameters
|
1214
127
|
|
1215
|
-
|
1216
|
-
<match my.logs>
|
1217
|
-
@type opensearch
|
1218
|
-
index_name elastic.%Y%m%d #=> e.g.) elastic.20170811
|
1219
|
-
<buffer tag, time>
|
1220
|
-
@type memory
|
1221
|
-
timekey 3600
|
1222
|
-
</buffer>
|
1223
|
-
# <snip>
|
1224
|
-
</match>
|
1225
|
-
```
|
128
|
+
The authentication information can also be set
|
1226
129
|
|
1227
|
-
|
130
|
+
## Example
|
1228
131
|
|
Start fluentd:

```sh
fluentd -c example/fluentd.conf
```

Send sample log to CloudWatch Logs:

```sh
echo '{"hello":"world"}' | fluent-cat test.cloudwatch_logs.out
```

Fetch sample log from CloudWatch Logs:

```sh
# stdout
2014-07-17 00:28:02 +0900 test.cloudwatch_logs.in: {"hello":"world"}
```

## Configuration

### out_cloudwatch_logs

```aconf
<match tag>
  @type cloudwatch_logs
  log_group_name log-group-name
  log_stream_name log-stream-name
  auto_create_stream true
  #message_keys key1,key2,key3,...
  #max_message_length 32768
  #use_tag_as_group false
  #use_tag_as_stream false
  #include_time_key true
  #localtime true
  #log_group_name_key group_name_key
  #log_stream_name_key stream_name_key
  #remove_log_group_name_key true
  #remove_log_stream_name_key true
  #put_log_events_retry_wait 1s
  #put_log_events_retry_limit 17
  #put_log_events_disable_retry_limit false
  #endpoint http://localhost:5000/
  #json_handler json
  #log_rejected_request true
  #<web_identity_credentials>
  #  role_arn "#{ENV['AWS_ROLE_ARN']}"
  #  role_session_name ROLE_SESSION_NAME
  #  web_identity_token_file "#{ENV['AWS_WEB_IDENTITY_TOKEN_FILE']}"
  #</web_identity_credentials>
  #<format>
  #  @type ltsv
  #</format>
</match>
```

* `auto_create_stream`: create the log group and stream automatically. (defaults to `false`)
* `aws_key_id`: AWS Access Key. See [Authentication](#authentication) for more information.
* `aws_sec_key`: AWS Secret Access Key. See [Authentication](#authentication) for more information.
* `concurrency`: the number of threads pushing data to CloudWatch. (default: 1)
* `endpoint`: use this parameter to connect to a local API endpoint (for testing)
* `ssl_verify_peer`: when `true` (default), SSL peer certificates are verified when establishing a connection. Setting this to `false` can be useful for testing.
* `http_proxy`: set an optional HTTP proxy
* `include_time_key`: include the time key as part of the log entry (defaults to UTC)
* `json_handler`: name of the library used to handle JSON data. For now, the supported libraries are `json` (default) and `yajl`.
* `localtime`: use the local timezone for `include_time_key` output (overrides the UTC default)
* `log_group_aws_tags`: a hash of keys and values used to tag the log group resource
* `log_group_aws_tags_key`: use the specified field of records as AWS tags for the log group
* `log_group_name`: name of the log group to store logs in
* `log_group_name_key`: use the specified field of records as the log group name (see the sketch after this list)
* `log_rejected_request`: output the `rejected_log_events_info` request log. (defaults to `false`)
* `log_stream_name`: name of the log stream to store logs in
* `log_stream_name_key`: use the specified field of records as the log stream name
* `max_events_per_batch`: maximum number of events to send at once (default: 10000)
* `max_message_length`: maximum length of a message
* `message_keys`: keys to send messages as events
* `put_log_events_disable_retry_limit`: if `true`, `put_log_events_retry_limit` is ignored
* `put_log_events_retry_limit`: maximum number of retries (events are discarded once this is exceeded)
* `put_log_events_retry_wait`: time to wait before retrying PutLogEvents (the retry interval increases exponentially as `put_log_events_retry_wait * (2 ^ retry_count)`)
* `region`: AWS Region. See [Authentication](#authentication) for more information.
* `remove_log_group_aws_tags_key`: remove the field specified by `log_group_aws_tags_key`
* `remove_log_group_name_key`: remove the field specified by `log_group_name_key`
* `remove_log_stream_name_key`: remove the field specified by `log_stream_name_key`
* `remove_retention_in_days_key`: remove the field specified by `retention_in_days_key`
* `retention_in_days`: set the expiry time of the log group when it is created with `auto_create_stream`. (defaults to no expiry)
* `retention_in_days_key`: use the specified field of records as the retention period
* `use_tag_as_group`: use the tag as the group name
* `use_tag_as_stream`: use the tag as the stream name
* `<web_identity_credentials>`: for EKS authentication
  * `role_arn`: the Amazon Resource Name (ARN) of the role to assume. This parameter is required when using `<web_identity_credentials>`.
  * `role_session_name`: an identifier for the assumed role session. This parameter is required when using `<web_identity_credentials>`.
  * `web_identity_token_file`: the absolute path to the file on disk containing the OIDC token. This parameter is required when using `<web_identity_credentials>`.
  * `policy`: an IAM policy in JSON format. (default: `nil`)
  * `duration_seconds`: the duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 43200 seconds (12 hours). By default, the value is set to 3600 seconds (1 hour). (default: `nil`)
* `<format>`: specifies the record format. See [formatter overview](https://docs.fluentd.org/formatter) and [formatter section overview](https://docs.fluentd.org/configuration/format-section) in the official documentation.

**NOTE:** `retention_in_days` requires the additional IAM permission `logs:PutRetentionPolicy` for the log group.
Please refer to [the PutRetentionPolicy entry in the permissions reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html) for details.

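For example, a minimal sketch of per-record routing with the `log_group_name_key` / `log_stream_name_key` parameters (the field names `group_name_key` and `stream_name_key` are placeholders):

```aconf
<match tag>
  @type cloudwatch_logs
  auto_create_stream true
  # take the destination group/stream from fields of each record
  log_group_name_key group_name_key
  log_stream_name_key stream_name_key
  # strip the routing fields from the stored event
  remove_log_group_name_key true
  remove_log_stream_name_key true
</match>
```
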
### in_cloudwatch_logs

```aconf
<source>
  @type cloudwatch_logs
  tag cloudwatch.in
  log_group_name group
  #add_log_group_name true
  #log_group_name_key group_name_key
  #use_log_group_name_prefix true
  log_stream_name stream
  #use_log_stream_name_prefix true
  state_file /var/lib/fluent/group_stream.in.state
  #endpoint http://localhost:5000/
  #json_handler json
  # start_time "2020-03-01 00:00:00Z"
  # end_time "2020-04-30 15:00:00Z"
  # time_range_format "%Y-%m-%d %H:%M:%S%z"
  # Users can use `format` or the `<parse>` directive to parse non-JSON CloudWatch Logs entries
  # format none # or csv, tsv, regexp etc.
  #<parse>
  #  @type none # or csv, tsv, regexp etc.
  #</parse>
  #<storage>
  #  @type local # or redis, memcached, etc.
  #</storage>
  #<web_identity_credentials>
  #  role_arn "#{ENV['AWS_ROLE_ARN']}"
  #  role_session_name ROLE_SESSION_NAME
  #  web_identity_token_file "#{ENV['AWS_WEB_IDENTITY_TOKEN_FILE']}"
  #</web_identity_credentials>
</source>
```

* `aws_key_id`: AWS Access Key. See [Authentication](#authentication) for more information.
* `aws_sec_key`: AWS Secret Access Key. See [Authentication](#authentication) for more information.
* `aws_sts_role_arn`: the role ARN to assume when using cross-account STS authentication
* `aws_sts_session_name`: the session name to use with STS authentication (default: `fluentd`)
* `aws_use_sts`: use [AssumeRoleCredentials](http://docs.aws.amazon.com/sdkforruby/api/Aws/AssumeRoleCredentials.html) to authenticate, rather than the [default credential hierarchy](http://docs.aws.amazon.com/sdkforruby/api/Aws/CloudWatchLogs/Client.html#initialize-instance_method). See 'Cross-Account Operation' below for more detail.
* `endpoint`: use this parameter to connect to a local API endpoint (for testing)
* `ssl_verify_peer`: when `true` (default), SSL peer certificates are verified when establishing a connection. Setting this to `false` can be useful for testing.
* `fetch_interval`: time period in seconds between checks of CloudWatch for new logs. (default: 60)
* `http_proxy`: set an optional HTTP proxy
* `json_handler`: name of the library used to handle JSON data. For now, the supported libraries are `json` (default) and `yajl`.
* `log_group_name`: name of the log group to fetch logs from
* `add_log_group_name`: add the log group name to each record (default: `false`)
* `log_group_name_key`: the record key under which the log group name is added (default: `'log_group'`)
* `use_log_group_name_prefix`: use `log_group_name` as a log group name prefix (default: `false`)
* `log_stream_name`: name of the log stream to fetch logs from
* `region`: AWS Region. See [Authentication](#authentication) for more information.
* `throttling_retry_seconds`: time period in seconds to wait before retrying a request when the AWS CloudWatch rate limit is exceeded (default: `nil`)
* `include_metadata`: include metadata such as `log_group_name` and `log_stream_name`. (default: `false`)
* `state_file`: file to store the current state (e.g. next\_forward\_token). This parameter is deprecated. Use `<storage>` instead.
* `tag`: fluentd tag
* `use_log_stream_name_prefix`: use `log_stream_name` as a log stream name prefix (default: `false`; see the sketch after this list)
* `use_todays_log_stream`: use today's and yesterday's dates as the log stream name prefix (formatted `YYYY/MM/DD`). (default: `false`)
* `use_aws_timestamp`: get the timestamp from the CloudWatch event for non-JSON logs; otherwise fluentd will parse the log to get the timestamp (default: `false`)
* `start_time`: the starting point of the time range for obtaining logs. (default: `nil`)
* `end_time`: the ending point of the time range for obtaining logs. (default: `nil`)
* `time_range_format`: the time format for the time range. (default: `%Y-%m-%d %H:%M:%S`)
* `format`: the CloudWatch Logs log format. (default: `nil`)
* `<parse>`: parser plugin configuration. See also: https://docs.fluentd.org/v/1.0/parser#how-to-use
* `<storage>`: storage plugin configuration. See also: https://docs.fluentd.org/v/1.0/storage#how-to-use
* `<web_identity_credentials>`: for EKS authentication
  * `role_arn`: the Amazon Resource Name (ARN) of the role to assume. This parameter is required when using `<web_identity_credentials>`.
  * `role_session_name`: an identifier for the assumed role session. This parameter is required when using `<web_identity_credentials>`.
  * `web_identity_token_file`: the absolute path to the file on disk containing the OIDC token. This parameter is required when using `<web_identity_credentials>`.
  * `policy`: an IAM policy in JSON format. (default: `nil`)
  * `duration_seconds`: the duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 43200 seconds (12 hours). By default, the value is set to 3600 seconds (1 hour). (default: `nil`)

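For example, a sketch of backfilling a bounded time range from every stream sharing a prefix (the group name, stream prefix, state path, and times are placeholders):

```aconf
<source>
  @type cloudwatch_logs
  tag cloudwatch.backfill
  log_group_name group
  log_stream_name stream-prefix-
  use_log_stream_name_prefix true
  # only fetch events inside this window; the values must match time_range_format
  start_time "2020-03-01 00:00:00Z"
  end_time "2020-04-30 15:00:00Z"
  time_range_format "%Y-%m-%d %H:%M:%S%z"
  <storage>
    @type local
    path /var/lib/fluent/backfill.state
  </storage>
</source>
```
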
## Test

Set credentials:

```sh
$ export AWS_REGION=us-east-1
$ export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
$ export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
```

Run tests:

```sh
rake test
```

Or, if you do not want to use an IAM role or environment variables (this is just like writing them to a configuration file):

```sh
rake aws_key_id=YOUR_ACCESS_KEY aws_sec_key=YOUR_SECRET_KEY region=us-east-1 test
```

If you want to run the test suite against a mock server, set `endpoint` as below:

```sh
export endpoint='http://localhost:5000/'
rake test
```

## Caution

If an event message exceeds the API limit (1 MB), the event will be discarded.

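One hedged mitigation, assuming truncation of long messages is acceptable for your use case, is to cap message size with the `max_message_length` parameter documented above (the limit value here is a placeholder):

```aconf
<match tag>
  @type cloudwatch_logs
  log_group_name log-group-name
  log_stream_name log-stream-name
  # truncate messages well below the 1 MB PutLogEvents event limit
  max_message_length 262144
</match>
```
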
## Cross-Account Operation

In order to have an instance of this plugin running in one AWS account fetch logs from another account, cross-account IAM authentication is required. While this can be accomplished by manually configuring specific instances of the plugin with credentials for the source account in question, this is not desirable for a number of reasons.

In this case, IAM can be used to allow the fluentd instance in one account ("A") to ingest CloudWatch logs from another ("B") via the following mechanism:

* The plugin instance running in account "A" has an IAM instance role assigned to the underlying EC2 instance
* The IAM instance role and associated policies permit the EC2 instance to assume a role in the other account
* An IAM role in account "B" and associated policies allow read access to the CloudWatch Logs service, as appropriate.

### IAM Detail: Consuming Account "A"

* Create an IAM role `cloudwatch`
* Attach a policy to allow the role holder to assume another role (where `ACCOUNT-B` is substituted for the appropriate account number):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:*"
      ],
      "Resource": [
        "arn:aws:iam::ACCOUNT-B:role/fluentd"
      ]
    }
  ]
}
```

* Ensure that the EC2 instance on which this plugin is executing has `cloudwatch` as its assigned IAM instance role.

### IAM Detail: Log Source Account "B"

* Create an IAM role `fluentd`
* Ensure the `fluentd` role has account "A" as a trusted entity:

```json
{
@@ -1527,96 +382,92 @@
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT-A:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

* Attach a policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeDestinations",
        "logs:DescribeExportTasks",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:DescribeMetricFilters",
        "logs:DescribeSubscriptionFilters",
        "logs:FilterLogEvents",
        "logs:GetLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:eu-west-1:ACCOUNT-B:log-group:LOG_GROUP_NAME_FOR_CONSUMPTION:*"
      ]
    }
  ]
}
```

### Configuring the plugin for STS authentication

```aconf
<source>
  @type cloudwatch_logs
  region us-east-1 # You must supply a region
  aws_use_sts true
  aws_sts_role_arn arn:aws:iam::ACCOUNT-B:role/fluentd
  log_group_name LOG_GROUP_NAME_FOR_CONSUMPTION
  log_stream_name SOME_PREFIX
  use_log_stream_name_prefix true
  state_file /path/to/state_file
  format /(?<message>.+)/
</source>
```

### Using built-in placeholders, but they are not replaced with actual values. Why?

Built-in placeholders use buffer metadata when replacing placeholders with actual values, so you should specify the buffer attributes that correspond to the placeholders you use.

When using the `${tag}` placeholder, specify the `tag` attribute in the buffer section:

```aconf
<buffer tag>
  @type memory
</buffer>
```

When using the `%Y%m%d` placeholder, specify the `time` attribute in the buffer section:

```aconf
<buffer time>
  @type memory
  timekey 3600
</buffer>
```

For more detail, please refer to [the official documentation for built-in placeholders](https://docs.fluentd.org/v1.0/articles/buffer-section#placeholders).

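Putting these together, a hedged sketch of an output whose stream name embeds both the tag and the date (the match pattern and group name are placeholders):

```aconf
<match app.**>
  @type cloudwatch_logs
  log_group_name my-group
  # ${tag} is resolved from the buffer's tag attribute, %Y%m%d from its time attribute
  log_stream_name ${tag}-%Y%m%d
  auto_create_stream true
  <buffer tag, time>
    @type memory
    timekey 3600
  </buffer>
</match>
```
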
## TODO

* out_cloudwatch_logs
  * if the data is too big for the API, split it into multiple requests
  * check data size
* in_cloudwatch_logs
  * fall back to `start_time` because `next_token` expires after 24 hours

## Contributing

1. Fork it ( https://github.com/[my-github-username]/fluent-plugin-cloudwatch-logs/fork )
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request