logstash-output-opensearch 1.3.0-java → 2.0.2-java
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- checksums.yaml.gz.sig +0 -0
- data/COMPATIBILITY.md +5 -3
- data/MAINTAINERS.md +14 -64
- data/README.md +51 -34
- data/RELEASING.md +5 -1
- data/docs/ecs_compatibility.md +42 -0
- data/lib/logstash/outputs/opensearch/http_client/manticore_adapter.rb +43 -12
- data/lib/logstash/outputs/opensearch/http_client/pool.rb +11 -2
- data/lib/logstash/outputs/opensearch/http_client.rb +28 -6
- data/lib/logstash/outputs/opensearch/http_client_builder.rb +4 -2
- data/lib/logstash/outputs/opensearch/template_manager.rb +6 -5
- data/lib/logstash/outputs/opensearch/templates/ecs-disabled/1x_index.json +66 -0
- data/lib/logstash/outputs/opensearch/templates/ecs-disabled/2x_index.json +66 -0
- data/lib/logstash/outputs/opensearch/templates/ecs-disabled/7x_index.json +66 -0
- data/lib/logstash/outputs/opensearch/templates/ecs-v8/1x_index.json +5254 -0
- data/lib/logstash/outputs/opensearch/templates/ecs-v8/2x_index.json +5254 -0
- data/lib/logstash/outputs/opensearch/templates/ecs-v8/7x_index.json +5254 -0
- data/lib/logstash/outputs/opensearch.rb +7 -0
- data/logstash-output-opensearch.gemspec +2 -2
- data/spec/unit/outputs/opensearch/http_client/manticore_adapter_spec.rb +72 -14
- data/spec/unit/outputs/opensearch/http_client_spec.rb +20 -0
- data/spec/unit/outputs/opensearch/template_manager_spec.rb +8 -20
- data.tar.gz.sig +0 -0
- metadata +20 -19
- metadata.gz.sig +0 -0
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a9700db92f60b264d7edf26bb34281f9fac13c27d93564d80ce007e5e7b73928
+  data.tar.gz: a4feb8d63b747cd2db45ec1b47df5e30a6aea6dbee3a30dde51c94773b192ec8
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a7ad196b7d23790cf9bed976a3c51b0a2ebbd2d1bb9fa92617a9994620d0be287c142c5ce944cb941626f598491d38b112d68797f7953e63180204522a639af3
+  data.tar.gz: cdd308cd216f67382e466e98546f04e6ac1a011e6d4f64e4b27e23d44809ae6a069c3849e62986a0f79ccb26d40eaa12fb998271072efaf87be1a5ca765da48a
checksums.yaml.gz.sig
CHANGED
Binary file
data/COMPATIBILITY.md
CHANGED
@@ -5,23 +5,25 @@
 | logstash-output-opensearch | Logstash OSS|
 | ------------- | ------------- |
 | 1.0.0+ | 7.13.2 |
+| 2.0.0+ | 7.13.2 |
 
 ### Matrix for OpenSearch
 
 | logstash-output-opensearch | OpenSearch |
 | ------------- | ------------- |
-| 1.0.0+ | 1.0.0 |
-
+| 1.0.0+ | 1.0.0 |
+| 2.0.0+ | 1.0.0 |
 
 ### Matrix for ODFE
 
 | logstash-output-opensearch | ODFE |
 | ------------- | ------------- |
 | 1.0.0+ | 1.x - 1.13.2 |
-
+| 2.0.0+ | 1.x - 1.13.2 |
 
 ### Matrix for Elasticsearch OSS
 
 | logstash-output-opensearch | Elasticsearch OSS|
 | ------------- | ------------- |
 | 1.0.0+ | 7.x - 7.10.2 |
+| 2.0.0+ | 7.x - 7.10.2 |
data/MAINTAINERS.md
CHANGED
@@ -1,74 +1,24 @@
-- [Overview](#overview)
-- [Current Maintainers](#current-maintainers)
-- [Maintainer Responsibilities](#maintainer-responsibilities)
-- [Uphold Code of Conduct](#uphold-code-of-conduct)
-- [Prioritize Security](#prioritize-security)
-- [Review Pull Requests](#review-pull-requests)
-- [Triage Open Issues](#triage-open-issues)
-- [Be Responsive](#be-responsive)
-- [Maintain Overall Health of the Repo](#maintain-overall-health-of-the-repo)
-- [Use Semver](#use-semver)
-- [Release Frequently](#release-frequently)
-- [Promote Other Maintainers](#promote-other-maintainers)
-
 ## Overview
 
-This document
+This document contains a list of maintainers in this repo. See [opensearch-project/.github/RESPONSIBILITIES.md](https://github.com/opensearch-project/.github/blob/main/RESPONSIBILITIES.md#maintainer-responsibilities) that explains what the role of maintainer means, what maintainers do in this and other repos, and how they should be doing it. If you're interested in contributing, and becoming a maintainer, see [CONTRIBUTING](CONTRIBUTING.md).
 
 ## Current Maintainers
 
-| Maintainer | GitHub ID | Affiliation |
-| ------------------------ | --------------------------------------- | ----------- |
-| Jack Mazanec | [jmazanec15](https://github.com/jmazanec15) | Amazon |
-| Vamshi Vijay Nakkirtha | [vamshin](https://github.com/vamshin) | Amazon |
-| Vijayan Balasubramanian | [VijayanB](https://github.com/VijayanB) | Amazon |
-| Deep Datta | [deepdatta](https://github.com/deepdatta) | Amazon |
-| David Venable | [dlvenable](https://github.com/dlvenable) | Amazon |
-| Shivani Shukla | [sshivanii](https://github.com/sshivanii) | Amazon |
-
-## Maintainer Responsibilities
-
-Maintainers are active and visible members of the community, and have [maintain-level permissions on a repository](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-permission-levels-for-an-organization). Use those privileges to serve the community and evolve code as follows.
-
-### Uphold Code of Conduct
-
-Model the behavior set forward by the [Code of Conduct](CODE_OF_CONDUCT.md) and raise any violations to other maintainers and admins.
-
-### Prioritize Security
-
-Security is your number one priority. Maintainer's Github keys must be password protected securely and any reported security vulnerabilities are addressed before features or bugs.
-
-Note that this repository is monitored and supported 24/7 by Amazon Security, see [Reporting a Vulnerability](SECURITY.md) for details.
-
-### Review Pull Requests
-
-Review pull requests regularly, comment, suggest, reject, merge and close. Accept only high quality pull-requests. Provide code reviews and guidance on incomming pull requests. Don't let PRs be stale and do your best to be helpful to contributors.
-
-### Triage Open Issues
-
-Manage labels, review issues regularly, and triage by labelling them.
-
-All repositories in this organization have a standard set of labels, including `bug`, `documentation`, `duplicate`, `enhancement`, `good first issue`, `help wanted`, `blocker`, `invalid`, `question`, `wontfix`, and `untriaged`, along with release labels, such as `v1.0.0`, `v1.1.0` and `v2.0.0`, and `backport`.
-
-Use labels to target an issue or a PR for a given release, add `help wanted` to good issues for new community members, and `blocker` for issues that scare you or need immediate attention. Request for more information from a submitter if an issue is not clear. Create new labels as needed by the project.
-
-### Be Responsive
-
-Respond to enhancement requests, and forum posts. Allocate time to reviewing and commenting on issues and conversations as they come in.
-
-### Maintain Overall Health of the Repo
-
-Keep the `main` branch at production quality at all times. Backport features as needed. Cut release branches and tags to enable future patches.
-
-### Use Semver
-
-Use and enforce [semantic versioning](https://semver.org/) and do not let breaking changes be made outside of major releases.
 
-
+| Maintainer | GitHub ID | Affiliation |
+| ----------------------- | ------------------------------------------- | ----------- |
+| Asif Sohail Mohammed | [asifsmohammed](https://github.com/asifsmohammed) | Amazon |
+| David Venable | [dlvenable](https://github.com/dlvenable) | Amazon |
+| Hai Yan | [oeyh](https://github.com/oeyh) | Amazon |
 
-Make frequent project releases to the community.
 
-### Promote Other Maintainers
 
-
+## Emeritus
 
+| Maintainer | GitHub ID | Affiliation |
+| ----------------------- | ------------------------------------------- | ----------- |
+| Jack Mazanec | [jmazanec15](https://github.com/jmazanec15) | Amazon |
+| Vamshi Vijay Nakkirtha | [vamshin](https://github.com/vamshin) | Amazon |
+| Vijayan Balasubramanian | [VijayanB](https://github.com/VijayanB) | Amazon |
+| Deep Datta | [deepdatta](https://github.com/deepdatta) | Amazon |
+| Shivani Shukla | [sshivanii](https://github.com/sshivanii) | Amazon |
data/README.md
CHANGED
@@ -18,7 +18,8 @@ The logstash-output-opensearch plugin helps to ship events from Logstash to Open
 ## Project Resources
 
 * [Project Website](https://opensearch.org/)
-* [Documentation](https://opensearch.org/docs/
+* [Detailed Documentation](https://opensearch.org/docs/latest/tools/logstash/ship-to-opensearch/)
+* [Logstash Overview](https://opensearch.org/docs/clients/logstash/index/)
 * [Developer Guide](DEVELOPER_GUIDE.md)
 * Need help? Try [Forums](https://discuss.opendistrocommunity.dev/)
 * [Project Principles](https://opensearch.org/#principles)
@@ -44,17 +45,17 @@ output {
 
 To run the Logstash Output Opensearch plugin using aws_iam authentication, refer to the sample configuration shown below:
 ```
-output {
-  opensearch {
-    hosts => ["hostname:port"]
-    auth_type => {
-      type => 'aws_iam'
-      aws_access_key_id => 'ACCESS_KEY'
-      aws_secret_access_key => 'SECRET_KEY'
-      region => 'us-west-2'
-    }
-    index => "logstash-logs-%{+YYYY.MM.dd}"
-  }
+output {
+  opensearch {
+    hosts => ["hostname:port"]
+    auth_type => {
+      type => 'aws_iam'
+      aws_access_key_id => 'ACCESS_KEY'
+      aws_secret_access_key => 'SECRET_KEY'
+      region => 'us-west-2'
+    }
+    index => "logstash-logs-%{+YYYY.MM.dd}"
+  }
 }
 ```
@@ -62,37 +63,53 @@ In addition to the existing authentication mechanisms, if we want to add new aut
 
 Example Configuration for basic authentication:
 ```
-output {
-  opensearch {
-    hosts => ["hostname:port"]
-    auth_type => {
-      type => 'basic'
-      user => 'admin'
-      password => 'admin'
-    }
-    index => "logstash-logs-%{+YYYY.MM.dd}"
-  }
-}
+output {
+  opensearch {
+    hosts => ["hostname:port"]
+    auth_type => {
+      type => 'basic'
+      user => 'admin'
+      password => 'admin'
+    }
+    index => "logstash-logs-%{+YYYY.MM.dd}"
+  }
+}
 ```
 
 To ingest data into a `data stream` through logstash, we need to create the data stream and specify the name of the data stream and the `op_type` of `create` in the output configuration. The sample configuration is shown below:
 
 ```yml
-output {
-  opensearch {
-    hosts => ["https://hostname:port"]
-    auth_type => {
-      type => 'basic'
-      user => 'admin'
-      password => 'admin'
+output {
+  opensearch {
+    hosts => ["https://hostname:port"]
+    auth_type => {
+      type => 'basic'
+      user => 'admin'
+      password => 'admin'
     }
     index => "my-data-stream"
     action => "create"
-  }
-}
+  }
+}
+```
+
+Starting in 2.0.0, the aws sdk version is bumped to v3. In order for all other AWS plugins to work together, please remove pre-installed aws plugins and install the logstash-integration-aws plugin as follows. See also https://github.com/logstash-plugins/logstash-mixin-aws/issues/38
+```
+# Remove existing logstash aws plugins and install logstash-integration-aws to keep sdk dependency the same
+# https://github.com/logstash-plugins/logstash-mixin-aws/issues/38
+/usr/share/logstash/bin/logstash-plugin remove logstash-input-s3
+/usr/share/logstash/bin/logstash-plugin remove logstash-input-sqs
+/usr/share/logstash/bin/logstash-plugin remove logstash-output-s3
+/usr/share/logstash/bin/logstash-plugin remove logstash-output-sns
+/usr/share/logstash/bin/logstash-plugin remove logstash-output-sqs
+/usr/share/logstash/bin/logstash-plugin remove logstash-output-cloudwatch
+
+/usr/share/logstash/bin/logstash-plugin install --version 0.1.0.pre logstash-integration-aws
+bin/logstash-plugin install --version 2.0.0 logstash-output-opensearch
 ```
+
+## ECS Compatibility
+[Elastic Common Schema(ECS)](https://www.elastic.co/guide/en/ecs/current/index.html) compatibility for V8 was added in 1.3.0. For more details on ECS support refer to this [documentation](docs/ecs_compatibility.md).
 
-For more details refer to this [documentation](https://opensearch.org/docs/latest/clients/logstash/ship-to-opensearch/#opensearch-output-plugin)
 
 ## Code of Conduct
 
@@ -104,4 +121,4 @@ This project is licensed under the [Apache v2.0 License](LICENSE).
 
 ## Copyright
 
-Copyright OpenSearch Contributors. See [NOTICE](NOTICE) for details.
+Copyright OpenSearch Contributors. See [NOTICE](NOTICE) for details.
data/RELEASING.md
CHANGED
@@ -33,4 +33,8 @@ Repositories create consistent release labels, such as `v1.0.0`, `v1.1.0` and `v
 
 The release process is standard across repositories in this org and is run by a release manager volunteering from amongst [MAINTAINERS](MAINTAINERS.md).
 
-
+1. Create a tag, e.g. 1.0.0, and push it to this GitHub repository.
+1. The [release-drafter.yml](.github/workflows/release-drafter.yml) will be automatically kicked off and a draft release will be created.
+1. This draft release triggers the [jenkins release workflow](https://build.ci.opensearch.org/job/logstash-ouput-opensearch-release), as a result of which the logstash-output-plugin is released on [rubygems.org](https://rubygems.org/gems/logstash-output-opensearch). Please note that the release workflow is triggered only if the created release is in draft state.
+1. Once the above release workflow is successful, the drafted release on GitHub is published automatically.
+1. Increment "version" in [logstash-output-opensearch.gemspec](./logstash-output-opensearch.gemspec) to the next iteration, e.g. 1.0.1.
data/docs/ecs_compatibility.md
ADDED
@@ -0,0 +1,42 @@
+# ECS Compatibility Support in logstash-output-opensearch
+Compatibility for ECS v8 was added in release 1.3.0 of the plugin. This enables the plugin to work with Logstash 8.x without the need to disable ecs_compatibility.
+The output plugin doesn't create any events itself, but merely forwards events to OpenSearch that were shaped by plugins preceding it in the pipeline. So, it doesn't play a direct role in making the events ECS compatible.
+However, the default index templates that the plugin installs into OpenSearch in the ecs_compatibility modes (v1 & v8) ensure that the document fields stored in the indices are ECS compatible. OpenSearch will throw errors for documents that have fields which are ECS incompatible and can't be coerced.
+
+## ECS index templates used by logstash-output-opensearch 1.3.0
+* v8 [ECS 8.0.0](https://github.com/elastic/ecs/releases/tag/v8.0.0) [ecs-8.0.0/generated/elasticsearch/legacy/template.json](https://raw.githubusercontent.com/elastic/ecs/v8.0.0/generated/elasticsearch/legacy/template.json)
+* v1 [ECS 1.9.0](https://github.com/elastic/ecs/releases/tag/v1.9.0) [ecs-1.9.0/generated/elasticsearch/7/template.json](https://raw.githubusercontent.com/elastic/ecs/v1.9.0/generated/elasticsearch/7/template.json)
+
+## ECS incompatibility
+Incompatibility can arise for an event when it has fields with the same name as an ECS-defined field but a type which can't be coerced into the ECS field.
+It is OK to have fields that are not defined in ECS [ref](https://www.elastic.co/guide/en/ecs/current/ecs-faq.html#addl-fields).
+The dynamic template included in the ECS templates will dynamically map any non-ECS fields.
+
+### Example
+[ECS defines the "server"](https://www.elastic.co/guide/en/ecs/current/ecs-server.html) field as an object. But let's say the plugin gets the event below, with a string in the `server` field. It will receive an error from OpenSearch as strings can't be coerced into an object, and the ECS-incompatible event won't be indexed.
+```
+{
+  "@timestamp": "2022-08-22T15:39:18.142175244Z",
+  "@version": "1",
+  "message": "Doc1",
+  "server": "remoteserver.com"
+}
+```
+
+Error received from OpenSearch in the Logstash logs:
+```
+[2022-08-23T00:01:53,366][WARN ][logstash.outputs.opensearch][main][a36555c6fad3f301db8efff2dfbed768fd85e0b6f4ee35626abe62432f83b95d] Could not index event to OpenSearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"ecs-logstash-2022.08.23", :routing=>nil}, {"@timestamp"=>2022-08-22T15:39:18.142175244Z, "@version"=>"1", "server"=>"remoteserver.com", "message"=>"Doc1"}], :response=>{"index"=>{"_index"=>"ecs-logstash-2022.08.23", "_id"=>"CAEUyYIBQM7JQrwxF5NR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [server] tried to parse field [server] as object, but found a concrete value"}}}}
+```
+## How to ensure ECS compatibility
+* The plugins in the pipeline that create the events, like the `input` and `codec` plugins, should all use ECS-defined fields.
+* Filter plugins like [mutate](https://github.com/logstash-plugins/logstash-filter-mutate/blob/main/docs/index.asciidoc) can be used to map incompatible fields into ECS compatible ones.
+In the above example the `server` field can be mapped to the `server.domain` field to make it compatible.
+* You can use your own custom template in the plugin using the `template` and `template_name` configs.
+[According to this](https://www.elastic.co/guide/en/ecs/current/ecs-faq.html#type-interop) some field types can be changed while staying compatible.
+
+As a last resort the `ecs_compatibility` config for the logstash-output-opensearch can be set to `disabled`.
+
+
+_______________
+## References
+* [Elastic Common Schema (ECS) Reference](https://www.elastic.co/guide/en/ecs/current/index.html)
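The remapping suggested above can be sketched as a pipeline fragment. This is an illustrative, hedged example (not part of the diff): it assumes the event shape from the example document and uses the mutate filter's `rename` option to move the string `server` field into ECS's `server.domain`:

```
filter {
  mutate {
    # move the ECS-incompatible string value into the ECS "server.domain" field
    rename => { "server" => "[server][domain]" }
  }
}
```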
data/lib/logstash/outputs/opensearch/http_client/manticore_adapter.rb
CHANGED
@@ -12,6 +12,9 @@ require 'cgi'
 require 'manticore'
 require 'uri'
 
+java_import 'org.apache.http.util.EntityUtils'
+java_import 'org.apache.http.entity.StringEntity'
+
 module LogStash; module Outputs; class OpenSearch; class HttpClient;
   AWS_DEFAULT_PORT = 443
   AWS_DEFAULT_PROFILE = 'default'
@@ -57,7 +60,7 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
     if options[:proxy]
       options[:proxy] = manticore_proxy_hash(options[:proxy])
     end
-
+
     @manticore = ::Manticore::Client.new(options)
   end
 
@@ -76,6 +79,7 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
     instance_cred_timeout = options[:auth_type]["instance_profile_credentials_timeout"] || AWS_DEFAULT_PROFILE_CREDENTIAL_TIMEOUT
     region = options[:auth_type]["region"] || AWS_DEFAULT_REGION
     set_aws_region(region)
+    set_service_name(options[:auth_type]["service_name"] || AWS_SERVICE)
 
     credential_config = AWSIAMCredential.new(aws_access_key_id, aws_secret_access_key, session_token, profile, instance_cred_retries, instance_cred_timeout, region)
     @credentials = Aws::CredentialProviderChain.new(credential_config).resolve
@@ -93,6 +97,14 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
     @region
   end
 
+  def set_service_name(service_name)
+    @service_name = service_name
+  end
+
+  def get_service_name()
+    @service_name
+  end
+
   def set_user_password(options)
     @user = options[:auth_type]["user"]
     @password = options[:auth_type]["password"]
@@ -105,7 +117,7 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
   def get_password()
     @password
   end
-
+
   # Transform the proxy option to a hash. Manticore's support for non-hash
   # proxy options is broken. This was fixed in https://github.com/cheald/manticore/commit/34a00cee57a56148629ed0a47c329181e7319af5
   # but this is not yet released
@@ -137,12 +149,12 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
     params[:body] = body if body
 
     if url.user
-      params[:auth] = {
+      params[:auth] = {
         :user => CGI.unescape(url.user),
         # We have to unescape the password here since manticore won't do it
         # for us unless its part of the URL
-        :password => CGI.unescape(url.password),
-        :eager => true
+        :password => CGI.unescape(url.password),
+        :eager => true
       }
     elsif @type == BASIC_AUTH_TYPE
       add_basic_auth_to_params(params)
@@ -172,13 +184,32 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
     resp
   end
 
+  # from Manticore, https://github.com/cheald/manticore/blob/acc25cac2999f4658a77a0f39f60ddbca8fe14a4/lib/manticore/client.rb#L536
+  ISO_8859_1 = "ISO-8859-1".freeze
+
+  def minimum_encoding_for(string)
+    if string.ascii_only?
+      ISO_8859_1
+    else
+      string.encoding.to_s
+    end
+  end
+
   def sign_aws_request(request_uri, path, method, params)
     url = URI::HTTPS.build({:host=>URI(request_uri.to_s).host, :port=>AWS_DEFAULT_PORT.to_s, :path=>path})
-
-
-
-
-
+
+    request = Seahorse::Client::Http::Request.new(options={:endpoint=>url, :http_method => method.to_s.upcase,
+                                                           :headers => params[:headers],:body => params[:body]})
+
+    aws_signer = Aws::Sigv4::Signer.new(service: @service_name, region: @region, credentials_provider: @credentials)
+    signed_key = aws_signer.sign_request(
+      http_method: request.http_method,
+      url: url,
+      headers: params[:headers],
+      # match encoding of the HTTP adapter, see https://github.com/opensearch-project/logstash-output-opensearch/issues/207
+      body: params[:body] ? EntityUtils.toString(StringEntity.new(params[:body], minimum_encoding_for(params[:body]))) : nil
+    )
+    params[:headers] = params[:headers].merge(signed_key.headers)
   end
 
   def add_basic_auth_to_params(params)
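The `minimum_encoding_for` helper added above is copied from Manticore so the body handed to the SigV4 signer is encoded the same way the HTTP adapter will transmit it. A standalone sketch of its behavior, outside the plugin class, for illustration only:

```ruby
# ASCII-only bodies are treated as ISO-8859-1; everything else keeps
# its own encoding. This mirrors Manticore's choice of body charset.
ISO_8859_1 = "ISO-8859-1".freeze

def minimum_encoding_for(string)
  string.ascii_only? ? ISO_8859_1 : string.encoding.to_s
end

puts minimum_encoding_for("plain ascii body")  # ISO-8859-1
puts minimum_encoding_for("héllo")             # UTF-8
```

If signer and adapter disagree on the body's charset, the signature is computed over different bytes than those sent, and the request is rejected (the linked issue #207).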
@@ -192,7 +223,7 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
   # Returned urls from this method should be checked for double escaping.
   def format_url(url, path_and_query=nil)
     request_uri = url.clone
-
+
     # We excise auth info from the URL in case manticore itself tries to stick
     # sensitive data in a thrown exception or log data
     request_uri.user = nil
@@ -208,7 +239,7 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
     new_query_parts = [request_uri.query, parsed_path_and_query.query].select do |part|
       part && !part.empty? # Skip empty nil and ""
     end
-
+
     request_uri.query = new_query_parts.join("&") unless new_query_parts.empty?
 
     # use `raw_path`` as `path` will unescape any escaped '/' in the path
data/lib/logstash/outputs/opensearch/http_client/pool.rb
CHANGED
@@ -40,6 +40,7 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
   end
 
   attr_reader :logger, :adapter, :sniffing, :sniffer_delay, :resurrect_delay, :healthcheck_path, :sniffing_path, :bulk_path
+  attr_reader :default_server_major_version
 
   ROOT_URI_PATH = '/'.freeze
 
@@ -68,6 +69,7 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
     @resurrect_delay = merged[:resurrect_delay]
     @sniffing = merged[:sniffing]
     @sniffer_delay = merged[:sniffer_delay]
+    @default_server_major_version = merged[:default_server_major_version]
   end
 
   # Used for all concurrent operations in this class
@@ -412,8 +414,15 @@ module LogStash; module Outputs; class OpenSearch; class HttpClient;
   end
 
   def get_version(url)
-
-
+    response = perform_request_to_url(url, :get, ROOT_URI_PATH)
+    if response.code != 404 && !response.body.empty?
+      return LogStash::Json.load(response.body)["version"]["number"] # e.g. "7.10.0"
+    end
+    if @default_server_major_version.nil?
+      @logger.error("Failed to get version from health_check endpoint and default_server_major_version is not configured.")
+      raise "get_version failed! no default_server_major_version configured."
+    end
+    "#{default_server_major_version}.0.0"
  end
 
  def last_version
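The new `get_version` falls back to the configured `default_server_major_version` when the root endpoint gives no usable answer (e.g. behind a proxy that hides it). A simplified, hypothetical re-implementation of that decision logic (the name `resolve_version` and its signature are illustrative, not the plugin's API):

```ruby
require 'json'

# Prefer the version reported by the server's root endpoint; otherwise
# synthesize "<major>.0.0" from the configured default, or fail loudly.
def resolve_version(code, body, default_major = nil)
  return JSON.parse(body)["version"]["number"] if code != 404 && !body.empty?
  raise "get_version failed! no default_server_major_version configured." if default_major.nil?
  "#{default_major}.0.0"
end

puts resolve_version(200, '{"version":{"number":"7.10.0"}}')  # 7.10.0
puts resolve_version(404, "", 1)                              # 1.0.0
```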
data/lib/logstash/outputs/opensearch/http_client.rb
CHANGED
@@ -107,7 +107,6 @@ module LogStash; module Outputs; class OpenSearch;
 
   body_stream = StringIO.new
   if http_compression
-    body_stream.set_encoding "BINARY"
     stream_writer = gzip_writer(body_stream)
   else
     stream_writer = body_stream
@@ -126,9 +125,24 @@ module LogStash; module Outputs; class OpenSearch;
       :payload_size => stream_writer.pos,
       :content_length => body_stream.size,
       :batch_offset => (index + 1 - batch_actions.size))
+
+    # Have to close gzip writer before reading from body_stream; otherwise stream doesn't end properly
+    # and will cause server side error
+    if http_compression
+      stream_writer.close
+    end
+
     bulk_responses << bulk_send(body_stream, batch_actions)
-
-
+
+    if http_compression
+      # Get a new StringIO object and gzip writer
+      body_stream = StringIO.new
+      stream_writer = gzip_writer(body_stream)
+    else
+      # Clear existing StringIO object and reuse existing stream writer
+      body_stream.truncate(0) && body_stream.seek(0)
+    end
+
     batch_actions.clear
   end
   stream_writer.write(as_json)
@@ -149,6 +163,7 @@ module LogStash; module Outputs; class OpenSearch;
   fail(ArgumentError, "Cannot create gzip writer on IO with unread bytes") unless io.eof?
   fail(ArgumentError, "Cannot create gzip writer on non-empty IO") unless io.pos == 0
 
+  io.set_encoding "BINARY"
   Zlib::GzipWriter.new(io, Zlib::DEFAULT_COMPRESSION, Zlib::DEFAULT_STRATEGY)
 end
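The comment in the hunk above explains that the gzip writer has to be closed before the stream is read, otherwise the gzip trailer is never written and the server sees a truncated stream. A minimal stdlib-only illustration of that round trip:

```ruby
require 'zlib'
require 'stringio'

io = StringIO.new
io.set_encoding "BINARY"
gz = Zlib::GzipWriter.new(io, Zlib::DEFAULT_COMPRESSION, Zlib::DEFAULT_STRATEGY)
gz.write("bulk payload")
gz.close  # flushes the gzip trailer; skipping this yields a truncated stream

round_trip = Zlib::GzipReader.new(StringIO.new(io.string)).read
puts round_trip  # bulk payload
```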
@@ -328,7 +343,8 @@ module LogStash; module Outputs; class OpenSearch;
     :healthcheck_path => options[:healthcheck_path],
     :resurrect_delay => options[:resurrect_delay],
     :url_normalizer => self.method(:host_to_url),
-    :metric => options[:metric]
+    :metric => options[:metric],
+    :default_server_major_version => options[:default_server_major_version]
   }
   pool_options[:scheme] = self.scheme if self.scheme
@@ -388,10 +404,16 @@ module LogStash; module Outputs; class OpenSearch;
   @pool.put(path, nil, LogStash::Json.dump(template))
 end
 
+def legacy_template?()
+  # TODO: Also check Version and return true for < 7.8 even if :legacy_template=false
+  # Need to figure a way to distinguish between OpenSearch, OpenDistro and other
+  # variants, since they have version numbers in different ranges.
+  client_settings.fetch(:legacy_template, true)
+end
+
 def template_endpoint
-  # TODO: Check Version < 7.8 and use index template for >= 7.8 & OpenSearch
   # https://opensearch.org/docs/opensearch/index-templates/
-  '_template'
+  legacy_template?() ? '_template' : '_index_template'
 end
 
 # check whether rollover alias already exists
data/lib/logstash/outputs/opensearch/http_client_builder.rb
CHANGED
@@ -18,7 +18,8 @@ module LogStash; module Outputs; class OpenSearch;
   :pool_max_per_route => params["pool_max_per_route"],
   :check_connection_timeout => params["validate_after_inactivity"],
   :http_compression => params["http_compression"],
-  :headers => params["custom_headers"] || {}
+  :headers => params["custom_headers"] || {},
+  :legacy_template => params["legacy_template"]
 }
 
 client_settings[:proxy] = params["proxy"] if params["proxy"]
@@ -26,7 +27,8 @@ module LogStash; module Outputs; class OpenSearch;
 common_options = {
   :client_settings => client_settings,
   :metric => params["metric"],
-  :resurrect_delay => params["resurrect_delay"]
+  :resurrect_delay => params["resurrect_delay"],
+  :default_server_major_version => params["default_server_major_version"]
 }
 
 if params["sniffing"]
data/lib/logstash/outputs/opensearch/template_manager.rb
CHANGED
@@ -18,7 +18,7 @@ module LogStash; module Outputs; class OpenSearch
   else
     plugin.logger.info("Using a default mapping template", :version => plugin.maximum_seen_major_version,
                        :ecs_compatibility => plugin.ecs_compatibility)
-    template = load_default_template(plugin.maximum_seen_major_version, plugin.ecs_compatibility)
+    template = load_default_template(plugin.maximum_seen_major_version, plugin.ecs_compatibility, plugin.client.legacy_template?)
   end
 
   plugin.logger.debug("Attempting to install template", template: template)
@@ -26,8 +26,8 @@ module LogStash; module Outputs; class OpenSearch
 end
 
 private
-def self.load_default_template(major_version, ecs_compatibility)
-  template_path = default_template_path(major_version, ecs_compatibility)
+def self.load_default_template(major_version, ecs_compatibility, legacy_template)
+  template_path = default_template_path(major_version, ecs_compatibility, legacy_template)
   read_template_file(template_path)
 rescue => e
   fail "Failed to load default template for OpenSearch v#{major_version} with ECS #{ecs_compatibility}; caused by: #{e.inspect}"
@@ -45,9 +45,10 @@ module LogStash; module Outputs; class OpenSearch
   plugin.template_name
 end
 
-def self.default_template_path(major_version, ecs_compatibility=:disabled)
+def self.default_template_path(major_version, ecs_compatibility=:disabled, legacy_template=true)
   template_version = major_version
-
+  suffix = legacy_template ? "" : "_index"
+  default_template_name = "templates/ecs-#{ecs_compatibility}/#{template_version}x#{suffix}.json"
   ::File.expand_path(default_template_name, ::File.dirname(__FILE__))
 end
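The new `suffix` logic selects between the legacy template files and the new composable (`*_index.json`) template files shipped in 2.0. A standalone sketch of the name construction (the helper name is hypothetical; it mirrors `default_template_path` without the filesystem lookup):

```ruby
# Legacy templates keep the old "<major>x.json" name; composable index
# templates, used with the _index_template endpoint, get an "_index" suffix.
def default_template_name(major_version, ecs_compatibility = :disabled, legacy_template = true)
  suffix = legacy_template ? "" : "_index"
  "templates/ecs-#{ecs_compatibility}/#{major_version}x#{suffix}.json"
end

puts default_template_name(7)              # templates/ecs-disabled/7x.json
puts default_template_name(7, :v8, false)  # templates/ecs-v8/7x_index.json
```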
data/lib/logstash/outputs/opensearch/templates/ecs-disabled/1x_index.json
ADDED
@@ -0,0 +1,66 @@
+{
+  "index_patterns": "logstash-*",
+  "version": 60001,
+  "priority": 10,
+  "template": {
+    "settings": {
+      "index.refresh_interval": "5s",
+      "number_of_shards": 1
+    },
+    "mappings": {
+      "dynamic_templates": [
+        {
+          "message_field": {
+            "path_match": "message",
+            "match_mapping_type": "string",
+            "mapping": {
+              "type": "text",
+              "norms": false
+            }
+          }
+        },
+        {
+          "string_fields": {
+            "match": "*",
+            "match_mapping_type": "string",
+            "mapping": {
+              "type": "text",
+              "norms": false,
+              "fields": {
+                "keyword": {
+                  "type": "keyword",
+                  "ignore_above": 256
+                }
+              }
+            }
+          }
+        }
+      ],
+      "properties": {
+        "@timestamp": {
+          "type": "date"
+        },
+        "@version": {
+          "type": "keyword"
+        },
+        "geoip": {
+          "dynamic": true,
+          "properties": {
+            "ip": {
+              "type": "ip"
+            },
+            "location": {
+              "type": "geo_point"
+            },
+            "latitude": {
+              "type": "half_float"
+            },
+            "longitude": {
+              "type": "half_float"
+            }
+          }
+        }
+      }
+    }
+  }
+}