fluent-plugin-elasticsearch 0.8.0 → 0.9.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: c59dd5df9cd9a3ae7cb7e32b79fce58855888dbe
- data.tar.gz: 3bb7808c913b37088eb610bb62a59394da068f0a
+ metadata.gz: d8a7cd259caf8132773810062c7823f800a0a6cd
+ data.tar.gz: d8f23990fb581c5b965f353afaf245d63334f25f
  SHA512:
- metadata.gz: dcd423ab475f32322b88467b74456c290196d5dc9ffb9afddbaa845a91852875f64eedaf84edd77f68c56a56db99102d96077cd31e8ce3fc691a370976ed905e
- data.tar.gz: f7bd63fe77c03c7d9e889869c1cc06d1f4a266be4eb503bfdd708b6cee133c1dde5bb03e0415ff85606a8d94f377837ed2962b5d912d90eb45fbfd36635a2542
+ metadata.gz: 26d71e33553eee2abf69ee9b41b30b5cd0e7c59b5a5f0241ec16edca9f65f1ea49695add3e69f14e2ec37680ea2d042ec4fe918fbadc9a4ba8ce700efde939e1
+ data.tar.gz: 5af60188c0ddc9097c573f36acfe9b3eab207ba556b5fa1e7e2f668122ecc86b07f7ad8561bb9c073afa6e3f097c26d3ef3d8180c3fcf8a49226d616fe1a81f4
data/History.md CHANGED
@@ -2,6 +2,9 @@
  
  ### Future
  
+ ### 0.9.0
+ - Add `ssl_verify` option (#108)
+
  ### 0.8.0
  - Replace Patron with Excon HTTP client
  
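The `ssl_verify` option added in this release is set like any other option in a fluentd match section; a minimal sketch (the tag, host, and port values are illustrative, not from this diff):

```
<match my.logs>
  type elasticsearch
  host localhost
  port 9200
  scheme https
  ssl_verify false # new in 0.9.0: skip certificate verification (defaults to true)
</match>
```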
data/README.md CHANGED
@@ -31,6 +31,8 @@ This plugin creates ElasticSearch indices by merely writing to them. Consider us
  
  **More options:**
  
+ **hosts**
+
  ```
  hosts host1:port1,host2:port2,host3:port3
  ```
@@ -45,6 +47,8 @@ You can specify multiple elasticsearch hosts with separator ",".
  
  If you specify multiple hosts, this plugin will load balance updates to elasticsearch. This is an [elasticsearch-ruby](https://github.com/elasticsearch/elasticsearch-ruby) feature; the default strategy is round-robin.
  
+ **user, password, path, scheme, ssl_verify**
+
  If you specify this option, host and port options are ignored.
  
  ```
@@ -56,6 +60,9 @@ scheme https
  
  You can specify user and password for HTTP basic auth. If used in conjunction with a hosts list, then these options will be used by default, i.e. if you do not provide any of these options within the hosts listed.
  
+ Specify `ssl_verify false` to skip ssl verification (defaults to true)
+
+ **logstash_format**
  
  ```
  logstash_format true # defaults to false
@@ -63,16 +70,22 @@ logstash_format true # defaults to false
  
  This is meant to make writing data into elasticsearch compatible to what logstash writes. By doing this, one could take advantage of [kibana](http://kibana.org/).
  
+ **logstash_prefix**
+
  ```
  logstash_prefix mylogs # defaults to "logstash"
  ```
  
- By default, the records inserted into index `logstash-YYMMDD`. This option allows to insert into specified index like `mylogs-YYMMDD`.
+ **logstash_dateformat**
+
+ By default, the records are inserted into index `logstash-YYMMDD`. This option allows inserting into a specified index like `mylogs-YYYYMM` for a monthly index.
  
  ```
  logstash_dateformat %Y.%m. # defaults to "%Y.%m.%d"
  ```
  
+ **time_key**
+
  By default, when inserting records in logstash format, @timestamp is dynamically created with the time at log ingestion. If you'd like to use a custom time, include an @timestamp with your record.
  
  ```
@@ -105,7 +118,7 @@ The output will be
  }
  ```
  
- By default, the records inserted into index `logstash-YYMMDD`. This option allows to insert into specified index like `logstash-YYYYMM` for a monthly index.
+ **utc_index**
  
  ```
  utc_index true
@@ -113,6 +126,8 @@ utc_index true
  
  By default, the records are inserted into index `logstash-YYMMDD` with utc (Coordinated Universal Time). This option allows using local time if you set utc_index to false.
  
+ **request_timeout**
+
  ```
  request_timeout 15s # defaults to 5s
  ```
@@ -121,13 +136,15 @@ You can specify HTTP request timeout.
  
  This is useful when Elasticsearch cannot return a response for a bulk request within the default of 5 seconds.
  
+ **reload_connections**
+
  ```
  reload_connections false # defaults to true
  ```
  
- You can tune how the elasticsearch-transport host reloading feature works. By default it will reload the host list from the server
- every 10,000th request to spread the load. This can be an issue if your ElasticSearch cluster is behind a Reverse Proxy,
- as fluentd process may not have direct network access to the ElasticSearch nodes.
+ **reload_on_failure**
+
+ You can tune how the elasticsearch-transport host reloading feature works. By default it will reload the host list from the server every 10,000th request to spread the load. This can be an issue if your ElasticSearch cluster is behind a Reverse Proxy, as the fluentd process may not have direct network access to the ElasticSearch nodes.
  
  ```
  reload_on_failure true # defaults to false
@@ -136,7 +153,7 @@ reload_on_failure true # defaults to false
  Indicates that the elasticsearch-transport will try to reload the nodes addresses if there is a failure while making the
  request, this can be useful to quickly remove a dead node from the list of addresses.
  
- ---
+ **include_tag_key, tag_key**
  
  ```
  include_tag_key true # defaults to false
@@ -159,7 +176,7 @@ The record inserted into elasticsearch would be
  {"_key":"my.logs", "name":"Johnny Doeie"}
  ```
  
- ---
+ **id_key**
  
  ```
  id_key request_id # use "request_id" field as a record id in ES
@@ -174,7 +191,7 @@ This following record `{"name":"Johnny","request_id":"87d89af7daffad6"}` will tr
  { "name": "Johnny", "request_id": "87d89af7daffad6" }
  ```
  
- ---
+ **Buffered output options**
  
  fluent-plugin-elasticsearch is a buffered output that uses elasticsearch's bulk API. So additional buffer configuration would be (with default values):
  
@@ -186,9 +203,9 @@ retry_wait 1.0
  num_threads 1
  ```
  
- ---
+ **Not seeing a config you need?**
  
- Please consider using [fluent-plugin-forest](https://github.com/tagomoris/fluent-plugin-forest) to send multiple logs to multiple ElasticSearch indices:
+ We try to keep the scope of this plugin small. If you need more configuration options, please consider using [fluent-plugin-forest](https://github.com/tagomoris/fluent-plugin-forest). For example, to configure multiple tags to be sent to different ElasticSearch indices:
  
  ```
  <match my.logs.*>
fluent-plugin-elasticsearch.gemspec CHANGED
@@ -3,9 +3,9 @@ $:.push File.expand_path('../lib', __FILE__)
  
  Gem::Specification.new do |s|
    s.name = 'fluent-plugin-elasticsearch'
-   s.version = '0.8.0'
+   s.version = '0.9.0'
    s.authors = ['diogo', 'pitr']
-   s.email = ['pitr@uken.com', 'diogo@uken.com']
+   s.email = ['pitr.vern@gmail.com', 'diogo@uken.com']
    s.description = %q{ElasticSearch output plugin for Fluent event collector}
    s.summary = s.description
    s.homepage = 'https://github.com/uken/fluent-plugin-elasticsearch'
lib/fluent/plugin/out_elasticsearch.rb CHANGED
@@ -28,6 +28,7 @@ class Fluent::ElasticsearchOutput < Fluent::BufferedOutput
    config_param :reload_connections, :bool, :default => true
    config_param :reload_on_failure, :bool, :default => false
    config_param :time_key, :string, :default => nil
+   config_param :ssl_verify, :bool, :default => true
  
    include Fluent::SetTagKeyMixin
    config_set_default :include_tag_key, false
@@ -53,7 +54,8 @@ class Fluent::ElasticsearchOutput < Fluent::BufferedOutput
          reload_on_failure: @reload_on_failure,
          retry_on_failure: 5,
          transport_options: {
-           request: { timeout: @request_timeout }
+           request: { timeout: @request_timeout },
+           ssl: { verify: @ssl_verify }
          }
        }), &adapter_conf)
      es = Elasticsearch::Client.new transport: transport
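The effect of the change above can be sketched in plain Ruby. This is an illustrative, stand-alone options hash, not the plugin's actual `client` method; the local `ssl_verify` and `request_timeout` variables stand in for the `@ssl_verify` and `@request_timeout` config_params:

```ruby
# Illustrative sketch: the new ssl: { verify: ... } key sits alongside the
# existing request timeout inside transport_options.
ssl_verify      = false # stand-in for @ssl_verify (e.g. `ssl_verify false`)
request_timeout = 5     # stand-in for @request_timeout, in seconds

client_options = {
  reload_connections: true,
  reload_on_failure:  false,
  retry_on_failure:   5,
  transport_options: {
    request: { timeout: request_timeout },
    ssl:     { verify: ssl_verify }       # new in 0.9.0
  }
}

puts client_options[:transport_options][:ssl][:verify] # => false
```

When `ssl_verify false` is set, the elasticsearch-ruby transport passes `verify: false` down to the HTTP adapter, so self-signed certificates no longer abort bulk requests.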
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: fluent-plugin-elasticsearch
  version: !ruby/object:Gem::Version
-   version: 0.8.0
+   version: 0.9.0
  platform: ruby
  authors:
  - diogo
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2015-04-16 00:00:00.000000000 Z
+ date: 2015-06-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: fluentd
@@ -83,7 +83,7 @@ dependencies:
      version: '1'
  description: ElasticSearch output plugin for Fluent event collector
  email:
- - pitr@uken.com
+ - pitr.vern@gmail.com
  - diogo@uken.com
  executables: []
  extensions: []
@@ -128,4 +128,3 @@ summary: ElasticSearch output plugin for Fluent event collector
  test_files:
  - test/helper.rb
  - test/plugin/test_out_elasticsearch.rb
- has_rdoc: