fluent-plugin-viaq_data_model 0.0.5 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 72e1031bd59c7056bff5416a8877658c4850d7fb
- data.tar.gz: e9e76b523fca3e2c3eb00800a7e00256adc90db3
+ metadata.gz: 129a8f3798e87df17888fe973bc9215a4fe9c42c
+ data.tar.gz: a22524738b672742e90dd2a626c6126e8b00f97d
  SHA512:
- metadata.gz: 9cc93bc210c483b56dd05ce87efd6b9f54dc067c619dbe90d087002f10b01729b3ca21ac7c80024ec07b88aa852609590b1f5b83479287fb0228d2bcc0438801
- data.tar.gz: 6a398beddcc833b48ba06bec215b70cd415a5b8d661df7d0fa4b7d832e9ecc23f2d558623b9817055956a90874d896e55ed966f3b8a6f1040b47f7e8540084af
+ metadata.gz: 00f295b2a060138b84c041b59785f785cfce8a51f1a6754574c9fb0a1160e85a1ba16757806373f23721781939ad824ae871cfbdd99e6140655b26ac6b0a782c
+ data.tar.gz: 0abc2a05dda5dc57a29c8982176827a7dc3e654415c13a26af7d8e5ad9ec98228a3666fd9a281a59155509c428a97ba6290e8a9d498fc41dd3485b3cf31a4886
data/.travis.yml CHANGED
@@ -6,7 +6,11 @@ rvm:
  - 2.3.1
 
  gemfile:
- - Gemfile
+ - Gemfile
+
+ env:
+ - FLUENTD_VERSION=0.12.0
+ - FLUENTD_VERSION=0.14.0
 
  script: bundle exec rake test
  sudo: false
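
With this change the Travis matrix runs the test suite once per supported Fluentd series (0.12 and 0.14); the `FLUENTD_VERSION` variable is consumed by the gemspec (below) to pin the `fluentd` dependency for each job.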
data/README.md CHANGED
@@ -29,6 +29,31 @@ You cannot set the `@timestamp` field in a Fluentd `record_transformer` filter.
  The plugin allows you to use some other field e.g. `time` and have that "moved"
  to a top level field called `@timestamp`.
 
+ * Converts systemd and json-file logs to ViaQ data model format
+
+   Doing this conversion in a `record_transformer` with embedded Ruby code is very
+   resource intensive. The ViaQ plugin can convert common input formats such as
+   Kubernetes `json-file`, `/var/log/messages`, and systemd `journald` into their
+   corresponding ViaQ `_default_`, `systemd`, `kubernetes`, and
+   `pipeline_metadata` namespaced fields. The `pipeline_metadata` is added
+   to all records, regardless of tag. Use the `pipeline_type` parameter to
+   specify which part of the pipeline this is, `collector` or `normalizer`.
+   The ViaQ data model conversion is applied only to records whose tags match a
+   `tag` pattern given in a `formatter` section.
+
+ * Creates Elasticsearch index names or prefixes
+
+   You can create either a full Elasticsearch index name for the record (to be
+   used with the `fluent-plugin-elasticsearch` `target_index_key` parameter), or
+   an index name prefix (missing the date/timestamp part of the index
+   name - to be used with `logstash_prefix_key`). To use this, create an
+   `elasticsearch_index_name` section, specify the `tag` pattern to match, and
+   the `name_type` of index name to create. By default, a prefix name is
+   stored in the `viaq_index_prefix` field in the record, and a full name is
+   stored in the `viaq_index_name` field. Configure
+   `elasticsearch_index_name_field` or `elasticsearch_index_prefix_field` to use a
+   different field name.
+
  ## Configuration
 
  NOTE: All fields are Optional - no required fields.
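
A minimal filter configuration combining the formatter and index-name features described above might look like the following sketch (the tag patterns and `remove_keys` list are illustrative, not prescriptive):

    <filter **>
      @type viaq_data_model
      pipeline_type collector
      <formatter>
        tag "kubernetes.var.log.containers**"
        type k8s_json_file
        remove_keys log,stream
      </formatter>
      <elasticsearch_index_name>
        tag "**"
        name_type project_full
      </elasticsearch_index_name>
    </filter>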
@@ -70,6 +95,49 @@ See `filter-viaq_data_model.conf` for an example filter configuration.
  * `dest_time_name` - string - default `@timestamp`
    * This is the name of the top level field to hold the time value. The value
      is taken from the value of the `src_time_name` field.
+ * `formatter` - a formatter for a well known common data model source
+   * `type` - one of the well known sources
+     * `sys_journal` - a record read from the systemd journal
+     * `k8s_journal` - a Kubernetes container record read from the systemd
+       journal - should have `CONTAINER_NAME`, `CONTAINER_ID_FULL`
+     * `sys_var_log` - a record read from `/var/log/messages`
+     * `k8s_json_file` - a record read from a `/var/log/containers/*.log` JSON
+       formatted container log file
+   * `tag` - the Fluentd tag pattern to match for these records
+   * `remove_keys` - comma delimited list of keys to remove from the record
+ * `pipeline_type` - which part of the pipeline is this? `collector` or
+   `normalizer` - the default is `collector`
+ * `elasticsearch_index_name` - how to construct Elasticsearch index names or
+   prefixes for given tags
+   * `tag` - the Fluentd tag pattern to match for these records
+   * `name_type` - the well known type of index name or prefix to create -
+     `operations_full, project_full, operations_prefix, project_prefix` - The
+     `operations_*` types will create a name like `.operations`, and the
+     `project_*` types will create a name like
+     `project.<namespace_name>.<namespace_id>` (taken from the record's
+     `kubernetes` field). When using the `full` types, a delimiter `.` followed
+     by the date in `YYYY.MM.DD` format is added to the string to make a full
+     index name. When using the `prefix` types, it is assumed that
+     `fluent-plugin-elasticsearch` is used with `logstash_prefix_key` to
+     create the full index name.
+ * `elasticsearch_index_name_field` - name of the field in the record which stores
+   the index name - you should remove this field in the elasticsearch output
+   plugin using the `remove_keys` config parameter - default is `viaq_index_name`
+ * `elasticsearch_index_prefix_field` - name of the field in the record which stores
+   the index prefix - you should remove this field in the elasticsearch output
+   plugin using the `remove_keys` config parameter - default is `viaq_index_prefix`
+
+ **NOTE** The `formatter` blocks are matched in the order given in the file.
+ That is, do not use `tag "**"` in the first formatter, or none of the
+ others will be matched or evaluated.
+
+ **NOTE** The `elasticsearch_index_name` processing is done *last*, *after* the
+ formatting, removal of empty fields, `@timestamp` creation, etc., so use
+ e.g. `record['systemd']['t']['GID']` instead of `record['_GID']`
+
+ **NOTE** The `elasticsearch_index_name` blocks are matched in the order given
+ in the file. That is, do not use `tag "**"` in the first block, or none
+ of the others will be matched or evaluated.
 
  ## Example
 
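Because both `formatter` and `elasticsearch_index_name` blocks are matched in the order given, a safe ordering puts the most specific patterns first, for example (tag patterns illustrative):

    # the more specific container-journal tag must precede the catch-all journal tag
    <formatter>
      tag "kubernetes.journal.container**"
      type k8s_journal
    </formatter>
    <formatter>
      tag "journal**"
      type sys_journal
    </formatter>
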
@@ -103,6 +171,95 @@ The resulting record, using the defaults, would look like this:
  "@timestamp": "2017-02-13 15:30:10.259106596-07:00"
  }
 
+ ## Formatter example
+
+ Given a record like the following, with a tag of `journal.system`:
+
+     __REALTIME_TIMESTAMP=1502228121310282
+     __MONOTONIC_TIMESTAMP=722903835100
+     _BOOT_ID=d85e8a9d524c4a419bcfb6598db78524
+     _TRANSPORT=syslog
+     PRIORITY=6
+     SYSLOG_FACILITY=3
+     SYSLOG_IDENTIFIER=dnsmasq-dhcp
+     SYSLOG_PID=2289
+     _PID=2289
+     _UID=99
+     _GID=40
+     _COMM=dnsmasq
+     _EXE=/usr/sbin/dnsmasq
+     _CMDLINE=/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
+     _CAP_EFFECTIVE=3400
+     _SYSTEMD_CGROUP=/system.slice/libvirtd.service
+     MESSAGE=my message
+
+ Using a configuration like this:
+
+     <formatter>
+       tag "journal.system**"
+       type sys_journal
+       remove_keys log,stream,MESSAGE,_SOURCE_REALTIME_TIMESTAMP,__REALTIME_TIMESTAMP,CONTAINER_ID,CONTAINER_ID_FULL,CONTAINER_NAME,PRIORITY,_BOOT_ID,_CAP_EFFECTIVE,_CMDLINE,_COMM,_EXE,_GID,_HOSTNAME,_MACHINE_ID,_PID,_SELINUX_CONTEXT,_SYSTEMD_CGROUP,_SYSTEMD_SLICE,_SYSTEMD_UNIT,_TRANSPORT,_UID,_AUDIT_LOGINUID,_AUDIT_SESSION,_SYSTEMD_OWNER_UID,_SYSTEMD_SESSION,_SYSTEMD_USER_UNIT,CODE_FILE,CODE_FUNCTION,CODE_LINE,ERRNO,MESSAGE_ID,RESULT,UNIT,_KERNEL_DEVICE,_KERNEL_SUBSYSTEM,_UDEV_SYSNAME,_UDEV_DEVNODE,_UDEV_DEVLINK,SYSLOG_FACILITY,SYSLOG_IDENTIFIER,SYSLOG_PID
+     </formatter>
+
+ The resulting record will look like this:
+
+     {
+       "systemd": {
+         "t": {
+           "BOOT_ID": "d85e8a9d524c4a419bcfb6598db78524",
+           "GID": 40,
+           ...
+         },
+         "u": {
+           "SYSLOG_FACILITY": 3,
+           "SYSLOG_IDENTIFIER": "dnsmasq-dhcp",
+           ...
+         }
+       },
+       "message": "my message",
+       ...
+     }
+
+ ## Elasticsearch index name example
+
+ Given a configuration like this:
+
+     <elasticsearch_index_name>
+       tag "journal.system** system.var.log** **_default_** **_openshift_** **_openshift-infra_** mux.ops"
+       name_type operations_full
+     </elasticsearch_index_name>
+     <elasticsearch_index_name>
+       tag "**"
+       name_type project_full
+     </elasticsearch_index_name>
+     elasticsearch_index_name_field viaq_index_name
+
+ A record with tag `journal.system` like this:
+
+     {
+       "@timestamp": "2017-07-27T17:27:46.216527+00:00"
+     }
+
+ will end up looking like this:
+
+     {
+       "@timestamp": "2017-07-27T17:27:46.216527+00:00",
+       "viaq_index_name": ".operations.2017.07.27"
+     }
+
+ A record with tag `kubernetes.journal.container` like this:
+
+     {
+       "@timestamp": "2017-07-27T17:27:46.216527+00:00",
+       "kubernetes": {"namespace_name": "myproject", "namespace_id": "000000"}
+     }
+
+ will end up looking like this:
+
+     {
+       "@timestamp": "2017-07-27T17:27:46.216527+00:00",
+       "kubernetes": {"namespace_name": "myproject", "namespace_id": "000000"},
+       "viaq_index_name": "project.myproject.000000.2017.07.27"
+     }
 
  ## Installation
 
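When using one of the `full` name types, the index name field can be consumed directly by `fluent-plugin-elasticsearch`; a sketch (the `host`/`port` values are illustrative), using `remove_keys` to drop the bookkeeping field as recommended above:

    <match **>
      @type elasticsearch
      host logging-es
      port 9200
      target_index_key viaq_index_name
      remove_keys viaq_index_name
    </match>
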
data/fluent-plugin-viaq_data_model.gemspec CHANGED
@@ -2,9 +2,12 @@
  lib = File.expand_path('../lib', __FILE__)
  $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
 
+ # can override for testing
+ FLUENTD_VERSION = ENV['FLUENTD_VERSION'] || "0.12.0"
+
  Gem::Specification.new do |gem|
    gem.name = "fluent-plugin-viaq_data_model"
-   gem.version = "0.0.5"
+   gem.version = "0.0.6"
    gem.authors = ["Rich Megginson"]
    gem.email = ["rmeggins@redhat.com"]
    gem.description = %q{Filter plugin to ensure data is in the ViaQ common data model}
@@ -20,12 +23,13 @@ Gem::Specification.new do |gem|
 
    gem.required_ruby_version = '>= 2.0.0'
 
-   gem.add_runtime_dependency "fluentd", ">= 0.12.0"
+   gem.add_runtime_dependency "fluentd", "~> #{FLUENTD_VERSION}"
 
    gem.add_development_dependency "bundler"
-   gem.add_development_dependency("fluentd", ">= 0.12.0")
+   gem.add_development_dependency("fluentd", "~> #{FLUENTD_VERSION}")
    gem.add_development_dependency("rake", ["~> 11.0"])
    gem.add_development_dependency("rr", ["~> 1.0"])
    gem.add_development_dependency("test-unit", ["~> 3.2"])
    gem.add_development_dependency("test-unit-rr", ["~> 1.0"])
+   gem.add_development_dependency("flexmock", ["~> 2.0"])
  end
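
Since the gemspec reads `FLUENTD_VERSION` when it is evaluated, the dependency pin can be switched per run; presumably something like `FLUENTD_VERSION=0.14.0 bundle update fluentd && bundle exec rake test` (the exact invocation depends on the Bundler setup) reproduces locally what the Travis matrix above does.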
data/lib/fluent/plugin/filter_viaq_data_model.rb CHANGED
@@ -15,11 +15,42 @@
  # See the License for the specific language governing permissions and
  # limitations under the License.
  #
+ require 'time'
+ require 'date'
+
  require 'fluent/filter'
  require 'fluent/log'
+ require 'fluent/match'
+
+ require_relative 'filter_viaq_data_model_systemd'
+
+ begin
+   ViaqMatchClass = Fluent::Match
+ rescue
+   # Fluent::Match not provided with 0.14
+   class ViaqMatchClass
+     def initialize(pattern_str, unused)
+       patterns = pattern_str.split(/\s+/).map {|str|
+         Fluent::MatchPattern.create(str)
+       }
+       if patterns.length == 1
+         @pattern = patterns[0]
+       else
+         @pattern = Fluent::OrMatchPattern.new(patterns)
+       end
+     end
+     def match(tag)
+       @pattern.match(tag)
+     end
+     def to_s
+       "#{@pattern}"
+     end
+   end
+ end
 
  module Fluent
    class ViaqDataModelFilter < Filter
+     include ViaqDataModelFilterSystemd
      Fluent::Plugin.register_filter('viaq_data_model', self)
 
      desc 'Default list of comma-delimited fields to keep in each record'
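
The `begin`/`rescue` block above is a compatibility shim: on Fluentd 0.12 the plugin uses `Fluent::Match` directly, while on 0.14, where that class is no longer provided, it builds an equivalent from `Fluent::MatchPattern`, so the rest of the code can call `matcher.match(tag)` the same way on either series.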
@@ -59,6 +90,50 @@ module Fluent
      desc 'Name of destination timestamp field'
      config_param :dest_time_name, :string, default: '@timestamp'
 
+     # <formatter>
+     #   type sys_journal
+     #   tag "journal.system**"
+     #   remove_keys log,stream,MESSAGE,_SOURCE_REALTIME_TIMESTAMP,__REALTIME_TIMESTAMP,CONTAINER_ID,CONTAINER_ID_FULL,CONTAINER_NAME,PRIORITY,_BOOT_ID,_CAP_EFFECTIVE,_CMDLINE,_COMM,_EXE,_GID,_HOSTNAME,_MACHINE_ID,_PID,_SELINUX_CONTEXT,_SYSTEMD_CGROUP,_SYSTEMD_SLICE,_SYSTEMD_UNIT,_TRANSPORT,_UID,_AUDIT_LOGINUID,_AUDIT_SESSION,_SYSTEMD_OWNER_UID,_SYSTEMD_SESSION,_SYSTEMD_USER_UNIT,CODE_FILE,CODE_FUNCTION,CODE_LINE,ERRNO,MESSAGE_ID,RESULT,UNIT,_KERNEL_DEVICE,_KERNEL_SUBSYSTEM,_UDEV_SYSNAME,_UDEV_DEVNODE,_UDEV_DEVLINK,SYSLOG_FACILITY,SYSLOG_IDENTIFIER,SYSLOG_PID
+     # </formatter>
+     # formatters will be processed in the order specified, so make sure more specific matches
+     # come before more general matches
+     desc 'Formatters for common data model, for well known record types'
+     config_section :formatter, param_name: :formatters do
+       desc 'one of the well known formatter types'
+       config_param :type, :enum, list: [:sys_journal, :k8s_journal, :sys_var_log, :k8s_json_file]
+       desc 'process records with this tag pattern'
+       config_param :tag, :string
+       desc 'remove these keys from the record - same as record_transformer "remove_keys" field'
+       config_param :remove_keys, :string, default: nil
+     end
+
+     desc 'Which part of the pipeline is this - collector, normalizer, etc. for pipeline_metadata'
+     config_param :pipeline_type, :enum, list: [:collector, :normalizer], default: :collector
+
+     # e.g.
+     # <elasticsearch_index_name>
+     #   tag "journal.system** system.var.log** **_default_** **_openshift_** **_openshift-infra_** mux.ops"
+     #   name_type operations_full
+     # </elasticsearch_index_name>
+     # <elasticsearch_index_name>
+     #   tag "**"
+     #   name_type project_full
+     # </elasticsearch_index_name>
+     # operations_full - ".operations.YYYY.MM.DD"
+     # operations_prefix - ".operations"
+     # project_full - "project.${kubernetes.namespace_name}.${kubernetes.namespace_id}.YYYY.MM.DD"
+     # project_prefix - "project.${kubernetes.namespace_name}.${kubernetes.namespace_id}"
+     # index names will be processed in the order specified, so make sure more specific matches
+     # come before more general matches e.g. make sure tag "**" is last
+     desc 'Construct Elasticsearch index names or prefixes based on the matching tags pattern and type'
+     config_section :elasticsearch_index_name, param_name: :elasticsearch_index_names do
+       config_param :tag, :string
+       config_param :name_type, :enum, list: [:operations_full, :project_full, :operations_prefix, :project_prefix]
+     end
+     desc 'Store the Elasticsearch index name in this field'
+     config_param :elasticsearch_index_name_field, :string, default: 'viaq_index_name'
+     desc 'Store the Elasticsearch index prefix in this field'
+     config_param :elasticsearch_index_prefix_field, :string, default: 'viaq_index_prefix'
 
      def configure(conf)
        super
@@ -76,6 +151,44 @@ module Fluent
        if (@rename_time || @rename_time_if_not_exist) && @use_undefined && !@keep_fields.key?(@src_time_name)
          raise Fluent::ConfigError, "Field [#{@src_time_name}] must be listed in default_keep_fields or extra_keep_fields"
        end
+       if @formatters
+         @formatters.each do |fmtr|
+           matcher = ViaqMatchClass.new(fmtr.tag, nil)
+           fmtr.instance_eval{ @params[:matcher] = matcher }
+           fmtr.instance_eval{ @params[:fmtr_type] = fmtr.type }
+           if fmtr.remove_keys
+             fmtr.instance_eval{ @params[:fmtr_remove_keys] = fmtr.remove_keys.split(',') }
+           else
+             fmtr.instance_eval{ @params[:fmtr_remove_keys] = nil }
+           end
+           case fmtr.type
+           when :sys_journal, :k8s_journal
+             fmtr_func = method(:process_journal_fields)
+           when :sys_var_log
+             fmtr_func = method(:process_sys_var_log_fields)
+           when :k8s_json_file
+             fmtr_func = method(:process_k8s_json_file_fields)
+           end
+           fmtr.instance_eval{ @params[:fmtr_func] = fmtr_func }
+         end
+         @formatter_cache = {}
+         @formatter_cache_nomatch = {}
+       end
+       begin
+         @docker_hostname = File.open('/etc/docker-hostname') { |f| f.readline }.rstrip
+       rescue
+         @docker_hostname = nil
+       end
+       @ipaddr4 = ENV['IPADDR4'] || '127.0.0.1'
+       @ipaddr6 = ENV['IPADDR6'] || '::1'
+       @pipeline_version = (ENV['FLUENTD_VERSION'] || 'unknown fluentd version') + ' ' + (ENV['DATA_VERSION'] || 'unknown data version')
+       # create the elasticsearch index name tag matchers
+       unless @elasticsearch_index_names.empty?
+         @elasticsearch_index_names.each do |ein|
+           matcher = ViaqMatchClass.new(ein.tag, nil)
+           ein.instance_eval{ @params[:matcher] = matcher }
+         end
+       end
      end
 
      def start
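
For deployers: `configure` derives several values from the runtime environment - the hostname fallback is read from `/etc/docker-hostname` if that file exists, `IPADDR4`/`IPADDR6` populate the `pipeline_metadata` addresses (defaulting to `127.0.0.1`/`::1`), and `FLUENTD_VERSION` plus `DATA_VERSION` are combined into the reported pipeline `version`. `CDM_DEBUG` and `CDM_DEBUG_IGNORE_TAG` (used in `filter` below) toggle per-record debug logging.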
@@ -104,12 +217,135 @@ module Fluent
        thing
      end
 
+     def process_sys_var_log_fields(tag, time, record, fmtr_type = nil)
+       record['systemd'] = {"t" => {"PID" => record['pid']}, "u" => {"SYSLOG_IDENTIFIER" => record['ident']}}
+       rectime = record['time'] || time
+       # handle the case where the time reported in /var/log/messages is for a previous year
+       if Time.at(rectime) > Time.now
+         record['time'] = Time.new((rectime.year - 1), rectime.month, rectime.day, rectime.hour, rectime.min, rectime.sec, rectime.utc_offset).utc.to_datetime.rfc3339(6)
+       else
+         record['time'] = rectime.utc.to_datetime.rfc3339(6)
+       end
+       if record['host'].eql?('localhost') && @docker_hostname
+         record['hostname'] = @docker_hostname
+       else
+         record['hostname'] = record['host']
+       end
+     end
+
+     def process_k8s_json_file_fields(tag, time, record, fmtr_type = nil)
+       record['message'] = record['message'] || record['log']
+       record['level'] = (record['stream'] == 'stdout') ? 'info' : 'err'
+       if record['kubernetes'] && record['kubernetes']['host']
+         record['hostname'] = record['kubernetes']['host']
+       elsif @docker_hostname
+         record['hostname'] = @docker_hostname
+       end
+       record['time'] = record['time'].utc.to_datetime.rfc3339(6)
+     end
+
+     def check_for_match_and_format(tag, time, record)
+       return unless @formatters
+       return if @formatter_cache_nomatch[tag]
+       fmtr = @formatter_cache[tag]
+       unless fmtr
+         idx = @formatters.index{|fmtr| fmtr.matcher.match(tag)}
+         if idx
+           fmtr = @formatters[idx]
+           @formatter_cache[tag] = fmtr
+         else
+           @formatter_cache_nomatch[tag] = true
+           return
+         end
+       end
+       fmtr.fmtr_func.call(tag, time, record, fmtr.fmtr_type)
+
+       if record['time'].nil?
+         record['time'] = Time.at(time).utc.to_datetime.rfc3339(6)
+       end
+
+       if fmtr.fmtr_remove_keys
+         fmtr.fmtr_remove_keys.each{|k| record.delete(k)}
+       end
+     end
+
+     def add_pipeline_metadata(tag, time, record)
+       (record['pipeline_metadata'] ||= {})[@pipeline_type.to_s] = {
+         "ipaddr4" => @ipaddr4,
+         "ipaddr6" => @ipaddr6,
+         "inputname" => "fluent-plugin-systemd",
+         "name" => "fluentd",
+         "received_at" => Time.at(time).utc.to_datetime.rfc3339(6),
+         "version" => @pipeline_version
+       }
+     end
+
+     def add_elasticsearch_index_name_field(tag, time, record)
+       found = false
+       @elasticsearch_index_names.each do |ein|
+         if ein.matcher.match(tag)
+           found = true
+           if ein.name_type == :operations_full || ein.name_type == :project_full
+             field_name = @elasticsearch_index_name_field
+             need_time = true
+           else
+             field_name = @elasticsearch_index_prefix_field
+             need_time = false
+           end
+
+           case ein.name_type
+           when :operations_full, :operations_prefix
+             prefix = ".operations"
+           when :project_full, :project_prefix
+             if (k8s = record['kubernetes']).nil?
+               log.error("record cannot use elasticsearch index name type #{ein.name_type}: record is missing kubernetes field: #{record}")
+               break
+             elsif (name = k8s['namespace_name']).nil?
+               log.error("record cannot use elasticsearch index name type #{ein.name_type}: record is missing kubernetes.namespace_name field: #{record}")
+               break
+             elsif (uuid = k8s['namespace_id']).nil?
+               log.error("record cannot use elasticsearch index name type #{ein.name_type}: record is missing kubernetes.namespace_id field: #{record}")
+               break
+             else
+               prefix = "project." + name + "." + uuid
+             end
+           end
+
+           if ENV['CDM_DEBUG']
+             unless tag == ENV['CDM_DEBUG_IGNORE_TAG']
+               log.error("prefix #{prefix} need_time #{need_time} time #{record[@dest_time_name]}")
+             end
+           end
+
+           if need_time
+             ts = DateTime.parse(record[@dest_time_name])
+             record[field_name] = prefix + "." + ts.strftime("%Y.%m.%d")
+           else
+             record[field_name] = prefix
+           end
+           if ENV['CDM_DEBUG']
+             unless tag == ENV['CDM_DEBUG_IGNORE_TAG']
+               log.error("record[#{field_name}] = #{record[field_name]}")
+             end
+           end
+
+           break
+         end
+       end
+       unless found
+         log.warn("no match for tag #{tag}")
+       end
+     end
+
      def filter(tag, time, record)
        if ENV['CDM_DEBUG']
          unless tag == ENV['CDM_DEBUG_IGNORE_TAG']
            log.error("input #{time} #{tag} #{record}")
          end
        end
+
+       check_for_match_and_format(tag, time, record)
+       add_pipeline_metadata(tag, time, record)
        if @use_undefined
          # undefined contains all of the fields not in keep_fields
          undefined = record.reject{|k,v| @keep_fields.key?(k)}
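
For orientation, `add_pipeline_metadata` above leaves a sub-record shaped like the following on every record (the values shown are illustrative; `version` is whatever `FLUENTD_VERSION` and `DATA_VERSION` evaluate to):

    "pipeline_metadata": {
      "collector": {
        "ipaddr4": "127.0.0.1",
        "ipaddr6": "::1",
        "inputname": "fluent-plugin-systemd",
        "name": "fluentd",
        "received_at": "2017-07-27T17:27:46.216527+00:00",
        "version": "0.12.0 unknown data version"
      }
    }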
@@ -132,6 +368,14 @@ module Fluent
            record[@dest_time_name] = val
          end
        end
+
+       if !@elasticsearch_index_names.empty?
+         add_elasticsearch_index_name_field(tag, time, record)
+       elsif ENV['CDM_DEBUG']
+         unless tag == ENV['CDM_DEBUG_IGNORE_TAG']
+           log.error("not adding elasticsearch index name or prefix")
+         end
+       end
        if ENV['CDM_DEBUG']
          unless tag == ENV['CDM_DEBUG_IGNORE_TAG']
            log.error("output #{time} #{tag} #{record}")