fluentd 1.14.2-x86-mingw32 → 1.14.3-x86-mingw32

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 31e0a5c91d071852b3665fad3428d719a3845287525ff384ef345469af3bb22a
- data.tar.gz: 3817219ac8a8159b817edc0508a9ac7d10d5fec5217c86e9d22e599717f4f488
+ metadata.gz: 3ba01ca31fbaac62a1c0ea213d4f4dccd7c1a2a21be730a62f9b82007e8a67e3
+ data.tar.gz: 5b4dbb5aae91e85043e295f037fac15f877a084ee5c6289d09a0e8b4b7cf0c9c
  SHA512:
- metadata.gz: e3830d204bdf01aa1f47793b1a91c8165139af0152b26e5d77ad5d8977e24acde972ed6c1f73a1124e0db43fb9e0ba6668627941edbb9f79b14b9c0e476aa82b
- data.tar.gz: 362a8f72c2dcb31393e49bcc4742e8dc9fdb5e04ec72461882aeaeaefa73e240d8070887ba78871fbe311dd498f917f9a5158c455d404ac5dba195910702cc05
+ metadata.gz: ecf03095be6cc94747984eb828d607a9ddb70a43e88fa32b483ba38450c78ad13d309914d3cf8f5146a6d0d97e0fcc803178090217c861ad09c44e033f5a19c7
+ data.tar.gz: 902ef5dcd2289d72da71495877ec1b8337477f177b81b7c8002a2fa7beec11a2571957539044e16f2bc7c5a6a6be32dbd72e6867d2056a03ff41e216d5a2af22
data/CHANGELOG.md CHANGED
@@ -1,3 +1,33 @@
+ # v1.14.3
+
+ ## Release v1.14.3 - 2021/11/26
+
+ ### Enhancement
+
+ * Changed to accept `http_parser.rb` 0.8.0.
+ `http_parser.rb` 0.8.0 is ready for Ractor.
+ https://github.com/fluent/fluentd/pull/3544
+
+ ### Bug fix
+
+ * in_tail: Fixed a bug that no new logs are read when
+ `enable_stat_watcher true` and `enable_watch_timer false` is set.
+ https://github.com/fluent/fluentd/pull/3541
+ * in_tail: Fixed a bug that the beginning and initial lines are lost
+ after startup when `read_from_head false` and path includes wildcard '*'.
+ https://github.com/fluent/fluentd/pull/3542
+ * Fixed a bug that processing messages were lost when
+ BufferChunkOverflowError was thrown even though only a specific
+ message size exceeds chunk_limit_size.
+ https://github.com/fluent/fluentd/pull/3553
+ https://github.com/fluent/fluentd/pull/3562
+
+ ### Misc
+
+ * Bump up required version of `win32-service` gem.
+ newer version is required to implement additional `fluent-ctl` commands.
+ https://github.com/fluent/fluentd/pull/3556
+
  # v1.14.2

  ## Release v1.14.2 - 2021/10/29
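
The bug fixes above map directly onto the new tests later in this diff. As a quick orientation, here is a minimal sketch of an in_tail setup that exercises both tail fixes (#3541, #3542); it reuses the test-suite helpers shown below (`config_element`, `create_driver`, `SINGLE_LINE_CONFIG`, `TMP_DIR`) and is illustrative only, not part of the release itself.

```ruby
# Illustrative sketch built from the helpers in test_in_tail.rb below.
# A wildcard path combined with stat-watcher-only watching is exactly the
# combination addressed by #3541/#3542.
conf = config_element("ROOT", "", {
  "path" => "#{TMP_DIR}/tail*.txt",  # wildcard path (#3542)
  "tag" => "t1",
  "read_from_head" => false,
  "enable_watch_timer" => false,     # rely on the stat (inotify) watcher only (#3541)
  "enable_stat_watcher" => true,
})
d = create_driver(conf + SINGLE_LINE_CONFIG, false)
```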
data/README.md CHANGED
@@ -88,6 +88,8 @@ You can run specified test via `TEST` environment variable:
 
  A third party security audit was performed by Cure53, you can see the full report [here](docs/SECURITY_AUDIT.pdf).
 
+ See [SECURITY](SECURITY.md) to contact us about vulnerability.
+
  ## Contributors:
 
  Patches contributed by [great developers](https://github.com/fluent/fluentd/contributors).
data/SECURITY.md ADDED
@@ -0,0 +1,18 @@
+ # Security Policy
+
+ ## Supported Versions
+
+ | Version | Supported |
+ | ------- | ------------------ |
+ | 1.14.x | :white_check_mark: |
+ | <= 1.13.x | :x: |
+
+ ## Reporting a Vulnerability
+
+ Please contact to current active maintainers. (in alphabetical order)
+
+ * ashie@clear-code.com
+ * fujimoto@clear-code.com
+ * hatake@calyptia.com
+ * hayashi@clear-code.com
+
data/fluentd.gemspec CHANGED
@@ -23,7 +23,7 @@ Gem::Specification.new do |gem|
  gem.add_runtime_dependency("yajl-ruby", ["~> 1.0"])
  gem.add_runtime_dependency("cool.io", [">= 1.4.5", "< 2.0.0"])
  gem.add_runtime_dependency("serverengine", [">= 2.2.2", "< 3.0.0"])
- gem.add_runtime_dependency("http_parser.rb", [">= 0.5.1", "< 0.8.0"])
+ gem.add_runtime_dependency("http_parser.rb", [">= 0.5.1", "< 0.9.0"])
  gem.add_runtime_dependency("sigdump", ["~> 0.2.2"])
  gem.add_runtime_dependency("tzinfo", [">= 1.0", "< 3.0"])
  gem.add_runtime_dependency("tzinfo-data", ["~> 1.0"])
@@ -35,7 +35,7 @@ Gem::Specification.new do |gem|
  gem.platform = fake_platform unless fake_platform.empty?
  if /mswin|mingw/ =~ fake_platform || (/mswin|mingw/ =~ RUBY_PLATFORM && fake_platform.empty?)
  gem.add_runtime_dependency("win32-api", [">= 1.10", "< 2.0.0"])
- gem.add_runtime_dependency("win32-service", ["~> 2.2.0"])
+ gem.add_runtime_dependency("win32-service", ["~> 2.3.0"])
  gem.add_runtime_dependency("win32-ipc", ["~> 0.7.0"])
  gem.add_runtime_dependency("win32-event", ["~> 0.6.3"])
  gem.add_runtime_dependency("windows-pr", ["~> 1.2.6"])
@@ -332,12 +332,14 @@ module Fluent
  unstaged_chunks = {} # metadata => [chunk, chunk, ...]
  chunks_to_enqueue = []
  staged_bytesizes_by_chunk = {}
+ # track internal BufferChunkOverflowError in write_step_by_step
+ buffer_chunk_overflow_errors = []
 
  begin
  # sort metadata to get lock of chunks in same order with other threads
  metadata_and_data.keys.sort.each do |metadata|
  data = metadata_and_data[metadata]
- write_once(metadata, data, format: format, size: size) do |chunk, adding_bytesize|
+ write_once(metadata, data, format: format, size: size) do |chunk, adding_bytesize, error|
  chunk.mon_enter # add lock to prevent to be committed/rollbacked from other threads
  operated_chunks << chunk
  if chunk.staged?
@@ -352,6 +354,9 @@ module Fluent
  unstaged_chunks[metadata] ||= []
  unstaged_chunks[metadata] << chunk
  end
+ if error && !error.empty?
+ buffer_chunk_overflow_errors << error
+ end
  end
  end
 
@@ -444,6 +449,10 @@ module Fluent
  end
  chunk.mon_exit rescue nil # this may raise ThreadError for chunks already committed
  end
+ unless buffer_chunk_overflow_errors.empty?
+ # Notify delayed BufferChunkOverflowError here
+ raise BufferChunkOverflowError, buffer_chunk_overflow_errors.join(", ")
+ end
  end
  end
 
@@ -716,6 +725,7 @@ module Fluent
 
  def write_step_by_step(metadata, data, format, splits_count, &block)
  splits = []
+ errors = []
  if splits_count > data.size
  splits_count = data.size
  end
@@ -761,18 +771,41 @@ module Fluent
  begin
  while writing_splits_index < splits.size
  split = splits[writing_splits_index]
+ formatted_split = format ? format.call(split) : split.first
+ if split.size == 1 && original_bytesize == 0
+ if format == nil && @compress != :text
+ # The actual size of chunk is not determined until after chunk.append.
+ # so, keep already processed 'split' content here.
+ # (allow performance regression a bit)
+ chunk.commit
+ else
+ big_record_size = formatted_split.bytesize
+ if chunk.bytesize + big_record_size > @chunk_limit_size
+ errors << "a #{big_record_size} bytes record (nth: #{writing_splits_index}) is larger than buffer chunk limit size (#{@chunk_limit_size})"
+ writing_splits_index += 1
+ next
+ end
+ end
+ end
+
  if format
- chunk.concat(format.call(split), split.size)
+ chunk.concat(formatted_split, split.size)
  else
  chunk.append(split, compress: @compress)
  end
 
  if chunk_size_over?(chunk) # split size is larger than difference between size_full? and size_over?
+ adding_bytes = chunk.instance_eval { @adding_bytes } || "N/A" # 3rd party might not have 'adding_bytes'
  chunk.rollback
 
  if split.size == 1 && original_bytesize == 0
- big_record_size = format ? format.call(split).bytesize : split.first.bytesize
- raise BufferChunkOverflowError, "a #{big_record_size}bytes record is larger than buffer chunk limit size"
+ # It is obviously case that BufferChunkOverflowError should be raised here,
+ # but if it raises here, already processed 'split' or
+ # the proceeding 'split' will be lost completely.
+ # so it is a last resort to delay raising such a exception
+ errors << "concatenated/appended a #{adding_bytes} bytes record (nth: #{writing_splits_index}) is larger than buffer chunk limit size (#{@chunk_limit_size})"
+ writing_splits_index += 1
+ next
  end
 
  if chunk_size_full?(chunk) || split.size == 1
@@ -795,7 +828,8 @@ module Fluent
  raise
  end
 
- block.call(chunk, chunk.bytesize - original_bytesize)
+ block.call(chunk, chunk.bytesize - original_bytesize, errors)
+ errors = []
  end
  end
  rescue ShouldRetry
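
The buffer hunks above change how an oversized record is reported: instead of raising `BufferChunkOverflowError` as soon as one split exceeds `chunk_limit_size` (which also threw away the other records split from the same write), the message is appended to `errors`, the oversized split is skipped, and one combined exception is raised after the remaining chunks have been handled. A small standalone sketch of that shape, in plain Ruby with stand-in names (`ChunkOverflowError`, `LIMIT`, an Array as the "chunk"), not fluentd's actual classes:

```ruby
# Standalone illustration of the delayed-overflow pattern used above.
class ChunkOverflowError < StandardError; end

LIMIT = 10 # stand-in for chunk_limit_size

def write_all(records, chunk)
  errors = []
  records.each_with_index do |record, nth|
    if record.bytesize > LIMIT
      # record the problem and keep going instead of aborting the whole write
      errors << "a #{record.bytesize} bytes record (nth: #{nth}) is larger than buffer chunk limit size (#{LIMIT})"
      next
    end
    chunk << record
  end
  # notify the delayed errors only after the writable records are in the chunk
  raise ChunkOverflowError, errors.join(", ") unless errors.empty?
end

chunk = []
begin
  write_all(["a", "x" * 20, "b"], chunk)
rescue ChunkOverflowError => e
  puts e.message  # => a 20 bytes record (nth: 1) is larger than buffer chunk limit size (10)
end
p chunk           # => ["a", "b"]  (the small records were not lost)
```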
@@ -59,6 +59,7 @@ module Fluent::Plugin
  @ignore_list = []
  @shutdown_start_time = nil
  @metrics = nil
+ @startup = true
  end
 
  desc 'The paths to read. Multiple paths can be specified, separated by comma.'
@@ -369,19 +370,25 @@ module Fluent::Plugin
  def refresh_watchers
  target_paths_hash = expand_paths
  existence_paths_hash = existence_path
-
- log.debug { "tailing paths: target = #{target_paths.join(",")} | existing = #{existence_paths.join(",")}" }
+
+ log.debug {
+ target_paths_str = target_paths_hash.collect { |key, target_info| target_info.path }.join(",")
+ existence_paths_str = existence_paths_hash.collect { |key, target_info| target_info.path }.join(",")
+ "tailing paths: target = #{target_paths_str} | existing = #{existence_paths_str}"
+ }
 
  unwatched_hash = existence_paths_hash.reject {|key, value| target_paths_hash.key?(key)}
  added_hash = target_paths_hash.reject {|key, value| existence_paths_hash.key?(key)}
 
  stop_watchers(unwatched_hash, immediate: false, unwatched: true) unless unwatched_hash.empty?
  start_watchers(added_hash) unless added_hash.empty?
+ @startup = false if @startup
  end
 
  def setup_watcher(target_info, pe)
  line_buffer_timer_flusher = @multiline_mode ? TailWatcher::LineBufferTimerFlusher.new(log, @multiline_flush_interval, &method(:flush_buffer)) : nil
- tw = TailWatcher.new(target_info, pe, log, @read_from_head, @follow_inodes, method(:update_watcher), line_buffer_timer_flusher, method(:io_handler), @metrics)
+ read_from_head = !@startup || @read_from_head
+ tw = TailWatcher.new(target_info, pe, log, read_from_head, @follow_inodes, method(:update_watcher), line_buffer_timer_flusher, method(:io_handler), @metrics)
 
  if @enable_watch_timer
  tt = TimerTrigger.new(1, log) { tw.on_notify }
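
The `read_from_head = !@startup || @read_from_head` line above is the heart of the #3542 fix: the configured `read_from_head` only applies to files found during the very first `refresh_watchers` pass, while any file discovered later (for example a new file matching a wildcard path) is read from the head so its first lines are not dropped. A tiny sketch of just that decision, with a hypothetical helper name rather than fluentd API:

```ruby
# @startup is set in #initialize and cleared at the end of the first refresh_watchers.
def effective_read_from_head(startup, configured_read_from_head)
  !startup || configured_read_from_head
end

effective_read_from_head(true,  true)   # => true  : initial scan honours read_from_head true
effective_read_from_head(true,  false)  # => false : initial scan tails from EOF
effective_read_from_head(false, false)  # => true  : file detected after startup is read from head
effective_read_from_head(false, true)   # => true
```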
@@ -389,7 +396,7 @@ module Fluent::Plugin
  end
 
  if @enable_stat_watcher
- tt = StatWatcher.new(path, log) { tw.on_notify }
+ tt = StatWatcher.new(target_info.path, log) { tw.on_notify }
  tw.register_watcher(tt)
  end
 
@@ -16,6 +16,6 @@
 
  module Fluent
 
- VERSION = '1.14.2'
+ VERSION = '1.14.3'
 
  end
@@ -18,7 +18,7 @@ module FluentPluginBufferTest
  end
  class DummyMemoryChunkError < StandardError; end
  class DummyMemoryChunk < Fluent::Plugin::Buffer::MemoryChunk
- attr_reader :append_count, :rollbacked, :closed, :purged
+ attr_reader :append_count, :rollbacked, :closed, :purged, :chunk
  attr_accessor :failing
  def initialize(metadata, compress: :text)
  super
@@ -944,6 +944,52 @@ class BufferTest < Test::Unit::TestCase
  @p.write({@dm0 => es}, format: @format)
  end
  end
+
+ data(
+ first_chunk: Fluent::ArrayEventStream.new([[event_time('2016-04-11 16:00:02 +0000'), {"message" => "x" * 1_280_000}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "a"}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "b"}]]),
+ intermediate_chunk: Fluent::ArrayEventStream.new([[event_time('2016-04-11 16:00:02 +0000'), {"message" => "a"}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "x" * 1_280_000}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "b"}]]),
+ last_chunk: Fluent::ArrayEventStream.new([[event_time('2016-04-11 16:00:02 +0000'), {"message" => "a"}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "b"}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "x" * 1_280_000}]]),
+ multiple_chunks: Fluent::ArrayEventStream.new([[event_time('2016-04-11 16:00:02 +0000'), {"message" => "a"}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "x" * 1_280_000}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "b"}],
+ [event_time('2016-04-11 16:00:02 +0000'), {"message" => "x" * 1_280_000}]])
+ )
+ test '#write exceeds chunk_limit_size, raise BufferChunkOverflowError, but not lost whole messages' do |(es)|
+ assert_equal [@dm0], @p.stage.keys
+ assert_equal [], @p.queue.map(&:metadata)
+
+ assert_equal 1_280_000, @p.chunk_limit_size
+
+ nth = []
+ es.entries.each_with_index do |entry, index|
+ if entry.last["message"].size == @p.chunk_limit_size
+ nth << index
+ end
+ end
+ messages = []
+ nth.each do |n|
+ messages << "a 1280025 bytes record (nth: #{n}) is larger than buffer chunk limit size (1280000)"
+ end
+
+ assert_raise Fluent::Plugin::Buffer::BufferChunkOverflowError.new(messages.join(", ")) do
+ @p.write({@dm0 => es}, format: @format)
+ end
+ # message a and b are concatenated and staged
+ staged_messages = Fluent::MessagePackFactory.msgpack_unpacker.feed_each(@p.stage[@dm0].chunk).collect do |record|
+ record.last
+ end
+ assert_equal([2, [{"message" => "a"}, {"message" => "b"}]],
+ [@p.stage[@dm0].size, staged_messages])
+ # only es0 message is queued
+ assert_equal [@dm0], @p.queue.map(&:metadata)
+ assert_equal [5000], @p.queue.map(&:size)
+ end
  end
 
  sub_test_case 'custom format with configuration for test with lower chunk limit size' do
@@ -1201,6 +1247,7 @@ class BufferTest < Test::Unit::TestCase
  sub_test_case 'when compress is gzip' do
  setup do
  @p = create_buffer({'compress' => 'gzip'})
+ @dm0 = create_metadata(Time.parse('2016-04-11 16:00:00 +0000').to_i, nil, nil)
  end
 
  test '#compress returns :gzip' do
@@ -1211,6 +1258,30 @@ class BufferTest < Test::Unit::TestCase
  chunk = @p.generate_chunk(create_metadata)
  assert chunk.singleton_class.ancestors.include?(Fluent::Plugin::Buffer::Chunk::Decompressable)
  end
+
+ test '#write compressed data which exceeds chunk_limit_size, it raises BufferChunkOverflowError' do
+ @p = create_buffer({'compress' => 'gzip', 'chunk_limit_size' => 70})
+ timestamp = event_time('2016-04-11 16:00:02 +0000')
+ es = Fluent::ArrayEventStream.new([[timestamp, {"message" => "012345"}], # overflow
+ [timestamp, {"message" => "aaa"}],
+ [timestamp, {"message" => "bbb"}]])
+ assert_equal [], @p.queue.map(&:metadata)
+ assert_equal 70, @p.chunk_limit_size
+
+ # calculate the actual boundary value. it varies on machine
+ c = @p.generate_chunk(create_metadata)
+ c.append(Fluent::ArrayEventStream.new([[timestamp, {"message" => "012345"}]]), compress: :gzip)
+ overflow_bytes = c.bytesize
+
+ messages = "concatenated/appended a #{overflow_bytes} bytes record (nth: 0) is larger than buffer chunk limit size (70)"
+ assert_raise Fluent::Plugin::Buffer::BufferChunkOverflowError.new(messages) do
+ # test format == nil && compress == :gzip
+ @p.write({@dm0 => es})
+ end
+ # message a and b occupies each chunks in full, so both of messages are queued (no staged chunk)
+ assert_equal([2, [@dm0, @dm0], [1, 1], nil],
+ [@p.queue.size, @p.queue.map(&:metadata), @p.queue.map(&:size), @p.stage[@dm0]])
+ end
  end
 
  sub_test_case '#statistics' do
@@ -99,7 +99,7 @@ class TailInputTest < Test::Unit::TestCase
  })
  COMMON_CONFIG = CONFIG + config_element("", "", { "pos_file" => "#{TMP_DIR}/tail.pos" })
  CONFIG_READ_FROM_HEAD = config_element("", "", { "read_from_head" => true })
- CONFIG_ENABLE_WATCH_TIMER = config_element("", "", { "enable_watch_timer" => false })
+ CONFIG_DISABLE_WATCH_TIMER = config_element("", "", { "enable_watch_timer" => false })
  CONFIG_DISABLE_STAT_WATCHER = config_element("", "", { "enable_stat_watcher" => false })
  CONFIG_OPEN_ON_EVERY_UPDATE = config_element("", "", { "open_on_every_update" => true })
  COMMON_FOLLOW_INODE_CONFIG = config_element("ROOT", "", {
@@ -199,7 +199,7 @@ class TailInputTest < Test::Unit::TestCase
 
  sub_test_case "log throttling per file" do
  test "w/o watcher timer is invalid" do
- conf = CONFIG_ENABLE_WATCH_TIMER + config_element("ROOT", "", {"read_bytes_limit_per_second" => "8k"})
+ conf = CONFIG_DISABLE_WATCH_TIMER + config_element("ROOT", "", {"read_bytes_limit_per_second" => "8k"})
  assert_raise(Fluent::ConfigError) do
  create_driver(conf)
  end
@@ -215,7 +215,7 @@ class TailInputTest < Test::Unit::TestCase
 
  test "both enable_watch_timer and enable_stat_watcher are false" do
  assert_raise(Fluent::ConfigError) do
- create_driver(CONFIG_ENABLE_WATCH_TIMER + CONFIG_DISABLE_STAT_WATCHER + PARSE_SINGLE_LINE_CONFIG)
+ create_driver(CONFIG_DISABLE_WATCH_TIMER + CONFIG_DISABLE_STAT_WATCHER + PARSE_SINGLE_LINE_CONFIG)
  end
  end
 
@@ -570,9 +570,9 @@ class TailInputTest < Test::Unit::TestCase
  assert_equal({"message" => "test4"}, events[3][2])
  end
 
- data(flat: CONFIG_ENABLE_WATCH_TIMER + SINGLE_LINE_CONFIG,
- parse: CONFIG_ENABLE_WATCH_TIMER + PARSE_SINGLE_LINE_CONFIG)
- def test_emit_with_enable_watch_timer(data)
+ data(flat: CONFIG_DISABLE_WATCH_TIMER + SINGLE_LINE_CONFIG,
+ parse: CONFIG_DISABLE_WATCH_TIMER + PARSE_SINGLE_LINE_CONFIG)
+ def test_emit_without_watch_timer(data)
  config = data
  File.open("#{TMP_DIR}/tail.txt", "wb") {|f|
  f.puts "test1"
@@ -596,6 +596,38 @@ class TailInputTest < Test::Unit::TestCase
  assert_equal({"message" => "test4"}, events[1][2])
  end
 
+ # https://github.com/fluent/fluentd/pull/3541#discussion_r740197711
+ def test_watch_wildcard_path_without_watch_timer
+ omit "need inotify" unless Fluent.linux?
+
+ config = config_element("ROOT", "", {
+ "path" => "#{TMP_DIR}/tail*.txt",
+ "tag" => "t1",
+ })
+ config = config + CONFIG_DISABLE_WATCH_TIMER + SINGLE_LINE_CONFIG
+
+ File.open("#{TMP_DIR}/tail.txt", "wb") {|f|
+ f.puts "test1"
+ f.puts "test2"
+ }
+
+ d = create_driver(config, false)
+
+ d.run(expect_emits: 1, timeout: 1) do
+ File.open("#{TMP_DIR}/tail.txt", "ab") {|f|
+ f.puts "test3"
+ f.puts "test4"
+ }
+ end
+
+ assert_equal(
+ [
+ {"message" => "test3"},
+ {"message" => "test4"},
+ ],
+ d.events.collect { |event| event[2] })
+ end
+
  data(flat: CONFIG_DISABLE_STAT_WATCHER + SINGLE_LINE_CONFIG,
  parse: CONFIG_DISABLE_STAT_WATCHER + PARSE_SINGLE_LINE_CONFIG)
  def test_emit_with_disable_stat_watcher(data)
@@ -619,6 +651,23 @@ class TailInputTest < Test::Unit::TestCase
  assert_equal({"message" => "test3"}, events[0][2])
  assert_equal({"message" => "test4"}, events[1][2])
  end
+
+ def test_always_read_from_head_on_detecting_a_new_file
+ d = create_driver(SINGLE_LINE_CONFIG)
+
+ d.run(expect_emits: 1, timeout: 3) do
+ File.open("#{TMP_DIR}/tail.txt", "wb") {|f|
+ f.puts "test1\ntest2\n"
+ }
+ end
+
+ assert_equal(
+ [
+ {"message" => "test1"},
+ {"message" => "test2"},
+ ],
+ d.events.collect { |event| event[2] })
+ end
  end
 
  class TestWithSystem < self
@@ -524,6 +524,9 @@ class ExecFilterOutputTest < Test::Unit::TestCase
  assert_equal pid_list[1], events[1][2]['child_pid']
  assert_equal pid_list[0], events[2][2]['child_pid']
  assert_equal pid_list[1], events[3][2]['child_pid']
+
+ ensure
+ d.run(start: false, shutdown: true)
  end
 
  # child process exits per 3 lines
@@ -597,6 +600,7 @@ class ExecFilterOutputTest < Test::Unit::TestCase
  assert_equal 2, logs.select { |l| l.include?('child process exits with error code') }.size
  assert_equal 2, logs.select { |l| l.include?('respawning child process') }.size
 
+ ensure
  d.run(start: false, shutdown: true)
  end
  end
@@ -319,12 +319,12 @@ class ChildProcessTest < Test::Unit::TestCase
 
  test 'can execute external command many times, which finishes immediately' do
  ary = []
- arguments = ["-e", "puts 'okay'; STDOUT.flush rescue nil"]
+ arguments = ["okay"]
  Timeout.timeout(TEST_DEADLOCK_TIMEOUT) do
- @d.child_process_execute(:t5, "ruby", arguments: arguments, interval: 1, mode: [:read]) do |io|
+ @d.child_process_execute(:t5, "echo", arguments: arguments, interval: 1, mode: [:read]) do |io|
  ary << io.read.split("\n").map(&:chomp).join
  end
- sleep 2.5 # 2sec(second invocation) + 0.5sec
+ sleep 2.9 # 2sec(second invocation) + 0.9sec
  assert_equal [], @d.log.out.logs
  @d.stop
  assert_equal [], @d.log.out.logs
@@ -335,12 +335,12 @@ class ChildProcessTest < Test::Unit::TestCase
 
  test 'can execute external command many times, with leading once executed immediately' do
  ary = []
- arguments = ["-e", "puts 'okay'; STDOUT.flush rescue nil"]
+ arguments = ["okay"]
  Timeout.timeout(TEST_DEADLOCK_TIMEOUT) do
- @d.child_process_execute(:t6, "ruby", arguments: arguments, interval: 1, immediate: true, mode: [:read]) do |io|
+ @d.child_process_execute(:t6, "echo", arguments: arguments, interval: 1, immediate: true, mode: [:read]) do |io|
  ary << io.read.split("\n").map(&:chomp).join
  end
- sleep 1.5 # 2sec(second invocation) + 0.5sec
+ sleep 1.9 # 1sec(second invocation) + 0.9sec
  @d.stop; @d.shutdown; @d.close; @d.terminate
  assert_equal 2, ary.size
  assert_equal [], @d.log.out.logs
@@ -722,14 +722,14 @@ class ChildProcessTest < Test::Unit::TestCase
  read_data_list = []
  exit_status_list = []
 
- args = ['-e', 'puts "yay"']
+ args = ['yay']
  cb = ->(status){ exit_status_list << status }
 
  Timeout.timeout(TEST_DEADLOCK_TIMEOUT) do
- @d.child_process_execute(:st1, "ruby", arguments: args, immediate: true, interval: 1, mode: [:read], on_exit_callback: cb) do |readio|
+ @d.child_process_execute(:st1, "echo", arguments: args, immediate: true, interval: 1, mode: [:read], on_exit_callback: cb) do |readio|
  read_data_list << readio.read.chomp
  end
- sleep 2
+ sleep 2.5
  end
 
  assert { read_data_list.size >= 2 }
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: fluentd
  version: !ruby/object:Gem::Version
- version: 1.14.2
+ version: 1.14.3
  platform: x86-mingw32
  authors:
  - Sadayuki Furuhashi
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-10-29 00:00:00.000000000 Z
+ date: 2021-11-26 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler
@@ -107,7 +107,7 @@ dependencies:
  version: 0.5.1
  - - "<"
  - !ruby/object:Gem::Version
- version: 0.8.0
+ version: 0.9.0
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
@@ -117,7 +117,7 @@ dependencies:
  version: 0.5.1
  - - "<"
  - !ruby/object:Gem::Version
- version: 0.8.0
+ version: 0.9.0
  - !ruby/object:Gem::Dependency
  name: sigdump
  requirement: !ruby/object:Gem::Requirement
@@ -232,14 +232,14 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 2.2.0
+ version: 2.3.0
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 2.2.0
+ version: 2.3.0
  - !ruby/object:Gem::Dependency
  name: win32-ipc
  requirement: !ruby/object:Gem::Requirement
@@ -485,6 +485,7 @@ files:
  - MAINTAINERS.md
  - README.md
  - Rakefile
+ - SECURITY.md
  - bin/fluent-binlog-reader
  - bin/fluent-ca-generate
  - bin/fluent-cap-ctl