logstash-integration-jdbc 5.1.10 → 5.2.3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 9f3116d5abaa413fd73af725208c422a1cf98961ce7095c4d88fd7f14cb6c59d
- data.tar.gz: 0ef2f5fd309236b8ec0e14bb38d275e26b3d2002cdb172a2d30d53682606ddec
+ metadata.gz: 2119dbab43322bdc083cf11602e69e63bcb1a957b537f5f44cfcd2b611df1893
+ data.tar.gz: 651e016e43bb17db730cc95ad0fd44bed483b9edfa4bbe86e52ced95947bb4a4
  SHA512:
- metadata.gz: d701317b4edbe221b2dd9a66d6e2591a0a78e8b9502373860656a5d95f552eee06ae752904687b58b2dfda274dc7c20e5da3e5bebbefbd98c538c107f49bec7a
- data.tar.gz: 22a9383f113f01025ba2f694cd61845aa4ed08474efc70cb37694993babb9ec7d6ac3d95291bbf2fe5b22835a8923ab1802061d861da2e59d5b0e3041f7b18ed
+ metadata.gz: 0a90d6aa88365c9ddfe60bba20f6f06633733af4b910b75becc31c37a9e1a224d3aae8d5e62f03f8ae2a3ffd8a24d1b0635307aa152a836c18435a4b36ca1d5d
+ data.tar.gz: f6a6648d071bff9069f54448cfed03327cb0f5bb8c036d8b0c6e5adb949c69a75f923541fe6a1237eeb3e6fe35611ad0afd4f99e356e8dd42b1e2b19534bc5b7
data/CHANGELOG.md CHANGED
@@ -1,7 +1,22 @@
+ ## 5.2.3
+ - Performance: avoid contention on scheduler execution [#103](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/103)
+
+ ## 5.2.2
+ - Feat: name scheduler threads + redirect error logging [#102](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/102)
+
+ ## 5.2.1
+ - Refactor: isolate paginated normal statement algorithm in a separate handler [#101](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/101)
+
+ ## 5.2.0
+ - Added `jdbc_paging_mode` option to choose whether to use `explicit` pagination in statements (avoiding the initial count
+   query) or `auto` to delegate paging to the underlying library [#95](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/95)
+
  ## 5.1.10
  - Refactor: to explicit Java (driver) class name loading [#96](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/96),
  the change is expected to provide a more robust fix for the driver loading issue [#83](https://github.com/logstash-plugins/logstash-integration-jdbc/issues/83).
 
+ NOTE: a fatal driver error will no longer keep reloading the pipeline and now leads to a system exit.
+
  - Fix: regression due to returning the Java driver class [#98](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/98)
 
  ## 5.1.9 (yanked)
data/docs/input-jdbc.asciidoc CHANGED
@@ -129,6 +129,9 @@ Here is the list:
  |sql_last_value | The value used to calculate which rows to query. Before any query is run,
  this is set to Thursday, 1 January 1970, or 0 if `use_column_value` is true and
  `tracking_column` is set. It is updated accordingly after subsequent queries are run.
+ |offset, size| Values used with manual paging mode to explicitly implement the paging.
+ Supported only if <<plugins-{type}s-{plugin}-jdbc_paging_enabled>> is enabled and
+ <<plugins-{type}s-{plugin}-jdbc_paging_mode>> has the `explicit` value.
  |==========================================================
 
  Example:
@@ -153,7 +156,7 @@ NOTE: Not all JDBC accessible technologies will support prepared statements.
  With the introduction of Prepared Statement support comes a different code execution path and some new settings. Most of the existing settings are still useful, but there are several new settings for Prepared Statements to read up on.
  Use the boolean setting `use_prepared_statements` to enable this execution mode. Use the `prepared_statement_name` setting to specify a name for the Prepared Statement; this identifies the prepared statement locally and remotely and it should be unique in your config and on the database. Use the `prepared_statement_bind_values` array setting to specify the bind values; use the exact string `:sql_last_value` (multiple times if necessary) for the predefined parameter mentioned before. The `statement` (or `statement_path`) setting still holds the SQL statement, but to use bind variables you must use the `?` character as a placeholder in the exact order found in the `prepared_statement_bind_values` array.
 
- NOTE: Building count queries around a prepared statement is not supported at this time and because jdbc paging uses count queries under the hood, jdbc paging is not supported with prepared statements at this time either. Therefore, `jdbc_paging_enabled`, `jdbc_page_size` settings are ignored when using prepared statements.
+ NOTE: Building count queries around a prepared statement is not supported at this time. Because jdbc paging uses count queries when `jdbc_paging_mode` has the value `auto`, jdbc paging is not supported with prepared statements at this time either. Therefore, the `jdbc_paging_enabled` and `jdbc_page_size` settings are ignored when using prepared statements.
 
  Example:
  [source,ruby]
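(The example body lies beyond this hunk's context window. For illustration only, a minimal prepared-statement pipeline could look like the following sketch; the connection details, table and column names here are assumptions, not taken from the diff.)

[source,ruby]
------------------------------------------------------
input {
  jdbc {
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
    jdbc_user => "logstash"
    use_prepared_statements => true
    prepared_statement_name => "logstash_tracking_query"
    prepared_statement_bind_values => [":sql_last_value"]
    statement => "SELECT * FROM my_table WHERE updated_at > ?"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    schedule => "*/5 * * * *"
  }
}
------------------------------------------------------

Note the single `?` placeholder matching the single entry in `prepared_statement_bind_values`, as the paragraph above requires.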
@@ -193,6 +196,7 @@ This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>>
  | <<plugins-{type}s-{plugin}-jdbc_fetch_size>> |<<number,number>>|No
  | <<plugins-{type}s-{plugin}-jdbc_page_size>> |<<number,number>>|No
  | <<plugins-{type}s-{plugin}-jdbc_paging_enabled>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-jdbc_paging_mode>> |<<string,string>>, one of `["auto", "explicit"]`|No
  | <<plugins-{type}s-{plugin}-jdbc_password>> |<<password,password>>|No
  | <<plugins-{type}s-{plugin}-jdbc_password_filepath>> |a valid filesystem path|No
  | <<plugins-{type}s-{plugin}-jdbc_pool_timeout>> |<<number,number>>|No
@@ -373,6 +377,52 @@ result-set. The limit size is set with `jdbc_page_size`.
 
  Be aware that ordering is not guaranteed between queries.
 
+ [id="plugins-{type}s-{plugin}-jdbc_paging_mode"]
+ ===== `jdbc_paging_mode`
+
+ * Value can be any of: `auto`, `explicit`
+ * Default value is `"auto"`
+
+ Whether to use `explicit` or `auto` mode during JDBC paging.
+
+ If `auto`, your statement will be automatically surrounded by a count query and subsequent multiple paged queries (with `LIMIT` statement, etc.).
+
+ If `explicit`, multiple queries (without a count query ahead) will be performed with your statement, until no more rows are retrieved.
+ You have to write your own paging conditions in your statement configuration.
+ The `offset` and `size` parameters can be used in your statement (`size` equal to `jdbc_page_size`, and `offset` incremented by `size` for each query).
+ When the number of rows returned by the query is not equal to `size`, SQL paging ends.
+ Example:
+
+ [source, ruby]
+ ------------------------------------------------------
+ input {
+   jdbc {
+     statement => "SELECT id, mycolumn1, mycolumn2 FROM my_table WHERE id > :sql_last_value LIMIT :size OFFSET :offset"
+     jdbc_paging_enabled => true
+     jdbc_paging_mode => "explicit"
+     jdbc_page_size => 100000
+   }
+ }
+ ------------------------------------------------------
+
+ [source, ruby]
+ ------------------------------------------------------
+ input {
+   jdbc {
+     statement => "CALL fetch_my_data(:sql_last_value, :offset, :size)"
+     jdbc_paging_enabled => true
+     jdbc_paging_mode => "explicit"
+     jdbc_page_size => 100000
+   }
+ }
+ ------------------------------------------------------
+
+ This mode can be considered in the following situations:
+
+ . Performance issues encountered in the default paging mode.
+ . Your SQL statement is complex, so simply surrounding it with paging statements is not what you want.
+ . Your statement is a stored procedure, and the actual paging statement is inside it.
+
  [id="plugins-{type}s-{plugin}-jdbc_password"]
  ===== `jdbc_password`
 
data/lib/logstash/inputs/jdbc.rb CHANGED
@@ -3,6 +3,7 @@ require "logstash/inputs/base"
  require "logstash/namespace"
  require "logstash/plugin_mixins/jdbc/common"
  require "logstash/plugin_mixins/jdbc/jdbc"
+ require "logstash/plugin_mixins/jdbc/scheduler"
  require "logstash/plugin_mixins/ecs_compatibility_support"
  require "logstash/plugin_mixins/ecs_compatibility_support/target_check"
  require "logstash/plugin_mixins/validator_support/field_reference_validation_adapter"
@@ -293,8 +294,17 @@ module LogStash module Inputs class Jdbc < LogStash::Inputs::Base
    def run(queue)
      load_driver
      if @schedule
-       @scheduler = Rufus::Scheduler.new(:max_work_threads => 1)
-       @scheduler.cron @schedule do
+       # input thread (Java) name example "[my-oracle]<jdbc"
+       @scheduler = LogStash::PluginMixins::Jdbc::Scheduler.new(
+         :max_work_threads => 1,
+         :thread_name => "[#{id}]<jdbc__scheduler",
+         # amount of time the scheduler thread sleeps between checking whether jobs
+         # should trigger, the default is 0.3 which is a bit too often ...
+         # in theory the cron expression '* * * * * *' supports running jobs
+         # every second but this is very rare, we could potentially go higher
+         :frequency => 1.0,
+       )
+       @scheduler.schedule_cron @schedule do
          execute_query(queue)
        end
 
data/lib/logstash/plugin_mixins/jdbc/jdbc.rb CHANGED
@@ -55,6 +55,9 @@ module LogStash module PluginMixins module Jdbc
    # Be aware that ordering is not guaranteed between queries.
    config :jdbc_paging_enabled, :validate => :boolean, :default => false
 
+   # Which pagination mode to use, automatic pagination or explicitly defined in the query.
+   config :jdbc_paging_mode, :validate => [ "auto", "explicit" ], :default => "auto"
+
    # JDBC page size
    config :jdbc_page_size, :validate => :number, :default => 100000
 
@@ -211,13 +214,14 @@ module LogStash module PluginMixins module Jdbc
      open_jdbc_connection
      sql_last_value = @use_column_value ? @value_tracker.value : Time.now.utc
      @tracking_column_warning_sent = false
-     @statement_handler.perform_query(@database, @value_tracker.value, @jdbc_paging_enabled, @jdbc_page_size) do |row|
+     @statement_handler.perform_query(@database, @value_tracker.value) do |row|
        sql_last_value = get_column_value(row) if @use_column_value
        yield extract_values_from(row)
      end
      success = true
    rescue Sequel::DatabaseConnectionError, Sequel::DatabaseError, Java::JavaSql::SQLException => e
-     details = { :exception => e.message }
+     details = { exception: e.class, message: e.message }
+     details[:cause] = e.cause.inspect if e.cause
      details[:backtrace] = e.backtrace if @logger.debug?
      @logger.warn("Exception when executing JDBC query", details)
    else
data/lib/logstash/plugin_mixins/jdbc/scheduler.rb ADDED
@@ -0,0 +1,147 @@
+ require 'rufus/scheduler'
+
+ require 'logstash/util/loggable'
+
+ module LogStash module PluginMixins module Jdbc
+   class Scheduler < Rufus::Scheduler
+
+     include LogStash::Util::Loggable
+
+     # Rufus::Scheduler >= 3.4 moved the Time impl into a gem: `EoTime = ::EtOrbi::EoTime`
+     # Rufus::Scheduler 3.1 - 3.3 uses its own Time impl `Rufus::Scheduler::ZoTime`
+     TimeImpl = defined?(Rufus::Scheduler::EoTime) ? Rufus::Scheduler::EoTime :
+         (defined?(Rufus::Scheduler::ZoTime) ? Rufus::Scheduler::ZoTime : ::Time)
+
+     # @overload
+     def timeout_jobs
+       # Rufus relies on `Thread.list` which is a blocking operation and with many schedulers
+       # (and threads) within LS will have a negative impact on performance as scheduler
+       # threads will end up waiting to obtain the `Thread.list` lock.
+       #
+       # However, this isn't necessary: we can easily detect whether there are any jobs
+       # that might need to time out: only when `@opts[:timeout]` is set do worker thread(s)
+       # have a `Thread.current[:rufus_scheduler_timeout]` that is not nil
+       return unless @opts[:timeout]
+       super
+     end
+
+     # @overload
+     def work_threads(query = :all)
+       if query == :__all_no_cache__ # special case from JobDecorator#start_work_thread
+         @_work_threads = nil # when a new worker thread is being added reset
+         return super(:all)
+       end
+
+       # Gets executed every time a job is triggered, we're going to cache the
+       # worker threads for this scheduler (to avoid `Thread.list`) - they only
+       # change when a new thread is being started from #start_work_thread ...
+       work_threads = @_work_threads
+       if work_threads.nil?
+         work_threads = threads.select { |t| t[:rufus_scheduler_work_thread] }
+         @_work_threads = work_threads
+       end
+
+       case query
+       when :active then work_threads.select { |t| t[:rufus_scheduler_job] }
+       when :vacant then work_threads.reject { |t| t[:rufus_scheduler_job] }
+       else work_threads
+       end
+     end
+
+     # @overload
+     def on_error(job, err)
+       details = { exception: err.class, message: err.message, backtrace: err.backtrace }
+       details[:cause] = err.cause if err.cause
+
+       details[:now] = debug_format_time(TimeImpl.now)
+       details[:last_time] = (debug_format_time(job.last_time) rescue nil)
+       details[:next_time] = (debug_format_time(job.next_time) rescue nil)
+       details[:job] = job
+
+       details[:opts] = @opts
+       details[:started_at] = started_at
+       details[:thread] = thread.inspect
+       details[:jobs_size] = @jobs.size
+       details[:work_threads_size] = work_threads.size
+       details[:work_queue_size] = work_queue.size
+
+       logger.error("Scheduler intercepted an error:", details)
+
+     rescue => e
+       logger.error("Scheduler failed in #on_error #{e.inspect}")
+     end
+
+     def debug_format_time(time)
+       # EtOrbi::EoTime used by (newer) Rufus::Scheduler has to_debug_s https://git.io/JyiPj
+       time.respond_to?(:to_debug_s) ? time.to_debug_s : time.strftime("%Y-%m-%dT%H:%M:%S.%L")
+     end
+     private :debug_format_time
+
+     # @private helper used by JobDecorator
+     def work_thread_name_prefix
+       ( @opts[:thread_name] || "#{@thread_key}_scheduler" ) + '_worker-'
+     end
+
+     protected
+
+     # @overload
+     def start
+       ret = super() # @thread[:name] = @opts[:thread_name] || "#{@thread_key}_scheduler"
+
+       # at least set thread.name for easier thread dump analysis
+       if @thread.is_a?(Thread) && @thread.respond_to?(:name=)
+         @thread.name = @thread[:name] if @thread[:name]
+       end
+
+       ret
+     end
+
+     # @overload
+     def do_schedule(job_type, t, callable, opts, return_job_instance, block)
+       job_or_id = super
+
+       job_or_id.extend JobDecorator if return_job_instance
+
+       job_or_id
+     end
+
+     module JobDecorator
+
+       def start_work_thread
+         prev_thread_count = @scheduler.work_threads.size
+
+         ret = super() # does not return Thread instance in 3.0
+
+         work_threads = @scheduler.work_threads(:__all_no_cache__)
+         while prev_thread_count == work_threads.size # very unlikely
+           Thread.pass
+           work_threads = @scheduler.work_threads(:__all_no_cache__)
+         end
+
+         work_thread_name_prefix = @scheduler.work_thread_name_prefix
+
+         work_threads.sort! do |t1, t2|
+           if t1[:name].nil?
+             t2[:name].nil? ? 0 : +1 # nils at the end
+           elsif t2[:name].nil?
+             t1[:name].nil? ? 0 : -1
+           else
+             t1[:name] <=> t2[:name]
+           end
+         end
+
+         work_threads.each_with_index do |thread, i|
+           unless thread[:name]
+             thread[:name] = "#{work_thread_name_prefix}#{sprintf('%02i', i)}"
+             thread.name = thread[:name] if thread.respond_to?(:name=)
+             # e.g. "[oracle]<jdbc_scheduler_worker-00"
+           end
+         end
+
+         ret
+       end
+
+     end
+
+   end
+ end end end
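For orientation, a minimal sketch of how this subclass is driven, mirroring the `run(queue)` change above (the plugin id, cron expression and job body are placeholders, not from the diff):

[source,ruby]
------------------------------------------------------
require "logstash/plugin_mixins/jdbc/scheduler"

scheduler = LogStash::PluginMixins::Jdbc::Scheduler.new(
  :max_work_threads => 1,
  :thread_name => "[my-plugin-id]<jdbc__scheduler", # shows up in thread dumps
  :frequency => 1.0 # check for due jobs once per second instead of every 0.3s
)

# six-field Rufus cron syntax (with seconds); this fires once per minute
scheduler.schedule_cron("0 * * * * *") { puts "tick" }

sleep 65 # let at least one trigger fire (illustration only)
scheduler.stop(:wait)
------------------------------------------------------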
data/lib/logstash/plugin_mixins/jdbc/statement_handler.rb CHANGED
@@ -3,7 +3,19 @@
  module LogStash module PluginMixins module Jdbc
    class StatementHandler
      def self.build_statement_handler(plugin, logger)
-       klass = plugin.use_prepared_statements ? PreparedStatementHandler : NormalStatementHandler
+       if plugin.use_prepared_statements
+         klass = PreparedStatementHandler
+       else
+         if plugin.jdbc_paging_enabled
+           if plugin.jdbc_paging_mode == "explicit"
+             klass = ExplicitPagingModeStatementHandler
+           else
+             klass = PagedNormalStatementHandler
+           end
+         else
+           klass = NormalStatementHandler
+         end
+       end
        klass.new(plugin, logger)
      end
 
@@ -25,22 +37,14 @@ module LogStash module PluginMixins module Jdbc
    end
 
    class NormalStatementHandler < StatementHandler
-     # Performs the query, respecting our pagination settings, yielding once per row of data
+     # Performs the query, yielding once per row of data
      # @param db [Sequel::Database]
-     # @param sql_last_value [Integet|DateTime|Time]
+     # @param sql_last_value [Integer|DateTime|Time]
      # @yieldparam row [Hash{Symbol=>Object}]
-     def perform_query(db, sql_last_value, jdbc_paging_enabled, jdbc_page_size)
+     def perform_query(db, sql_last_value)
        query = build_query(db, sql_last_value)
-       if jdbc_paging_enabled
-         query.each_page(jdbc_page_size) do |paged_dataset|
-           paged_dataset.each do |row|
-             yield row
-           end
-         end
-       else
-         query.each do |row|
-           yield row
-         end
+       query.each do |row|
+         yield row
        end
      end
 
@@ -67,6 +71,48 @@ module LogStash module PluginMixins module Jdbc
      end
    end
 
+   class PagedNormalStatementHandler < NormalStatementHandler
+     attr_reader :jdbc_page_size
+
+     # Performs the query, respecting our pagination settings, yielding once per row of data
+     # @param db [Sequel::Database]
+     # @param sql_last_value [Integer|DateTime|Time]
+     # @yieldparam row [Hash{Symbol=>Object}]
+     def perform_query(db, sql_last_value)
+       query = build_query(db, sql_last_value)
+       query.each_page(@jdbc_page_size) do |paged_dataset|
+         paged_dataset.each do |row|
+           yield row
+         end
+       end
+     end
+
+     def post_init(plugin)
+       super(plugin)
+       @jdbc_page_size = plugin.jdbc_page_size
+     end
+   end
+
+   class ExplicitPagingModeStatementHandler < PagedNormalStatementHandler
+     # Performs the query, respecting our pagination settings, yielding once per row of data
+     # @param db [Sequel::Database]
+     # @param sql_last_value [Integer|DateTime|Time]
+     # @yieldparam row [Hash{Symbol=>Object}]
+     def perform_query(db, sql_last_value)
+       query = build_query(db, sql_last_value)
+       offset = 0
+       loop do
+         rows_in_page = 0
+         query.with_sql(query.sql, offset: offset, size: jdbc_page_size).each do |row|
+           yield row
+           rows_in_page += 1
+         end
+         break unless rows_in_page == jdbc_page_size
+         offset += jdbc_page_size
+       end
+     end
+   end
+
    class PreparedStatementHandler < StatementHandler
      attr_reader :name, :bind_values_array, :statement_prepared, :prepared
 
@@ -74,7 +120,7 @@ module LogStash module PluginMixins module Jdbc
      # @param db [Sequel::Database]
      # @param sql_last_value [Integer|DateTime|Time]
      # @yieldparam row [Hash{Symbol=>Object}]
-     def perform_query(db, sql_last_value, jdbc_paging_enabled, jdbc_page_size)
+     def perform_query(db, sql_last_value)
        query = build_query(db, sql_last_value)
        query.each do |row|
          yield row
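The explicit handler above leans on Sequel's placeholder support in `Dataset#with_sql`: the `:offset` and `:size` placeholders in the user's statement are re-bound on every iteration. A self-contained sketch of the same loop against an in-memory SQLite database (the table name and page size are assumptions; requires the `sequel` and `sqlite3` gems):

[source,ruby]
------------------------------------------------------
require "sequel"

DB = Sequel.sqlite # in-memory database
DB.create_table(:events) { primary_key :id }
25.times { DB[:events].insert }

page_size = 10
sql = "SELECT * FROM events LIMIT :size OFFSET :offset"

offset = 0
fetched = 0
loop do
  rows_in_page = 0
  # with_sql binds the named placeholders for this page only
  DB.dataset.with_sql(sql, size: page_size, offset: offset).each do |row|
    fetched += 1
    rows_in_page += 1
  end
  break unless rows_in_page == page_size # a short page means the last page
  offset += page_size
end

fetched # => 25, fetched across three queries (10 + 10 + 5)
------------------------------------------------------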
data/logstash-integration-jdbc.gemspec CHANGED
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
    s.name = 'logstash-integration-jdbc'
-   s.version = '5.1.10'
+   s.version = '5.2.3'
    s.licenses = ['Apache License (2.0)']
    s.summary = "Integration with JDBC - input and filter plugins"
    s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -34,8 +34,7 @@ Gem::Specification.new do |s|
 
    s.add_runtime_dependency 'tzinfo'
    s.add_runtime_dependency 'tzinfo-data'
-   # 3.5 limitation is required for jdbc-static loading schedule
-   s.add_runtime_dependency 'rufus-scheduler', '< 3.5'
+   s.add_runtime_dependency 'rufus-scheduler', '~> 3.0.9'
    s.add_runtime_dependency 'logstash-mixin-ecs_compatibility_support', '~>1.3'
    s.add_runtime_dependency "logstash-mixin-validator_support", '~> 1.0'
    s.add_runtime_dependency "logstash-mixin-event_support", '~> 1.0'
data/spec/inputs/integration/integ_spec.rb CHANGED
@@ -70,7 +70,7 @@ describe LogStash::Inputs::Jdbc, :integration => true do
      plugin.register
      expect( plugin ).to receive(:log_java_exception)
      expect(plugin.logger).to receive(:warn).once.with("Exception when executing JDBC query",
-       hash_including(:exception => instance_of(String)))
+       hash_including(:message => instance_of(String)))
      q = Queue.new
      expect{ plugin.run(q) }.not_to raise_error
    end
data/spec/inputs/jdbc_spec.rb CHANGED
@@ -329,6 +329,39 @@ describe LogStash::Inputs::Jdbc do
 
    end
 
+   context "when iterating result-set via explicit paging mode" do
+
+     let(:settings) do
+       {
+         "statement" => "SELECT * from test_table OFFSET :offset ROWS FETCH NEXT :size ROWS ONLY",
+         "jdbc_paging_enabled" => true,
+         "jdbc_paging_mode" => "explicit",
+         "jdbc_page_size" => 10
+       }
+     end
+
+     let(:num_rows) { 15 }
+
+     before do
+       plugin.register
+     end
+
+     after do
+       plugin.stop
+     end
+
+     it "should fetch all rows" do
+       num_rows.times do
+         db[:test_table].insert(:num => 1, :custom_time => Time.now.utc, :created_at => Time.now.utc)
+       end
+
+       plugin.run(queue)
+
+       expect(queue.size).to eq(num_rows)
+     end
+
+   end
+
    context "when using target option" do
      let(:settings) do
        {
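A side note on the new test above: the statement uses the SQL:2008 `OFFSET ... ROWS FETCH NEXT ... ROWS ONLY` paging syntax, which the embedded Derby database vendored with these specs supports; the `:offset` and `:size` placeholders are filled in by the explicit paging handler on each page.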
data/spec/plugin_mixins/jdbc/scheduler_spec.rb ADDED
@@ -0,0 +1,78 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/plugin_mixins/jdbc/scheduler"
+
+ describe LogStash::PluginMixins::Jdbc::Scheduler do
+
+   let(:thread_name) { '[test]<jdbc_scheduler' }
+
+   let(:opts) do
+     { :max_work_threads => 2, :thread_name => thread_name }
+   end
+
+   subject(:scheduler) { LogStash::PluginMixins::Jdbc::Scheduler.new(opts) }
+
+   after { scheduler.stop(:wait) }
+
+   it "sets scheduler thread name" do
+     expect( scheduler.thread.name ).to include thread_name
+   end
+
+   context 'cron schedule' do
+
+     before do
+       scheduler.schedule_cron('* * * * * *') { sleep 1.25 } # every second
+     end
+
+     it "sets worker thread names" do
+       sleep 3.0
+       threads = scheduler.work_threads
+       threads.sort! { |t1, t2| (t1.name || '') <=> (t2.name || '') }
+
+       expect( threads.size ).to eql 2
+       expect( threads.first.name ).to eql "#{thread_name}_worker-00"
+       expect( threads.last.name ).to eql "#{thread_name}_worker-01"
+     end
+
+   end
+
+   context 'every 1s' do
+
+     before do
+       scheduler.schedule_in('1s') { raise 'TEST' } # once, in one second
+     end
+
+     it "logs errors handled" do
+       expect( scheduler.logger ).to receive(:error).with /Scheduler intercepted an error/, hash_including(:message => 'TEST')
+       sleep 1.5
+     end
+
+   end
+
+   context 'work threads' do
+
+     let(:opts) { super().merge :max_work_threads => 3 }
+
+     let(:counter) { java.util.concurrent.atomic.AtomicLong.new(0) }
+
+     before do
+       scheduler.schedule_cron('* * * * * *') { counter.increment_and_get; sleep 3.25 } # every second
+     end
+
+     it "are working" do
+       sleep(0.05) while counter.get == 0
+       expect( scheduler.work_threads.size ).to eql 1
+       sleep(0.05) while counter.get == 1
+       expect( scheduler.work_threads.size ).to eql 2
+       sleep(0.05) while counter.get == 2
+       expect( scheduler.work_threads.size ).to eql 3
+
+       sleep 1.25
+       expect( scheduler.work_threads.size ).to eql 3
+       sleep 1.25
+       expect( scheduler.work_threads.size ).to eql 3
+     end
+
+   end
+
+ end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-integration-jdbc
  version: !ruby/object:Gem::Version
-   version: 5.1.10
+   version: 5.2.3
  platform: ruby
  authors:
  - Elastic
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-12-16 00:00:00.000000000 Z
+ date: 2022-02-16 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
@@ -131,17 +131,17 @@ dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "<"
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: '3.5'
+         version: 3.0.9
    name: rufus-scheduler
    prerelease: false
    type: :runtime
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "<"
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: '3.5'
+         version: 3.0.9
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
      requirements:
@@ -278,6 +278,7 @@ files:
  - lib/logstash/plugin_mixins/jdbc/checked_count_logger.rb
  - lib/logstash/plugin_mixins/jdbc/common.rb
  - lib/logstash/plugin_mixins/jdbc/jdbc.rb
+ - lib/logstash/plugin_mixins/jdbc/scheduler.rb
  - lib/logstash/plugin_mixins/jdbc/statement_handler.rb
  - lib/logstash/plugin_mixins/jdbc/value_tracking.rb
  - lib/logstash/plugin_mixins/jdbc_streaming.rb
@@ -306,6 +307,7 @@ files:
  - spec/helpers/derbyrun.jar
  - spec/inputs/integration/integ_spec.rb
  - spec/inputs/jdbc_spec.rb
+ - spec/plugin_mixins/jdbc/scheduler_spec.rb
  - spec/plugin_mixins/jdbc_streaming/parameter_handler_spec.rb
  - vendor/jar-dependencies/org/apache/derby/derby/10.14.1.0/derby-10.14.1.0.jar
  - vendor/jar-dependencies/org/apache/derby/derbyclient/10.14.1.0/derbyclient-10.14.1.0.jar
@@ -358,4 +360,5 @@ test_files:
  - spec/helpers/derbyrun.jar
  - spec/inputs/integration/integ_spec.rb
  - spec/inputs/jdbc_spec.rb
+ - spec/plugin_mixins/jdbc/scheduler_spec.rb
  - spec/plugin_mixins/jdbc_streaming/parameter_handler_spec.rb