logstash-integration-jdbc 5.1.8 → 5.2.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 70e445fd000df9ba4c631d3574d67826aaafcb22bb2b4c00f9e30aa386415ed0
- data.tar.gz: 2fc2cef288fa7fa2f342db6096e79e49617c012e5ce9f5c4a5d94cc1f79b0838
+ metadata.gz: 2f43be09f1996c0e4effd6914055fa643549925433ab4512f4e3a0ed1fa1933c
+ data.tar.gz: 346cb42a27eaaa3f51e7cf1ba3d52fb332f278dfe8cecf7d1aa200af42ba4f5b
  SHA512:
- metadata.gz: e9fd72a1ebd12db838330101716032fbe125a9dab866bd552e61eeca37d51222f1f22e29cb8cdcc1b27bba8d1e5b59e8ff22de6aba29587e3811543a23860e79
- data.tar.gz: ae0322899fdbc65f3be931f1e2fbc2cbb82ac7d3e9980c1aa1a404085a0d4e7218944567e4750260fd638506e6f43dceb6a477c4762fe856d3c262e152988920
+ metadata.gz: 3a75a4a4165e04178384a66740de3bce2981c2d019a8bd2192519a725bfe88d5ddf7469cf2a2aebda12b1b48b2117817171da2f459d283c3e3e16d86422c4c06
+ data.tar.gz: ae15ead4d5efd52cc67bf58ad1d994bde11443bbc0e083d56f695ddc7163be44454ee1db1a95b81ec884c51a790f2e7582733df4f5faf6b9d4715a0685a31c92
data/CHANGELOG.md CHANGED
@@ -1,3 +1,25 @@
+ ## 5.2.2
+ - Feat: name scheduler threads + redirect error logging [#102](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/102)
+
+ ## 5.2.1
+ - Refactor: isolate paginated normal statement algorithm in a separate handler [#101](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/101)
+
+ ## 5.2.0
+ - Added `jdbc_paging_mode` option to choose if use `explicit` pagination in statements and avoid the initial count
+   query or use `auto` to delegate to the underlying library [#95](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/95)
+
+ ## 5.1.10
+ - Refactor: to explicit Java (driver) class name loading [#96](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/96),
+   the change is expected to provide a more robust fix for the driver loading issue [#83](https://github.com/logstash-plugins/logstash-integration-jdbc/issues/83).
+
+   NOTE: a fatal driver error will no longer keep reloading the pipeline and now leads to a system exit.
+
+ - Fix: regression due returning the Java driver class [#98](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/98)
+
+ ## 5.1.9 (yanked)
+ - Refactor: to explicit Java (driver) class name loading [#96](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/96),
+   the change is expected to provide a more robust fix for the driver loading issue [#83](https://github.com/logstash-plugins/logstash-integration-jdbc/issues/83).
+
  ## 5.1.8
  - Fix the blocking pipeline reload and shutdown when connectivity issues happen [#85](https://github.com/logstash-plugins/logstash-integration-jdbc/pull/85)

@@ -129,6 +129,9 @@ Here is the list:
  |sql_last_value | The value used to calculate which rows to query. Before any query is run,
  this is set to Thursday, 1 January 1970, or 0 if `use_column_value` is true and
  `tracking_column` is set. It is updated accordingly after subsequent queries are run.
+ |offset, size| Values used with manual paging mode to explicitly implement the paging.
+ Supported only if <<plugins-{type}s-{plugin}-jdbc_paging_enabled>> is enabled and
+ <<plugins-{type}s-{plugin}-jdbc_paging_mode>> has the `explicit` value.
  |==========================================================

  Example:
@@ -153,7 +156,7 @@ NOTE: Not all JDBC accessible technologies will support prepared statements.
  With the introduction of Prepared Statement support comes a different code execution path and some new settings. Most of the existing settings are still useful but there are several new settings for Prepared Statements to read up on.
  Use the boolean setting `use_prepared_statements` to enable this execution mode. Use the `prepared_statement_name` setting to specify a name for the Prepared Statement, this identifies the prepared statement locally and remotely and it should be unique in your config and on the database. Use the `prepared_statement_bind_values` array setting to specify the bind values, use the exact string `:sql_last_value` (multiple times if necessary) for the predefined parameter mentioned before. The `statement` (or `statement_path`) setting still holds the SQL statement but to use bind variables you must use the `?` character as a placeholder in the exact order found in the `prepared_statement_bind_values` array.

- NOTE: Building count queries around a prepared statement is not supported at this time and because jdbc paging uses count queries under the hood, jdbc paging is not supported with prepared statements at this time either. Therefore, `jdbc_paging_enabled`, `jdbc_page_size` settings are ignored when using prepared statements.
+ NOTE: Building count queries around a prepared statement is not supported at this time. Because jdbc paging uses count queries when `jdbc_paging_mode` has value `auto`, jdbc paging is not supported with prepared statements at this time either. Therefore, `jdbc_paging_enabled`, `jdbc_page_size` settings are ignored when using prepared statements.

  Example:
  [source,ruby]
@@ -193,6 +196,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ
  | <<plugins-{type}s-{plugin}-jdbc_fetch_size>> |<<number,number>>|No
  | <<plugins-{type}s-{plugin}-jdbc_page_size>> |<<number,number>>|No
  | <<plugins-{type}s-{plugin}-jdbc_paging_enabled>> |<<boolean,boolean>>|No
+ | <<plugins-{type}s-{plugin}-jdbc_paging_mode>> |<<string,string>>, one of `["auto", "explicit"]`|No
  | <<plugins-{type}s-{plugin}-jdbc_password>> |<<password,password>>|No
  | <<plugins-{type}s-{plugin}-jdbc_password_filepath>> |a valid filesystem path|No
  | <<plugins-{type}s-{plugin}-jdbc_pool_timeout>> |<<number,number>>|No
@@ -373,6 +377,52 @@ result-set. The limit size is set with `jdbc_page_size`.

  Be aware that ordering is not guaranteed between queries.

+ [id="plugins-{type}s-{plugin}-jdbc_paging_mode"]
+ ===== `jdbc_paging_mode`
+
+ * Value can be any of: `auto`, `explicit`
+ * Default value is `"auto"`
+
+ Whether to use `explicit` or `auto` mode during the JDBC paging
+
+ If `auto`, your statement will be automatically surrounded by a count query and subsequent multiple paged queries (with `LIMIT` statement, etc.).
+
+ If `explicit`, multiple queries (without a count query ahead) will be performed with your statement, until no more rows are retrieved.
+ You have to write your own paging conditions in your statement configuration.
+ The `offset` and `size` parameters can be used in your statement (`size` equal to `jdbc_page_size`, and `offset` incremented by `size` for each query).
+ When the number of rows returned by the query is not equal to `size`, SQL paging will be ended.
+ Example:
+
+ [source, ruby]
+ ------------------------------------------------------
+ input {
+   jdbc {
+     statement => "SELECT id, mycolumn1, mycolumn2 FROM my_table WHERE id > :sql_last_value LIMIT :size OFFSET :offset",
+     jdbc_paging_enabled => true,
+     jdbc_paging_mode => "explicit",
+     jdbc_page_size => 100000
+   }
+ }
+ ------------------------------------------------------
+
+ [source, ruby]
+ ------------------------------------------------------
+ input {
+   jdbc {
+     statement => "CALL fetch_my_data(:sql_last_value, :offset, :size)",
+     jdbc_paging_enabled => true,
+     jdbc_paging_mode => "explicit",
+     jdbc_page_size => 100000
+   }
+ }
+ ------------------------------------------------------
+
+ This mode can be considered in the following situations:
+
+ . Performance issues encountered in default paging mode.
+ . Your SQL statement is complex, so simply surrounding it with paging statements is not what you want.
+ . Your statement is a stored procedure, and the actual paging statement is inside it.
+
  [id="plugins-{type}s-{plugin}-jdbc_password"]
  ===== `jdbc_password`

@@ -3,6 +3,7 @@ require "logstash/inputs/base"
  require "logstash/namespace"
  require "logstash/plugin_mixins/jdbc/common"
  require "logstash/plugin_mixins/jdbc/jdbc"
+ require "logstash/plugin_mixins/jdbc/scheduler"
  require "logstash/plugin_mixins/ecs_compatibility_support"
  require "logstash/plugin_mixins/ecs_compatibility_support/target_check"
  require "logstash/plugin_mixins/validator_support/field_reference_validation_adapter"
@@ -293,8 +294,11 @@ module LogStash module Inputs class Jdbc < LogStash::Inputs::Base
    def run(queue)
      load_driver
      if @schedule
-       @scheduler = Rufus::Scheduler.new(:max_work_threads => 1)
-       @scheduler.cron @schedule do
+       # input thread (Java) name example "[my-oracle]<jdbc"
+       @scheduler = LogStash::PluginMixins::Jdbc::Scheduler.new(
+         :max_work_threads => 1, :thread_name => "[#{id}]<jdbc__scheduler"
+       )
+       @scheduler.schedule_cron @schedule do
          execute_query(queue)
        end

@@ -1,9 +1,12 @@
+ require 'jruby'

  module LogStash module PluginMixins module Jdbc
    module Common

      private

+     # NOTE: using the JRuby mechanism to load classes (through JavaSupport)
+     # makes the lock redundant although it does not hurt to have it around.
      DRIVERS_LOADING_LOCK = java.util.concurrent.locks.ReentrantLock.new()

      def complete_sequel_opts(defaults = {})
@@ -30,16 +33,16 @@ module LogStash module PluginMixins module Jdbc
      begin
        load_driver_jars
        begin
-         @driver_impl = Sequel::JDBC.load_driver(normalized_driver_class)
-       rescue Sequel::AdapterNotFound => e # Sequel::AdapterNotFound, "#{@jdbc_driver_class} not loaded"
-         # fix this !!!
+         @driver_impl = load_jdbc_driver_class
+       rescue => e # catch java.lang.ClassNotFoundException, potential errors
+         # (e.g. ExceptionInInitializerError or LinkageError) won't get caught
          message = if jdbc_driver_library_set?
                      "Are you sure you've included the correct jdbc driver in :jdbc_driver_library?"
                    else
                      ":jdbc_driver_library is not set, are you sure you included " +
                      "the proper driver client libraries in your classpath?"
                    end
-         raise LogStash::PluginLoadingError, "#{e}. #{message} #{e.backtrace}"
+         raise LogStash::PluginLoadingError, "#{e.inspect}. #{message}"
        end
      ensure
        DRIVERS_LOADING_LOCK.unlock()
@@ -71,16 +74,16 @@ module LogStash module PluginMixins module Jdbc
        !@jdbc_driver_library.nil? && !@jdbc_driver_library.empty?
      end

-     # normalizing the class name to always have a Java:: prefix
-     # is helpful since JRuby is only able to directly load class names
-     # whose top-level package is com, org, java, javax
-     # There are many jdbc drivers that use cc, io, net, etc.
-     def normalized_driver_class
-       if @jdbc_driver_class.start_with?("Java::", "Java.")
-         @jdbc_driver_class
-       else
-         "Java::#{@jdbc_driver_class}"
-       end
+     def load_jdbc_driver_class
+       # sub a potential: 'Java::org::my.Driver' to 'org.my.Driver'
+       klass = @jdbc_driver_class.gsub('::', '.').sub(/^Java\./, '')
+       # NOTE: JRuby's Java::JavaClass.for_name which considers the custom class-loader(s)
+       # in 9.3 the API changed and thus to avoid surprises we go down to the Java API :
+       klass = JRuby.runtime.getJavaSupport.loadJavaClass(klass) # throws ClassNotFoundException
+       # unfortunately we can not simply return the wrapped java.lang.Class instance as
+       # Sequel assumes to be able to do a `driver_class.new` which only works on the proxy,
+       org.jruby.javasupport.Java.getProxyClass(JRuby.runtime, klass)
      end
+
  end
  end end end
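
For orientation, here is a standalone sketch of the loading path that the new `load_jdbc_driver_class` takes; the driver class name below is a made-up placeholder, not a real driver:

[source,ruby]
------------------------------------------------------
require 'jruby'

# Hypothetical driver class name, purely for illustration.
driver_class_name = 'Java::com.example::jdbc.ExampleDriver'

# Normalize 'Java::' / 'Java.' prefixed spellings back to a plain 'com.example.jdbc.ExampleDriver'.
plain_name = driver_class_name.gsub('::', '.').sub(/^Java\./, '')

# Load through JRuby's JavaSupport so the class-loader that picked up the
# :jdbc_driver_library jars is consulted; raises java.lang.ClassNotFoundException when missing.
java_class = JRuby.runtime.getJavaSupport.loadJavaClass(plain_name)

# Wrap the java.lang.Class in a Ruby proxy so callers such as Sequel can do `driver.new`.
org.jruby.javasupport.Java.getProxyClass(JRuby.runtime, java_class)
------------------------------------------------------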
@@ -55,6 +55,9 @@ module LogStash module PluginMixins module Jdbc
    # Be aware that ordering is not guaranteed between queries.
    config :jdbc_paging_enabled, :validate => :boolean, :default => false

+   # Which pagination mode to use, automatic pagination or explicitly defined in the query.
+   config :jdbc_paging_mode, :validate => [ "auto", "explicit" ], :default => "auto"
+
    # JDBC page size
    config :jdbc_page_size, :validate => :number, :default => 100000

@@ -211,13 +214,14 @@ module LogStash module PluginMixins module Jdbc
      open_jdbc_connection
      sql_last_value = @use_column_value ? @value_tracker.value : Time.now.utc
      @tracking_column_warning_sent = false
-     @statement_handler.perform_query(@database, @value_tracker.value, @jdbc_paging_enabled, @jdbc_page_size) do |row|
+     @statement_handler.perform_query(@database, @value_tracker.value) do |row|
        sql_last_value = get_column_value(row) if @use_column_value
        yield extract_values_from(row)
      end
      success = true
    rescue Sequel::DatabaseConnectionError, Sequel::DatabaseError, Java::JavaSql::SQLException => e
-     details = { :exception => e.message }
+     details = { exception: e.class, message: e.message }
+     details[:cause] = e.cause.inspect if e.cause
      details[:backtrace] = e.backtrace if @logger.debug?
      @logger.warn("Exception when executing JDBC query", details)
    else
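
For orientation, a minimal sketch of the richer log payload this rescue branch now builds; it uses a plain Ruby error for illustration, whereas the real code rescues the Sequel and java.sql exceptions listed above:

[source,ruby]
------------------------------------------------------
# Illustrative only: reproduce the shape of the `details` hash handed to logger.warn.
begin
  raise "The connection to the database was lost"
rescue => e
  details = { exception: e.class, message: e.message }
  details[:cause] = e.cause.inspect if e.cause # present when the error wraps another one
  details[:backtrace] = e.backtrace            # the plugin adds this only at debug log level
  puts details.inspect
end
------------------------------------------------------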
@@ -0,0 +1,111 @@
+ require 'rufus/scheduler'
+
+ require 'logstash/util/loggable'
+
+ module LogStash module PluginMixins module Jdbc
+   class Scheduler < Rufus::Scheduler
+
+     include LogStash::Util::Loggable
+
+     # Rufus::Scheduler >= 3.4 moved the Time impl into a gem `EoTime = ::EtOrbi::EoTime`
+     # Rufus::Scheduler 3.1 - 3.3 using it's own Time impl `Rufus::Scheduler::ZoTime`
+     TimeImpl = defined?(Rufus::Scheduler::EoTime) ? Rufus::Scheduler::EoTime :
+       (defined?(Rufus::Scheduler::ZoTime) ? Rufus::Scheduler::ZoTime : ::Time)
+
+     # @overload
+     def on_error(job, err)
+       details = { exception: err.class, message: err.message, backtrace: err.backtrace }
+       details[:cause] = err.cause if err.cause
+
+       details[:now] = debug_format_time(TimeImpl.now)
+       details[:last_time] = (debug_format_time(job.last_time) rescue nil)
+       details[:next_time] = (debug_format_time(job.next_time) rescue nil)
+       details[:job] = job
+
+       details[:opts] = @opts
+       details[:started_at] = started_at
+       details[:thread] = thread.inspect
+       details[:jobs_size] = @jobs.size
+       details[:work_threads_size] = work_threads.size
+       details[:work_queue_size] = work_queue.size
+
+       logger.error("Scheduler intercepted an error:", details)
+
+     rescue => e
+       logger.error("Scheduler failed in #on_error #{e.inspect}")
+     end
+
+     def debug_format_time(time)
+       # EtOrbi::EoTime used by (newer) Rufus::Scheduler has to_debug_s https://git.io/JyiPj
+       time.respond_to?(:to_debug_s) ? time.to_debug_s : time.strftime("%Y-%m-%dT%H:%M:%S.%L")
+     end
+     private :debug_format_time
+
+     # @private helper used by JobDecorator
+     def work_thread_name_prefix
+       ( @opts[:thread_name] || "#{@thread_key}_scheduler" ) + '_worker-'
+     end
+
+     protected
+
+     # @overload
+     def start
+       ret = super() # @thread[:name] = @opts[:thread_name] || "#{@thread_key}_scheduler"
+
+       # at least set thread.name for easier thread dump analysis
+       if @thread.is_a?(Thread) && @thread.respond_to?(:name=)
+         @thread.name = @thread[:name] if @thread[:name]
+       end
+
+       ret
+     end
+
+     # @overload
+     def do_schedule(job_type, t, callable, opts, return_job_instance, block)
+       job_or_id = super
+
+       job_or_id.extend JobDecorator if return_job_instance
+
+       job_or_id
+     end
+
+     module JobDecorator
+
+       def start_work_thread
+         prev_thread_count = @scheduler.work_threads.size
+
+         ret = super() # does not return Thread instance in 3.0
+
+         work_threads = @scheduler.work_threads
+         while prev_thread_count == work_threads.size # very unlikely
+           Thread.pass
+           work_threads = @scheduler.work_threads
+         end
+
+         work_thread_name_prefix = @scheduler.work_thread_name_prefix
+
+         work_threads.sort! do |t1, t2|
+           if t1[:name].nil?
+             t2[:name].nil? ? 0 : +1 # nils at the end
+           elsif t2[:name].nil?
+             t1[:name].nil? ? 0 : -1
+           else
+             t1[:name] <=> t2[:name]
+           end
+         end
+
+         work_threads.each_with_index do |thread, i|
+           unless thread[:name]
+             thread[:name] = "#{work_thread_name_prefix}#{sprintf('%02i', i)}"
+             thread.name = thread[:name] if thread.respond_to?(:name=)
+             # e.g. "[oracle]<jdbc_scheduler_worker-00"
+           end
+         end
+
+         ret
+       end
+
+     end
+
+   end
+ end end end
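
As a rough usage sketch (the `[my-oracle]` id and the cron expression are placeholders), the input plugin wires this class up roughly as follows; worker threads pick up names derived from `:thread_name`, and job errors are routed through `#on_error` into the plugin logger instead of Rufus' default stderr handler:

[source,ruby]
------------------------------------------------------
require "logstash/plugin_mixins/jdbc/scheduler"

scheduler = LogStash::PluginMixins::Jdbc::Scheduler.new(
  :max_work_threads => 1, :thread_name => "[my-oracle]<jdbc__scheduler"
)
scheduler.schedule_cron("* * * * *") { puts "execute_query(queue) would run here" }
# scheduler thread name => "[my-oracle]<jdbc__scheduler"
# worker thread names   => "[my-oracle]<jdbc__scheduler_worker-00", "...-01", ...
# a job that raises     => logger.error("Scheduler intercepted an error:", details)
scheduler.stop(:wait)
------------------------------------------------------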
@@ -3,7 +3,19 @@
  module LogStash module PluginMixins module Jdbc
    class StatementHandler
      def self.build_statement_handler(plugin, logger)
-       klass = plugin.use_prepared_statements ? PreparedStatementHandler : NormalStatementHandler
+       if plugin.use_prepared_statements
+         klass = PreparedStatementHandler
+       else
+         if plugin.jdbc_paging_enabled
+           if plugin.jdbc_paging_mode == "explicit"
+             klass = ExplicitPagingModeStatementHandler
+           else
+             klass = PagedNormalStatementHandler
+           end
+         else
+           klass = NormalStatementHandler
+         end
+       end
        klass.new(plugin, logger)
      end

@@ -25,22 +37,14 @@ module LogStash module PluginMixins module Jdbc
    end

    class NormalStatementHandler < StatementHandler
-     # Performs the query, respecting our pagination settings, yielding once per row of data
+     # Performs the query, yielding once per row of data
      # @param db [Sequel::Database]
-     # @param sql_last_value [Integet|DateTime|Time]
+     # @param sql_last_value [Integer|DateTime|Time]
      # @yieldparam row [Hash{Symbol=>Object}]
-     def perform_query(db, sql_last_value, jdbc_paging_enabled, jdbc_page_size)
+     def perform_query(db, sql_last_value)
        query = build_query(db, sql_last_value)
-       if jdbc_paging_enabled
-         query.each_page(jdbc_page_size) do |paged_dataset|
-           paged_dataset.each do |row|
-             yield row
-           end
-         end
-       else
-         query.each do |row|
-           yield row
-         end
+       query.each do |row|
+         yield row
        end
      end

@@ -67,6 +71,48 @@ module LogStash module PluginMixins module Jdbc
      end
    end

+   class PagedNormalStatementHandler < NormalStatementHandler
+     attr_reader :jdbc_page_size
+
+     # Performs the query, respecting our pagination settings, yielding once per row of data
+     # @param db [Sequel::Database]
+     # @param sql_last_value [Integer|DateTime|Time]
+     # @yieldparam row [Hash{Symbol=>Object}]
+     def perform_query(db, sql_last_value)
+       query = build_query(db, sql_last_value)
+       query.each_page(@jdbc_page_size) do |paged_dataset|
+         paged_dataset.each do |row|
+           yield row
+         end
+       end
+     end
+
+     def post_init(plugin)
+       super(plugin)
+       @jdbc_page_size = plugin.jdbc_page_size
+     end
+   end
+
+   class ExplicitPagingModeStatementHandler < PagedNormalStatementHandler
+     # Performs the query, respecting our pagination settings, yielding once per row of data
+     # @param db [Sequel::Database]
+     # @param sql_last_value [Integer|DateTime|Time]
+     # @yieldparam row [Hash{Symbol=>Object}]
+     def perform_query(db, sql_last_value)
+       query = build_query(db, sql_last_value)
+       offset = 0
+       loop do
+         rows_in_page = 0
+         query.with_sql(query.sql, offset: offset, size: jdbc_page_size).each do |row|
+           yield row
+           rows_in_page += 1
+         end
+         break unless rows_in_page == jdbc_page_size
+         offset += jdbc_page_size
+       end
+     end
+   end
+
    class PreparedStatementHandler < StatementHandler
      attr_reader :name, :bind_values_array, :statement_prepared, :prepared

@@ -74,7 +120,7 @@ module LogStash module PluginMixins module Jdbc
      # @param db [Sequel::Database]
      # @param sql_last_value [Integet|DateTime|Time]
      # @yieldparam row [Hash{Symbol=>Object}]
-     def perform_query(db, sql_last_value, jdbc_paging_enabled, jdbc_page_size)
+     def perform_query(db, sql_last_value)
        query = build_query(db, sql_last_value)
        query.each do |row|
          yield row
@@ -1,6 +1,6 @@
  Gem::Specification.new do |s|
    s.name = 'logstash-integration-jdbc'
-   s.version = '5.1.8'
+   s.version = '5.2.2'
    s.licenses = ['Apache License (2.0)']
    s.summary = "Integration with JDBC - input and filter plugins"
    s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
@@ -34,8 +34,7 @@ Gem::Specification.new do |s|

    s.add_runtime_dependency 'tzinfo'
    s.add_runtime_dependency 'tzinfo-data'
-   # 3.5 limitation is required for jdbc-static loading schedule
-   s.add_runtime_dependency 'rufus-scheduler', '< 3.5'
+   s.add_runtime_dependency 'rufus-scheduler', '~> 3.0.9'
    s.add_runtime_dependency 'logstash-mixin-ecs_compatibility_support', '~>1.3'
    s.add_runtime_dependency "logstash-mixin-validator_support", '~> 1.0'
    s.add_runtime_dependency "logstash-mixin-event_support", '~> 1.0'
@@ -70,7 +70,7 @@ describe LogStash::Inputs::Jdbc, :integration => true do
      plugin.register
      expect( plugin ).to receive(:log_java_exception)
      expect(plugin.logger).to receive(:warn).once.with("Exception when executing JDBC query",
-       hash_including(:exception => instance_of(String)))
+       hash_including(:message => instance_of(String)))
      q = Queue.new
      expect{ plugin.run(q) }.not_to raise_error
    end
@@ -329,6 +329,39 @@ describe LogStash::Inputs::Jdbc do

    end

+   context "when iterating result-set via explicit paging mode" do
+
+     let(:settings) do
+       {
+         "statement" => "SELECT * from test_table OFFSET :offset ROWS FETCH NEXT :size ROWS ONLY",
+         "jdbc_paging_enabled" => true,
+         "jdbc_paging_mode" => "explicit",
+         "jdbc_page_size" => 10
+       }
+     end
+
+     let(:num_rows) { 15 }
+
+     before do
+       plugin.register
+     end
+
+     after do
+       plugin.stop
+     end
+
+     it "should fetch all rows" do
+       num_rows.times do
+         db[:test_table].insert(:num => 1, :custom_time => Time.now.utc, :created_at => Time.now.utc)
+       end
+
+       plugin.run(queue)
+
+       expect(queue.size).to eq(num_rows)
+     end
+
+   end
+
    context "when using target option" do
      let(:settings) do
        {
@@ -1620,16 +1653,35 @@ describe LogStash::Inputs::Jdbc do
    describe "jdbc_driver_class" do
      context "when not prefixed with Java::" do
        let(:jdbc_driver_class) { "org.apache.derby.jdbc.EmbeddedDriver" }
-       it "loads the class prefixed with Java::" do
-         expect(Sequel::JDBC).to receive(:load_driver).with(/^Java::/)
-         plugin.send(:load_driver)
+       it "loads the class" do
+         expect { plugin.send(:load_driver) }.not_to raise_error
        end
      end
      context "when prefixed with Java::" do
        let(:jdbc_driver_class) { "Java::org.apache.derby.jdbc.EmbeddedDriver" }
-       it "loads the class as-is" do
-         expect(Sequel::JDBC).to receive(:load_driver).with(jdbc_driver_class)
-         plugin.send(:load_driver)
+       it "loads the class" do
+         expect { plugin.send(:load_driver) }.not_to raise_error
+       end
+     end
+     context "when prefixed with Java." do
+       let(:jdbc_driver_class) { "Java.org::apache::derby::jdbc.EmbeddedDriver" }
+       it "loads the class" do
+         expect { plugin.send(:load_driver) }.not_to raise_error
+       end
+
+       it "can instantiate the returned driver class" do
+         # for drivers where the path through DriverManager fails, Sequel assumes
+         # having a proxied Java class instance (instead of a java.lang.Class) and
+         # does a driver.new.connect https://git.io/JDV6M
+         driver = plugin.send(:load_driver)
+         expect { driver.new }.not_to raise_error
+       end
+     end
+     context "when class name invalid" do
+       let(:jdbc_driver_class) { "org.apache.NonExistentDriver" }
+       it "raises a loading error" do
+         expect { plugin.send(:load_driver) }.to raise_error LogStash::PluginLoadingError,
+           /java.lang.ClassNotFoundException: org.apache.NonExistentDriver/
        end
      end
    end
@@ -0,0 +1,52 @@
+ # encoding: utf-8
+ require "logstash/devutils/rspec/spec_helper"
+ require "logstash/plugin_mixins/jdbc/scheduler"
+
+ describe LogStash::PluginMixins::Jdbc::Scheduler do
+
+   let(:thread_name) { '[test]<jdbc_scheduler' }
+
+   let(:opts) do
+     { :max_work_threads => 2, :thread_name => thread_name }
+   end
+
+   subject(:scheduler) { LogStash::PluginMixins::Jdbc::Scheduler.new(opts) }
+
+   after { scheduler.stop(:wait) }
+
+   it "sets scheduler thread name" do
+     expect( scheduler.thread.name ).to include thread_name
+   end
+
+   context 'cron schedule' do
+
+     before do
+       scheduler.schedule_cron('* * * * * *') { sleep 1.25 } # every second
+     end
+
+     it "sets worker thread names" do
+       sleep 3.0
+       threads = scheduler.work_threads
+       threads.sort! { |t1, t2| (t1.name || '') <=> (t2.name || '') }
+
+       expect( threads.size ).to eql 2
+       expect( threads.first.name ).to eql "#{thread_name}_worker-00"
+       expect( threads.last.name ).to eql "#{thread_name}_worker-01"
+     end
+
+   end
+
+   context 'every 1s' do
+
+     before do
+       scheduler.schedule_in('1s') { raise 'TEST' } # every second
+     end
+
+     it "logs errors handled" do
+       expect( scheduler.logger ).to receive(:error).with /Scheduler intercepted an error/, hash_including(:message => 'TEST')
+       sleep 1.5
+     end
+
+   end
+
+ end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: logstash-integration-jdbc
  version: !ruby/object:Gem::Version
-   version: 5.1.8
+   version: 5.2.2
  platform: ruby
  authors:
  - Elastic
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-11-02 00:00:00.000000000 Z
+ date: 2022-01-19 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
@@ -131,17 +131,17 @@ dependencies:
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "<"
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: '3.5'
+         version: 3.0.9
    name: rufus-scheduler
    prerelease: false
    type: :runtime
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "<"
+     - - "~>"
        - !ruby/object:Gem::Version
-         version: '3.5'
+         version: 3.0.9
  - !ruby/object:Gem::Dependency
    requirement: !ruby/object:Gem::Requirement
      requirements:
@@ -278,6 +278,7 @@ files:
  - lib/logstash/plugin_mixins/jdbc/checked_count_logger.rb
  - lib/logstash/plugin_mixins/jdbc/common.rb
  - lib/logstash/plugin_mixins/jdbc/jdbc.rb
+ - lib/logstash/plugin_mixins/jdbc/scheduler.rb
  - lib/logstash/plugin_mixins/jdbc/statement_handler.rb
  - lib/logstash/plugin_mixins/jdbc/value_tracking.rb
  - lib/logstash/plugin_mixins/jdbc_streaming.rb
@@ -306,6 +307,7 @@ files:
  - spec/helpers/derbyrun.jar
  - spec/inputs/integration/integ_spec.rb
  - spec/inputs/jdbc_spec.rb
+ - spec/plugin_mixins/jdbc/scheduler_spec.rb
  - spec/plugin_mixins/jdbc_streaming/parameter_handler_spec.rb
  - vendor/jar-dependencies/org/apache/derby/derby/10.14.1.0/derby-10.14.1.0.jar
  - vendor/jar-dependencies/org/apache/derby/derbyclient/10.14.1.0/derbyclient-10.14.1.0.jar
@@ -358,4 +360,5 @@ test_files:
  - spec/helpers/derbyrun.jar
  - spec/inputs/integration/integ_spec.rb
  - spec/inputs/jdbc_spec.rb
+ - spec/plugin_mixins/jdbc/scheduler_spec.rb
  - spec/plugin_mixins/jdbc_streaming/parameter_handler_spec.rb