logstash-output-analyticdb 5.4.0.4

checksums.yaml.gz ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+ metadata.gz: 14f9715929aed304d0041f628a595576f57cd66eea8d5048ad25af19f2433c7e
+ data.tar.gz: 3d52381c84cbdba1b03509d86c447896187ea18e296ea8877dab2671ce7a019b
+ SHA512:
+ metadata.gz: 88799b9382cf99bc969964807b6eccb11d9b4f86cdf78c578afb5b890ebd6c23d8e9e55dd9cf78680b1183bd3fd083eacd1672fec9bf5094569750ca650b77ab
+ data.tar.gz: b0b72c6cccf33228ee69d27c6554dcbab5f1400def3cb8d10caca50153c2fe63638e254104a86125adadd2474a0e043dfd623bb0f636007430bf7718a28c74b0
data/CHANGELOG.md ADDED
@@ -0,0 +1,64 @@
+ # Change Log
+ All notable changes to this project will be documented in this file, from 0.2.0.
+
+ ## [5.3.0] - 2017-11-08
+ - Adds configuration options `enable_event_as_json_keyword` and `event_as_json_keyword`
+ - Adds BigDecimal support
+ - Adds additional logging for debugging purposes (with thanks to @mlkmhd's work)
+
+ ## [5.2.1] - 2017-04-09
+ - Adds Array and Hash to_json support for non-sprintf syntax
+
+ ## [5.2.0] - 2017-04-01
+ - Upgrades HikariCP to latest
+ - Fixes HikariCP logging integration issues
+
+ ## [5.1.0] - 2016-12-17
+ - phoenix-thin fixes for issue #60
+
+ ## [5.0.0] - 2016-11-03
+ - logstash v5 support
+
+ ## [0.3.1] - 2016-08-28
+ - Adds connection_test configuration option, to prevent the connection test from occurring, allowing the error to be suppressed.
+ Useful for CockroachDB deployments. https://github.com/theangryangel/logstash-output-jdbc/issues/53
+
+ ## [0.3.0] - 2016-07-24
+ - Brings tests from v5 branch, providing greater coverage
+ - Removes bulk update support, due to inconsistent behaviour
+ - Plugin now marked as threadsafe, meaning only 1 instance per-Logstash
+ - Raises default max_pool_size to match the default number of workers (1 connection per worker)
+
+ ## [0.2.10] - 2016-07-07
+ - Support non-string entries in statement array
+ - Adds backtrace to exception logging
+
+ ## [0.2.9] - 2016-06-29
+ - Fix NameError exception.
+ - Moved log_jdbc_exception calls
+
+ ## [0.2.7] - 2016-05-29
+ - Backport retry exception logic from v5 branch
+ - Backport improved timestamp compatibility from v5 branch
+
+ ## [0.2.6] - 2016-05-02
+ - Fix for exception infinite loop
+
+ ## [0.2.5] - 2016-04-11
+ ### Added
+ - Basic tests running against DerbyDB
+ - Fix for converting Logstash::Timestamp to iso8601 from @hordijk
+
+ ## [0.2.4] - 2016-04-07
+ - Documentation fixes from @hordijk
+
+ ## [0.2.3] - 2016-02-16
+ - Bug fixes
+
+ ## [0.2.2] - 2015-12-30
+ - Bug fixes
+
+ ## [0.2.1] - 2015-12-22
+ - Support for connection pooling added through HikariCP
+ - Support for unsafe statement handling (allowing dynamic queries)
+ - Altered exception handling to now count sequential flushes with exceptions thrown
data/Gemfile ADDED
@@ -0,0 +1,11 @@
+ source 'https://rubygems.org'
+
+ gemspec
+
+ logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash"
+ use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1"
+
+ if Dir.exist?(logstash_path) && use_logstash_source
+   gem 'logstash-core', :path => "#{logstash_path}/logstash-core"
+   gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api"
+ end
data/README.md ADDED
@@ -0,0 +1,85 @@
+ # logstash-output-AnalyticDB
+ This plugin is an AnalyticDB output for Logstash, forked from https://github.com/theangryangel/logstash-output-jdbc.
+
+ This plugin is provided as an external plugin and is not part of the Logstash project.
+
+ This plugin allows you to output to an AnalyticDB database, using JDBC adapters.
+ See below for tested adapters and example configurations.
+
+ ## Support & release schedule
+ I no longer have time at work to maintain this plugin in step with Logstash's releases, and I am not completely immersed in the Logstash ecosystem. If something is broken for you I will do my best to help, but I cannot guarantee timeframes.
+
+ Pull requests are always welcome.
+
+ ## Changelog
+ See CHANGELOG.md
+
+ ## Versions
+ Released versions are available via rubygems, and typically tagged.
+
+ For development:
+ - See master branch for logstash v5 & v6 :warning: This is untested under Logstash 6.3 at this time, and there has been 1 unverified report of an issue. Please use at your own risk until I can find the time to evaluate and test 6.3.
+ - See v2.x branch for logstash v2
+ - See v1.5 branch for logstash v1.5
+ - See v1.4 branch for logstash 1.4
+
+ ## Installation
+ - Run `bin/logstash-plugin install logstash-output-analyticdb` in your logstash installation directory
+ - Now either:
+   - Use driver_jar_path in your configuration to specify a path to your jar file
+ - Or:
+   - Create the directory vendor/jar/jdbc in your logstash installation (`mkdir -p vendor/jar/jdbc/`)
+   - Add JDBC jar files to vendor/jar/jdbc in your logstash installation
+ - And then configure (examples can be found in the examples directory)
+
+ ## Configuration options
+
+ | Option | Type | Description | Required? | Default |
+ | ------ | ---- | ----------- | --------- | ------- |
+ | driver_class | String | Specify a driver class if autoloading fails | No | |
+ | driver_auto_commit | Boolean | If the driver does not support auto commit, you should set this to false | No | True |
+ | driver_jar_path | String | File path to jar file containing your JDBC driver. This is optional, and all JDBC jars may be placed in $LOGSTASH_HOME/vendor/jar/jdbc instead. | No | |
+ | connection_string | String | JDBC connection URL | Yes | |
+ | connection_test | Boolean | Run a JDBC connection test. Some drivers do not function correctly, and you may need to disable the connection test to suppress an error. Cockroach with the postgres JDBC driver is such an example. | No | Yes |
+ | connection_test_query | String | Connection test and init query string, required for some JDBC drivers that don't support isValid(). Typically you'd set this to "SELECT 1" | No | |
+ | username | String | AnalyticDB username - this is optional, as it may be included in the connection string for many drivers | No | |
+ | password | String | AnalyticDB password - this is optional, as it may be included in the connection string for many drivers | No | |
+ | statement | Array | An array of strings representing the SQL statement to run. Index 0 is the SQL statement that is prepared, all other array entries are passed in as parameters (in order). A parameter may either be a property of the event (e.g. "@timestamp" or "host") or a formatted string (e.g. "%{host} - %{message}" or "%{message}"). If a key is passed then it will be automatically converted as required for insertion into SQL. If it's a formatted string then it will be passed in verbatim. | Yes | |
+ | unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. **This is highly dangerous** and you should **not** use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
+ | max_pool_size | Number | Maximum number of connections to open to the SQL server at any one time | No | 5 |
+ | connection_timeout | Number | Number of milliseconds before a SQL connection is closed | No | 10000 |
+ | flush_size | Number | Maximum number of entries to buffer before sending to SQL - if this is reached before idle_flush_time | No | 1000 |
+ | max_flush_exceptions | Number | Number of sequential flushes which cause an exception, before the set of events are discarded. Set to a value less than 1 if you never want it to stop. This should be carefully configured with respect to retry_initial_interval and retry_max_interval, if your SQL server is not highly available | No | 10 |
+ | retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval | No | 2 |
+ | retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
+ | retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until `max_flush_exceptions`. Useful if you're using a JDBC driver which returns retry-able, but non-standard SQL state codes in its exceptions. | No | [] |
+ | event_as_json_keyword | String | The magic keyword that the plugin looks for to convert the entire event into a JSON object. As Logstash does not support this out of the box with its `sprintf` implementation, you can use whatever this field is set to in the statement parameters | No | @event |
+ | enable_event_as_json_keyword | Boolean | Enables the magic keyword set in the configuration option `event_as_json_keyword`. Without this enabled the plugin will not convert the `event_as_json_keyword` into JSON encoding of the entire event. | No | False |
59
+ ## Example configurations
60
+ Example logstash configurations, can now be found in the examples directory. Where possible we try to link every configuration with a tested jar.
61
+
62
+ If you have a working sample configuration, for a DB thats not listed, pull requests are welcome.
63
+
64
+ ## Development and Running tests
65
+ For development tests are recommended to run inside a virtual machine (Vagrantfile is included in the repo), as it requires
66
+ access to various database engines and could completely destroy any data in a live system.
67
+
68
+ If you have vagrant available (this is temporary whilst I'm hacking on v5 support. I'll make this more streamlined later):
69
+ - `vagrant up`
70
+ - `vagrant ssh`
71
+ - `cd /vagrant`
72
+ - `gem install bundler`
73
+ - `cd /vagrant && bundle install && bundle exec rake vendor && bundle exec rake install_jars`
74
+ - `./scripts/travis-before_script.sh && source ./scripts/travis-variables.sh`
75
+ - `bundle exec rspec`
76
+
77
+ ## Releasing
78
+ - Update Changelog
79
+ - Bump version in gemspec
80
+ - Commit
81
+ - Create tag `git tag v<version-number-in-gemspec>`
82
+ - `bundle exec rake install_jars`
83
+ - `bundle exec rake pre_release_checks`
84
+ - `gem build logstash-output-analyticdb.gemspec`
85
+ - `gem push`
data/CONTRIBUTORS ADDED
@@ -0,0 +1,18 @@
+ logstash-output-jdbc is a project originally created by Karl Southern
+ (the_angry_angel), but there are a number of people that have contributed
+ or implemented key features over time. We do our best to keep this list
+ up-to-date, but you can also have a look at the nice contributor graphs
+ produced by GitHub: https://github.com/theangryangel/logstash-output-jdbc/graphs/contributors
+
+ * [hordijk](https://github.com/hordijk)
+ * [dmitryakadiamond](https://github.com/dmitryakadiamond)
+ * [MassimoSporchia](https://github.com/MassimoSporchia)
+ * [ebuildy](https://github.com/ebuildy)
+ * [kushtrimjunuzi](https://github.com/kushtrimjunuzi)
+ * [josemazo](https://github.com/josemazo)
+ * [aceoliver](https://github.com/aceoliver)
+ * [roflmao](https://github.com/roflmao)
+ * [onesuper](https://github.com/onesuper)
+ * [phr0gz](https://github.com/phr0gz)
+ * [jMonsinjon](https://github.com/jMonsinjon)
+ * [mlkmhd](https://github.com/mlkmhd)
data/lib/logstash-output-analyticdb_jars.rb ADDED
@@ -0,0 +1,5 @@
+ # encoding: utf-8
+ require 'logstash/environment'
+
+ root_dir = File.expand_path(File.join(File.dirname(__FILE__), '..'))
+ LogStash::Environment.load_runtime_jars! File.join(root_dir, 'vendor')
data/lib/logstash/outputs/analyticdb.rb ADDED
@@ -0,0 +1,404 @@
+ # encoding: utf-8
+ require 'logstash/outputs/base'
+ require 'logstash/namespace'
+ require 'concurrent'
+ require 'stud/interval'
+ require 'java'
+ require 'logstash-output-analyticdb_jars'
+ require 'json'
+ require 'bigdecimal'
+
+ # Write events to a SQL engine, using JDBC.
+ #
+ # It is up to the user of the plugin to correctly configure the plugin. This
+ # includes correctly crafting the SQL statement, and matching the number of
+ # parameters correctly.
+ class LogStash::Outputs::Analyticdb < LogStash::Outputs::Base
+   concurrency :shared
+
+   STRFTIME_FMT = '%Y-%m-%d %T.%L'.freeze
+
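The comments further down in this file explain why timestamps are bound with `setString` rather than `setTimestamp`; as a quick illustration of what `STRFTIME_FMT` produces (plain Ruby, no Logstash required):

```ruby
# The plugin's timestamp format: date, time, milliseconds, and no ISO-8601 'T'
# (some databases, e.g. Derby, choke on the 'T').
STRFTIME_FMT = '%Y-%m-%d %T.%L'.freeze

t = Time.utc(2017, 11, 8, 12, 34, 56, 789_000) # 789_000 microseconds
puts t.strftime(STRFTIME_FMT) # => "2017-11-08 12:34:56.789"
```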
+   RETRYABLE_SQLSTATE_CLASSES = [
+     # Classes of retryable SQLSTATE codes
+     # Not all in the class will be retryable. However, this is the best that
+     # we've got right now.
+     # If a custom state code is required, set it in retry_sql_states.
+     '08', # Connection Exception
+     '24', # Invalid Cursor State (Maybe retry-able in some circumstances)
+     '25', # Invalid Transaction State
+     '40', # Transaction Rollback
+     '53', # Insufficient Resources
+     '54', # Program Limit Exceeded (MAYBE)
+     '55', # Object Not In Prerequisite State
+     '57', # Operator Intervention
+     '58', # System Error
+   ].freeze
+
+   config_name 'analyticdb'
+
+   # Driver class - Reintroduced for https://github.com/theangryangel/logstash-output-jdbc/issues/26
+   config :driver_class, validate: :string
+
+   # Does the JDBC driver support autocommit?
+   config :driver_auto_commit, validate: :boolean, default: true, required: true
+
+   # Where to find the jar
+   # Defaults to not required, and to the original behaviour
+   config :driver_jar_path, validate: :string, required: false
+
+   # jdbc connection string
+   config :connection_string, validate: :string, required: true
+
+   # jdbc username - optional; may be in the connection string
+   config :username, validate: :string, required: false
+
+   # jdbc password - optional; may be in the connection string
+   config :password, validate: :string, required: false
+
+   # [ "insert into table (message) values(?)", "%{message}" ]
+   config :statement, validate: :array, required: true
+
+   # If this is an unsafe statement, use event.sprintf
+   # This also has potential performance penalties due to having to create a
+   # new statement for each event, rather than adding to the batch and issuing
+   # multiple inserts in one go
+   config :unsafe_statement, validate: :boolean, default: false
+
+   # Number of connections in the pool to maintain
+   config :max_pool_size, validate: :number, default: 5
+
+   # Connection timeout
+   config :connection_timeout, validate: :number, default: 10000
+
+   # We buffer a certain number of events before flushing that out to SQL.
+   # This setting controls how many events will be buffered before sending a
+   # batch of events.
+   config :flush_size, validate: :number, default: 1000
+
+   # Set initial interval in seconds between retries. Doubled on each retry up to `retry_max_interval`
+   config :retry_initial_interval, validate: :number, default: 2
+
+   # Maximum time between retries, in seconds
+   config :retry_max_interval, validate: :number, default: 128
+
+   # Any additional custom, retryable SQL state codes.
+   # Suitable for configuring retryable custom JDBC SQL state codes.
+   config :retry_sql_states, validate: :array, default: []
+
+   # Run a connection test on start.
+   config :connection_test, validate: :boolean, default: true
+
+   # Connection test and init string, required for some JDBC endpoints
+   config :connection_test_query, validate: :string, required: false
+
+   # Maximum number of sequential failed attempts, before we stop retrying.
+   # If set to < 1, then it will infinitely retry.
+   # At the default values this is a little over 10 minutes
+   config :max_flush_exceptions, validate: :number, default: 10
+
+   config :max_repeat_exceptions, obsolete: 'This has been replaced by max_flush_exceptions - which behaves slightly differently. Please check the documentation.'
+   config :max_repeat_exceptions_time, obsolete: 'This is no longer required'
+   config :idle_flush_time, obsolete: 'No longer necessary under Logstash v5'
+
+   # Allows the whole event to be converted to JSON
+   config :enable_event_as_json_keyword, validate: :boolean, default: false
+
+   # The magic key used to convert the whole event to JSON. If you need this, and you have the default in your events, you can use this to change your magic keyword.
+   config :event_as_json_keyword, validate: :string, default: '@event'
+
+   config :commit_size, validate: :number, default: 32768
+
+   def register
+     @logger.info('JDBC - Starting up')
+
+     load_jar_files!
+
+     @stopping = Concurrent::AtomicBoolean.new(false)
+
+     @logger.warn('JDBC - Flush size is set to > 1000') if @flush_size > 1000
+
+     if @statement.empty?
+       @logger.error('JDBC - No statement provided. Configuration error.')
+     end
+
+     if !@unsafe_statement && @statement.length < 2
+       @logger.error("JDBC - Statement has no parameters. No events will be inserted into SQL as you're not passing any event data. Likely configuration error.")
+     end
+
+     @stmt_prefix = @statement[0]
+     fst_tmp = @stmt_prefix.index("(")
+     snd_prefix = @stmt_prefix[fst_tmp + 1, @stmt_prefix.length]
+     snd_tmp = snd_prefix.index("(")
+     if snd_tmp == nil
+       @pre_len = fst_tmp
+     else
+       @pre_len = fst_tmp + snd_tmp + 1
+     end
+
+     setup_and_test_pool!
+ end
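The prefix computation in `register` above locates the opening parenthesis of the VALUES tuple, so `submit` can later splice additional rows onto one INSERT. A standalone sketch of the same index arithmetic (the SQL strings are made-up examples, not from the plugin's tests):

```ruby
# Mirrors the @pre_len computation in register: find the '(' of the VALUES
# clause. If the statement has a column list, that is the second '(' overall;
# otherwise it is the first.
def values_prefix_length(stmt)
  fst = stmt.index('(')                        # first '(' in the statement
  snd = stmt[fst + 1, stmt.length].index('(')  # next '(' after it, if any
  snd.nil? ? fst : fst + snd + 1
end

sql = 'insert into logs (host, message) values(?, ?)'
pre_len = values_prefix_length(sql)
# Everything from pre_len onwards is the per-row tuple that gets repeated.
puts sql[pre_len, sql.length] # => "(?, ?)"
```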
+
+   def multi_receive(events)
+     events.each_slice(@flush_size) do |slice|
+       retrying_submit(slice)
+     end
+   end
+
+   def close
+     @stopping.make_true
+     @pool.close
+     super
+   end
+
+   private
+
+   def setup_and_test_pool!
+     # Setup pool
+     @pool = Java::ComZaxxerHikari::HikariDataSource.new
+
+     @pool.setAutoCommit(@driver_auto_commit)
+     @pool.setDriverClassName(@driver_class) if @driver_class
+
+     @pool.setJdbcUrl(@connection_string)
+
+     @pool.setUsername(@username) if @username
+     @pool.setPassword(@password) if @password
+
+     @pool.setMaximumPoolSize(@max_pool_size)
+     @pool.setConnectionTimeout(@connection_timeout)
+
+     validate_connection_timeout = (@connection_timeout / 1000) / 2
+
+     if !@connection_test_query.nil? and @connection_test_query.length > 1
+       @pool.setConnectionTestQuery(@connection_test_query)
+       @pool.setConnectionInitSql(@connection_test_query)
+     end
+
+     return unless @connection_test
+
+     # Test connection
+     test_connection = @pool.getConnection
+     unless test_connection.isValid(validate_connection_timeout)
+       @logger.warn('JDBC - Connection is not reporting as valid. Either connection is invalid, or driver is not getting the appropriate response.')
+     end
+     test_connection.close
+   end
+
+   def load_jar_files!
+     # Load jar from driver path
+     unless @driver_jar_path.nil?
+       raise LogStash::ConfigurationError, 'JDBC - Could not find jar file at given path. Check config.' unless File.exist? @driver_jar_path
+       require @driver_jar_path
+       return
+     end
+
+     # Revert original behaviour of loading from vendor directory
+     # if no path given
+     jarpath = if ENV['LOGSTASH_HOME']
+                 File.join(ENV['LOGSTASH_HOME'], '/vendor/jar/jdbc/*.jar')
+               else
+                 File.join(File.dirname(__FILE__), '../../../vendor/jar/jdbc/*.jar')
+               end
+
+     @logger.trace('JDBC - jarpath', path: jarpath)
+
+     jars = Dir[jarpath]
+     raise LogStash::ConfigurationError, 'JDBC - No jars found. Have you read the README?' if jars.empty?
+
+     jars.each do |jar|
+       @logger.trace('JDBC - Loaded jar', jar: jar)
+       require jar
+     end
+   end
+
+   def submit(events)
+     connection = nil
+     statement = nil
+     events_to_retry = []
+     insert_sql = ""
+     sql_len = 0
+     is_insert_err = false
+
+     begin
+       connection = @pool.getConnection
+     rescue => e
+       log_jdbc_exception(e, true, nil)
+       # If a connection is not available, then the server has gone away
+       # We're not counting that towards our retry count.
+       return events, false
+     end
+
+     begin
+       events.each do |event|
+         statement = connection.prepareStatement(
+           (@unsafe_statement == true) ? event.sprintf(@statement[0]) : @statement[0]
+         )
+         begin
+           statement = add_statement_event_params(statement, event) if @statement.length > 1
+           stmt_str = statement.toString
+           one_sql = stmt_str[stmt_str.index(": ") + 2, stmt_str.length]
+           if sql_len + one_sql.length >= @commit_size
+             statement.execute(insert_sql)
+             sql_len = 0
+             insert_sql = ""
+           end
+           if sql_len == 0
+             insert_sql = one_sql
+             sql_len = one_sql.length
+           else
+             insert_sql.concat(",").concat(one_sql[@pre_len, one_sql.length])
+             sql_len = sql_len + one_sql.length - @pre_len
+           end
+         rescue => e
+           retry_exception?(e, event.to_json())
+         end
+       end
+       statement.execute(insert_sql)
+     rescue => e
+       @logger.error("Submit data error, sql is #{insert_sql}, error is #{e}")
+       is_insert_err = true
+     ensure
+       statement.close unless statement.nil?
+     end
+
+     # retry each event
+     if is_insert_err
+       events.each do |event|
+         begin
+           statement = connection.prepareStatement(
+             (@unsafe_statement == true) ? event.sprintf(@statement[0]) : @statement[0]
+           )
+           statement = add_statement_event_params(statement, event) if @statement.length > 1
+           statement.execute
+         rescue => e
+           if retry_exception?(e, event.to_json())
+             events_to_retry.push(event)
+           end
+         ensure
+           statement.close unless statement.nil?
+         end
+       end
+     end
+     # retry each event end
+
+     connection.close unless connection.nil?
+
+     return events_to_retry, true
+ end
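The batching in `submit` above can be sketched in plain Ruby: the first rendered statement is kept whole, subsequent rows contribute only their value tuple, and the buffer flushes once it would exceed `commit_size`. This is an illustrative simplification (string lengths stand in for the plugin's `sql_len` accounting; `batch_rows` is a made-up helper, not plugin code):

```ruby
# Concatenate rendered single-row INSERTs into multi-row INSERT batches.
# pre_len is the index of the '(' opening the VALUES tuple in each row.
def batch_rows(rows, pre_len, commit_size)
  batches = []
  insert_sql = ''
  rows.each do |one_sql|
    # Flush the buffer before it would exceed commit_size bytes.
    if !insert_sql.empty? && insert_sql.length + one_sql.length >= commit_size
      batches << insert_sql
      insert_sql = ''
    end
    if insert_sql.empty?
      insert_sql = one_sql.dup                       # first row: full statement
    else
      insert_sql << ',' << one_sql[pre_len, one_sql.length] # later rows: tuple only
    end
  end
  batches << insert_sql unless insert_sql.empty?
  batches
end

rows = [
  "insert into logs (n) values('a')",
  "insert into logs (n) values('b')",
  "insert into logs (n) values('c')",
]
pre_len = rows[0].index('values(') + 'values'.length # '(' of the tuple
puts batch_rows(rows, pre_len, 1024).first
# => insert into logs (n) values('a'),('b'),('c')
```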
+
+   def retrying_submit(actions)
+     # Initially we submit the full list of actions
+     submit_actions = actions
+     count_as_attempt = true
+
+     attempts = 1
+
+     sleep_interval = @retry_initial_interval
+     while @stopping.false? and (submit_actions and !submit_actions.empty?)
+       return if !submit_actions || submit_actions.empty? # If everything's a success we move along
+       # We retry whatever didn't succeed
+       submit_actions, count_as_attempt = submit(submit_actions)
+
+       # Everything was a success!
+       break if !submit_actions || submit_actions.empty?
+
+       if @max_flush_exceptions > 0 and count_as_attempt == true
+         attempts += 1
+
+         if attempts > @max_flush_exceptions
+           @logger.error("JDBC - max_flush_exceptions has been reached. #{submit_actions.length} events have been unable to be sent to SQL and are being dropped. See previously logged exceptions for details.")
+           break
+         end
+       end
+
+       # If we're retrying the action sleep for the recommended interval
+       # Double the interval for the next time through to achieve exponential backoff
+       Stud.stoppable_sleep(sleep_interval) {@stopping.true?}
+       sleep_interval = next_sleep_interval(sleep_interval)
+     end
+   end
+
+   def add_statement_event_params(statement, event)
+     @statement[1..-1].each_with_index do |i, idx|
+       if @enable_event_as_json_keyword == true and i.is_a? String and i == @event_as_json_keyword
+         value = event.to_json
+       elsif i.is_a? String
+         value = event.get(i)
+         if value.nil? and i =~ /%\{/
+           value = event.sprintf(i)
+         end
+       else
+         value = i
+       end
+
+       case value
+       when Time
+         # See LogStash::Timestamp, below, for the why behind strftime.
+         statement.setString(idx + 1, value.strftime(STRFTIME_FMT))
+       when LogStash::Timestamp
+         # XXX: Using setString as opposed to setTimestamp, because setTimestamp
+         # doesn't behave correctly in some drivers (Known: sqlite)
+         #
+         # Additionally this does not use `to_iso8601`, since some SQL databases
+         # choke on the 'T' in the string (Known: Derby).
+         #
+         # strftime appears to be the most reliable across drivers.
+         #statement.setString(idx + 1, value.time.strftime(STRFTIME_FMT))
+         statement.setString(idx + 1, value.time.strftime("%Y-%m-%d %H:%M:%S"))
+       when Fixnum, Integer
+         if value > 2147483647 or value < -2147483648
+           statement.setLong(idx + 1, value)
+         else
+           statement.setInt(idx + 1, value)
+         end
+       when BigDecimal
+         statement.setBigDecimal(idx + 1, value.to_java)
+       when Float
+         statement.setFloat(idx + 1, value)
+       when String
+         statement.setString(idx + 1, value)
+       when Array, Hash
+         statement.setString(idx + 1, value.to_json)
+       when true, false
+         statement.setBoolean(idx + 1, value)
+       else
+         statement.setString(idx + 1, nil)
+       end
+     end
+
+     statement
+ end
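The type dispatch in `add_statement_event_params` above can be summarised as a mapping from Ruby value classes to `java.sql.PreparedStatement` setters. A plain-Ruby sketch of that routing (the `jdbc_setter_for` helper is illustrative, not part of the plugin; the `LogStash::Timestamp` branch is omitted since it needs Logstash loaded):

```ruby
require 'bigdecimal'

# Which PreparedStatement setter a Ruby value would be routed to.
def jdbc_setter_for(value)
  case value
  when Time        then :setString     # formatted with STRFTIME_FMT
  when Integer
    (value > 2_147_483_647 || value < -2_147_483_648) ? :setLong : :setInt
  when BigDecimal  then :setBigDecimal
  when Float       then :setFloat
  when String      then :setString
  when Array, Hash then :setString     # serialised with to_json
  when true, false then :setBoolean
  else                  :setString     # bound as NULL
  end
end

puts jdbc_setter_for(4_000_881_632_477_184) # => setLong (outside 32-bit int range)
```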
+
+   def retry_exception?(exception, event)
+     retrying = (exception.respond_to? 'getSQLState' and (RETRYABLE_SQLSTATE_CLASSES.include?(exception.getSQLState.to_s[0, 2]) or @retry_sql_states.include?(exception.getSQLState)))
+     log_jdbc_exception(exception, retrying, event)
+
+     retrying
+   end
+
+   def log_jdbc_exception(exception, retrying, event)
+     current_exception = exception
+     log_text = 'JDBC - Exception. ' + (retrying ? 'Retrying' : 'Not retrying')
+
+     log_method = (retrying ? 'warn' : 'error')
+
+     loop do
+       # TODO reformat event output so that it only shows the fields necessary.
+
+       @logger.send(log_method, log_text, :exception => current_exception, :statement => @statement[0], :event => event)
+
+       if current_exception.respond_to? 'getNextException'
+         current_exception = current_exception.getNextException()
+       else
+         current_exception = nil
+       end
+
+       break if current_exception == nil
+     end
+   end
+
+   def next_sleep_interval(current_interval)
+     doubled = current_interval * 2
+     doubled > @retry_max_interval ? @retry_max_interval : doubled
+ end
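`next_sleep_interval` above gives exponential backoff capped at `retry_max_interval`. With the defaults (2s initial, 128s cap) the retry intervals look like this (standalone sketch):

```ruby
# Doubling backoff with a ceiling, as used between failed flush attempts.
def next_sleep_interval(current, max = 128)
  doubled = current * 2
  doubled > max ? max : doubled
end

intervals = []
interval = 2
9.times { intervals << interval; interval = next_sleep_interval(interval) }
puts intervals.inspect # => [2, 4, 8, 16, 32, 64, 128, 128, 128]
```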
+ end # class LogStash::Outputs::Analyticdb
data/logstash-output-analyticdb.gemspec ADDED
@@ -0,0 +1,32 @@
+ Gem::Specification.new do |s|
+   s.name = 'logstash-output-analyticdb'
+   s.version = '5.4.0.4'
+   s.licenses = ['Apache License (2.0)']
+   s.summary = 'This plugin allows you to output to SQL, via JDBC'
+   s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install 'logstash-output-analyticdb'. This gem is not a stand-alone program"
+   s.authors = ['the_angry_angel']
+   s.email = 'karl+github@theangryangel.co.uk'
+   s.homepage = 'https://github.com/wuchase/logstash-output-analyticdb'
+   s.require_paths = ['lib']
+
+   # Files
+   s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
+   # Tests
+   s.test_files = s.files.grep(%r{^(test|spec|features)/})
+
+   # Special flag to let us know this is actually a logstash plugin
+   s.metadata = { 'logstash_plugin' => 'true', 'logstash_group' => 'output' }
+
+   # Gem dependencies
+   #
+   s.add_runtime_dependency 'logstash-core-plugin-api', ">= 1.60", "<= 2.99"
+   s.add_runtime_dependency 'logstash-codec-plain'
+   s.add_development_dependency 'logstash-devutils'
+
+   s.requirements << "jar 'com.zaxxer:HikariCP', '2.7.2'"
+   s.requirements << "jar 'org.apache.logging.log4j:log4j-slf4j-impl', '2.6.2'"
+
+   s.add_development_dependency 'jar-dependencies'
+   s.add_development_dependency 'ruby-maven', '~> 3.3'
+   s.add_development_dependency 'rubocop', '0.41.2'
+ end
@@ -0,0 +1,216 @@
1
+ require 'logstash/devutils/rspec/spec_helper'
2
+ require 'logstash/outputs/analyticdb'
3
+ require 'stud/temporary'
4
+ require 'java'
5
+ require 'securerandom'
6
+
7
+ RSpec::Support::ObjectFormatter.default_instance.max_formatted_output_length = 80000
8
+
9
+ RSpec.configure do |c|
10
+
11
+ def start_service(name)
12
+ cmd = "sudo /etc/init.d/#{name}* start"
13
+
14
+ `which systemctl`
15
+ if $?.success?
16
+ cmd = "sudo systemctl start #{name}"
17
+ end
18
+
19
+ `#{cmd}`
20
+ end
21
+
22
+ def stop_service(name)
23
+ cmd = "sudo /etc/init.d/#{name}* stop"
24
+
25
+ `which systemctl`
26
+ if $?.success?
27
+ cmd = "sudo systemctl stop #{name}"
28
+ end
29
+
30
+ `#{cmd}`
31
+ end
32
+
33
+ end
34
+
35
+ RSpec.shared_context 'rspec setup' do
36
+ it 'ensure jar is available' do
37
+ expect(ENV[jdbc_jar_env]).not_to be_nil, "#{jdbc_jar_env} not defined, required to run tests"
38
+ expect(File.exist?(ENV[jdbc_jar_env])).to eq(true), "#{jdbc_jar_env} defined, but not valid"
39
+ end
40
+ end
41
+
42
+ RSpec.shared_context 'when initializing' do
43
+ it 'shouldn\'t register with a missing jar file' do
44
+ jdbc_settings['driver_jar_path'] = nil
45
+ plugin = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
46
+ expect { plugin.register }.to raise_error(LogStash::ConfigurationError)
47
+ end
48
+ end
49
+
50
+ RSpec.shared_context 'when outputting messages' do
51
+ let(:logger) {
52
+ double("logger")
53
+ }
54
+
55
+ let(:jdbc_test_table) do
56
+ 'logstash_output_jdbc_test'
57
+ end
58
+
59
+ let(:jdbc_drop_table) do
60
+ "DROP TABLE #{jdbc_test_table}"
61
+ end
62
+
63
+ let(:jdbc_statement_fields) do
64
+ [
65
+ {db_field: "created_at", db_type: "datetime", db_value: '?', event_field: '@timestamp'},
66
+ {db_field: "message", db_type: "varchar(512)", db_value: '?', event_field: 'message'},
67
+ {db_field: "message_sprintf", db_type: "varchar(512)", db_value: '?', event_field: 'sprintf-%{message}'},
68
+ {db_field: "static_int", db_type: "int", db_value: '?', event_field: 'int'},
69
+ {db_field: "static_bigint", db_type: "bigint", db_value: '?', event_field: 'bigint'},
70
+ {db_field: "static_float", db_type: "float", db_value: '?', event_field: 'float'},
71
+ {db_field: "static_bool", db_type: "boolean", db_value: '?', event_field: 'bool'},
72
+ {db_field: "static_bigdec", db_type: "decimal", db_value: '?', event_field: 'bigdec'}
73
+ ]
74
+ end
75
+
76
+ let(:jdbc_create_table) do
77
+ fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]} #{entry[:db_type]} not null" }.join(", ")
78
+
79
+ "CREATE table #{jdbc_test_table} (#{fields})"
80
+ end
81
+
82
+ let(:jdbc_drop_table) do
83
+ "DROP table #{jdbc_test_table}"
84
+ end
85
+
86
+ let(:jdbc_statement) do
87
+ fields = jdbc_statement_fields.collect { |entry| "#{entry[:db_field]}" }.join(", ")
88
+ values = jdbc_statement_fields.collect { |entry| "#{entry[:db_value]}" }.join(", ")
89
+ statement = jdbc_statement_fields.collect { |entry| entry[:event_field] }
90
+
91
+ statement.insert(0, "insert into #{jdbc_test_table} (#{fields}) values(#{values})")
92
+ end
93
+
94
+ let(:systemd_database_service) do
95
+ nil
96
+ end
97
+
98
+ let(:event) do
99
+ # TODO: Auto generate fields from jdbc_statement_fields
100
+ LogStash::Event.new({
101
+ message: "test-message #{SecureRandom.uuid}",
102
+ float: 12.1,
103
+ bigint: 4000881632477184,
104
+ bool: true,
105
+ int: 1,
106
+ bigdec: BigDecimal.new("123.123")
107
+ })
108
+ end
+
+   let(:plugin) do
+     # Setup logger
+     allow(LogStash::Outputs::Jdbc).to receive(:logger).and_return(logger)
+
+     # XXX: Suppress reflection logging. There has to be a better way around this.
+     allow(logger).to receive(:debug).with(/config LogStash::/)
+
+     # Suppress beta warnings.
+     allow(logger).to receive(:info).with(/Please let us know if you find bugs or have suggestions on how to improve this plugin./)
+
+     # Suppress start up messages.
+     expect(logger).to receive(:info).once.with(/JDBC - Starting up/)
+
+     # Setup plugin
+     output = LogStash::Plugin.lookup('output', 'jdbc').new(jdbc_settings)
+     output.register
+
+     output
+   end
+
+   before :each do
+     # Setup table
+     c = plugin.instance_variable_get(:@pool).getConnection
+
+     # Derby doesn't support IF EXISTS.
+     # Seems like the quickest solution. Bleurgh.
+     begin
+       stmt = c.createStatement
+       stmt.executeUpdate(jdbc_drop_table)
+     rescue
+       # noop
+     ensure
+       stmt.close
+
+       stmt = c.createStatement
+       stmt.executeUpdate(jdbc_create_table)
+       stmt.close
+       c.close
+     end
+   end
+
+   # Delete table after each
+   after :each do
+     c = plugin.instance_variable_get(:@pool).getConnection
+
+     stmt = c.createStatement
+     stmt.executeUpdate(jdbc_drop_table)
+     stmt.close
+     c.close
+   end
+
+   it 'should save an event' do
+     expect { plugin.multi_receive([event]) }.to_not raise_error
+
+     # Verify the number of items in the output table
+     c = plugin.instance_variable_get(:@pool).getConnection
+
+     # TODO: replace this simple count with a check of the actual contents
+
+     stmt = c.prepareStatement("select count(*) as total from #{jdbc_test_table} where message = ?")
+     stmt.setString(1, event.get('message'))
+     rs = stmt.executeQuery
+     count = 0
+     count = rs.getInt('total') while rs.next
+     stmt.close
+     c.close
+
+     expect(count).to eq(1)
+   end
+
+   it 'should not save the event, and log an unretryable exception' do
+     e = event
+     original_event = e.get('message')
+     e.set('message', nil)
+
+     expect(logger).to receive(:error).once.with(/JDBC - Exception. Not retrying/, Hash)
+     expect { plugin.multi_receive([event]) }.to_not raise_error
+
+     e.set('message', original_event)
+   end
+
+   it 'should retry after a connection loss, and log a warning' do
+     skip "does not run as a service, or known issue with test" if systemd_database_service.nil?
+
+     p = plugin
+
+     # Check that everything is fine right now
+     expect { p.multi_receive([event]) }.not_to raise_error
+
+     stop_service(systemd_database_service)
+
+     # Start a thread to restart the service after the fact.
+     t = Thread.new(systemd_database_service) { |service|
+       sleep 20
+
+       start_service(service)
+     }
+
+     t.run
+
+     expect(logger).to receive(:warn).at_least(:once).with(/JDBC - Exception. Retrying/, Hash)
+     expect { p.multi_receive([event]) }.to_not raise_error
+
+     # Wait for the thread to finish
+     t.join
+   end
+ end
@@ -0,0 +1,24 @@
+ require_relative '../jdbc_spec_helper'
+
+ describe 'logstash-output-analyticdb: AnalyticDB', if: ENV['JDBC_MYSQL_JAR'] do
+   include_context 'rspec setup'
+   include_context 'when outputting messages'
+
+   let(:jdbc_jar_env) do
+     'JDBC_MYSQL_JAR'
+   end
+
+   let(:systemd_database_service) do
+     'mysql'
+   end
+
+   let(:jdbc_settings) do
+     {
+       'driver_class' => 'com.mysql.jdbc.Driver',
+       'connection_string' => 'jdbc:mysql://localhost/logstash?user=logstash&password=logstash',
+       'driver_jar_path' => ENV[jdbc_jar_env],
+       'statement' => jdbc_statement,
+       'max_flush_exceptions' => 1
+     }
+   end
+ end
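Outside the test suite, the `jdbc_settings` hash above maps one-to-one onto options in a Logstash pipeline configuration. A minimal sketch, assuming the plugin registers under the name `analyticdb` (the specs here look it up as `jdbc`); the connection string, jar path, and table name are placeholders, not values from this repository:

```conf
output {
  analyticdb {
    driver_class => "com.mysql.jdbc.Driver"
    connection_string => "jdbc:mysql://localhost/logstash?user=logstash&password=logstash"
    driver_jar_path => "/path/to/mysql-connector-java.jar"
    statement => [ "insert into logstash_test (message) values(?)", "message" ]
    max_flush_exceptions => 1
  }
}
```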
@@ -0,0 +1,11 @@
+ require_relative '../jdbc_spec_helper'
+
+ describe LogStash::Outputs::Jdbc do
+   context 'when initializing' do
+     it 'shouldn\'t register without a config' do
+       expect do
+         LogStash::Plugin.lookup('output', 'jdbc').new
+       end.to raise_error(LogStash::ConfigurationError)
+     end
+   end
+ end
metadata ADDED
@@ -0,0 +1,152 @@
+ --- !ruby/object:Gem::Specification
+ name: logstash-output-analyticdb
+ version: !ruby/object:Gem::Version
+   version: 5.4.0.4
+ platform: ruby
+ authors:
+ - the_angry_angel
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2018-12-04 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.60'
+     - - "<="
+       - !ruby/object:Gem::Version
+         version: '2.99'
+   name: logstash-core-plugin-api
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.60'
+     - - "<="
+       - !ruby/object:Gem::Version
+         version: '2.99'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: logstash-codec-plain
+   prerelease: false
+   type: :runtime
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: logstash-devutils
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   name: jar-dependencies
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.3'
+   name: ruby-maven
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.3'
+ - !ruby/object:Gem::Dependency
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - '='
+       - !ruby/object:Gem::Version
+         version: 0.41.2
+   name: rubocop
+   prerelease: false
+   type: :development
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - '='
+       - !ruby/object:Gem::Version
+         version: 0.41.2
+ description: This gem is a logstash plugin required to be installed on top of the
+   Logstash core pipeline using $LS_HOME/bin/logstash-plugin install 'logstash-output-analyticdb'.
+   This gem is not a stand-alone program.
+ email: karl+github@theangryangel.co.uk
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - CHANGELOG.md
+ - Gemfile
+ - README.md
+ - THANKS.md
+ - lib/logstash-output-analyticdb_jars.rb
+ - lib/logstash/outputs/analyticdb.rb
+ - logstash-output-analyticdb.gemspec
+ - spec/jdbc_spec_helper.rb
+ - spec/outputs/jdbc_analyticdb_spec.rb
+ - spec/outputs/jdbc_spec.rb
+ homepage: https://github.com/wuchase/logstash-output-analyticdb
+ licenses:
+ - Apache License (2.0)
+ metadata:
+   logstash_plugin: 'true'
+   logstash_group: output
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements:
+ - jar 'com.zaxxer:HikariCP', '2.7.2'
+ - jar 'org.apache.logging.log4j:log4j-slf4j-impl', '2.6.2'
+ rubyforge_project:
+ rubygems_version: 2.7.6
+ signing_key:
+ specification_version: 4
+ summary: This plugin allows you to output to SQL, via JDBC
+ test_files:
+ - spec/jdbc_spec_helper.rb
+ - spec/outputs/jdbc_analyticdb_spec.rb
+ - spec/outputs/jdbc_spec.rb