jruby-druid 1.0.0.pre.rc2 → 1.0.0.pre.rc3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 21e8d32e124615a4416d3c96d100dc5321aad913
- data.tar.gz: 5d668c1de6018fe3d4e0fe0a8ab8150df5ebf89d
+ metadata.gz: 06409cebe3d3cd2cb703ddee8f02641ec595fefb
+ data.tar.gz: 0ca03f5b670a4d814d1d5d172b73376894558245
  SHA512:
- metadata.gz: d8f6e25c91a4d162f4d1b463354cdc2baa55b41bebe0017097bf16752952bcb181912449237db0b25c9f2cfdf9249d63656796a45049b33361bf44010bd1b8a9
- data.tar.gz: 2a4b4842a23fc82303c57b148a74a5b4a9d141c6fe6382c261cde2fe8945fffae45f295f1b07ddfd0704afc937fc3393639cf131d1c1a4778de07a178ebff376
+ metadata.gz: aa7f5e0373e4c546661f9cb6a0cd986e7caaf9b34176556fad06296f9348ea7068f581d6450c4adf5e4621ca567065184e4a6b96734d0747a07913fc3fefb845
+ data.tar.gz: 666e188bd7807760b11b9ca5c1eb1d50627fcb929501f77d53edda0472637e9d663eccd502f33a11b29a8538bac986f0250f406eac77d7351a67ad51516d6b9a
data/.travis.yml CHANGED
@@ -1,4 +1,10 @@
  language: ruby
  rvm:
- - 2.2.3
- before_install: gem install bundler -v 1.11.2
+ - jruby-1.7.19
+ - jruby-9.1.2.0
+ jdk:
+ - openjdk7
+ - oraclejdk7
+ - oraclejdk8
+ script:
+ - bundle exec rspec spec/druid/
data/Gemfile CHANGED
@@ -1,2 +1,9 @@
  source 'https://rubygems.org'
+
+ if JRUBY_VERSION.to_i < 9
+   gem 'activesupport', '~> 4.0'
+ else
+   gem 'activesupport', '~> 5.0'
+ end
+
  gemspec
data/README.md CHANGED
@@ -1,9 +1,147 @@
- # JRuby Druid
+ # jruby-druid
+ [![Build Status](https://travis-ci.org/andremleblanc/jruby-druid.svg?branch=master)](https://travis-ci.org/andremleblanc/jruby-druid)

- This project is in active development.
+ ## Foreword
+ This documentation is intended to be a quick-start guide, not a comprehensive
+ list of all available methods and configuration options. Please look through
+ the source for more information; good places to get started are `Druid::Client`
+ and the `Druid::Query` modules, as they expose most of the methods on the client.

+ Further, this guide assumes significant knowledge of Druid; for more info:
+ http://druid.io/docs/latest/design/index.html
+
+ ## Install
+
+ ```
+ $ gem install jruby-druid
+ ```
+
+ ## Usage
+
+ ### Creating a Client
+ ```
+ # With default configuration:
+ client = Druid::Client.new
+ ```
+
+ ```
+ # With custom tuning_granularity:
+ client = Druid::Client.new(tuning_granularity: :hour)
+ ```
+ *Note:* There are many configuration options; please take a look at
+ `Druid::Configuration` for more details.
+
+ ### Administrative Tasks
+ Creating a datasource is handled when writing points.
+
+ Delete datasource(s):
+ ```
+ # Delete a specific datasource:
+ datasource_name = 'foo'
+ client.delete_datasource(datasource_name)
+ ```
+
+ ```
+ # Delete all datasources:
+ client.delete_datasources
+ ```
+
+ *Note:* Deleting datasources and writing to them again can be a bit tricky in
+ Druid. If this is something you need to do (like in testing), you can use
+ the strong_delete configuration option
+ (`Druid::Client.new(strong_delete: true)`). This setting is not recommended for
+ production since it uses randomizeTaskIds in the DruidBeamConfig, which can lead
+ to race conditions.
+
+ List datasources:
+ ```
+ client.list_datasources
+ ```
+
+ Shutdown tasks:
+ ```
+ client.shutdown_tasks
+ ```
+
+ ## Writing Data
+
+ Write a datapoint:
+ ```
+ datasource_name = 'foo'
+ datapoint = {
+   timestamp: Time.now.utc, # Optional: Defaults to Time.now.utc
+   dimensions: { foo: 'bar' }, # Arbitrary key-value tags
+   metrics: { baz: 1 } # The values being measured
+ }
+ client.write_point(datasource_name, datapoint)
+ ```
+ *Note:* The `write_point` method utilizes the
+ [Tranquility Core API](https://github.com/druid-io/tranquility/blob/master/docs/core.md)
+ to communicate with Druid. The Tranquility API handles a lot of concerns like
+ buffering, service discovery, schema rollover, etc. The main features of
+ the `write_point` method are 1) creation of the data schema and 2) detection of
+ schema changes to support automatic schema evolution.
+
+ When you write a point with the `write_point` method, it compares
+ the point with the current schema (if present) and creates a new Tranquilizer
+ (connection to Druid) with the new schema if needed. This all happens
+ seamlessly, without your application needing to know anything about the schema.
+
+ The expectation is that the schema changes infrequently and that subsequent
+ writes after a schema change have the same schema.
+
+ ## Reading Data
+
+ Querying:
+ ```
+ start_time = Time.now.utc.advance(days: -30)
+
+ client.query(
+   queryType: 'timeseries',
+   dataSource: 'foo',
+   granularity: 'day',
+   intervals: start_time.iso8601 + '/' + Time.now.utc.iso8601,
+   aggregations: [{ type: 'longSum', name: 'baz', fieldName: 'baz' }]
+ )
+ ```
+ *Note:* The `query` method just POSTs the query to Druid; for information on
+ querying Druid, see http://druid.io/docs/latest/querying/querying.html. This is
+ intentionally simple to allow all current features and hopefully all future
+ features of the Druid query language without updating the gem.
+
+ Fill Empty Intervals:
+
+ Currently, Druid will not fill empty intervals for which there are no points. To
+ accommodate this need until it is handled more efficiently in Druid, use the
+ experimental `fill_value` feature in your query. This ensures you get a result
+ for every interval in `intervals`.
+
+ This has only been tested with 'timeseries' and single-dimension 'groupBy'
+ queries with simple granularities.
+ ```
+ start_time = Time.now.utc.advance(days: -30)
+
+ client.query(
+   queryType: 'timeseries',
+   dataSource: 'foo',
+   granularity: 'day',
+   intervals: start_time.iso8601 + '/' + Time.now.utc.iso8601,
+   aggregations: [{ type: 'longSum', name: 'baz', fieldName: 'baz' }],
+   fill_value: 0
+ )
+ ```
+
+ ## Testing
+
+ The tests in `/spec/druid` can be run without Druid running; the tests in
+ `/spec/integration` require Druid to be running.

  ## License

  The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).

+ ## Credit
+
+ This project is somewhat modeled after the
+ [influxdb-ruby](https://github.com/influxdata/influxdb-ruby) adapter; just
+ wanted to give a shout-out for their work.
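Regarding the `fill_value` feature above: based on `Druid::Query#find_or_create_point` in this release, missing intervals come back as synthesized points whose aggregations are set to `fill_value`. A hedged sketch of what a filled timeseries result would look like (timestamps and values illustrative):

```ruby
# The middle day had no data, so the gem synthesizes a point and sets each
# requested aggregation ('baz' here) to the fill_value (0).
[
  { 'timestamp' => '2016-06-26T00:00:00.000Z', 'result' => { 'baz' => 42 } },
  { 'timestamp' => '2016-06-27T00:00:00.000Z', 'result' => { 'baz' => 0 } }, # synthesized
  { 'timestamp' => '2016-06-28T00:00:00.000Z', 'result' => { 'baz' => 7 } }
]
```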
@@ -5,6 +5,9 @@ require 'jruby-druid'
  @max = 100000
  # 100000 points | druid: 154.772s | influx: 246.821s
  # 100000 points | druid: 208.095s | influx: 244.993s - Schema Change Detection
+ # 100000 points | druid: 391.063s | influx: 230.737s - 1.0.0-rc1
+ # 100000 points | druid: 529.668s | influx: 500.526s - 1.0.0 - Handle ZK Connection
+ # 100000 points | druid: 387.175s | influx: 433.34s - 1.0.0-rc3

  def run_druid_benchmark
    attempts = 0
@@ -3,12 +3,5 @@
  require "bundler/setup"
  require "jruby-druid"

- # You can add fixtures and/or initialization code here to make experimenting
- # with your gem easier. You can also use a different console, if you like.
-
- # (If you use this, don't forget to add pry to your Gemfile!)
- # require "pry"
- # Pry.start
-
  require "irb"
  IRB.start
@@ -19,10 +19,9 @@ Gem::Specification.new do |spec|
    spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
    spec.require_paths = ["lib", "vendor"]

-   spec.add_dependency "activesupport", "~> 4.0"
-   spec.add_dependency "zk", "~> 1.9"
+   spec.add_dependency "activesupport"

-   spec.add_development_dependency "bundler", "~> 1.11"
+   spec.add_development_dependency "bundler", "~> 1.7"
    spec.add_development_dependency "faker", "~> 1.6"
    spec.add_development_dependency "rake", "~> 10.0"
    spec.add_development_dependency "rspec", "~> 3.0"
@@ -1,8 +1,9 @@
  module Druid
    class Client
-     include Druid::Query::Core
-     include Druid::Query::Datasource
-     include Druid::Query::Task
+     include Druid::Logging
+     include Druid::Queries::Core
+     include Druid::Queries::Datasource
+     include Druid::Queries::Task

      attr_reader :broker,
                  :config,
@@ -14,8 +15,15 @@ module Druid
        @config = Druid::Configuration.new(options)
        @broker = Druid::Node::Broker.new(config)
        @coordinator = Druid::Node::Coordinator.new(config)
+       setup_logger
        @overlord = Druid::Node::Overlord.new(config)
        @writer = Druid::Writer::Base.new(config)
      end
+
+     private
+
+     def setup_logger
+       logger.set_level(config.log_level)
+     end
    end
  end
@@ -5,22 +5,26 @@ module Druid
      CURATOR_URI = 'localhost:2181'.freeze
      DISCOVERY_PATH = '/druid/discovery'.freeze
      INDEX_SERVICE = 'druid/overlord'.freeze
+     LOG_LEVEL = :warn
      OVERLORD_URI = 'http://localhost:8090/'.freeze
      ROLLUP_GRANULARITY = :minute
-     STRONG_DELETE = false
+     STRONG_DELETE = false # Not recommended to be true for production.
      TUNING_GRANULARITY = :day
      TUNING_WINDOW = 'PT1H'.freeze
+     WAIT_TIME = 20 # Seconds

      attr_reader :broker_uri,
                  :coordinator_uri,
                  :curator_uri,
                  :discovery_path,
                  :index_service,
+                 :log_level,
                  :overlord_uri,
                  :rollup_granularity,
                  :strong_delete,
                  :tuning_granularity,
-                 :tuning_window
+                 :tuning_window,
+                 :wait_time


      def initialize(opts = {})
@@ -29,11 +33,13 @@ module Druid
        @curator_uri = opts[:curator_uri] || CURATOR_URI
        @discovery_path = opts[:discovery_path] || DISCOVERY_PATH
        @index_service = opts[:index_service] || INDEX_SERVICE
+       @log_level = opts[:log_level] || LOG_LEVEL
        @overlord_uri = opts[:overlord_uri] || OVERLORD_URI
        @rollup_granularity = rollup_granularity_string(opts[:rollup_granularity])
        @strong_delete = opts[:strong_delete] || STRONG_DELETE
        @tuning_granularity = tuning_granularity_string(opts[:tuning_granularity])
        @tuning_window = opts[:tuning_window] || TUNING_WINDOW
+       @wait_time = opts[:wait_time] || WAIT_TIME
      end

      private
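Both new options are plain `Druid::Client` options that thread through to the logger and the writer. A minimal usage sketch with illustrative (non-default) values:

```ruby
# Hypothetical configuration: more verbose logging, longer write timeout.
client = Druid::Client.new(
  log_level: :debug, # fanned out by Druid::Logger#set_level
  wait_time: 30      # seconds safe_send waits before raising ConnectionError
)
```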
@@ -3,6 +3,7 @@ module Druid
    class ClientError < Error; end
    class ConnectionError < Error; end
    class QueryError < Error; end
+   class ValidationError < Error; end

    # Adopted from: https://github.com/lostisland/faraday/blob/master/lib/faraday/adapter/net_http.rb
    NET_HTTP_EXCEPTIONS = [
@@ -0,0 +1,82 @@
+ # Supported Levels: https://github.com/twitter/util
+ require 'logger'
+
+ module Druid
+   class Logger
+     include Druid::TopLevelPackages
+
+     attr_accessor :logger
+     delegate :debug, :info, :warn, :error, :fatal, :unknown, to: :logger
+
+     def initialize
+       @logger = ::Logger.new(STDOUT)
+     end
+
+     def set_level(level)
+       set_logger_level(level)
+       set_logback_level(level)
+       set_twitter_level(level)
+     end
+
+     private
+
+     def set_logback_level(level)
+       org.slf4j.LoggerFactory.
+         getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).
+         setLevel(get_logback_level(level))
+     end
+
+     def set_logger_level(level)
+       logger.level = get_logger_level(level)
+     end
+
+     def set_twitter_level(level)
+       com.twitter.logging.Logger.
+         get("com.twitter").
+         setLevel(java.util.logging.Level::WARNING)
+     end
+
+     def get_logback_level(level)
+       ch.qos.logback.classic.Level.const_get(map_logback_level(level))
+     end
+
+     def get_logger_level(level)
+       @logger.class.const_get(map_logger_level(level))
+     end
+
+     def get_twitter_level(level)
+       java.util.logging.Level.const_get(map_twitter_level(level))
+     end
+
+     def map_logback_level(level)
+       case level
+       when :critical
+         'ERROR'.freeze
+       when :fatal
+         'ERROR'.freeze
+       else
+         level.to_s.upcase
+       end
+     end
+
+     def map_logger_level(level)
+       case level
+       when :trace
+         'DEBUG'.freeze
+       when :critical
+         'ERROR'.freeze
+       else
+         level.to_s.upcase
+       end
+     end
+
+     def map_twitter_level(level)
+       case level
+       when :warn
+         'WARNING'.freeze
+       else
+         level.to_s.upcase
+       end
+     end
+   end
+ end
@@ -1 +1,7 @@
- #TODO: Setup Logging and Do Something Useful
+ module Druid
+   module Logging
+     def logger
+       @@logger ||= Druid::Logger.new
+     end
+   end
+ end
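One subtlety worth noting: `@@logger` is a class variable on the `Logging` module itself, so every class that includes the module shares a single `Druid::Logger` instance per process. A small illustrative sketch (class names hypothetical):

```ruby
# Both includers memoize into the same module-level class variable.
class WriterJob; include Druid::Logging; end
class ReaderJob; include Druid::Logging; end

WriterJob.new.logger.equal?(ReaderJob.new.logger) # => true (same object)
```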
@@ -0,0 +1,11 @@
+ module Druid
+   module Queries
+     module Core
+       delegate :write_point, to: :writer
+
+       def query(opts)
+         Druid::Query.create(opts.merge(broker: broker))
+       end
+     end
+   end
+ end
@@ -1,5 +1,5 @@
  module Druid
-   module Query
+   module Queries
      module Datasource
        java_import org.apache.zookeeper.ZKUtil

@@ -1,5 +1,5 @@
  module Druid
-   module Query
+   module Queries
      module Task
        delegate :shutdown_tasks, to: :overlord
      end
@@ -0,0 +1,182 @@
+ module Druid
+   class Query
+     attr_reader :aggregations,
+                 :broker,
+                 :dimensions,
+                 :end_interval,
+                 :fill_value,
+                 :granularity,
+                 :query_opts,
+                 :query_type,
+                 :range,
+                 :result_key,
+                 :start_interval
+
+     def initialize(opts)
+       @aggregations = opts[:aggregations].map{ |agg| agg[:name] }
+       @broker = opts[:broker]
+       @dimensions = opts[:dimensions]
+       @fill_value = opts[:fill_value]
+       @granularity = opts[:granularity]
+       @range = parse_range(opts[:intervals])
+       @query_type = opts[:queryType]
+       @end_interval = calculate_end_interval
+       @start_interval = calculate_start_interval
+       @query_opts = opts_for_query(opts)
+     end
+
+     def execute
+       result = broker.query(query_opts)
+       fill_query_results(result)
+     end
+
+     private
+
+     # TODO: Can this be made smarter? Prefer to avoid case statements.
+     # Cases found here: http://druid.io/docs/latest/querying/granularities.html
+     def advance_interval(time)
+       case granularity
+       when 'second'
+         time.advance(seconds: 1)
+       when 'minute'
+         time.advance(minutes: 1)
+       when 'fifteen_minute'
+         time.advance(minutes: 15)
+       when 'thirty_minute'
+         time.advance(minutes: 30)
+       when 'hour'
+         time.advance(hours: 1)
+       when 'day'
+         time.advance(days: 1)
+       when 'week'
+         time.advance(weeks: 1)
+       when 'month'
+         time.advance(months: 1)
+       when 'quarter'
+         time.advance(months: 3)
+       when 'year'
+         time.advance(years: 1)
+       else
+         raise Druid::QueryError, 'Unsupported granularity'
+       end
+     end
+
+     def calculate_end_interval
+       iso8601_duration_end_interval(range)
+     end
+
+     def calculate_start_interval
+       time = iso8601_duration_start_interval(range)
+       start_of_interval(time)
+     end
+
+     def fill_empty_intervals(points, opts = {})
+       interval = start_interval
+       result = []
+
+       while interval <= end_interval do
+         # TODO:
+         # This will search the points every time; could be more performant if
+         # we track the 'current point' in the points and only compare the
+         # current point's timestamp.
+         point = find_or_create_point(interval, points)
+         aggregations.each do |aggregation|
+           point[result_key][aggregation] = fill_value if point[result_key][aggregation].blank?
+           point[result_key].merge!(opts)
+         end
+         result << point
+         interval = advance_interval(interval)
+       end
+
+       result
+     end
+
+     # NOTE:
+     # This responsibility really lies in Druid, but until the feature works
+     # reliably in Druid, this serves the purpose.
+     # https://github.com/druid-io/druid/issues/2106
+     def fill_query_results(query_result)
+       return query_result unless query_result.present? && fill_value.present?
+       parse_result_key(query_result.first)
+
+       # TODO: handle multi-dimensional group by
+       if group_by?
+         result = []
+         dimension_key = dimensions.first
+         groups = query_result.group_by{ |point| point[result_key][dimension_key] }
+         groups.each do |dimension_value, dimension_points|
+           result += fill_empty_intervals(dimension_points, { dimension_key => dimension_value })
+         end
+         result
+       else
+         fill_empty_intervals(query_result)
+       end
+     end
+
+     def find_or_create_point(interval, points)
+       point = points.find{ |point| point['timestamp'].to_s.to_time == interval.to_time }
+       point.present? ? point : { 'timestamp' => interval.iso8601(3), result_key => {} }
+     end
+
+     def group_by?
+       query_type == 'groupBy'
+     end
+
+     def iso8601_duration_start_interval(duration)
+       duration.split('/').first.to_time.utc
+     end
+
+     def iso8601_duration_end_interval(duration)
+       duration.split('/').last.to_time.utc
+     end
+
+     def opts_for_query(opts)
+       opts.except(:fill_value, :broker)
+     end
+
+     def parse_range(range)
+       range.is_a?(Array) ? range.first : range
+     end
+
+     def parse_result_key(point)
+       @result_key = point['event'].present? ? 'event' : 'result'
+     end
+
+     # TODO: Can this be made smarter? Prefer to avoid case statements.
+     # Cases found here: http://druid.io/docs/latest/querying/granularities.html
+     def start_of_interval(time)
+       case granularity
+       when 'second'
+         time.change(usec: 0)
+       when 'minute'
+         time.beginning_of_minute
+       when 'fifteen_minute'
+         first_fifteen = [45, 30, 15, 0].detect{ |m| m <= time.min }
+         time.change(min: first_fifteen)
+       when 'thirty_minute'
+         first_thirty = [30, 0].detect{ |m| m <= time.min }
+         time.change(min: first_thirty)
+       when 'hour'
+         time.beginning_of_hour
+       when 'day'
+         time.beginning_of_day
+       when 'week'
+         time.beginning_of_week
+       when 'month'
+         time.beginning_of_month
+       when 'quarter'
+         time.beginning_of_quarter
+       when 'year'
+         time.beginning_of_year
+       else
+         time
+       end
+     end
+
+     class << self
+       def create(opts)
+         new(opts).execute
+       end
+     end
+   end
+ end
@@ -0,0 +1,11 @@
+ module Druid
+   module TopLevelPackages
+     def ch
+       Java::Ch
+     end
+
+     def io
+       Java::Io
+     end
+   end
+ end
@@ -1,3 +1,3 @@
  module JrubyDruid
-   VERSION = '1.0.0-rc2'
+   VERSION = '1.0.0-rc3'
  end
@@ -17,7 +17,7 @@ module Druid
        def write_point(datasource, datapoint)
          datapoint = Druid::Writer::Tranquilizer::Datapoint.new(datapoint)
          sender = get_tranquilizer(datasource, datapoint)
-         sender.send(datapoint)
+         sender.safe_send(datapoint)
        end

        private
@@ -1,48 +1,16 @@
  module Druid
    module Writer
      module Tranquilizer
-       class << self
-         def ch
-           Java::Ch
-         end
+       extend Druid::TopLevelPackages

-         def io
-           Java::Io
-         end
-       end
-
        java_import com.google.common.collect.ImmutableList
        java_import com.google.common.collect.ImmutableMap
-       java_import ch.qos.logback.classic.Level
-       java_import ch.qos.logback.classic.encoder.PatternLayoutEncoder
-       java_import ch.qos.logback.core.FileAppender
        java_import com.metamx.tranquility.beam.ClusteredBeamTuning
        java_import io.druid.query.aggregation.LongSumAggregatorFactory
        java_import io.druid.granularity.QueryGranularity
        java_import io.druid.data.input.impl.TimestampSpec
        java_import org.apache.curator.framework.CuratorFrameworkFactory
-       java_import org.slf4j.LoggerFactory
-       java_import org.slf4j.Logger
        java_import Java::ScalaCollection::JavaConverters
-
-       logger = Druid::Writer::Tranquilizer::LoggerFactory.getLogger(Druid::Writer::Tranquilizer::Logger.ROOT_LOGGER_NAME)
-       logger.detachAndStopAllAppenders
-
-       appender = Druid::Writer::Tranquilizer::FileAppender.new
-       context = logger.getLoggerContext
-       encoder = Druid::Writer::Tranquilizer::PatternLayoutEncoder.new
-
-       encoder.setPattern("%date %level [%thread] %logger{10} [%file:%line] %msg%n")
-       encoder.setContext(context)
-       encoder.start
-
-       appender.setFile('jruby-druid.log')
-       appender.setEncoder(encoder)
-       appender.setContext(context)
-       appender.start
-
-       logger.addAppender(appender)
-       logger.setLevel(Druid::Writer::Tranquilizer::Level::TRACE)
      end
    end
  end
@@ -5,7 +5,12 @@ module Druid
    module Writer
      module Tranquilizer
        class Base
-         attr_reader :config, :curator, :datasource, :rollup, :service, :tuning
+         attr_reader :config,
+                     :curator,
+                     :datasource,
+                     :rollup,
+                     :service,
+                     :tuning

          def initialize(params)
            @config = params[:config]
@@ -18,8 +23,11 @@ module Druid
            start
          end

-         def send(datapoint)
-           service.send(argument_map(datapoint)).addEventListener(EventListener.new)
+         def safe_send(datapoint)
+           thread = Thread.new{ send(datapoint) }
+           result = thread.join(@config.wait_time)
+           raise Druid::ConnectionError, 'Error connecting to ZooKeeper' unless result
+           Druid::Writer::Tranquilizer::Future.new(result.value)
          end

          def start
@@ -58,6 +66,10 @@ module Druid
            builder = builder.druidBeamConfig(DruidBeamConfig.build(true)) if config.strong_delete
            builder.buildTranquilizer
          end
+
+         def send(datapoint)
+           service.send(argument_map(datapoint)).addEventListener(EventListener.new)
+         end
        end
      end
    end
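The `safe_send` wrapper leans on a standard Ruby idiom: `Thread#join` with a timeout returns `nil` if the thread has not finished within the given number of seconds, which is how a hung ZooKeeper connection surfaces as a `Druid::ConnectionError` here. A minimal sketch of the idiom in isolation (names illustrative, not from the gem):

```ruby
# Thread#join(n) returns nil on timeout, or the thread itself once finished.
def with_timeout(seconds)
  thread = Thread.new { yield }
  raise 'operation timed out' unless thread.join(seconds)
  thread.value # re-raises any exception raised inside the block
end

# with_timeout(20) { slow_zookeeper_call } # illustrative call
```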
@@ -8,8 +8,8 @@ module Druid
          def initialize(datapoint)
            @timestamp = build_time(datapoint[:timestamp])
-           @dimensions = datapoint[:dimensions].with_indifferent_access
-           @metrics = datapoint[:metrics].with_indifferent_access
+           @dimensions = parse_dimensions(datapoint[:dimensions])
+           @metrics = parse_metrics(datapoint[:metrics])
          end

          private
@@ -18,6 +18,15 @@ module Druid
            time = Time.now.utc unless time
            Hash[TIMESTAMP_LABEL, time.iso8601]
          end
+
+         def parse_dimensions(dimensions)
+           dimensions.present? ? dimensions.with_indifferent_access : {}
+         end
+
+         def parse_metrics(metrics)
+           raise ValidationError, 'Must specify at least one metric' unless metrics.present?
+           metrics.with_indifferent_access
+         end
        end
      end
    end
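A practical consequence of the new validation: a write with no metrics now fails fast, during `Datapoint` construction, instead of reaching Tranquility. An illustrative call (datasource name hypothetical):

```ruby
# Dimensions alone are no longer enough; at least one metric is required.
client.write_point('foo', dimensions: { foo: 'bar' })
# => raises Druid::ValidationError, 'Must specify at least one metric'
```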
@@ -2,14 +2,15 @@ module Druid
    module Writer
      module Tranquilizer
        class EventListener
+         include Logging
          include_package com.twitter.util.FutureEventListener

          def onSuccess(data)
-           # puts "success: #{data.to_s}" #TODO: Log this (trace)
+           # logger.debug data.to_s
          end

          def onFailure(error)
-           puts "failure: #{error.to_s}" #TODO: Log this (debug)
+           logger.warn error.to_s
          end
        end
      end
@@ -0,0 +1,39 @@
+ module Druid
+   module Writer
+     module Tranquilizer
+       class Future
+         AWAIT = com.twitter.util.Awaitable::CanAwait
+         WAIT_TIME = 20
+
+         attr_reader :future
+         delegate :isDefined, to: :future
+
+         def initialize(future)
+           @future = future
+         end
+
+         def failure?(wait_time = WAIT_TIME)
+           begin
+             future.ready(build_duration(wait_time), AWAIT).isThrow
+           rescue Java::ComTwitterUtil::TimeoutException => e
+             raise Druid::ConnectionError, 'Future timed out.'
+           end
+         end
+
+         def success?(wait_time = WAIT_TIME)
+           begin
+             future.ready(build_duration(wait_time), AWAIT).isReturn
+           rescue Java::ComTwitterUtil::TimeoutException => e
+             raise Druid::ConnectionError, 'Future timed out.'
+           end
+         end
+
+         private
+
+         def build_duration(duration)
+           com.twitter.util.Duration.fromSeconds(duration)
+         end
+       end
+     end
+   end
+ end
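Combined with `safe_send` above, a write now hands back this wrapper around the Twitter future, so callers can block until Tranquility acknowledges the point. A hedged usage sketch (datasource and point illustrative):

```ruby
future = client.write_point('foo', metrics: { baz: 1 })
future.success? # blocks up to 20 seconds; raises Druid::ConnectionError on timeout
```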
@@ -4,18 +4,22 @@ Dir["#{File.dirname(__FILE__)}/../vendor/tranquility/*.jar"].each { |file| requi
  require "active_support/all"
  require "json"

+ require "druid/top_level_packages"
  require "druid/configuration"
  require "druid/connection"
  require "druid/errors"
+ require "druid/logger"
+ require "druid/logging"
+ require "druid/query"
  require "druid/version"

  require "druid/node/broker"
  require "druid/node/coordinator"
  require "druid/node/overlord"

- require "druid/query/core"
- require "druid/query/datasource"
- require "druid/query/task"
+ require "druid/queries/core"
+ require "druid/queries/datasource"
+ require "druid/queries/task"

  require "druid/writer/base"

@@ -28,6 +32,7 @@ require "druid/writer/tranquilizer/dimensions"
  require "druid/writer/tranquilizer/druid_beams"
  require "druid/writer/tranquilizer/druid_beam_config"
  require "druid/writer/tranquilizer/event_listener"
+ require "druid/writer/tranquilizer/future"
  require "druid/writer/tranquilizer/rollup"
  require "druid/writer/tranquilizer/timestamper"
  require "druid/writer/tranquilizer/tuning"
metadata CHANGED
@@ -1,49 +1,35 @@
  --- !ruby/object:Gem::Specification
  name: jruby-druid
  version: !ruby/object:Gem::Version
- version: 1.0.0.pre.rc2
+ version: 1.0.0.pre.rc3
  platform: ruby
  authors:
  - Andre LeBlanc
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2016-07-14 00:00:00.000000000 Z
+ date: 2016-07-28 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - ~>
+ - - '>='
  - !ruby/object:Gem::Version
- version: '4.0'
+ version: '0'
  name: activesupport
  prerelease: false
  type: :runtime
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - ~>
- - !ruby/object:Gem::Version
- version: '4.0'
- - !ruby/object:Gem::Dependency
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - ~>
- - !ruby/object:Gem::Version
- version: '1.9'
- name: zk
- prerelease: false
- type: :runtime
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - ~>
+ - - '>='
  - !ruby/object:Gem::Version
- version: '1.9'
+ version: '0'
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ~>
  - !ruby/object:Gem::Version
- version: '1.11'
+ version: '1.7'
  name: bundler
  prerelease: false
  type: :development
@@ -51,7 +37,7 @@ dependencies:
  requirements:
  - - ~>
  - !ruby/object:Gem::Version
- version: '1.11'
+ version: '1.7'
  - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
@@ -146,13 +132,16 @@ files:
  - lib/druid/configuration.rb
  - lib/druid/connection.rb
  - lib/druid/errors.rb
+ - lib/druid/logger.rb
  - lib/druid/logging.rb
  - lib/druid/node/broker.rb
  - lib/druid/node/coordinator.rb
  - lib/druid/node/overlord.rb
- - lib/druid/query/core.rb
- - lib/druid/query/datasource.rb
- - lib/druid/query/task.rb
+ - lib/druid/queries/core.rb
+ - lib/druid/queries/datasource.rb
+ - lib/druid/queries/task.rb
+ - lib/druid/query.rb
+ - lib/druid/top_level_packages.rb
  - lib/druid/version.rb
  - lib/druid/writer/base.rb
  - lib/druid/writer/tranquilizer.rb
@@ -164,6 +153,7 @@ files:
  - lib/druid/writer/tranquilizer/druid_beam_config.rb
  - lib/druid/writer/tranquilizer/druid_beams.rb
  - lib/druid/writer/tranquilizer/event_listener.rb
+ - lib/druid/writer/tranquilizer/future.rb
  - lib/druid/writer/tranquilizer/rollup.rb
  - lib/druid/writer/tranquilizer/timestamper.rb
  - lib/druid/writer/tranquilizer/tuning.rb
@@ -1,8 +0,0 @@
- module Druid
-   module Query
-     module Core
-       delegate :query, to: :broker
-       delegate :write_point, to: :writer
-     end
-   end
- end