influxdb-client 1.1.0.pre.323 → 1.2.0.pre.503

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 5468123420673da53963b6610fc3c42ba6379a919b07d2e3623c949303962fd2
- data.tar.gz: 1eaab67ebca09a668c1669780807808d3fe07fb973bd5f387e846be3fd7449db
+ metadata.gz: 369c8281b3985a0757703f1c68a95d25793508a8c1c3c6dd458646ae01c768d8
+ data.tar.gz: 82c4dae23afd29ccb574a6314443ef53fc4fe3ff17f9f952d435c7522441d7e3
  SHA512:
- metadata.gz: 1cd55d49226c9d1a8060fff080674c8e18118361296c7975dfd456e1465f969c376ddfe4b30f15341d333353abc0fa40c74992d50fc61e63252a97bc0c6463ea
- data.tar.gz: c7044374c0f284f7b329a46dc0c9d6beb896554c49b166315997bf9e6df3136c9f295107030e08618401ab0a9f97be2997d0c3d51b499bdba262ddeb9f440841
+ metadata.gz: 17c810f5d87978b3c7359031b52fa86d39c06d70bb118a15e4f686f5f0b515de2001bc77cae24bce50a1dec6d4f3b366b104997e657d61219586e266c67b3d4b
+ data.tar.gz: 5255b4b24ffd03b9895c624a501e765881a944368cd31d41eb02c421b29344955ad941532df9d02b416acb0ab9a7ff389142c432fca974ecf301ef88ba3f7c88
@@ -1,7 +1,14 @@
- ## 1.1.0 [unreleased]
+ ## 1.2.0 [unreleased]
+
+ ### Bugs
+ 1. [#22](https://github.com/influxdata/influxdb-client-ruby/pull/22): Fixed batch write
+
+ ## 1.1.0 [2020-02-14]
 
  ### Features
  1. [#14](https://github.com/influxdata/influxdb-client-ruby/issues/14): Added QueryApi
+ 2. [#17](https://github.com/influxdata/influxdb-client-ruby/issues/17): Added possibility to stream query result
+ 3. [#19](https://github.com/influxdata/influxdb-client-ruby/issues/19): Added WriteOptions and possibility to batch write
 
  ## 1.0.0.beta [2020-01-17]
 
data/README.md CHANGED
@@ -11,7 +11,6 @@
  This repository contains the reference Ruby client for the InfluxDB 2.0.
 
  #### Note: This library is for use with InfluxDB 2.x. For connecting to InfluxDB 1.x instances, please use the [influxdb-ruby](https://github.com/influxdata/influxdb-ruby) client.
- #### Disclaimer: This library is a work in progress and should not be considered production ready yet.
 
  ## Installation
 
@@ -24,7 +23,7 @@ The client can be installed manually or with bundler.
  To install the client gem manually:
 
  ```
- gem install influxdb-client -v 1.0.0.beta
+ gem install influxdb-client -v 1.1.0
  ```
 
  ## Usage
@@ -63,6 +62,7 @@ The result retrieved by [QueryApi](https://github.com/influxdata/influxdb-client
 
  1. Raw query response
  2. Flux data structure: [FluxTable, FluxColumn and FluxRecord](https://github.com/influxdata/influxdb-client-ruby/blob/master/lib/influxdb2/client/flux_table.rb)
+ 3. Stream of [FluxRecord](https://github.com/influxdata/influxdb-client-ruby/blob/master/lib/influxdb2/client/flux_table.rb)
 
  #### Query raw
 
@@ -86,7 +86,25 @@ query_api = client.create_query_api
  result = query_api.query(query: 'from(bucket:"' + bucket + '") |> range(start: 1970-01-01T00:00:00.000000001Z) |> last()')
  ```
 
+ #### Query stream
+ Synchronously executes the Flux query and returns a stream of [FluxRecord](https://github.com/influxdata/influxdb-client-ruby/blob/master/lib/influxdb2/client/flux_table.rb)s
+ ```ruby
+ client = InfluxDB2::Client.new('https://localhost:9999', 'my-token',
+   bucket: 'my-bucket',
+   org: 'my-org')
+
+ query_api = client.create_query_api
+
+ query = 'from(bucket: "my-bucket") |> range(start: -10m, stop: now()) ' \
+   "|> filter(fn: (r) => r._measurement == \"#{measurement}\")"
+
+ query_api.query_stream(query: query).each do |record|
+   puts record.to_s
+ end
+ ```
+
  ### Writing data
+ The [WriteApi](https://github.com/influxdata/influxdb-client-ruby/blob/master/lib/influxdb2/client/write_api.rb) supports synchronous and batching writes into InfluxDB 2.0. By default, writes are synchronous; to enable batching, pass a `WriteOptions` instance.
 
  ```ruby
  client = InfluxDB2::Client.new('https://localhost:9999', 'my-token',
@@ -98,6 +116,28 @@ write_api = client.create_write_api
  write_api.write(data: 'h2o,location=west value=33i 15')
  ```
 
+ #### Batching
+ The writes are processed in batches which are configurable by `WriteOptions`:
+
+ | Property | Description | Default Value |
+ | --- | --- | --- |
+ | **batchSize** | the number of data points to collect in a batch | 1000 |
+ | **flushInterval** | the number of milliseconds before the batch is written | 1000 |
+
+ ```ruby
+ write_options = InfluxDB2::WriteOptions.new(write_type: InfluxDB2::WriteType::BATCHING,
+   batch_size: 10, flush_interval: 5_000)
+ client = InfluxDB2::Client.new('http://localhost:9999',
+   'my-token',
+   bucket: 'my-bucket',
+   org: 'my-org',
+   precision: InfluxDB2::WritePrecision::NANOSECOND,
+   use_ssl: false)
+
+ write_api = client.create_write_api(write_options: write_options)
+ write_api.write(data: 'h2o,location=west value=33i 15')
+ ```
+
  #### Time precision
 
  Configure default time precision:
@@ -117,7 +157,6 @@ client = InfluxDB2::Client.new('https://localhost:9999', 'my-token',
  write_api = client.create_write_api
  write_api.write(data: 'h2o,location=west value=33i 15', precision: InfluxDB2::WritePrecision::SECOND)
  ```
-
  Allowed values for precision are:
  - `InfluxDB::WritePrecision::NANOSECOND` for nanosecond
  - `InfluxDB::WritePrecision::MICROSECOND` for microsecond
@@ -45,6 +45,7 @@ module InfluxDB2
  # @option options [bool] :use_ssl Turn on/off SSL for HTTP communication
  # the body line-protocol
  def initialize(url, token, options = nil)
+   @auto_closeable = []
    @options = options ? options.dup : {}
    @options[:url] = url if url.is_a? String
    @options[:token] = token if token.is_a? String
@@ -56,8 +57,10 @@ module InfluxDB2
  # Write time series data into InfluxDB through WriteApi.
  #
  # @return [WriteApi] New instance of WriteApi.
- def create_write_api
-   WriteApi.new(options: @options)
+ def create_write_api(write_options: InfluxDB2::SYNCHRONOUS)
+   write_api = WriteApi.new(options: @options, write_options: write_options)
+   @auto_closeable.push(write_api)
+   write_api
  end
 
  # Get the Query client.
@@ -72,6 +75,7 @@ module InfluxDB2
  # @return [ true ] Always true.
  def close!
    @closed = true
+   @auto_closeable.each(&:close!)
    true
  end
  end
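The auto-close bookkeeping in the diff above can be sketched in isolation (hypothetical `TinyClient`/`TinyWriteApi` names, not part of the gem): the client remembers every write api it hands out, and `close!` fans out to all of them.

```ruby
# Each write api tracks whether it has been closed.
TinyWriteApi = Struct.new(:closed) do
  def close!
    self.closed = true
    true
  end
end

class TinyClient
  def initialize
    # Every api created through this client is remembered here.
    @auto_closeable = []
  end

  def create_write_api
    api = TinyWriteApi.new(false)
    @auto_closeable.push(api)
    api
  end

  # Closing the client closes every api it created.
  def close!
    @auto_closeable.each(&:close!)
    true
  end
end

client = TinyClient.new
api = client.create_write_api
client.close! # also closes api
```

This keeps caller code simple: one `client.close!` flushes and shuts down every batching write api, so no buffered points are silently dropped.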
@@ -19,9 +19,11 @@
  # THE SOFTWARE.
  require 'csv'
  require 'base64'
+ require 'time'
 
  module InfluxDB2
    # This class represents Flux query error
+   #
    class FluxQueryError < StandardError
      def initialize(message, reference)
        super(message)
@@ -32,6 +34,7 @@ module InfluxDB2
    end
 
    # This class represents Flux query error
+   #
    class FluxCsvParserError < StandardError
      def initialize(message)
        super(message)
@@ -39,20 +42,28 @@ module InfluxDB2
    end
 
    # This class is used to construct FluxResult from CSV.
+   #
    class FluxCsvParser
-     def initialize
+     include Enumerable
+     def initialize(response, stream: false)
+       @response = response
+       @stream = stream
        @tables = {}
 
        @table_index = 0
        @start_new_table = false
        @table = nil
        @parsing_state_error = false
+
+       @closed = false
      end
 
-     attr_reader :tables
+     attr_reader :tables, :closed
+
+     def parse
+       @csv_file = CSV.new(@response.instance_of?(Net::HTTPOK) ? @response.body : @response)
 
-     def parse(response)
-       CSV.parse(response) do |csv|
+       while (csv = @csv_file.shift)
        # Response has HTTP status ok, but response is error.
        next if csv.empty?
 
@@ -68,10 +79,24 @@ module InfluxDB2
          raise FluxQueryError.new(error, reference_value.nil? || reference_value.empty? ? 0 : reference_value.to_i)
        end
 
-       _parse_line(csv)
+       result = _parse_line(csv)
+
+       yield result if @stream && result.instance_of?(InfluxDB2::FluxRecord)
+     end
+
+     self
+   end
+
+   def each
+     return enum_for(:each) unless block_given?
+
+     parse do |record|
+       yield record
      end
 
-     @tables
+     self
+   ensure
+     _close_connection
    end
 
    private
@@ -84,7 +109,9 @@ module InfluxDB2
    # Return already parsed DataFrame
    @start_new_table = true
    @table = InfluxDB2::FluxTable.new
-   @tables[@table_index] = @table
+
+   @tables[@table_index] = @table unless @stream
+
    @table_index += 1
  elsif @table.nil?
    raise FluxCsvParserError, 'Unable to parse CSV response. FluxTable definition was not found.'
@@ -157,13 +184,17 @@ module InfluxDB2
      @table.columns.push(column)
    end
 
-   @tables[@table_index] = @table
+   @tables[@table_index] = @table unless @stream
    @table_index += 1
  end
 
  flux_record = _parse_record(@table_index - 1, @table, csv)
 
- @tables[@table_index - 1].records.push(flux_record)
+ if @stream
+   flux_record
+ else
+   @tables[@table_index - 1].records.push(flux_record)
+ end
  end
 
  def _parse_record(table_index, table, csv)
@@ -206,5 +237,11 @@ module InfluxDB2
      str_val
    end
  end
+
+ def _close_connection
+   # Close CSV Parser
+   @csv_file.close
+   @closed = true
+ end
  end
  end
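The parser change above (pull rows one at a time via `CSV#shift`, expose them through `Enumerable`, close in an `ensure` block) can be demonstrated with a minimal stand-in that uses only Ruby's stdlib `CSV` and a hypothetical `StreamingCsv` class, no InfluxDB types:

```ruby
require 'csv'

# Minimal stand-in for the streaming pattern: rows are pulled lazily with
# CSV#shift, Enumerable supplies each/take/first, and the ensure block
# closes the parser even when the caller breaks out early.
class StreamingCsv
  include Enumerable

  def initialize(data)
    @csv = CSV.new(data)
    @closed = false
  end

  attr_reader :closed

  def each
    return enum_for(:each) unless block_given?

    while (row = @csv.shift)
      yield row
    end
    self
  ensure
    # Runs on normal completion AND on early break from the caller's block.
    @csv.close
    @closed = true
  end
end

parser = StreamingCsv.new("a,1\nb,2\nc,3\n")
rows = parser.take(2) # stops consuming after two rows; ensure still closes
```

This is why `test_query_stream_break` further down can assert `parser.closed` is true after breaking out of the loop: `Enumerable#take` aborts iteration early, but the `ensure` clause in `each` still runs.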
@@ -45,6 +45,7 @@ module InfluxDB2
    @time = time
    @precision = precision
  end
+ attr_reader :precision
 
  # Create DataPoint instance from specified data.
  #
@@ -38,16 +38,7 @@ module InfluxDB2
  # @param [String] org specifies the source organization
  # @return [String] result of query
  def query_raw(query: nil, org: nil, dialect: DEFAULT_DIALECT)
-   org_param = org || @options[:org]
-   _check('org', org_param)
-
-   payload = _generate_payload(query, dialect)
-   return nil if payload.nil?
-
-   uri = URI.parse(File.join(@options[:url], '/api/v2/query'))
-   uri.query = URI.encode_www_form(org: org_param)
-
-   _post(payload.to_body.to_json, uri).read_body
+   _post_query(query: query, org: org, dialect: dialect).read_body
  end
 
  # @param [Object] query the flux query to execute. The data could be represented by [String], [Query]
@@ -55,13 +46,36 @@ module InfluxDB2
  # @return [Array] list of FluxTables which are matched the query
  def query(query: nil, org: nil, dialect: DEFAULT_DIALECT)
    response = query_raw(query: query, org: org, dialect: dialect)
-   parser = InfluxDB2::FluxCsvParser.new
+   parser = InfluxDB2::FluxCsvParser.new(response)
 
-   parser.parse(response)
+   parser.parse
+   parser.tables
+ end
+
+ # @param [Object] query the flux query to execute. The data could be represented by [String], [Query]
+ # @param [String] org specifies the source organization
+ # @return stream of Flux Records
+ def query_stream(query: nil, org: nil, dialect: DEFAULT_DIALECT)
+   response = _post_query(query: query, org: org, dialect: dialect)
+
+   InfluxDB2::FluxCsvParser.new(response, stream: true)
  end
 
  private
 
+ def _post_query(query: nil, org: nil, dialect: DEFAULT_DIALECT)
+   org_param = org || @options[:org]
+   _check('org', org_param)
+
+   payload = _generate_payload(query, dialect)
+   return nil if payload.nil?
+
+   uri = URI.parse(File.join(@options[:url], '/api/v2/query'))
+   uri.query = URI.encode_www_form(org: org_param)
+
+   _post(payload.to_body.to_json, uri)
+ end
+
  def _generate_payload(query, dialect)
    if query.nil?
      nil
@@ -19,5 +19,5 @@
  # THE SOFTWARE.
 
  module InfluxDB2
-   VERSION = '1.1.0'.freeze
+   VERSION = '1.2.0'.freeze
  end
@@ -0,0 +1,93 @@
+ # The MIT License
+ #
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
+ # of this software and associated documentation files (the "Software"), to deal
+ # in the Software without restriction, including without limitation the rights
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ # copies of the Software, and to permit persons to whom the Software is
+ # furnished to do so, subject to the following conditions:
+ #
+ # The above copyright notice and this permission notice shall be included in
+ # all copies or substantial portions of the Software.
+ #
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ # THE SOFTWARE.
+
+ module InfluxDB2
+   # Worker for handling write batching queue
+   #
+   class Worker
+     def initialize(api_client, write_options)
+       @api_client = api_client
+       @write_options = write_options
+
+       @queue = Queue.new
+       @queue_event = Queue.new
+
+       @queue_event.push(true)
+
+       @thread_flush = Thread.new do
+         until api_client.closed
+           sleep @write_options.flush_interval / 1_000
+           check_background_queue
+         end
+       end
+
+       @thread_size = Thread.new do
+         until api_client.closed
+           check_background_queue(size: true) if @queue.length >= @write_options.batch_size
+           sleep 0.01
+         end
+       end
+     end
+
+     def push(payload)
+       @queue.push(payload)
+     end
+
+     def check_background_queue(size: false)
+       @queue_event.pop
+       data = {}
+       points = 0
+
+       if size && @queue.length < @write_options.batch_size
+         @queue_event.push(true)
+         return
+       end
+
+       while (points < @write_options.batch_size) && !@queue.empty?
+         begin
+           item = @queue.pop(true)
+           key = item.key
+           data[key] = [] unless data.key?(key)
+           data[key] << item.data
+           points += 1
+         rescue ThreadError
+           @queue_event.push(true)
+           return
+         end
+       end
+
+       begin
+         write(data) unless data.values.flatten.empty?
+       ensure
+         @queue_event.push(true)
+       end
+     end
+
+     def flush_all
+       check_background_queue until @queue.empty?
+     end
+
+     def write(data)
+       data.each do |key, points|
+         @api_client.write_raw(points.join("\n"), precision: key.precision, bucket: key.bucket, org: key.org)
+       end
+     end
+   end
+ end
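The draining loop in `check_background_queue` groups queued payloads by key, so each distinct bucket/org/precision combination becomes one HTTP write. A standalone sketch of just that grouping step (hypothetical `Item` struct and `drain_into_batches` helper, with plain arrays standing in for `BatchItemKey`, and none of the thread/event plumbing):

```ruby
# Each queued item carries a grouping key and one line of data.
Item = Struct.new(:key, :data)

# Drain up to batch_size items from the queue, bucketing their data by key,
# mirroring the inner while-loop of Worker#check_background_queue above.
def drain_into_batches(queue, batch_size)
  data = Hash.new { |h, k| h[k] = [] }
  points = 0
  while points < batch_size && !queue.empty?
    item = queue.pop(true) # non-blocking pop, as in the Worker
    data[item.key] << item.data
    points += 1
  end
  data
end

queue = Queue.new
queue << Item.new(%w[my-bucket my-org ns], 'h2o value=1 1')
queue << Item.new(%w[my-bucket my-org ns], 'h2o value=2 2')

batches = drain_into_batches(queue, 10)
payload = batches.values.first.join("\n") # one write per distinct key
```

Because both items share the same key, they collapse into a single newline-joined payload, which is exactly what the batching test at the bottom of this diff asserts against the stubbed `/api/v2/write` endpoint.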
@@ -17,8 +17,31 @@
  # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
  # THE SOFTWARE.
+ require_relative 'worker'
 
  module InfluxDB2
+   module WriteType
+     SYNCHRONOUS = 1
+     BATCHING = 2
+   end
+
+   # Creates write api configuration.
+   #
+   # @param write_type: method of write (batching, asynchronous, synchronous)
+   # @param batch_size: the number of data points to collect in a batch
+   # @param flush_interval: flush data at least in this interval
+   class WriteOptions
+     def initialize(write_type: WriteType::SYNCHRONOUS, batch_size: 1_000, flush_interval: 1_000)
+       @write_type = write_type
+       @batch_size = batch_size
+       @flush_interval = flush_interval
+     end
+
+     attr_reader :write_type, :batch_size, :flush_interval
+   end
+
+   SYNCHRONOUS = InfluxDB2::WriteOptions.new(write_type: WriteType::SYNCHRONOUS)
+
  # Precision constants.
  #
  class WritePrecision
@@ -39,9 +62,13 @@ module InfluxDB2
  #
  class WriteApi < DefaultApi
    # @param [Hash] options The options to be used by the client.
-   def initialize(options:)
+   # @param [WriteOptions] write_options Write api configuration.
+   def initialize(options:, write_options: SYNCHRONOUS)
      super(options: options)
+     @write_options = write_options
+     @closed = false
    end
+   attr_reader :closed
 
    # Write data into specified Bucket.
    #
@@ -83,33 +110,108 @@ module InfluxDB2
    _check('bucket', bucket_param)
    _check('org', org_param)
 
-   payload = _generate_payload(data)
+   payload = _generate_payload(data, bucket: bucket_param, org: org_param, precision: precision_param)
    return nil if payload.nil?
 
+   if WriteType::BATCHING == @write_options.write_type
+     _worker.push(payload)
+   else
+     write_raw(payload, precision: precision_param, bucket: bucket_param, org: org_param)
+   end
+ end
+
+ # @return [ true ] Always true.
+ def close!
+   _worker.flush_all unless _worker.nil?
+   @closed = true
+   true
+ end
+
+ # @param [String] payload data as String
+ # @param [WritePrecision] precision The precision for the unix timestamps within the body line-protocol
+ # @param [String] bucket specifies the destination bucket for writes
+ # @param [String] org specifies the destination organization for writes
+ def write_raw(payload, precision: nil, bucket: nil, org: nil)
+   precision_param = precision || @options[:precision]
+   bucket_param = bucket || @options[:bucket]
+   org_param = org || @options[:org]
+   _check('precision', precision_param)
+   _check('bucket', bucket_param)
+   _check('org', org_param)
+
+   return nil unless payload.instance_of?(String) || payload.empty?
+
    uri = URI.parse(File.join(@options[:url], '/api/v2/write'))
    uri.query = URI.encode_www_form(bucket: bucket_param, org: org_param, precision: precision_param.to_s)
 
    _post(payload, uri)
  end
 
+ # Item for batching queue
+ class BatchItem
+   def initialize(key, data)
+     @key = key
+     @data = data
+   end
+   attr_reader :key, :data
+ end
+
+ # Key for batch item
+ class BatchItemKey
+   def initialize(bucket, org, precision = DEFAULT_WRITE_PRECISION)
+     @bucket = bucket
+     @org = org
+     @precision = precision
+   end
+   attr_reader :bucket, :org, :precision
+
+   def ==(other)
+     @bucket == other.bucket && @org == other.org && @precision == other.precision
+   end
+
+   alias eql? ==
+
+   def hash
+     @bucket.hash ^ @org.hash ^ @precision.hash # XOR
+   end
+ end
+
  private
 
- def _generate_payload(data)
+ WORKER_MUTEX = Mutex.new
+ def _worker
+   return nil unless @write_options.write_type == WriteType::BATCHING
+
+   return @worker if @worker
+
+   WORKER_MUTEX.synchronize do
+     # this return is necessary because the previous mutex holder
+     # might have already assigned the @worker
+     return @worker if @worker
+
+     @worker = Worker.new(self, @write_options)
+   end
+ end
+
+ def _generate_payload(data, precision: nil, bucket: nil, org: nil)
    if data.nil?
      nil
    elsif data.is_a?(Point)
-     data.to_line_protocol
+     _generate_payload(data.to_line_protocol, bucket: bucket, org: org, precision: data.precision ||
+       DEFAULT_WRITE_PRECISION)
    elsif data.is_a?(String)
      if data.empty?
        nil
+     elsif @write_options.write_type == WriteType::BATCHING
+       BatchItem.new(BatchItemKey.new(bucket, org, precision), data)
      else
        data
      end
    elsif data.is_a?(Hash)
-     _generate_payload(Point.from_hash(data))
+     _generate_payload(Point.from_hash(data), bucket: bucket, org: org, precision: precision)
    elsif data.respond_to? :map
      data.map do |item|
-       _generate_payload(item)
+       _generate_payload(item, bucket: bucket, org: org, precision: precision)
      end.reject(&:nil?).join("\n".freeze)
    end
  end
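`_generate_payload` above dispatches recursively on the data's type: points become line protocol, strings pass through (or become batch items), hashes become points, and enumerables recurse and are joined with newlines. A simplified, hypothetical version of that dispatch (made-up `to_payload` helper and hash rendering, no `Point` or batching classes):

```ruby
# Simplified type dispatch mirroring _generate_payload above:
# nil and empty strings drop out, strings pass through, hashes are
# rendered to a line, and enumerables recurse and join with newlines.
def to_payload(data)
  if data.nil?
    nil
  elsif data.is_a?(String)
    data.empty? ? nil : data
  elsif data.is_a?(Hash)
    # Stand-in for Point.from_hash(...).to_line_protocol (made-up format).
    to_payload("#{data[:name]} value=#{data[:value]}")
  elsif data.respond_to?(:map)
    data.map { |item| to_payload(item) }.reject(&:nil?).join("\n")
  end
end

payload = to_payload(['h2o value=1 1', nil, '', { name: 'h2o', value: 2 }])
```

The recursion means mixed arrays of strings, hashes, and nils all normalize to a single newline-joined body, which is what lets `write` accept such heterogeneous input with one code path.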
@@ -21,10 +21,6 @@
  require 'test_helper'
 
  class FluxCsvParserTest < MiniTest::Test
- def setup
-   @parser = InfluxDB2::FluxCsvParser.new
- end
-
  def test_multiple_values
    data = "#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,string,string,string,string,long,long,string\n" \
      "#group,false,false,true,true,true,true,true,true,false,false,false\n" \
@@ -35,7 +31,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",,2,1677-09-21T00:12:43.145224192Z,2018-07-16T11:21:02.547596934Z,usage_system,cpu,A,west,1444,38,test\n" \
    ',,3,1677-09-21T00:12:43.145224192Z,2018-07-16T11:21:02.547596934Z,user_usage,cpu,A,west,2401,49,test'
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
 
    column_headers = tables[0].columns
    assert_equal 11, column_headers.size
@@ -55,7 +51,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",result,table,_start,_stop,_time,_value,_field,_measurement,host,value\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,true\n"
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
 
    assert_equal 1, tables.size
    assert_equal 1, tables[0].records.size
@@ -81,7 +77,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,x\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n"
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
    records = tables[0].records
 
    assert_equal true, records[0].values['value']
@@ -101,7 +97,7 @@ class FluxCsvParserTest < MiniTest::Test
 
    expected = 17_916_881_237_904_312_345
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
    records = tables[0].records
 
    assert_equal expected, records[0].values['value']
@@ -117,7 +113,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,12.25\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n" \
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
    records = tables[0].records
 
    assert_equal 12.25, records[0].values['value']
@@ -136,7 +132,7 @@ class FluxCsvParserTest < MiniTest::Test
    ',,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,' + encoded_data + "\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n"
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
    records = tables[0].records
 
    value = records[0].values['value']
@@ -156,7 +152,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,1970-01-01T00:00:10Z\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n"
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
    records = tables[0].records
 
    assert_equal Time.parse('1970-01-01T00:00:10Z').to_datetime.rfc3339, records[0].values['value']
@@ -172,7 +168,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,125\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n"
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
    records = tables[0].records
 
    assert_equal 125, records[0].values['value']
@@ -188,7 +184,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,125\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n" \
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
 
    assert_equal 10, tables[0].columns.size
    assert_equal 2, tables[0].group_key.size
@@ -203,7 +199,7 @@ class FluxCsvParserTest < MiniTest::Test
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,12.25\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n"
 
-   tables = @parser.parse(data)
+   tables = InfluxDB2::FluxCsvParser.new(data).parse.tables
    records = tables[0].records
 
    assert_equal '12.25', records[0].values['value']
@@ -278,10 +274,6 @@ class FluxCsvParserTest < MiniTest::Test
  end
 
  class FluxCsvParserErrorTest < MiniTest::Test
- def setup
-   @parser = InfluxDB2::FluxCsvParser.new
- end
-
  def test_error
    data = "#datatype,string,string\n" \
      "#group,true,true\n" \
@@ -289,8 +281,10 @@ class FluxCsvParserErrorTest < MiniTest::Test
    ",error,reference\n" \
    ',failed to create physical plan: invalid time bounds from procedure from: bounds contain zero time,897'
 
+   parser = InfluxDB2::FluxCsvParser.new(data)
+
    error = assert_raises InfluxDB2::FluxQueryError do
-     @parser.parse(data)
+     parser.parse
    end
 
    assert_equal 'failed to create physical plan: invalid time bounds from procedure from: bounds contain zero time',
@@ -305,8 +299,10 @@ class FluxCsvParserErrorTest < MiniTest::Test
    ",error,reference\n" \
    ',failed to create physical plan: invalid time bounds from procedure from: bounds contain zero time,'
 
+   parser = InfluxDB2::FluxCsvParser.new(data)
+
    error = assert_raises InfluxDB2::FluxQueryError do
-     @parser.parse(data)
+     parser.parse
    end
 
    assert_equal 'failed to create physical plan: invalid time bounds from procedure from: bounds contain zero time',
@@ -319,8 +315,10 @@ class FluxCsvParserErrorTest < MiniTest::Test
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,12.25\n" \
    ",,0,1970-01-01T00:00:10Z,1970-01-01T00:00:20Z,1970-01-01T00:00:10Z,10,free,mem,A,\n"
 
+   parser = InfluxDB2::FluxCsvParser.new(data)
+
    error = assert_raises InfluxDB2::FluxCsvParserError do
-     @parser.parse(data)
+     parser.parse
    end
 
    assert_equal 'Unable to parse CSV response. FluxTable definition was not found.', error.message
@@ -0,0 +1,98 @@
+ # The MIT License
+ #
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
+ # of this software and associated documentation files (the "Software"), to deal
+ # in the Software without restriction, including without limitation the rights
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ # copies of the Software, and to permit persons to whom the Software is
+ # furnished to do so, subject to the following conditions:
+ #
+ # The above copyright notice and this permission notice shall be included in
+ # all copies or substantial portions of the Software.
+ #
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ # THE SOFTWARE.
+
+ require 'test_helper'
+
+ class QueryApiStreamTest < MiniTest::Test
+   def setup
+     WebMock.allow_net_connect!
+
+     @client = InfluxDB2::Client.new('http://localhost:9999', 'my-token',
+       bucket: 'my-bucket',
+       org: 'my-org',
+       precision: InfluxDB2::WritePrecision::NANOSECOND,
+       use_ssl: false)
+     @now = Time.now.utc
+   end
+
+   def test_query_stream
+     measurement = 'h2o_query_stream' + @now.to_i.to_s + @now.nsec.to_s
+     _write(10, measurement: measurement)
+
+     query = 'from(bucket: "my-bucket") |> range(start: -1m, stop: now()) ' \
+       "|> filter(fn: (r) => r._measurement == \"#{measurement}\")"
+
+     count = 0
+     @client.create_query_api.query_stream(query: query).each do |record|
+       count += 1
+       assert_equal measurement, record.measurement
+       assert_equal 'europe', record.values['location']
+       assert_equal count, record.value
+       assert_equal 'level', record.field
+     end
+
+     assert_equal 10, count
+   end
+
+   def test_query_stream_break
+     measurement = 'h2o_query_stream_break' + @now.to_i.to_s + @now.nsec.to_s
+     _write(20, measurement: measurement)
+
+     query = 'from(bucket: "my-bucket") |> range(start: -1m, stop: now()) ' \
+       "|> filter(fn: (r) => r._measurement == \"#{measurement}\")"
+
+     records = []
+
+     parser = @client.create_query_api.query_stream(query: query)
+
+     assert_equal false, parser.closed
+
+     count = 0
+     parser.each do |record|
+       records.push(record)
+       count += 1
+
+       break if count >= 5
+     end
+
+     assert_equal 5, records.size
+     assert_equal true, parser.closed
+
+     # record 1
+     record = records[0]
+     assert_equal measurement, record.measurement
+     assert_equal 'europe', record.values['location']
+     assert_equal 1, record.value
+     assert_equal 'level', record.field
+   end
+
+   private
+
+   def _write(values, measurement:)
+     write_api = @client.create_write_api
+
+     (1..values).each do |value|
+       write_api.write(data: InfluxDB2::Point.new(name: measurement)
+         .add_tag('location', 'europe')
+         .add_field('level', value)
+         .time(@now - values + value, InfluxDB2::WritePrecision::NANOSECOND))
+     end
+   end
+ end
@@ -0,0 +1,166 @@
+ # The MIT License
+ #
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
+ # of this software and associated documentation files (the "Software"), to deal
+ # in the Software without restriction, including without limitation the rights
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ # copies of the Software, and to permit persons to whom the Software is
+ # furnished to do so, subject to the following conditions:
+ #
+ # The above copyright notice and this permission notice shall be included in
+ # all copies or substantial portions of the Software.
+ #
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ # THE SOFTWARE.
+
+ require 'test_helper'
+
+ class WriteApiBatchingTest < MiniTest::Test
+ def setup
+ WebMock.disable_net_connect!
+
+ @write_options = InfluxDB2::WriteOptions.new(write_type: InfluxDB2::WriteType::BATCHING,
+ batch_size: 2, flush_interval: 5_000)
+ @client = InfluxDB2::Client.new('http://localhost:9999',
+ 'my-token',
+ bucket: 'my-bucket',
+ org: 'my-org',
+ precision: InfluxDB2::WritePrecision::NANOSECOND,
+ use_ssl: false)
+
+ @write_client = @client.create_write_api(write_options: @write_options)
+ end
+
+ def teardown
+ @client.close!
+
+ assert_equal true, @write_client.closed
+
+ WebMock.reset!
+ end
+
+ def test_batch_size
+ stub_request(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns')
+ .to_return(status: 204)
+
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=1.0 1')
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=2.0 2')
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=3.0 3')
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=4.0 4')
+
+ sleep(1)
+
+ request1 = "h2o_feet,location=coyote_creek level\\ water_level=1.0 1\n" \
+ 'h2o_feet,location=coyote_creek level\\ water_level=2.0 2'
+ request2 = "h2o_feet,location=coyote_creek level\\ water_level=3.0 3\n" \
+ 'h2o_feet,location=coyote_creek level\\ water_level=4.0 4'
+
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 1, body: request1)
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 1, body: request2)
+ end
+
+ def test_batch_size_group_by
+ stub_request(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns')
+ .to_return(status: 204)
+ stub_request(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=s')
+ .to_return(status: 204)
+ stub_request(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org-a&precision=ns')
+ .to_return(status: 204)
+ stub_request(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket2&org=my-org-a&precision=ns')
+ .to_return(status: 204)
+
+ bucket = 'my-bucket'
+ bucket2 = 'my-bucket2'
+ org_a = 'my-org-a'
+
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=1.0 1', bucket: bucket, org: 'my-org')
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=2.0 2', bucket: bucket, org: 'my-org',
+ precision: InfluxDB2::WritePrecision::SECOND)
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=3.0 3', bucket: bucket, org: org_a)
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=4.0 4', bucket: bucket, org: org_a)
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=5.0 5', bucket: bucket2, org: org_a)
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=6.0 6', bucket: bucket, org: org_a)
+
+ sleep(1)
+
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 1, body: 'h2o_feet,location=coyote_creek level\\ water_level=1.0 1')
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=s',
+ times: 1, body: 'h2o_feet,location=coyote_creek level\\ water_level=2.0 2')
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org-a&precision=ns',
+ times: 1, body: "h2o_feet,location=coyote_creek level\\ water_level=3.0 3\n" \
+ 'h2o_feet,location=coyote_creek level\\ water_level=4.0 4')
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket2&org=my-org-a&precision=ns',
+ times: 1, body: 'h2o_feet,location=coyote_creek level\\ water_level=5.0 5')
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org-a&precision=ns',
+ times: 1, body: 'h2o_feet,location=coyote_creek level\\ water_level=6.0 6')
+ end
+
+ def test_flush_interval
+ stub_request(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns')
+ .to_return(status: 204)
+
+ request1 = "h2o_feet,location=coyote_creek level\\ water_level=1.0 1\n" \
+ 'h2o_feet,location=coyote_creek level\\ water_level=2.0 2'
+ request2 = 'h2o_feet,location=coyote_creek level\\ water_level=3.0 3'
+
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=1.0 1')
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=2.0 2')
+
+ sleep(1)
+
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 1, body: request1)
+
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=3.0 3')
+
+ sleep(2)
+
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 0, body: request2)
+
+ sleep(3)
+
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 1, body: request2)
+ end
+
+ def test_flush_all_by_close_client
+ stub_request(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns')
+ .to_return(status: 204)
+
+ @client.close!
+
+ @write_options = InfluxDB2::WriteOptions.new(write_type: InfluxDB2::WriteType::BATCHING,
+ batch_size: 10, flush_interval: 5_000)
+ @client = InfluxDB2::Client.new('http://localhost:9999',
+ 'my-token',
+ bucket: 'my-bucket',
+ org: 'my-org',
+ precision: InfluxDB2::WritePrecision::NANOSECOND,
+ use_ssl: false)
+
+ @write_client = @client.create_write_api(write_options: @write_options)
+
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=1.0 1')
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=2.0 2')
+ @write_client.write(data: 'h2o_feet,location=coyote_creek level\\ water_level=3.0 3')
+
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 0, body: 'h2o_feet,location=coyote_creek level\\ water_level=3.0 3')
+
+ @client.close!
+
+ assert_requested(:post, 'http://localhost:9999/api/v2/write?bucket=my-bucket&org=my-org&precision=ns',
+ times: 1, body: "h2o_feet,location=coyote_creek level\\ water_level=1.0 1\n" \
+ "h2o_feet,location=coyote_creek level\\ water_level=2.0 2\n" \
+ 'h2o_feet,location=coyote_creek level\\ water_level=3.0 3')
+ end
+ end
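`test_batch_size` and `test_batch_size_group_by` above pin down the batching rule: buffered writes are grouped by their destination (bucket, org, precision), and each group is flushed in chunks of `batch_size`, one HTTP body per chunk. A minimal pure-Ruby sketch of that grouping logic — `group_batches` is a hypothetical helper for illustration, not the client's actual implementation — could look like:

```ruby
# Illustrative sketch: partition buffered writes the way the batching tests
# expect. Writes sharing a (bucket, org, precision) destination are grouped,
# then split into slices of batch_size; each slice's line-protocol records
# are joined with newlines into one request body.
def group_batches(writes, batch_size:)
  writes
    .group_by { |w| [w[:bucket], w[:org], w[:precision]] }
    .flat_map do |key, group|
      group.each_slice(batch_size).map do |slice|
        { key: key, body: slice.map { |w| w[:data] }.join("\n") }
      end
    end
end
```

With four writes to the same destination and `batch_size: 2`, this yields two bodies of two records each, mirroring the two POSTs asserted in `test_batch_size`.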
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: influxdb-client
 version: !ruby/object:Gem::Version
- version: 1.1.0.pre.323
+ version: 1.2.0.pre.503
 platform: ruby
 authors:
 - Jakub Bednar
 autorequire:
 bindir: bin
 cert_chain: []
- date: 2020-01-31 00:00:00.000000000 Z
+ date: 2020-02-20 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
 name: bundler
@@ -157,12 +157,15 @@ files:
 - lib/influxdb2/client/point.rb
 - lib/influxdb2/client/query_api.rb
 - lib/influxdb2/client/version.rb
+ - lib/influxdb2/client/worker.rb
 - lib/influxdb2/client/write_api.rb
 - test/influxdb/client_test.rb
 - test/influxdb/flux_csv_parser_test.rb
 - test/influxdb/point_test.rb
 - test/influxdb/query_api_integration_test.rb
+ - test/influxdb/query_api_stream_test.rb
 - test/influxdb/query_api_test.rb
+ - test/influxdb/write_api_batching_test.rb
 - test/influxdb/write_api_integration_test.rb
 - test/influxdb/write_api_test.rb
 - test/test_helper.rb
@@ -197,7 +200,9 @@ test_files:
 - test/influxdb/flux_csv_parser_test.rb
 - test/influxdb/point_test.rb
 - test/influxdb/query_api_integration_test.rb
+ - test/influxdb/query_api_stream_test.rb
 - test/influxdb/query_api_test.rb
+ - test/influxdb/write_api_batching_test.rb
 - test/influxdb/write_api_integration_test.rb
 - test/influxdb/write_api_test.rb
 - test/test_helper.rb