google-cloud-bigquery-storage-v1 0.1.3 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 7337540110b472ba215f9c619a0adaf365f165b5c2925bbdfb1f09643ff20fb6
- data.tar.gz: f88a1ca2cee278fd0fa21c7902f52498dce2d6b64dbcbad61bff0236d8356d22
+ metadata.gz: 53ec17886dd8d3fe3c6d4ff6604890803b861ce9322e1b01f3fc43f8f6208001
+ data.tar.gz: 604cb086ec8a0818ced020b49648d19468eea9b48a7efa463c85c37a379742ae
  SHA512:
- metadata.gz: a6936a74f38e76ec24e8c011fb694e33a790c0f326e49bea2c7ba21df89426bdfd7d0c5cd635ba898b0998065cf3c225da5e0dda7fc6742e4399fa4ebd7efc95
- data.tar.gz: 47367aede21c530fb8600b7b7ba31b059d50ead93cfce3718edcfd31e3b28e08422aaeab3559bc195c9389f1e0a13983607e4536d6ccfe285c19ccd545a49ec4
+ metadata.gz: 2f57fdd25b7007bf4c6d6a41b7e49aece11dc05255db888edb1cd3438bd0d4c7cd148c1eb873be2a0ab835c241decf1604a13322cacf28070921490b9c4c5538
+ data.tar.gz: ed34374c69280bc76d1383600e1ea3939dcf083b2f59884f345f5b65f8898fc20f1d85bc098ce7fb9b834fcff43ad07d0ea184e667feac5259fb089a747ee128
@@ -27,7 +27,7 @@ export BIGQUERY_STORAGE_CREDENTIALS=path/to/keyfile.json
  ```ruby
  require "google/cloud/bigquery/storage/v1"

- client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  ```

  ## Credential Lookup
@@ -64,7 +64,7 @@ containers where writing files is difficult or not encouraged.

  The environment variables that google-cloud-bigquery-storage-v1
  checks for credentials are configured on the service Credentials class (such as
- {Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Credentials}):
+ {::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Credentials}):

  1. `BIGQUERY_STORAGE_CREDENTIALS` - Path to JSON file, or JSON contents
  2. `BIGQUERY_STORAGE_KEYFILE` - Path to JSON file, or JSON contents
@@ -77,7 +77,7 @@ require "google/cloud/bigquery/storage/v1"

  ENV["BIGQUERY_STORAGE_CREDENTIALS"] = "path/to/keyfile.json"

- client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  ```

  ### Configuration
@@ -88,7 +88,7 @@ environment variables. Either on an individual client initialization:
  ```ruby
  require "google/cloud/bigquery/storage/v1"

- client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
+ client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
  config.credentials = "path/to/keyfile.json"
  end
  ```
@@ -98,11 +98,11 @@ Or configured globally for all clients:
  ```ruby
  require "google/cloud/bigquery/storage/v1"

- Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
+ ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
  config.credentials = "path/to/keyfile.json"
  end

- client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  ```

  ### Cloud SDK
data/README.md CHANGED
@@ -18,6 +18,7 @@ In order to use this library, you first need to go through the following steps:

  1. [Select or create a Cloud Platform project.](https://console.cloud.google.com/project)
  1. [Enable billing for your project.](https://cloud.google.com/billing/docs/how-to/modify-project#enable_billing_for_a_project)
+ 1. [Enable the API.](https://console.cloud.google.com/apis/library/bigquerystorage.googleapis.com)
  1. {file:AUTHENTICATION.md Set up authentication.}

  ## Quick Start
@@ -25,7 +26,7 @@ In order to use this library, you first need to go through the following steps:
  ```ruby
  require "google/cloud/bigquery/storage/v1"

- client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  request = my_create_request
  response = client.create_read_session request
  ```
@@ -33,6 +34,9 @@ response = client.create_read_session request
  View the [Client Library Documentation](https://googleapis.dev/ruby/google-cloud-bigquery-storage-v1/latest)
  for class and method documentation.

+ See also the [Product Documentation](https://cloud.google.com/bigquery/docs/reference/storage)
+ for general usage information.
+
  ## Enabling Logging

  To enable logging for this library, set the logger for the underlying [gRPC](https://github.com/grpc/grpc/tree/master/src/ruby) library.
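The quick start above leaves `my_create_request` as a placeholder. A minimal end-to-end sketch, pieced together from the request and response types shown later in this diff (the project, dataset, and table names below are placeholders):

```ruby
require "google/cloud/bigquery/storage/v1"

client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

# Placeholder resource names -- substitute your own project, dataset, and table.
read_session = ::Google::Cloud::Bigquery::Storage::V1::ReadSession.new(
  table:       "projects/my-project/datasets/my_dataset/tables/my_table",
  data_format: :AVRO
)

session = client.create_read_session parent:           "projects/my-project",
                                     read_session:     read_session,
                                     max_stream_count: 1

# Each stream in the session can be read independently; read_rows returns an
# enumerable of ReadRowsResponse messages carrying blocks of serialized rows.
session.streams.each do |stream|
  client.read_rows(read_stream: stream.name).each do |response|
    puts "received #{response.row_count} rows"
  end
end
```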
@@ -27,7 +27,7 @@ module Google
  # To load this package, including all its services, and instantiate a client:
  #
  # require "google/cloud/bigquery/storage/v1"
- # client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ # client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  #
  module V1
  end
@@ -39,7 +39,7 @@ module Google
  # To load this service and instantiate a client:
  #
  # require "google/cloud/bigquery/storage/v1/big_query_read"
- # client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ # client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  #
  module BigQueryRead
  end
@@ -41,15 +41,15 @@ module Google
  ##
  # Configure the BigQueryRead Client class.
  #
- # See {Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client::Configuration}
+ # See {::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client::Configuration}
  # for a description of the configuration fields.
  #
  # ## Example
  #
  # To modify the configuration for all BigQueryRead clients:
  #
- # Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
- # config.timeout = 10_000
+ # ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
+ # config.timeout = 10.0
  # end
  #
  # @yield [config] Configure the Client client.
@@ -105,7 +105,7 @@ module Google
  # but structural changes (adding new fields, etc.) are not allowed. Structural changes
  # should be made on {Client.configure}.
  #
- # See {Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client::Configuration}
+ # See {::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client::Configuration}
  # for a description of the configuration fields.
  #
  # @yield [config] Configure the Client client.
@@ -126,13 +126,13 @@ module Google
  # To create a new BigQueryRead client with the default
  # configuration:
  #
- # client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ # client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  #
  # To create a new BigQueryRead client with a custom
  # configuration:
  #
- # client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
- # config.timeout = 10_000
+ # client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
+ # config.timeout = 10.0
  # end
  #
  # @yield [config] Configure the BigQueryRead client.
@@ -157,10 +157,11 @@ module Google
  if credentials.is_a?(String) || credentials.is_a?(Hash)
  credentials = Credentials.new credentials, scope: @config.scope
  end
- @quota_project_id = credentials.respond_to?(:quota_project_id) ? credentials.quota_project_id : nil
+ @quota_project_id = @config.quota_project
+ @quota_project_id ||= credentials.quota_project_id if credentials.respond_to? :quota_project_id

- @big_query_read_stub = Gapic::ServiceStub.new(
- Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Stub,
+ @big_query_read_stub = ::Gapic::ServiceStub.new(
+ ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Stub,
  credentials: credentials,
  endpoint: @config.endpoint,
  channel_args: @config.channel_args,
@@ -193,12 +194,12 @@ module Google
  #
  # @overload create_read_session(request, options = nil)
  # Pass arguments to `create_read_session` via a request object, either of type
- # {Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest} or an equivalent Hash.
+ # {::Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest} or an equivalent Hash.
  #
- # @param request [Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest, Hash]
+ # @param request [::Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest, ::Hash]
  # A request object representing the call parameters. Required. To specify no
  # parameters, or to keep all the default parameter values, pass an empty Hash.
- # @param options [Gapic::CallOptions, Hash]
+ # @param options [::Gapic::CallOptions, ::Hash]
  # Overrides the default settings for this call, e.g, timeout, retries, etc. Optional.
  #
  # @overload create_read_session(parent: nil, read_session: nil, max_stream_count: nil)
@@ -206,12 +207,12 @@ module Google
  # least one keyword argument is required. To specify no parameters, or to keep all
  # the default parameter values, pass an empty Hash as a request object (see above).
  #
- # @param parent [String]
+ # @param parent [::String]
  # Required. The request project that owns the session, in the form of
  # `projects/{project_id}`.
- # @param read_session [Google::Cloud::Bigquery::Storage::V1::ReadSession, Hash]
+ # @param read_session [::Google::Cloud::Bigquery::Storage::V1::ReadSession, ::Hash]
  # Required. Session to be created.
- # @param max_stream_count [Integer]
+ # @param max_stream_count [::Integer]
  # Max initial number of streams. If unset or zero, the server will
  # provide a value of streams so as to produce reasonable throughput. Must be
  # non-negative. The number of streams may be lower than the requested number,
@@ -222,26 +223,26 @@ module Google
  # Streams must be read starting from offset 0.
  #
  # @yield [response, operation] Access the result along with the RPC operation
- # @yieldparam response [Google::Cloud::Bigquery::Storage::V1::ReadSession]
- # @yieldparam operation [GRPC::ActiveCall::Operation]
+ # @yieldparam response [::Google::Cloud::Bigquery::Storage::V1::ReadSession]
+ # @yieldparam operation [::GRPC::ActiveCall::Operation]
  #
- # @return [Google::Cloud::Bigquery::Storage::V1::ReadSession]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ReadSession]
  #
- # @raise [Google::Cloud::Error] if the RPC is aborted.
+ # @raise [::Google::Cloud::Error] if the RPC is aborted.
  #
  def create_read_session request, options = nil
- raise ArgumentError, "request must be provided" if request.nil?
+ raise ::ArgumentError, "request must be provided" if request.nil?

- request = Gapic::Protobuf.coerce request, to: Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest
+ request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::CreateReadSessionRequest

  # Converts hash and nil to an options object
- options = Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h
+ options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.create_read_session.metadata.to_h

  # Set x-goog-api-client and x-goog-user-project headers
- metadata[:"x-goog-api-client"] ||= Gapic::Headers.x_goog_api_client \
+ metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
  lib_name: @config.lib_name, lib_version: @config.lib_version,
  gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id
@@ -262,8 +263,8 @@ module Google
  yield response, operation if block_given?
  return response
  end
- rescue GRPC::BadStatus => e
- raise Google::Cloud::Error.from_error(e)
+ rescue ::GRPC::BadStatus => e
+ raise ::Google::Cloud::Error.from_error(e)
  end

  ##
@@ -277,12 +278,12 @@ module Google
  #
  # @overload read_rows(request, options = nil)
  # Pass arguments to `read_rows` via a request object, either of type
- # {Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest} or an equivalent Hash.
+ # {::Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest} or an equivalent Hash.
  #
- # @param request [Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest, Hash]
+ # @param request [::Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest, ::Hash]
  # A request object representing the call parameters. Required. To specify no
  # parameters, or to keep all the default parameter values, pass an empty Hash.
- # @param options [Gapic::CallOptions, Hash]
+ # @param options [::Gapic::CallOptions, ::Hash]
  # Overrides the default settings for this call, e.g, timeout, retries, etc. Optional.
  #
  # @overload read_rows(read_stream: nil, offset: nil)
@@ -290,34 +291,34 @@ module Google
  # least one keyword argument is required. To specify no parameters, or to keep all
  # the default parameter values, pass an empty Hash as a request object (see above).
  #
- # @param read_stream [String]
+ # @param read_stream [::String]
  # Required. Stream to read rows from.
- # @param offset [Integer]
+ # @param offset [::Integer]
  # The offset requested must be less than the last row read from Read.
  # Requesting a larger offset is undefined. If not specified, start reading
  # from offset zero.
  #
  # @yield [response, operation] Access the result along with the RPC operation
- # @yieldparam response [Enumerable<Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>]
- # @yieldparam operation [GRPC::ActiveCall::Operation]
+ # @yieldparam response [::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>]
+ # @yieldparam operation [::GRPC::ActiveCall::Operation]
  #
- # @return [Enumerable<Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>]
+ # @return [::Enumerable<::Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse>]
  #
- # @raise [Google::Cloud::Error] if the RPC is aborted.
+ # @raise [::Google::Cloud::Error] if the RPC is aborted.
  #
  def read_rows request, options = nil
- raise ArgumentError, "request must be provided" if request.nil?
+ raise ::ArgumentError, "request must be provided" if request.nil?

- request = Gapic::Protobuf.coerce request, to: Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest
+ request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::ReadRowsRequest

  # Converts hash and nil to an options object
- options = Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h
+ options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.read_rows.metadata.to_h

  # Set x-goog-api-client and x-goog-user-project headers
- metadata[:"x-goog-api-client"] ||= Gapic::Headers.x_goog_api_client \
+ metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
  lib_name: @config.lib_name, lib_version: @config.lib_version,
  gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id
@@ -338,8 +339,8 @@ module Google
  yield response, operation if block_given?
  return response
  end
- rescue GRPC::BadStatus => e
- raise Google::Cloud::Error.from_error(e)
+ rescue ::GRPC::BadStatus => e
+ raise ::Google::Cloud::Error.from_error(e)
  end

  ##
@@ -358,12 +359,12 @@ module Google
  #
  # @overload split_read_stream(request, options = nil)
  # Pass arguments to `split_read_stream` via a request object, either of type
- # {Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest} or an equivalent Hash.
+ # {::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest} or an equivalent Hash.
  #
- # @param request [Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest, Hash]
+ # @param request [::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest, ::Hash]
  # A request object representing the call parameters. Required. To specify no
  # parameters, or to keep all the default parameter values, pass an empty Hash.
- # @param options [Gapic::CallOptions, Hash]
+ # @param options [::Gapic::CallOptions, ::Hash]
  # Overrides the default settings for this call, e.g, timeout, retries, etc. Optional.
  #
  # @overload split_read_stream(name: nil, fraction: nil)
@@ -371,9 +372,9 @@ module Google
  # least one keyword argument is required. To specify no parameters, or to keep all
  # the default parameter values, pass an empty Hash as a request object (see above).
  #
- # @param name [String]
+ # @param name [::String]
  # Required. Name of the stream to split.
- # @param fraction [Float]
+ # @param fraction [::Float]
  # A value in the range (0.0, 1.0) that specifies the fractional point at
  # which the original stream should be split. The actual split point is
  # evaluated on pre-filtered rows, so if a filter is provided, then there is
@@ -383,26 +384,26 @@ module Google
  # will always map to a data storage boundary on the server side.
  #
  # @yield [response, operation] Access the result along with the RPC operation
- # @yieldparam response [Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse]
- # @yieldparam operation [GRPC::ActiveCall::Operation]
+ # @yieldparam response [::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse]
+ # @yieldparam operation [::GRPC::ActiveCall::Operation]
  #
- # @return [Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamResponse]
  #
- # @raise [Google::Cloud::Error] if the RPC is aborted.
+ # @raise [::Google::Cloud::Error] if the RPC is aborted.
  #
  def split_read_stream request, options = nil
- raise ArgumentError, "request must be provided" if request.nil?
+ raise ::ArgumentError, "request must be provided" if request.nil?

- request = Gapic::Protobuf.coerce request, to: Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest
+ request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::SplitReadStreamRequest

  # Converts hash and nil to an options object
- options = Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h
+ options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.split_read_stream.metadata.to_h

  # Set x-goog-api-client and x-goog-user-project headers
- metadata[:"x-goog-api-client"] ||= Gapic::Headers.x_goog_api_client \
+ metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
  lib_name: @config.lib_name, lib_version: @config.lib_version,
  gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id
@@ -423,8 +424,8 @@ module Google
  yield response, operation if block_given?
  return response
  end
- rescue GRPC::BadStatus => e
- raise Google::Cloud::Error.from_error(e)
+ rescue ::GRPC::BadStatus => e
+ raise ::Google::Cloud::Error.from_error(e)
  end

  ##
@@ -434,7 +435,7 @@ module Google
  # providing control over timeouts, retry behavior, logging, transport
  # parameters, and other low-level controls. Certain parameters can also be
  # applied individually to specific RPCs. See
- # {Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client::Configuration::Rpcs}
+ # {::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client::Configuration::Rpcs}
  # for a list of RPCs that can be configured independently.
  #
  # Configuration can be applied globally to all clients, or to a single client
@@ -445,22 +446,22 @@ module Google
  # To modify the global config, setting the timeout for create_read_session
  # to 20 seconds, and all remaining timeouts to 10 seconds:
  #
- # Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
- # config.timeout = 10_000
- # config.rpcs.create_read_session.timeout = 20_000
+ # ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
+ # config.timeout = 10.0
+ # config.rpcs.create_read_session.timeout = 20.0
  # end
  #
  # To apply the above configuration only to a new client:
  #
- # client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
- # config.timeout = 10_000
- # config.rpcs.create_read_session.timeout = 20_000
+ # client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
+ # config.timeout = 10.0
+ # config.rpcs.create_read_session.timeout = 20.0
  # end
  #
  # @!attribute [rw] endpoint
  # The hostname or hostname:port of the service endpoint.
  # Defaults to `"bigquerystorage.googleapis.com"`.
- # @return [String]
+ # @return [::String]
  # @!attribute [rw] credentials
  # Credentials to send with calls. You may provide any of the following types:
  # * (`String`) The path to a service account key file in JSON format
@@ -472,29 +473,29 @@ module Google
  # * (`GRPC::Core::Channel`) a gRPC channel with included credentials
  # * (`GRPC::Core::ChannelCredentials`) a gRPC credentails object
  # * (`nil`) indicating no credentials
- # @return [Object]
+ # @return [::Object]
  # @!attribute [rw] scope
  # The OAuth scopes
- # @return [Array<String>]
+ # @return [::Array<::String>]
  # @!attribute [rw] lib_name
  # The library name as recorded in instrumentation and logging
- # @return [String]
+ # @return [::String]
  # @!attribute [rw] lib_version
  # The library version as recorded in instrumentation and logging
- # @return [String]
+ # @return [::String]
  # @!attribute [rw] channel_args
  # Extra parameters passed to the gRPC channel. Note: this is ignored if a
  # `GRPC::Core::Channel` object is provided as the credential.
- # @return [Hash]
+ # @return [::Hash]
  # @!attribute [rw] interceptors
  # An array of interceptors that are run before calls are executed.
- # @return [Array<GRPC::ClientInterceptor>]
+ # @return [::Array<::GRPC::ClientInterceptor>]
  # @!attribute [rw] timeout
- # The call timeout in milliseconds.
- # @return [Numeric]
+ # The call timeout in seconds.
+ # @return [::Numeric]
  # @!attribute [rw] metadata
  # Additional gRPC headers to be sent with the call.
- # @return [Hash{Symbol=>String}]
+ # @return [::Hash{::Symbol=>::String}]
  # @!attribute [rw] retry_policy
  # The retry policy. The value is a hash with the following keys:
  # * `:initial_delay` (*type:* `Numeric`) - The initial delay in seconds.
@@ -502,25 +503,29 @@ module Google
  # * `:multiplier` (*type:* `Numeric`) - The incremental backoff multiplier.
  # * `:retry_codes` (*type:* `Array<String>`) - The error codes that should
  # trigger a retry.
- # @return [Hash]
+ # @return [::Hash]
+ # @!attribute [rw] quota_project
+ # A separate project against which to charge quota.
+ # @return [::String]
  #
  class Configuration
- extend Gapic::Config
+ extend ::Gapic::Config

- config_attr :endpoint, "bigquerystorage.googleapis.com", String
- config_attr :credentials, nil do |value|
+ config_attr :endpoint, "bigquerystorage.googleapis.com", ::String
+ config_attr :credentials, nil do |value|
  allowed = [::String, ::Hash, ::Proc, ::Google::Auth::Credentials, ::Signet::OAuth2::Client, nil]
  allowed += [::GRPC::Core::Channel, ::GRPC::Core::ChannelCredentials] if defined? ::GRPC
  allowed.any? { |klass| klass === value }
  end
- config_attr :scope, nil, String, Array, nil
- config_attr :lib_name, nil, String, nil
- config_attr :lib_version, nil, String, nil
- config_attr(:channel_args, { "grpc.service_config_disable_resolution"=>1 }, Hash, nil)
- config_attr :interceptors, nil, Array, nil
- config_attr :timeout, nil, Numeric, nil
- config_attr :metadata, nil, Hash, nil
- config_attr :retry_policy, nil, Hash, Proc, nil
+ config_attr :scope, nil, ::String, ::Array, nil
+ config_attr :lib_name, nil, ::String, nil
+ config_attr :lib_version, nil, ::String, nil
+ config_attr(:channel_args, { "grpc.service_config_disable_resolution"=>1 }, ::Hash, nil)
+ config_attr :interceptors, nil, ::Array, nil
+ config_attr :timeout, nil, ::Numeric, nil
+ config_attr :metadata, nil, ::Hash, nil
+ config_attr :retry_policy, nil, ::Hash, ::Proc, nil
+ config_attr :quota_project, nil, ::String, nil

  # @private
  def initialize parent_config = nil
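Taken together, the configuration attributes documented in this hunk (including the new `quota_project`) can be set per client or globally. A minimal sketch with placeholder values:

```ruby
require "google/cloud/bigquery/storage/v1"

client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
  # Timeouts are now given in seconds (previously milliseconds).
  config.timeout = 30.0
  # Retry policy keys as documented above; the values here are placeholders.
  config.retry_policy = {
    initial_delay: 1.0,
    max_delay:     60.0,
    multiplier:    1.3,
    retry_codes:   ["UNAVAILABLE"]
  }
  # New in this release: charge quota against a separate project.
  config.quota_project = "my-quota-project"
end
```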
@@ -536,7 +541,7 @@ module Google
  def rpcs
  @rpcs ||= begin
  parent_rpcs = nil
- parent_rpcs = @parent_config.rpcs if @parent_config&.respond_to? :rpcs
+ parent_rpcs = @parent_config.rpcs if defined?(@parent_config) && @parent_config&.respond_to?(:rpcs)
  Rpcs.new parent_rpcs
  end
  end
@@ -561,28 +566,28 @@ module Google
  class Rpcs
  ##
  # RPC-specific configuration for `create_read_session`
- # @return [Gapic::Config::Method]
+ # @return [::Gapic::Config::Method]
  #
  attr_reader :create_read_session
  ##
  # RPC-specific configuration for `read_rows`
- # @return [Gapic::Config::Method]
+ # @return [::Gapic::Config::Method]
  #
  attr_reader :read_rows
  ##
  # RPC-specific configuration for `split_read_stream`
- # @return [Gapic::Config::Method]
+ # @return [::Gapic::Config::Method]
  #
  attr_reader :split_read_stream

  # @private
  def initialize parent_rpcs = nil
  create_read_session_config = parent_rpcs&.create_read_session if parent_rpcs&.respond_to? :create_read_session
- @create_read_session = Gapic::Config::Method.new create_read_session_config
+ @create_read_session = ::Gapic::Config::Method.new create_read_session_config
  read_rows_config = parent_rpcs&.read_rows if parent_rpcs&.respond_to? :read_rows
- @read_rows = Gapic::Config::Method.new read_rows_config
+ @read_rows = ::Gapic::Config::Method.new read_rows_config
  split_read_stream_config = parent_rpcs&.split_read_stream if parent_rpcs&.respond_to? :split_read_stream
- @split_read_stream = Gapic::Config::Method.new split_read_stream_config
+ @split_read_stream = ::Gapic::Config::Method.new split_read_stream_config

  yield self if block_given?
  end
@@ -25,7 +25,7 @@ module Google
  module V1
  module BigQueryRead
  # Credentials for the BigQueryRead API.
- class Credentials < Google::Auth::Credentials
+ class Credentials < ::Google::Auth::Credentials
  self.scope = [
  "https://www.googleapis.com/auth/bigquery",
  "https://www.googleapis.com/auth/bigquery.readonly",
@@ -34,7 +34,7 @@ module Google
  #
  # @param project [String]
  #
- # @return [String]
+ # @return [::String]
  def project_path project:
  "projects/#{project}"
  end
@@ -50,10 +50,10 @@ module Google
  # @param location [String]
  # @param session [String]
  #
- # @return [String]
+ # @return [::String]
  def read_session_path project:, location:, session:
- raise ArgumentError, "project cannot contain /" if project.to_s.include? "/"
- raise ArgumentError, "location cannot contain /" if location.to_s.include? "/"
+ raise ::ArgumentError, "project cannot contain /" if project.to_s.include? "/"
+ raise ::ArgumentError, "location cannot contain /" if location.to_s.include? "/"

  "projects/#{project}/locations/#{location}/sessions/#{session}"
  end
@@ -70,11 +70,11 @@ module Google
  # @param session [String]
  # @param stream [String]
  #
- # @return [String]
+ # @return [::String]
  def read_stream_path project:, location:, session:, stream:
- raise ArgumentError, "project cannot contain /" if project.to_s.include? "/"
- raise ArgumentError, "location cannot contain /" if location.to_s.include? "/"
- raise ArgumentError, "session cannot contain /" if session.to_s.include? "/"
+ raise ::ArgumentError, "project cannot contain /" if project.to_s.include? "/"
+ raise ::ArgumentError, "location cannot contain /" if location.to_s.include? "/"
+ raise ::ArgumentError, "session cannot contain /" if session.to_s.include? "/"

  "projects/#{project}/locations/#{location}/sessions/#{session}/streams/#{stream}"
  end
@@ -90,10 +90,10 @@ module Google
  # @param dataset [String]
  # @param table [String]
  #
- # @return [String]
+ # @return [::String]
  def table_path project:, dataset:, table:
- raise ArgumentError, "project cannot contain /" if project.to_s.include? "/"
- raise ArgumentError, "dataset cannot contain /" if dataset.to_s.include? "/"
+ raise ::ArgumentError, "project cannot contain /" if project.to_s.include? "/"
+ raise ::ArgumentError, "dataset cannot contain /" if dataset.to_s.include? "/"

  "projects/#{project}/datasets/#{dataset}/tables/#{table}"
  end
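The path helpers in the hunks above build correctly formatted resource names. A short sketch, assuming these helpers are mixed into the client as in other gapic-generated clients (all names below are placeholders):

```ruby
client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

client.table_path project: "my-project", dataset: "my_dataset", table: "my_table"
# => "projects/my-project/datasets/my_dataset/tables/my_table"

client.read_session_path project: "my-project", location: "us", session: "my-session"
# => "projects/my-project/locations/us/sessions/my-session"
```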
@@ -22,7 +22,7 @@ module Google
  module Bigquery
  module Storage
  module V1
- VERSION = "0.1.3"
+ VERSION = "0.2.3"
  end
  end
  end
@@ -128,7 +128,7 @@ module Google
  # - pattern: "shelves/{shelf}"
  # parent_type: "cloudresourcemanager.googleapis.com/Folder"
  # @!attribute [rw] type
- # @return [String]
+ # @return [::String]
  # The resource type. It must be in the format of
  # \\{service_name}/\\{resource_type_kind}. The `resource_type_kind` must be
  # singular and must not include version numbers.
@@ -140,7 +140,7 @@ module Google
  # should use PascalCase (UpperCamelCase). The maximum number of
  # characters allowed for the `resource_type_kind` is 100.
  # @!attribute [rw] pattern
- # @return [Array<String>]
+ # @return [::Array<::String>]
  # Optional. The relative resource name pattern associated with this resource
  # type. The DNS prefix of the full resource name shouldn't be specified here.
  #
@@ -161,11 +161,11 @@ module Google
  # the same component name (e.g. "project") refers to IDs of the same
  # type of resource.
  # @!attribute [rw] name_field
- # @return [String]
+ # @return [::String]
  # Optional. The field on the resource that designates the resource name
  # field. If omitted, this is assumed to be "name".
  # @!attribute [rw] history
- # @return [Google::Api::ResourceDescriptor::History]
+ # @return [::Google::Api::ResourceDescriptor::History]
  # Optional. The historical or future-looking state of the resource pattern.
  #
  # Example:
@@ -182,19 +182,19 @@ module Google
  # };
  # }
  # @!attribute [rw] plural
- # @return [String]
+ # @return [::String]
  # The plural name used in the resource name, such as 'projects' for
  # the name of 'projects/\\{project}'. It is the same concept of the `plural`
  # field in k8s CRD spec
  # https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
  # @!attribute [rw] singular
- # @return [String]
+ # @return [::String]
  # The same concept of the `singular` field in k8s CRD spec
  # https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
  # Such as "project" for the `resourcemanager.googleapis.com/Project` type.
  class ResourceDescriptor
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods

  # A description of the historical or future-looking state of the
  # resource pattern.
@@ -216,7 +216,7 @@ module Google
  # Defines a proto annotation that describes a string field that refers to
  # an API resource.
  # @!attribute [rw] type
- # @return [String]
+ # @return [::String]
  # The resource type that the annotated field references.
  #
  # Example:
@@ -227,7 +227,7 @@ module Google
  # }];
  # }
  # @!attribute [rw] child_type
- # @return [String]
+ # @return [::String]
  # The resource type of a child collection that the annotated field
  # references. This is useful for annotating the `parent` field that
  # doesn't have a fixed resource type.
@@ -240,8 +240,8 @@ module Google
  # };
  # }
  class ResourceReference
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end
  end
  end
@@ -29,23 +29,23 @@ module Google
  #
  # See code samples on how this message can be deserialized.
  # @!attribute [rw] serialized_schema
- # @return [String]
+ # @return [::String]
  # IPC serialized Arrow schema.
  class ArrowSchema
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Arrow RecordBatch.
  # @!attribute [rw] serialized_record_batch
- # @return [String]
+ # @return [::String]
  # IPC-serialized Arrow RecordBatch.
  # @!attribute [rw] row_count
- # @return [Integer]
+ # @return [::Integer]
  # The count of rows in `serialized_record_batch`.
  class ArrowRecordBatch
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end
  end
  end
@@ -24,24 +24,24 @@ module Google
  module V1
  # Avro schema.
  # @!attribute [rw] schema
- # @return [String]
+ # @return [::String]
  # Json serialized schema, as described at
  # https://avro.apache.org/docs/1.8.1/spec.html.
  class AvroSchema
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Avro rows.
  # @!attribute [rw] serialized_binary_rows
- # @return [String]
+ # @return [::String]
  # Binary serialized rows in a block.
  # @!attribute [rw] row_count
- # @return [Integer]
+ # @return [::Integer]
  # The count of rows in the returning block.
  class AvroRows
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end
  end
  end
@@ -24,14 +24,14 @@ module Google
  module V1
  # Request message for `CreateReadSession`.
  # @!attribute [rw] parent
- # @return [String]
+ # @return [::String]
  # Required. The request project that owns the session, in the form of
  # `projects/{project_id}`.
  # @!attribute [rw] read_session
- # @return [Google::Cloud::Bigquery::Storage::V1::ReadSession]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ReadSession]
  # Required. Session to be created.
  # @!attribute [rw] max_stream_count
- # @return [Integer]
+ # @return [::Integer]
  # Max initial number of streams. If unset or zero, the server will
  # provide a value of streams so as to produce reasonable throughput. Must be
  # non-negative. The number of streams may be lower than the requested number,
@@ -41,44 +41,44 @@ module Google
  #
  # Streams must be read starting from offset 0.
  class CreateReadSessionRequest
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Request message for `ReadRows`.
  # @!attribute [rw] read_stream
- # @return [String]
+ # @return [::String]
  # Required. Stream to read rows from.
  # @!attribute [rw] offset
- # @return [Integer]
+ # @return [::Integer]
  # The offset requested must be less than the last row read from Read.
  # Requesting a larger offset is undefined. If not specified, start reading
  # from offset zero.
  class ReadRowsRequest
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Information on if the current connection is being throttled.
  # @!attribute [rw] throttle_percent
- # @return [Integer]
+ # @return [::Integer]
  # How much this connection is being throttled. Zero means no throttling,
  # 100 means fully throttled.
  class ThrottleState
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Estimated stream statistics for a given Stream.
  # @!attribute [rw] progress
- # @return [Google::Cloud::Bigquery::Storage::V1::StreamStats::Progress]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::StreamStats::Progress]
  # Represents the progress of the current stream.
  class StreamStats
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods

  # @!attribute [rw] at_response_start
- # @return [Float]
+ # @return [::Float]
  # The fraction of rows assigned to the stream that have been processed by
  # the server so far, not including the rows in the current response
  # message.
@@ -92,44 +92,44 @@ module Google
  # previous response may not necessarily be equal to the
  # `at_response_start` value of the current response.
  # @!attribute [rw] at_response_end
- # @return [Float]
+ # @return [::Float]
  # Similar to `at_response_start`, except that this value includes the
  # rows in the current response.
  class Progress
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end
  end

  # Response from calling `ReadRows` may include row data, progress and
  # throttling information.
  # @!attribute [rw] avro_rows
- # @return [Google::Cloud::Bigquery::Storage::V1::AvroRows]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::AvroRows]
  # Serialized row data in AVRO format.
  # @!attribute [rw] arrow_record_batch
- # @return [Google::Cloud::Bigquery::Storage::V1::ArrowRecordBatch]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ArrowRecordBatch]
  # Serialized row data in Arrow RecordBatch format.
  # @!attribute [rw] row_count
- # @return [Integer]
+ # @return [::Integer]
  # Number of serialized rows in the rows block.
  # @!attribute [rw] stats
- # @return [Google::Cloud::Bigquery::Storage::V1::StreamStats]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::StreamStats]
  # Statistics for the stream.
  # @!attribute [rw] throttle_state
- # @return [Google::Cloud::Bigquery::Storage::V1::ThrottleState]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ThrottleState]
  # Throttling state. If unset, the latest response still describes
  # the current throttling status.
  class ReadRowsResponse
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Request message for `SplitReadStream`.
  # @!attribute [rw] name
- # @return [String]
+ # @return [::String]
  # Required. Name of the stream to split.
  # @!attribute [rw] fraction
- # @return [Float]
+ # @return [::Float]
  # A value in the range (0.0, 1.0) that specifies the fractional point at
  # which the original stream should be split. The actual split point is
  # evaluated on pre-filtered rows, so if a filter is provided, then there is
@@ -138,23 +138,23 @@ module Google
  # server-side unit for assigning data is collections of rows, this fraction
  # will always map to a data storage boundary on the server side.
  class SplitReadStreamRequest
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Response message for `SplitReadStream`.
  # @!attribute [rw] primary_stream
- # @return [Google::Cloud::Bigquery::Storage::V1::ReadStream]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ReadStream]
  # Primary stream, which contains the beginning portion of
  # |original_stream|. An empty value indicates that the original stream can no
  # longer be split.
  # @!attribute [rw] remainder_stream
- # @return [Google::Cloud::Bigquery::Storage::V1::ReadStream]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ReadStream]
  # Remainder stream, which contains the tail of |original_stream|. An empty
  # value indicates that the original stream can no longer be split.
  class SplitReadStreamResponse
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end
  end
  end
@@ -24,35 +24,35 @@ module Google
  module V1
  # Information about the ReadSession.
  # @!attribute [r] name
- # @return [String]
+ # @return [::String]
  # Output only. Unique identifier for the session, in the form
  # `projects/{project_id}/locations/{location}/sessions/{session_id}`.
  # @!attribute [r] expire_time
- # @return [Google::Protobuf::Timestamp]
+ # @return [::Google::Protobuf::Timestamp]
  # Output only. Time at which the session becomes invalid. After this time, subsequent
  # requests to read this Session will return errors. The expire_time is
  # automatically assigned and currently cannot be specified or updated.
  # @!attribute [rw] data_format
- # @return [Google::Cloud::Bigquery::Storage::V1::DataFormat]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::DataFormat]
  # Immutable. Data format of the output data.
  # @!attribute [r] avro_schema
- # @return [Google::Cloud::Bigquery::Storage::V1::AvroSchema]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::AvroSchema]
  # Output only. Avro schema.
  # @!attribute [r] arrow_schema
- # @return [Google::Cloud::Bigquery::Storage::V1::ArrowSchema]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ArrowSchema]
  # Output only. Arrow schema.
  # @!attribute [rw] table
- # @return [String]
+ # @return [::String]
  # Immutable. Table that this ReadSession is reading from, in the form
  # `projects/{project_id}/datasets/{dataset_id}/tables/{table_id}`
  # @!attribute [rw] table_modifiers
- # @return [Google::Cloud::Bigquery::Storage::V1::ReadSession::TableModifiers]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableModifiers]
  # Optional. Any modifiers which are applied when reading from the specified table.
  # @!attribute [rw] read_options
- # @return [Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions]
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions]
  # Optional. Read options for this session (e.g. column selection, filters).
  # @!attribute [r] streams
- # @return [Array<Google::Cloud::Bigquery::Storage::V1::ReadStream>]
+ # @return [::Array<::Google::Cloud::Bigquery::Storage::V1::ReadStream>]
  # Output only. A list of streams created with the session.
  #
  # At least one stream is created with the session. In the future, larger
@@ -60,27 +60,27 @@ module Google
  # in that case, the user will need to use a List method to get the streams
  # instead, which is not yet available.
  class ReadSession
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods

  # Additional attributes when reading a table.
  # @!attribute [rw] snapshot_time
- # @return [Google::Protobuf::Timestamp]
+ # @return [::Google::Protobuf::Timestamp]
  # The snapshot time of the table. If not set, interpreted as now.
  class TableModifiers
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Options dictating how we read a table.
  # @!attribute [rw] selected_fields
- # @return [Array<String>]
+ # @return [::Array<::String>]
  # Names of the fields in the table that should be read. If empty, all
  # fields will be read. If the specified field is a nested field, all
  # the sub-fields in the field will be selected. The output field order is
  # unrelated to the order of fields in selected_fields.
  # @!attribute [rw] row_restriction
- # @return [String]
+ # @return [::String]
  # SQL text filtering statement, similar to a WHERE clause in a query.
  # Aggregates are not supported.
  #
@@ -90,8 +90,8 @@ module Google
  # "st_equals(geo_field, st_geofromtext("POINT(2, 2)"))"
  # "numeric_field BETWEEN 1.0 AND 5.0"
  class TableReadOptions
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end
  end

@@ -99,12 +99,12 @@ module Google
  # Most of the information about `ReadStream` instances is aggregated, making
  # `ReadStream` lightweight.
  # @!attribute [r] name
- # @return [String]
+ # @return [::String]
  # Output only. Name of the stream, in the form
  # `projects/{project_id}/locations/{location}/sessions/{session_id}/streams/{stream_id}`.
  class ReadStream
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Data format for input or output data.
@@ -102,19 +102,19 @@ module Google
  # http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime%2D%2D
  # ) to obtain a formatter capable of generating timestamps in this format.
  # @!attribute [rw] seconds
- # @return [Integer]
+ # @return [::Integer]
  # Represents seconds of UTC time since Unix epoch
  # 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
  # 9999-12-31T23:59:59Z inclusive.
  # @!attribute [rw] nanos
- # @return [Integer]
+ # @return [::Integer]
  # Non-negative fractions of a second at nanosecond resolution. Negative
  # second values with fractions must still have non-negative nanos values
  # that count forward in time. Must be from 0 to 999,999,999
  # inclusive.
  class Timestamp
- include Google::Protobuf::MessageExts
- extend Google::Protobuf::MessageExts::ClassMethods
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end
  end
  end
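The `read_options` documented above control column selection and server-side filtering when a session is created. A minimal sketch with placeholder names:

```ruby
# Restrict the read to two columns and filter rows on the server.
read_options = ::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions.new(
  selected_fields: ["name", "number"],
  row_restriction: "number > 10"
)

read_session = ::Google::Cloud::Bigquery::Storage::V1::ReadSession.new(
  table:        "projects/my-project/datasets/my_dataset/tables/my_table",
  data_format:  :AVRO,
  read_options: read_options
)
```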
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: google-cloud-bigquery-storage-v1
  version: !ruby/object:Gem::Version
- version: 0.1.3
+ version: 0.2.3
  platform: ruby
  authors:
  - Google LLC
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-04-13 00:00:00.000000000 Z
+ date: 2020-06-18 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: gapic-common
@@ -58,14 +58,42 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '5.10'
+ version: '5.14'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '5.10'
+ version: '5.14'
+ - !ruby/object:Gem::Dependency
+ name: minitest-focus
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1.1'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1.1'
+ - !ruby/object:Gem::Dependency
+ name: minitest-rg
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '5.2'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '5.2'
  - !ruby/object:Gem::Dependency
  name: rake
  requirement: !ruby/object:Gem::Requirement
@@ -144,7 +172,6 @@ files:
  - lib/google/cloud/bigquery/storage/v1/storage_services_pb.rb
  - lib/google/cloud/bigquery/storage/v1/stream_pb.rb
  - lib/google/cloud/bigquery/storage/v1/version.rb
- - lib/google/cloud/common_resources_pb.rb
  - proto_docs/README.md
  - proto_docs/google/api/field_behavior.rb
  - proto_docs/google/api/resource.rb
@@ -172,7 +199,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.0.6
+ rubygems_version: 3.1.3
  signing_key:
  specification_version: 4
  summary: API Client library for the BigQuery Storage V1 API
@@ -1,15 +0,0 @@
- # Generated by the protocol buffer compiler. DO NOT EDIT!
- # source: google/cloud/common_resources.proto
-
- require 'google/protobuf'
-
- require 'google/api/resource_pb'
- Google::Protobuf::DescriptorPool.generated_pool.build do
- add_file("google/cloud/common_resources.proto", :syntax => :proto3) do
- end
- end
-
- module Google
- module Cloud
- end
- end