google-cloud-bigquery-storage-v1 0.23.0 → 0.25.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 88f6d73e6343e3767da6cf80186af75e6fa12fef61cf57c8d946fedf5baab5af
- data.tar.gz: 9cb48413e6fd14bec8bcf082b612adc17d2f1aee58057624334b1bf49a12e6f1
+ metadata.gz: 5af79affad9b1e4fdefb7b4414d04b0a2f8853f59d34163f477fb77ac599d1d0
+ data.tar.gz: 6a0b8cd28e7b163337a9f7d05493fb0a47264df2bcfc8cd20450b7b6785bba18
  SHA512:
- metadata.gz: fe0cd7c5d2e2994627b7f5d55de419931210990dbe8eb12d5880ed209e084448a1b9b80b1ceb1b40f142bfd24eee49bd19ee5c5b0b770d0d451b5971146c9b0b
- data.tar.gz: 7b42b054f4ef1786341e06c4602588f858dc7caf9c30bb141c6b120ec542b32a653f8079503595cb86971e1ba22efc4be736050270f8ebfb7dabee213e15e245
+ metadata.gz: 4f014fedfb82f73f9f0afb0087db56e15f6f3cd5868eedb637da32a39b5fabadf725c81eca9ea49b0156196a688da3a0ae72ce18f9e62bd5221537d9a975fa0f
+ data.tar.gz: db7a25ef7eaf88e925dc7ff39b417b944111e4d4baffa2dec27bba65d4e316f246cfc9c32f8f90686340e693a04eba2303f10773ff36335c99e3a48985d426d6
data/AUTHENTICATION.md CHANGED
@@ -1,151 +1,122 @@
  # Authentication
 
- In general, the google-cloud-bigquery-storage-v1 library uses
- [Service Account](https://cloud.google.com/iam/docs/creating-managing-service-accounts)
- credentials to connect to Google Cloud services. When running within
- [Google Cloud Platform environments](#google-cloud-platform-environments) the
- credentials will be discovered automatically. When running on other
- environments, the Service Account credentials can be specified by providing the
- path to the
- [JSON keyfile](https://cloud.google.com/iam/docs/managing-service-account-keys)
- for the account (or the JSON itself) in
- [environment variables](#environment-variables). Additionally, Cloud SDK
- credentials can also be discovered automatically, but this is only recommended
- during development.
+ The recommended way to authenticate to the google-cloud-bigquery-storage-v1 library is to use
+ [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials).
+ To review all of your authentication options, see [Credentials lookup](#credential-lookup).
 
  ## Quickstart
 
- 1. [Create a service account and credentials](#creating-a-service-account).
- 2. Set the [environment variable](#environment-variables).
+ The following example shows how to set up authentication for a local development
+ environment with your user credentials.
 
- ```sh
- export BIGQUERY_STORAGE_CREDENTIALS=path/to/keyfile.json
- ```
-
- 3. Initialize the client.
+ **NOTE:** This method is _not_ recommended for running in production. User credentials
+ should be used only during development.
 
- ```ruby
- require "google/cloud/bigquery/storage/v1"
+ 1. [Download and install the Google Cloud CLI](https://cloud.google.com/sdk).
+ 2. Set up a local ADC file with your user credentials:
 
- client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ ```sh
+ gcloud auth application-default login
  ```
 
- ## Credential Lookup
-
- The google-cloud-bigquery-storage-v1 library aims to make authentication
- as simple as possible, and provides several mechanisms to configure your system
- without requiring **Service Account Credentials** directly in code.
-
- **Credentials** are discovered in the following order:
-
- 1. Specify credentials in method arguments
- 2. Specify credentials in configuration
- 3. Discover credentials path in environment variables
- 4. Discover credentials JSON in environment variables
- 5. Discover credentials file in the Cloud SDK's path
- 6. Discover GCP credentials
-
- ### Google Cloud Platform environments
+ 3. Write code as if already authenticated.
 
- When running on Google Cloud Platform (GCP), including Google Compute Engine
- (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud
- Functions (GCF) and Cloud Run, **Credentials** are discovered automatically.
- Code should be written as if already authenticated.
+ For more information about setting up authentication for a local development environment, see
+ [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev).
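
For step 3, a minimal sketch of what "write code as if already authenticated" looks like with this gem; it assumes the `gcloud` login above has been run, and mirrors the client construction shown later in this file:

```ruby
require "google/cloud/bigquery/storage/v1"

# No keyfile path and no ENV variables: the client library discovers the
# local ADC file created by `gcloud auth application-default login`.
client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
```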
 
- ### Environment Variables
+ ## Credential Lookup
 
- The **Credentials JSON** can be placed in environment variables instead of
- declaring them directly in code. Each service has its own environment variable,
- allowing for different service accounts to be used for different services. (See
- the READMEs for the individual service gems for details.) The path to the
- **Credentials JSON** file can be stored in the environment variable, or the
- **Credentials JSON** itself can be stored for environments such as Docker
- containers where writing files is difficult or not encouraged.
+ The google-cloud-bigquery-storage-v1 library provides several mechanisms to configure your system.
+ Generally, using Application Default Credentials to facilitate automatic
+ credentials discovery is the easiest method. But if you need to explicitly specify
+ credentials, there are several methods available to you.
 
- The environment variables that google-cloud-bigquery-storage-v1
- checks for credentials are configured on the service Credentials class (such as
- {::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Credentials}):
+ Credentials are accepted in the following ways, in the following order of precedence:
 
- * `BIGQUERY_STORAGE_CREDENTIALS` - Path to JSON file, or JSON contents
- * `BIGQUERY_STORAGE_KEYFILE` - Path to JSON file, or JSON contents
- * `GOOGLE_CLOUD_CREDENTIALS` - Path to JSON file, or JSON contents
- * `GOOGLE_CLOUD_KEYFILE` - Path to JSON file, or JSON contents
- * `GOOGLE_APPLICATION_CREDENTIALS` - Path to JSON file
+ 1. Credentials specified in method arguments
+ 2. Credentials specified in configuration
+ 3. Credentials pointed to or included in environment variables
+ 4. Credentials found in local ADC file
+ 5. Credentials returned by the metadata server for the attached service account (GCP)
 
- ```ruby
- require "google/cloud/bigquery/storage/v1"
-
- ENV["BIGQUERY_STORAGE_CREDENTIALS"] = "path/to/keyfile.json"
+ ### Configuration
 
- client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
- ```
+ You can configure a path to a JSON credentials file, either for an individual client object or
+ globally, for all client objects. The JSON file can contain credentials created for
+ [workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation),
+ [workforce identity federation](https://cloud.google.com/iam/docs/workforce-identity-federation), or a
+ [service account key](https://cloud.google.com/docs/authentication/provide-credentials-adc#local-key).
 
- ### Configuration
+ Note: Service account keys are a security risk if not managed correctly. You should
+ [choose a more secure alternative to service account keys](https://cloud.google.com/docs/authentication#auth-decision-tree)
+ whenever possible.
 
- The path to the **Credentials JSON** file can be configured instead of storing
- it in an environment variable. Either on an individual client initialization:
+ To configure a credentials file for an individual client initialization:
 
  ```ruby
  require "google/cloud/bigquery/storage/v1"
 
  client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
- config.credentials = "path/to/keyfile.json"
+ config.credentials = "path/to/credentialfile.json"
  end
  ```
 
- Or globally for all clients:
+ To configure a credentials file globally for all clients:
 
  ```ruby
  require "google/cloud/bigquery/storage/v1"
 
  ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.configure do |config|
- config.credentials = "path/to/keyfile.json"
+ config.credentials = "path/to/credentialfile.json"
  end
 
  client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
  ```
 
- ### Cloud SDK
+ ### Environment Variables
 
- This option allows for an easy way to authenticate during development. If
- credentials are not provided in code or in environment variables, then Cloud SDK
- credentials are discovered.
+ You can also use an environment variable to provide a JSON credentials file.
+ The environment variable can contain a path to the credentials file or, for
+ environments such as Docker containers where writing files is not encouraged,
+ you can include the credentials file itself.
 
- To configure your system for this, simply:
+ The JSON file can contain credentials created for
+ [workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation),
+ [workforce identity federation](https://cloud.google.com/iam/docs/workforce-identity-federation), or a
+ [service account key](https://cloud.google.com/docs/authentication/provide-credentials-adc#local-key).
 
- 1. [Download and install the Cloud SDK](https://cloud.google.com/sdk)
- 2. Authenticate using OAuth 2.0 `$ gcloud auth application-default login`
- 3. Write code as if already authenticated.
+ Note: Service account keys are a security risk if not managed correctly. You should
+ [choose a more secure alternative to service account keys](https://cloud.google.com/docs/authentication#auth-decision-tree)
+ whenever possible.
+
+ The environment variables that google-cloud-bigquery-storage-v1
+ checks for credentials are:
 
- **NOTE:** This is _not_ recommended for running in production. The Cloud SDK
- *should* only be used during development.
+ * `GOOGLE_CLOUD_CREDENTIALS` - Path to JSON file, or JSON contents
+ * `GOOGLE_APPLICATION_CREDENTIALS` - Path to JSON file
 
- ## Creating a Service Account
+ ```ruby
+ require "google/cloud/bigquery/storage/v1"
 
- Google Cloud requires **Service Account Credentials** to
- connect to the APIs. You will use the **JSON key file** to
- connect to most services with google-cloud-bigquery-storage-v1.
+ ENV["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/credentialfile.json"
 
- If you are not running this client within
- [Google Cloud Platform environments](#google-cloud-platform-environments), you
- need a Google Developers service account.
+ client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
+ ```
 
- 1. Visit the [Google Cloud Console](https://console.cloud.google.com/project).
- 2. Create a new project or click on an existing project.
- 3. Activate the menu in the upper left and select **APIs & Services**. From
- here, you will enable the APIs that your application requires.
+ ### Local ADC file
 
- *Note: You may need to enable billing in order to use these services.*
+ You can set up a local ADC file with your user credentials for authentication during
+ development. If credentials are not provided in code or in environment variables,
+ then the local ADC credentials are discovered.
 
- 4. Select **Credentials** from the side navigation.
+ Follow the steps in [Quickstart](#quickstart) to set up a local ADC file.
 
- Find the "Create credentials" drop down near the top of the page, and select
- "Service account" to be guided through downloading a new JSON key file.
+ ### Google Cloud Platform environments
 
- If you want to re-use an existing service account, you can easily generate a
- new key file. Just select the account you wish to re-use, click the pencil
- tool on the right side to edit the service account, select the **Keys** tab,
- and then select **Add Key**.
+ When running on Google Cloud Platform (GCP), including Google Compute Engine
+ (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud
+ Functions (GCF) and Cloud Run, credentials are retrieved from the attached
+ service account automatically. Code should be written as if already authenticated.
 
- The key file you download will be used by this library to authenticate API
- requests and should be stored in a secure location.
+ For more information, see
+ [Set up ADC for Google Cloud services](https://cloud.google.com/docs/authentication/provide-credentials-adc#attached-sa).
data/lib/google/cloud/bigquery/storage/v1/big_query_read/client.rb CHANGED
@@ -33,6 +33,9 @@ module Google
  # The Read API can be used to read data from BigQuery.
  #
  class Client
+ # @private
+ DEFAULT_ENDPOINT_TEMPLATE = "bigquerystorage.$UNIVERSE_DOMAIN$"
+
  include Paths
 
  # @private
@@ -108,6 +111,15 @@ module Google
  @config
  end
 
+ ##
+ # The effective universe domain
+ #
+ # @return [String]
+ #
+ def universe_domain
+ @big_query_read_stub.universe_domain
+ end
+
  ##
  # Create a new BigQueryRead client object.
  #
@@ -141,8 +153,9 @@ module Google
  credentials = @config.credentials
  # Use self-signed JWT if the endpoint is unchanged from default,
  # but only if the default endpoint does not have a region prefix.
- enable_self_signed_jwt = @config.endpoint == Configuration::DEFAULT_ENDPOINT &&
- !@config.endpoint.split(".").first.include?("-")
+ enable_self_signed_jwt = @config.endpoint.nil? ||
+ (@config.endpoint == Configuration::DEFAULT_ENDPOINT &&
+ !@config.endpoint.split(".").first.include?("-"))
  credentials ||= Credentials.default scope: @config.scope,
  enable_self_signed_jwt: enable_self_signed_jwt
  if credentials.is_a?(::String) || credentials.is_a?(::Hash)
@@ -153,8 +166,10 @@ module Google
 
  @big_query_read_stub = ::Gapic::ServiceStub.new(
  ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Stub,
- credentials: credentials,
- endpoint: @config.endpoint,
+ credentials: credentials,
+ endpoint: @config.endpoint,
+ endpoint_template: DEFAULT_ENDPOINT_TEMPLATE,
+ universe_domain: @config.universe_domain,
  channel_args: @config.channel_args,
  interceptors: @config.interceptors,
  channel_pool_config: @config.channel_pool
@@ -521,9 +536,9 @@ module Google
  # end
  #
  # @!attribute [rw] endpoint
- # The hostname or hostname:port of the service endpoint.
- # Defaults to `"bigquerystorage.googleapis.com"`.
- # @return [::String]
+ # A custom service endpoint, as a hostname or hostname:port. The default is
+ # nil, indicating to use the default endpoint in the current universe domain.
+ # @return [::String,nil]
  # @!attribute [rw] credentials
  # Credentials to send with calls. You may provide any of the following types:
  # * (`String`) The path to a service account key file in JSON format
@@ -569,13 +584,20 @@ module Google
  # @!attribute [rw] quota_project
  # A separate project against which to charge quota.
  # @return [::String]
+ # @!attribute [rw] universe_domain
+ # The universe domain within which to make requests. This determines the
+ # default endpoint URL. The default value of nil uses the environment
+ # universe (usually the default "googleapis.com" universe).
+ # @return [::String,nil]
  #
  class Configuration
  extend ::Gapic::Config
 
+ # @private
+ # The endpoint specific to the default "googleapis.com" universe. Deprecated.
  DEFAULT_ENDPOINT = "bigquerystorage.googleapis.com"
 
- config_attr :endpoint, DEFAULT_ENDPOINT, ::String
+ config_attr :endpoint, nil, ::String, nil
  config_attr :credentials, nil do |value|
  allowed = [::String, ::Hash, ::Proc, ::Symbol, ::Google::Auth::Credentials, ::Signet::OAuth2::Client, nil]
  allowed += [::GRPC::Core::Channel, ::GRPC::Core::ChannelCredentials] if defined? ::GRPC
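The new `universe_domain` attribute works together with the now-nilable `endpoint`: when both stay nil, the client expands `DEFAULT_ENDPOINT_TEMPLATE` against the default "googleapis.com" universe. A hypothetical sketch of overriding it; the domain below is illustrative, not a real deployment, and credentials belonging to that universe are assumed:

```ruby
require "google/cloud/bigquery/storage/v1"

# Illustrative only: assumes a private universe is served at this domain
# and that the resolved credentials report a matching universe domain.
client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new do |config|
  config.universe_domain = "myuniverse.example.com"
end

# DEFAULT_ENDPOINT_TEMPLATE then expands to
# "bigquerystorage.myuniverse.example.com".
client.universe_domain # => "myuniverse.example.com"
```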
@@ -590,6 +612,7 @@ module Google
  config_attr :metadata, nil, ::Hash, nil
  config_attr :retry_policy, nil, ::Hash, ::Proc, nil
  config_attr :quota_project, nil, ::String, nil
+ config_attr :universe_domain, nil, ::String, nil
 
  # @private
  def initialize parent_config = nil
data/lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb CHANGED
@@ -36,6 +36,9 @@ module Google
  # https://cloud.google.com/bigquery/docs/write-api
  #
  class Client
+ # @private
+ DEFAULT_ENDPOINT_TEMPLATE = "bigquerystorage.$UNIVERSE_DOMAIN$"
+
  include Paths
 
  # @private
@@ -126,6 +129,15 @@ module Google
  @config
  end
 
+ ##
+ # The effective universe domain
+ #
+ # @return [String]
+ #
+ def universe_domain
+ @big_query_write_stub.universe_domain
+ end
+
  ##
  # Create a new BigQueryWrite client object.
  #
@@ -159,8 +171,9 @@ module Google
  credentials = @config.credentials
  # Use self-signed JWT if the endpoint is unchanged from default,
  # but only if the default endpoint does not have a region prefix.
- enable_self_signed_jwt = @config.endpoint == Configuration::DEFAULT_ENDPOINT &&
- !@config.endpoint.split(".").first.include?("-")
+ enable_self_signed_jwt = @config.endpoint.nil? ||
+ (@config.endpoint == Configuration::DEFAULT_ENDPOINT &&
+ !@config.endpoint.split(".").first.include?("-"))
  credentials ||= Credentials.default scope: @config.scope,
  enable_self_signed_jwt: enable_self_signed_jwt
  if credentials.is_a?(::String) || credentials.is_a?(::Hash)
@@ -171,8 +184,10 @@ module Google
 
  @big_query_write_stub = ::Gapic::ServiceStub.new(
  ::Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Stub,
- credentials: credentials,
- endpoint: @config.endpoint,
+ credentials: credentials,
+ endpoint: @config.endpoint,
+ endpoint_template: DEFAULT_ENDPOINT_TEMPLATE,
+ universe_domain: @config.universe_domain,
  channel_args: @config.channel_args,
  interceptors: @config.interceptors,
  channel_pool_config: @config.channel_pool
@@ -776,9 +791,9 @@ module Google
  # end
  #
  # @!attribute [rw] endpoint
- # The hostname or hostname:port of the service endpoint.
- # Defaults to `"bigquerystorage.googleapis.com"`.
- # @return [::String]
+ # A custom service endpoint, as a hostname or hostname:port. The default is
+ # nil, indicating to use the default endpoint in the current universe domain.
+ # @return [::String,nil]
  # @!attribute [rw] credentials
  # Credentials to send with calls. You may provide any of the following types:
  # * (`String`) The path to a service account key file in JSON format
@@ -824,13 +839,20 @@ module Google
  # @!attribute [rw] quota_project
  # A separate project against which to charge quota.
  # @return [::String]
+ # @!attribute [rw] universe_domain
+ # The universe domain within which to make requests. This determines the
+ # default endpoint URL. The default value of nil uses the environment
+ # universe (usually the default "googleapis.com" universe).
+ # @return [::String,nil]
  #
  class Configuration
  extend ::Gapic::Config
 
+ # @private
+ # The endpoint specific to the default "googleapis.com" universe. Deprecated.
  DEFAULT_ENDPOINT = "bigquerystorage.googleapis.com"
 
- config_attr :endpoint, DEFAULT_ENDPOINT, ::String
+ config_attr :endpoint, nil, ::String, nil
  config_attr :credentials, nil do |value|
  allowed = [::String, ::Hash, ::Proc, ::Symbol, ::Google::Auth::Credentials, ::Signet::OAuth2::Client, nil]
  allowed += [::GRPC::Core::Channel, ::GRPC::Core::ChannelCredentials] if defined? ::GRPC
@@ -845,6 +867,7 @@ module Google
  config_attr :metadata, nil, ::Hash, nil
  config_attr :retry_policy, nil, ::Hash, ::Proc, nil
  config_attr :quota_project, nil, ::String, nil
+ config_attr :universe_domain, nil, ::String, nil
 
  # @private
  def initialize parent_config = nil
data/lib/google/cloud/bigquery/storage/v1/storage_pb.rb CHANGED
@@ -18,7 +18,7 @@ require 'google/protobuf/wrappers_pb'
  require 'google/rpc/status_pb'
 
 
- descriptor_data = "\n.google/cloud/bigquery/storage/v1/storage.proto\x12 google.cloud.bigquery.storage.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a\x19google/api/resource.proto\x1a,google/cloud/bigquery/storage/v1/arrow.proto\x1a+google/cloud/bigquery/storage/v1/avro.proto\x1a/google/cloud/bigquery/storage/v1/protobuf.proto\x1a-google/cloud/bigquery/storage/v1/stream.proto\x1a,google/cloud/bigquery/storage/v1/table.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1egoogle/protobuf/wrappers.proto\x1a\x17google/rpc/status.proto\"\xe7\x01\n\x18\x43reateReadSessionRequest\x12\x43\n\x06parent\x18\x01 \x01(\tB3\xe0\x41\x02\xfa\x41-\n+cloudresourcemanager.googleapis.com/Project\x12H\n\x0cread_session\x18\x02 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ReadSessionB\x03\xe0\x41\x02\x12\x18\n\x10max_stream_count\x18\x03 \x01(\x05\x12\"\n\x1apreferred_min_stream_count\x18\x04 \x01(\x05\"i\n\x0fReadRowsRequest\x12\x46\n\x0bread_stream\x18\x01 \x01(\tB1\xe0\x41\x02\xfa\x41+\n)bigquerystorage.googleapis.com/ReadStream\x12\x0e\n\x06offset\x18\x02 \x01(\x03\")\n\rThrottleState\x12\x18\n\x10throttle_percent\x18\x01 \x01(\x05\"\x97\x01\n\x0bStreamStats\x12H\n\x08progress\x18\x02 \x01(\x0b\x32\x36.google.cloud.bigquery.storage.v1.StreamStats.Progress\x1a>\n\x08Progress\x12\x19\n\x11\x61t_response_start\x18\x01 \x01(\x01\x12\x17\n\x0f\x61t_response_end\x18\x02 \x01(\x01\"\xe7\x03\n\x10ReadRowsResponse\x12?\n\tavro_rows\x18\x03 \x01(\x0b\x32*.google.cloud.bigquery.storage.v1.AvroRowsH\x00\x12P\n\x12\x61rrow_record_batch\x18\x04 \x01(\x0b\x32\x32.google.cloud.bigquery.storage.v1.ArrowRecordBatchH\x00\x12\x11\n\trow_count\x18\x06 \x01(\x03\x12<\n\x05stats\x18\x02 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.StreamStats\x12G\n\x0ethrottle_state\x18\x05 \x01(\x0b\x32/.google.cloud.bigquery.storage.v1.ThrottleState\x12H\n\x0b\x61vro_schema\x18\x07 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.AvroSchemaB\x03\xe0\x41\x03H\x01\x12J\n\x0c\x61rrow_schema\x18\x08 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ArrowSchemaB\x03\xe0\x41\x03H\x01\x42\x06\n\x04rowsB\x08\n\x06schema\"k\n\x16SplitReadStreamRequest\x12?\n\x04name\x18\x01 \x01(\tB1\xe0\x41\x02\xfa\x41+\n)bigquerystorage.googleapis.com/ReadStream\x12\x10\n\x08\x66raction\x18\x02 \x01(\x01\"\xa7\x01\n\x17SplitReadStreamResponse\x12\x44\n\x0eprimary_stream\x18\x01 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.ReadStream\x12\x46\n\x10remainder_stream\x18\x02 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.ReadStream\"\x9b\x01\n\x18\x43reateWriteStreamRequest\x12\x35\n\x06parent\x18\x01 \x01(\tB%\xe0\x41\x02\xfa\x41\x1f\n\x1d\x62igquery.googleapis.com/Table\x12H\n\x0cwrite_stream\x18\x02 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.WriteStreamB\x03\xe0\x41\x02\"\x8d\x07\n\x11\x41ppendRowsRequest\x12H\n\x0cwrite_stream\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\x12+\n\x06offset\x18\x02 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12S\n\nproto_rows\x18\x04 \x01(\x0b\x32=.google.cloud.bigquery.storage.v1.AppendRowsRequest.ProtoDataH\x00\x12\x10\n\x08trace_id\x18\x06 \x01(\t\x12{\n\x1dmissing_value_interpretations\x18\x07 \x03(\x0b\x32T.google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretationsEntry\x12\x81\x01\n$default_missing_value_interpretation\x18\x08 \x01(\x0e\x32N.google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretationB\x03\xe0\x41\x01\x1a\x8c\x01\n\tProtoData\x12\x44\n\rwriter_schema\x18\x01 
\x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ProtoSchema\x12\x39\n\x04rows\x18\x02 \x01(\x0b\x32+.google.cloud.bigquery.storage.v1.ProtoRows\x1a\x92\x01\n MissingValueInterpretationsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12]\n\x05value\x18\x02 \x01(\x0e\x32N.google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretation:\x02\x38\x01\"m\n\x1aMissingValueInterpretation\x12,\n(MISSING_VALUE_INTERPRETATION_UNSPECIFIED\x10\x00\x12\x0e\n\nNULL_VALUE\x10\x01\x12\x11\n\rDEFAULT_VALUE\x10\x02\x42\x06\n\x04rows\"\xfb\x02\n\x12\x41ppendRowsResponse\x12Z\n\rappend_result\x18\x01 \x01(\x0b\x32\x41.google.cloud.bigquery.storage.v1.AppendRowsResponse.AppendResultH\x00\x12#\n\x05\x65rror\x18\x02 \x01(\x0b\x32\x12.google.rpc.StatusH\x00\x12\x45\n\x0eupdated_schema\x18\x03 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.TableSchema\x12>\n\nrow_errors\x18\x04 \x03(\x0b\x32*.google.cloud.bigquery.storage.v1.RowError\x12\x14\n\x0cwrite_stream\x18\x05 \x01(\t\x1a;\n\x0c\x41ppendResult\x12+\n\x06offset\x18\x01 \x01(\x0b\x32\x1b.google.protobuf.Int64ValueB\n\n\x08response\"\x9a\x01\n\x15GetWriteStreamRequest\x12@\n\x04name\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\x12?\n\x04view\x18\x03 \x01(\x0e\x32\x31.google.cloud.bigquery.storage.v1.WriteStreamView\"s\n\x1e\x42\x61tchCommitWriteStreamsRequest\x12\x35\n\x06parent\x18\x01 \x01(\tB%\xe0\x41\x02\xfa\x41\x1f\n\x1d\x62igquery.googleapis.com/Table\x12\x1a\n\rwrite_streams\x18\x02 \x03(\tB\x03\xe0\x41\x02\"\x99\x01\n\x1f\x42\x61tchCommitWriteStreamsResponse\x12/\n\x0b\x63ommit_time\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x45\n\rstream_errors\x18\x02 \x03(\x0b\x32..google.cloud.bigquery.storage.v1.StorageError\"^\n\x1a\x46inalizeWriteStreamRequest\x12@\n\x04name\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\"0\n\x1b\x46inalizeWriteStreamResponse\x12\x11\n\trow_count\x18\x01 \x01(\x03\"\x89\x01\n\x10\x46lushRowsRequest\x12H\n\x0cwrite_stream\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\x12+\n\x06offset\x18\x02 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\"#\n\x11\x46lushRowsResponse\x12\x0e\n\x06offset\x18\x01 \x01(\x03\"\xa4\x04\n\x0cStorageError\x12M\n\x04\x63ode\x18\x01 \x01(\x0e\x32?.google.cloud.bigquery.storage.v1.StorageError.StorageErrorCode\x12\x0e\n\x06\x65ntity\x18\x02 \x01(\t\x12\x15\n\rerror_message\x18\x03 \x01(\t\"\x9d\x03\n\x10StorageErrorCode\x12\"\n\x1eSTORAGE_ERROR_CODE_UNSPECIFIED\x10\x00\x12\x13\n\x0fTABLE_NOT_FOUND\x10\x01\x12\x1c\n\x18STREAM_ALREADY_COMMITTED\x10\x02\x12\x14\n\x10STREAM_NOT_FOUND\x10\x03\x12\x17\n\x13INVALID_STREAM_TYPE\x10\x04\x12\x18\n\x14INVALID_STREAM_STATE\x10\x05\x12\x14\n\x10STREAM_FINALIZED\x10\x06\x12 \n\x1cSCHEMA_MISMATCH_EXTRA_FIELDS\x10\x07\x12\x19\n\x15OFFSET_ALREADY_EXISTS\x10\x08\x12\x17\n\x13OFFSET_OUT_OF_RANGE\x10\t\x12\x15\n\x11\x43MEK_NOT_PROVIDED\x10\n\x12\x19\n\x15INVALID_CMEK_PROVIDED\x10\x0b\x12\x19\n\x15\x43MEK_ENCRYPTION_ERROR\x10\x0c\x12\x15\n\x11KMS_SERVICE_ERROR\x10\r\x12\x19\n\x15KMS_PERMISSION_DENIED\x10\x0e\"\xb3\x01\n\x08RowError\x12\r\n\x05index\x18\x01 \x01(\x03\x12\x45\n\x04\x63ode\x18\x02 \x01(\x0e\x32\x37.google.cloud.bigquery.storage.v1.RowError.RowErrorCode\x12\x0f\n\x07message\x18\x03 
\x01(\t\"@\n\x0cRowErrorCode\x12\x1e\n\x1aROW_ERROR_CODE_UNSPECIFIED\x10\x00\x12\x10\n\x0c\x46IELDS_ERROR\x10\x01\x32\x92\x06\n\x0c\x42igQueryRead\x12\xe9\x01\n\x11\x43reateReadSession\x12:.google.cloud.bigquery.storage.v1.CreateReadSessionRequest\x1a-.google.cloud.bigquery.storage.v1.ReadSession\"i\x82\xd3\xe4\x93\x02<\"7/v1/{read_session.table=projects/*/datasets/*/tables/*}:\x01*\xda\x41$parent,read_session,max_stream_count\x12\xcf\x01\n\x08ReadRows\x12\x31.google.cloud.bigquery.storage.v1.ReadRowsRequest\x1a\x32.google.cloud.bigquery.storage.v1.ReadRowsResponse\"Z\x82\xd3\xe4\x93\x02?\x12=/v1/{read_stream=projects/*/locations/*/sessions/*/streams/*}\xda\x41\x12read_stream,offset0\x01\x12\xc6\x01\n\x0fSplitReadStream\x12\x38.google.cloud.bigquery.storage.v1.SplitReadStreamRequest\x1a\x39.google.cloud.bigquery.storage.v1.SplitReadStreamResponse\">\x82\xd3\xe4\x93\x02\x38\x12\x36/v1/{name=projects/*/locations/*/sessions/*/streams/*}\x1a{\xca\x41\x1e\x62igquerystorage.googleapis.com\xd2\x41Whttps://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform2\xbc\x0b\n\rBigQueryWrite\x12\xd7\x01\n\x11\x43reateWriteStream\x12:.google.cloud.bigquery.storage.v1.CreateWriteStreamRequest\x1a-.google.cloud.bigquery.storage.v1.WriteStream\"W\x82\xd3\xe4\x93\x02;\"+/v1/{parent=projects/*/datasets/*/tables/*}:\x0cwrite_stream\xda\x41\x13parent,write_stream\x12\xd2\x01\n\nAppendRows\x12\x33.google.cloud.bigquery.storage.v1.AppendRowsRequest\x1a\x34.google.cloud.bigquery.storage.v1.AppendRowsResponse\"U\x82\xd3\xe4\x93\x02@\";/v1/{write_stream=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x0cwrite_stream(\x01\x30\x01\x12\xbf\x01\n\x0eGetWriteStream\x12\x37.google.cloud.bigquery.storage.v1.GetWriteStreamRequest\x1a-.google.cloud.bigquery.storage.v1.WriteStream\"E\x82\xd3\xe4\x93\x02\x38\"3/v1/{name=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x04name\x12\xd9\x01\n\x13\x46inalizeWriteStream\x12<.google.cloud.bigquery.storage.v1.FinalizeWriteStreamRequest\x1a=.google.cloud.bigquery.storage.v1.FinalizeWriteStreamResponse\"E\x82\xd3\xe4\x93\x02\x38\"3/v1/{name=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x04name\x12\xdc\x01\n\x17\x42\x61tchCommitWriteStreams\x12@.google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsRequest\x1a\x41.google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsResponse\"<\x82\xd3\xe4\x93\x02-\x12+/v1/{parent=projects/*/datasets/*/tables/*}\xda\x41\x06parent\x12\xcb\x01\n\tFlushRows\x12\x32.google.cloud.bigquery.storage.v1.FlushRowsRequest\x1a\x33.google.cloud.bigquery.storage.v1.FlushRowsResponse\"U\x82\xd3\xe4\x93\x02@\";/v1/{write_stream=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x0cwrite_stream\x1a\xb0\x01\xca\x41\x1e\x62igquerystorage.googleapis.com\xd2\x41\x8b\x01https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/bigquery.insertdata,https://www.googleapis.com/auth/cloud-platformB\x94\x02\n$com.google.cloud.bigquery.storage.v1B\x0cStorageProtoP\x01Z>cloud.google.com/go/bigquery/storage/apiv1/storagepb;storagepb\xaa\x02 Google.Cloud.BigQuery.Storage.V1\xca\x02 Google\\Cloud\\BigQuery\\Storage\\V1\xea\x41U\n\x1d\x62igquery.googleapis.com/Table\x12\x34projects/{project}/datasets/{dataset}/tables/{table}b\x06proto3"
+ descriptor_data = "\n.google/cloud/bigquery/storage/v1/storage.proto\x12 google.cloud.bigquery.storage.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a\x19google/api/resource.proto\x1a,google/cloud/bigquery/storage/v1/arrow.proto\x1a+google/cloud/bigquery/storage/v1/avro.proto\x1a/google/cloud/bigquery/storage/v1/protobuf.proto\x1a-google/cloud/bigquery/storage/v1/stream.proto\x1a,google/cloud/bigquery/storage/v1/table.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1egoogle/protobuf/wrappers.proto\x1a\x17google/rpc/status.proto\"\xe7\x01\n\x18\x43reateReadSessionRequest\x12\x43\n\x06parent\x18\x01 \x01(\tB3\xe0\x41\x02\xfa\x41-\n+cloudresourcemanager.googleapis.com/Project\x12H\n\x0cread_session\x18\x02 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ReadSessionB\x03\xe0\x41\x02\x12\x18\n\x10max_stream_count\x18\x03 \x01(\x05\x12\"\n\x1apreferred_min_stream_count\x18\x04 \x01(\x05\"i\n\x0fReadRowsRequest\x12\x46\n\x0bread_stream\x18\x01 \x01(\tB1\xe0\x41\x02\xfa\x41+\n)bigquerystorage.googleapis.com/ReadStream\x12\x0e\n\x06offset\x18\x02 \x01(\x03\")\n\rThrottleState\x12\x18\n\x10throttle_percent\x18\x01 \x01(\x05\"\x97\x01\n\x0bStreamStats\x12H\n\x08progress\x18\x02 \x01(\x0b\x32\x36.google.cloud.bigquery.storage.v1.StreamStats.Progress\x1a>\n\x08Progress\x12\x19\n\x11\x61t_response_start\x18\x01 \x01(\x01\x12\x17\n\x0f\x61t_response_end\x18\x02 \x01(\x01\"\xac\x04\n\x10ReadRowsResponse\x12?\n\tavro_rows\x18\x03 \x01(\x0b\x32*.google.cloud.bigquery.storage.v1.AvroRowsH\x00\x12P\n\x12\x61rrow_record_batch\x18\x04 \x01(\x0b\x32\x32.google.cloud.bigquery.storage.v1.ArrowRecordBatchH\x00\x12\x11\n\trow_count\x18\x06 \x01(\x03\x12<\n\x05stats\x18\x02 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.StreamStats\x12G\n\x0ethrottle_state\x18\x05 \x01(\x0b\x32/.google.cloud.bigquery.storage.v1.ThrottleState\x12H\n\x0b\x61vro_schema\x18\x07 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.AvroSchemaB\x03\xe0\x41\x03H\x01\x12J\n\x0c\x61rrow_schema\x18\x08 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ArrowSchemaB\x03\xe0\x41\x03H\x01\x12(\n\x16uncompressed_byte_size\x18\t \x01(\x03\x42\x03\xe0\x41\x01H\x02\x88\x01\x01\x42\x06\n\x04rowsB\x08\n\x06schemaB\x19\n\x17_uncompressed_byte_size\"k\n\x16SplitReadStreamRequest\x12?\n\x04name\x18\x01 \x01(\tB1\xe0\x41\x02\xfa\x41+\n)bigquerystorage.googleapis.com/ReadStream\x12\x10\n\x08\x66raction\x18\x02 \x01(\x01\"\xa7\x01\n\x17SplitReadStreamResponse\x12\x44\n\x0eprimary_stream\x18\x01 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.ReadStream\x12\x46\n\x10remainder_stream\x18\x02 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.ReadStream\"\x9b\x01\n\x18\x43reateWriteStreamRequest\x12\x35\n\x06parent\x18\x01 \x01(\tB%\xe0\x41\x02\xfa\x41\x1f\n\x1d\x62igquery.googleapis.com/Table\x12H\n\x0cwrite_stream\x18\x02 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.WriteStreamB\x03\xe0\x41\x02\"\x8d\x07\n\x11\x41ppendRowsRequest\x12H\n\x0cwrite_stream\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\x12+\n\x06offset\x18\x02 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12S\n\nproto_rows\x18\x04 \x01(\x0b\x32=.google.cloud.bigquery.storage.v1.AppendRowsRequest.ProtoDataH\x00\x12\x10\n\x08trace_id\x18\x06 \x01(\t\x12{\n\x1dmissing_value_interpretations\x18\x07 \x03(\x0b\x32T.google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretationsEntry\x12\x81\x01\n$default_missing_value_interpretation\x18\x08 
\x01(\x0e\x32N.google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretationB\x03\xe0\x41\x01\x1a\x8c\x01\n\tProtoData\x12\x44\n\rwriter_schema\x18\x01 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ProtoSchema\x12\x39\n\x04rows\x18\x02 \x01(\x0b\x32+.google.cloud.bigquery.storage.v1.ProtoRows\x1a\x92\x01\n MissingValueInterpretationsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12]\n\x05value\x18\x02 \x01(\x0e\x32N.google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretation:\x02\x38\x01\"m\n\x1aMissingValueInterpretation\x12,\n(MISSING_VALUE_INTERPRETATION_UNSPECIFIED\x10\x00\x12\x0e\n\nNULL_VALUE\x10\x01\x12\x11\n\rDEFAULT_VALUE\x10\x02\x42\x06\n\x04rows\"\xfb\x02\n\x12\x41ppendRowsResponse\x12Z\n\rappend_result\x18\x01 \x01(\x0b\x32\x41.google.cloud.bigquery.storage.v1.AppendRowsResponse.AppendResultH\x00\x12#\n\x05\x65rror\x18\x02 \x01(\x0b\x32\x12.google.rpc.StatusH\x00\x12\x45\n\x0eupdated_schema\x18\x03 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.TableSchema\x12>\n\nrow_errors\x18\x04 \x03(\x0b\x32*.google.cloud.bigquery.storage.v1.RowError\x12\x14\n\x0cwrite_stream\x18\x05 \x01(\t\x1a;\n\x0c\x41ppendResult\x12+\n\x06offset\x18\x01 \x01(\x0b\x32\x1b.google.protobuf.Int64ValueB\n\n\x08response\"\x9a\x01\n\x15GetWriteStreamRequest\x12@\n\x04name\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\x12?\n\x04view\x18\x03 \x01(\x0e\x32\x31.google.cloud.bigquery.storage.v1.WriteStreamView\"s\n\x1e\x42\x61tchCommitWriteStreamsRequest\x12\x35\n\x06parent\x18\x01 \x01(\tB%\xe0\x41\x02\xfa\x41\x1f\n\x1d\x62igquery.googleapis.com/Table\x12\x1a\n\rwrite_streams\x18\x02 \x03(\tB\x03\xe0\x41\x02\"\x99\x01\n\x1f\x42\x61tchCommitWriteStreamsResponse\x12/\n\x0b\x63ommit_time\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x45\n\rstream_errors\x18\x02 \x03(\x0b\x32..google.cloud.bigquery.storage.v1.StorageError\"^\n\x1a\x46inalizeWriteStreamRequest\x12@\n\x04name\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\"0\n\x1b\x46inalizeWriteStreamResponse\x12\x11\n\trow_count\x18\x01 \x01(\x03\"\x89\x01\n\x10\x46lushRowsRequest\x12H\n\x0cwrite_stream\x18\x01 \x01(\tB2\xe0\x41\x02\xfa\x41,\n*bigquerystorage.googleapis.com/WriteStream\x12+\n\x06offset\x18\x02 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\"#\n\x11\x46lushRowsResponse\x12\x0e\n\x06offset\x18\x01 \x01(\x03\"\xa4\x04\n\x0cStorageError\x12M\n\x04\x63ode\x18\x01 \x01(\x0e\x32?.google.cloud.bigquery.storage.v1.StorageError.StorageErrorCode\x12\x0e\n\x06\x65ntity\x18\x02 \x01(\t\x12\x15\n\rerror_message\x18\x03 \x01(\t\"\x9d\x03\n\x10StorageErrorCode\x12\"\n\x1eSTORAGE_ERROR_CODE_UNSPECIFIED\x10\x00\x12\x13\n\x0fTABLE_NOT_FOUND\x10\x01\x12\x1c\n\x18STREAM_ALREADY_COMMITTED\x10\x02\x12\x14\n\x10STREAM_NOT_FOUND\x10\x03\x12\x17\n\x13INVALID_STREAM_TYPE\x10\x04\x12\x18\n\x14INVALID_STREAM_STATE\x10\x05\x12\x14\n\x10STREAM_FINALIZED\x10\x06\x12 \n\x1cSCHEMA_MISMATCH_EXTRA_FIELDS\x10\x07\x12\x19\n\x15OFFSET_ALREADY_EXISTS\x10\x08\x12\x17\n\x13OFFSET_OUT_OF_RANGE\x10\t\x12\x15\n\x11\x43MEK_NOT_PROVIDED\x10\n\x12\x19\n\x15INVALID_CMEK_PROVIDED\x10\x0b\x12\x19\n\x15\x43MEK_ENCRYPTION_ERROR\x10\x0c\x12\x15\n\x11KMS_SERVICE_ERROR\x10\r\x12\x19\n\x15KMS_PERMISSION_DENIED\x10\x0e\"\xb3\x01\n\x08RowError\x12\r\n\x05index\x18\x01 \x01(\x03\x12\x45\n\x04\x63ode\x18\x02 \x01(\x0e\x32\x37.google.cloud.bigquery.storage.v1.RowError.RowErrorCode\x12\x0f\n\x07message\x18\x03 
\x01(\t\"@\n\x0cRowErrorCode\x12\x1e\n\x1aROW_ERROR_CODE_UNSPECIFIED\x10\x00\x12\x10\n\x0c\x46IELDS_ERROR\x10\x01\x32\x92\x06\n\x0c\x42igQueryRead\x12\xe9\x01\n\x11\x43reateReadSession\x12:.google.cloud.bigquery.storage.v1.CreateReadSessionRequest\x1a-.google.cloud.bigquery.storage.v1.ReadSession\"i\x82\xd3\xe4\x93\x02<\"7/v1/{read_session.table=projects/*/datasets/*/tables/*}:\x01*\xda\x41$parent,read_session,max_stream_count\x12\xcf\x01\n\x08ReadRows\x12\x31.google.cloud.bigquery.storage.v1.ReadRowsRequest\x1a\x32.google.cloud.bigquery.storage.v1.ReadRowsResponse\"Z\x82\xd3\xe4\x93\x02?\x12=/v1/{read_stream=projects/*/locations/*/sessions/*/streams/*}\xda\x41\x12read_stream,offset0\x01\x12\xc6\x01\n\x0fSplitReadStream\x12\x38.google.cloud.bigquery.storage.v1.SplitReadStreamRequest\x1a\x39.google.cloud.bigquery.storage.v1.SplitReadStreamResponse\">\x82\xd3\xe4\x93\x02\x38\x12\x36/v1/{name=projects/*/locations/*/sessions/*/streams/*}\x1a{\xca\x41\x1e\x62igquerystorage.googleapis.com\xd2\x41Whttps://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform2\xbc\x0b\n\rBigQueryWrite\x12\xd7\x01\n\x11\x43reateWriteStream\x12:.google.cloud.bigquery.storage.v1.CreateWriteStreamRequest\x1a-.google.cloud.bigquery.storage.v1.WriteStream\"W\x82\xd3\xe4\x93\x02;\"+/v1/{parent=projects/*/datasets/*/tables/*}:\x0cwrite_stream\xda\x41\x13parent,write_stream\x12\xd2\x01\n\nAppendRows\x12\x33.google.cloud.bigquery.storage.v1.AppendRowsRequest\x1a\x34.google.cloud.bigquery.storage.v1.AppendRowsResponse\"U\x82\xd3\xe4\x93\x02@\";/v1/{write_stream=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x0cwrite_stream(\x01\x30\x01\x12\xbf\x01\n\x0eGetWriteStream\x12\x37.google.cloud.bigquery.storage.v1.GetWriteStreamRequest\x1a-.google.cloud.bigquery.storage.v1.WriteStream\"E\x82\xd3\xe4\x93\x02\x38\"3/v1/{name=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x04name\x12\xd9\x01\n\x13\x46inalizeWriteStream\x12<.google.cloud.bigquery.storage.v1.FinalizeWriteStreamRequest\x1a=.google.cloud.bigquery.storage.v1.FinalizeWriteStreamResponse\"E\x82\xd3\xe4\x93\x02\x38\"3/v1/{name=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x04name\x12\xdc\x01\n\x17\x42\x61tchCommitWriteStreams\x12@.google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsRequest\x1a\x41.google.cloud.bigquery.storage.v1.BatchCommitWriteStreamsResponse\"<\x82\xd3\xe4\x93\x02-\x12+/v1/{parent=projects/*/datasets/*/tables/*}\xda\x41\x06parent\x12\xcb\x01\n\tFlushRows\x12\x32.google.cloud.bigquery.storage.v1.FlushRowsRequest\x1a\x33.google.cloud.bigquery.storage.v1.FlushRowsResponse\"U\x82\xd3\xe4\x93\x02@\";/v1/{write_stream=projects/*/datasets/*/tables/*/streams/*}:\x01*\xda\x41\x0cwrite_stream\x1a\xb0\x01\xca\x41\x1e\x62igquerystorage.googleapis.com\xd2\x41\x8b\x01https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/bigquery.insertdata,https://www.googleapis.com/auth/cloud-platformB\x94\x02\n$com.google.cloud.bigquery.storage.v1B\x0cStorageProtoP\x01Z>cloud.google.com/go/bigquery/storage/apiv1/storagepb;storagepb\xaa\x02 Google.Cloud.BigQuery.Storage.V1\xca\x02 Google\\Cloud\\BigQuery\\Storage\\V1\xea\x41U\n\x1d\x62igquery.googleapis.com/Table\x12\x34projects/{project}/datasets/{dataset}/tables/{table}b\x06proto3"
 
  pool = Google::Protobuf::DescriptorPool.generated_pool
 
data/lib/google/cloud/bigquery/storage/v1/stream_pb.rb CHANGED
@@ -12,7 +12,7 @@ require 'google/cloud/bigquery/storage/v1/table_pb'
  require 'google/protobuf/timestamp_pb'
 
 
- descriptor_data = "\n-google/cloud/bigquery/storage/v1/stream.proto\x12 google.cloud.bigquery.storage.v1\x1a\x1fgoogle/api/field_behavior.proto\x1a\x19google/api/resource.proto\x1a,google/cloud/bigquery/storage/v1/arrow.proto\x1a+google/cloud/bigquery/storage/v1/avro.proto\x1a,google/cloud/bigquery/storage/v1/table.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xb0\n\n\x0bReadSession\x12\x11\n\x04name\x18\x01 \x01(\tB\x03\xe0\x41\x03\x12\x34\n\x0b\x65xpire_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12\x46\n\x0b\x64\x61ta_format\x18\x03 \x01(\x0e\x32,.google.cloud.bigquery.storage.v1.DataFormatB\x03\xe0\x41\x05\x12H\n\x0b\x61vro_schema\x18\x04 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.AvroSchemaB\x03\xe0\x41\x03H\x00\x12J\n\x0c\x61rrow_schema\x18\x05 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ArrowSchemaB\x03\xe0\x41\x03H\x00\x12\x34\n\x05table\x18\x06 \x01(\tB%\xe0\x41\x05\xfa\x41\x1f\n\x1d\x62igquery.googleapis.com/Table\x12Z\n\x0ftable_modifiers\x18\x07 \x01(\x0b\x32<.google.cloud.bigquery.storage.v1.ReadSession.TableModifiersB\x03\xe0\x41\x01\x12Y\n\x0cread_options\x18\x08 \x01(\x0b\x32>.google.cloud.bigquery.storage.v1.ReadSession.TableReadOptionsB\x03\xe0\x41\x01\x12\x42\n\x07streams\x18\n \x03(\x0b\x32,.google.cloud.bigquery.storage.v1.ReadStreamB\x03\xe0\x41\x03\x12*\n\x1d\x65stimated_total_bytes_scanned\x18\x0c \x01(\x03\x42\x03\xe0\x41\x03\x12/\n\"estimated_total_physical_file_size\x18\x0f \x01(\x03\x42\x03\xe0\x41\x03\x12 \n\x13\x65stimated_row_count\x18\x0e \x01(\x03\x42\x03\xe0\x41\x03\x12\x15\n\x08trace_id\x18\r \x01(\tB\x03\xe0\x41\x01\x1a\x43\n\x0eTableModifiers\x12\x31\n\rsnapshot_time\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x1a\xf6\x02\n\x10TableReadOptions\x12\x17\n\x0fselected_fields\x18\x01 \x03(\t\x12\x17\n\x0frow_restriction\x18\x02 \x01(\t\x12g\n\x1b\x61rrow_serialization_options\x18\x03 \x01(\x0b\x32;.google.cloud.bigquery.storage.v1.ArrowSerializationOptionsB\x03\xe0\x41\x01H\x00\x12\x65\n\x1a\x61vro_serialization_options\x18\x04 \x01(\x0b\x32:.google.cloud.bigquery.storage.v1.AvroSerializationOptionsB\x03\xe0\x41\x01H\x00\x12#\n\x11sample_percentage\x18\x05 \x01(\x01\x42\x03\xe0\x41\x01H\x01\x88\x01\x01\x42%\n#output_format_serialization_optionsB\x14\n\x12_sample_percentage:k\xea\x41h\n*bigquerystorage.googleapis.com/ReadSession\x12:projects/{project}/locations/{location}/sessions/{session}B\x08\n\x06schema\"\x9c\x01\n\nReadStream\x12\x11\n\x04name\x18\x01 \x01(\tB\x03\xe0\x41\x03:{\xea\x41x\n)bigquerystorage.googleapis.com/ReadStream\x12Kprojects/{project}/locations/{location}/sessions/{session}/streams/{stream}\"\xfb\x04\n\x0bWriteStream\x12\x11\n\x04name\x18\x01 \x01(\tB\x03\xe0\x41\x03\x12\x45\n\x04type\x18\x02 \x01(\x0e\x32\x32.google.cloud.bigquery.storage.v1.WriteStream.TypeB\x03\xe0\x41\x05\x12\x34\n\x0b\x63reate_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12\x34\n\x0b\x63ommit_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12H\n\x0ctable_schema\x18\x05 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.TableSchemaB\x03\xe0\x41\x03\x12P\n\nwrite_mode\x18\x07 \x01(\x0e\x32\x37.google.cloud.bigquery.storage.v1.WriteStream.WriteModeB\x03\xe0\x41\x05\x12\x15\n\x08location\x18\x08 
\x01(\tB\x03\xe0\x41\x05\"F\n\x04Type\x12\x14\n\x10TYPE_UNSPECIFIED\x10\x00\x12\r\n\tCOMMITTED\x10\x01\x12\x0b\n\x07PENDING\x10\x02\x12\x0c\n\x08\x42UFFERED\x10\x03\"3\n\tWriteMode\x12\x1a\n\x16WRITE_MODE_UNSPECIFIED\x10\x00\x12\n\n\x06INSERT\x10\x01:v\xea\x41s\n*bigquerystorage.googleapis.com/WriteStream\x12\x45projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}*>\n\nDataFormat\x12\x1b\n\x17\x44\x41TA_FORMAT_UNSPECIFIED\x10\x00\x12\x08\n\x04\x41VRO\x10\x01\x12\t\n\x05\x41RROW\x10\x02*I\n\x0fWriteStreamView\x12!\n\x1dWRITE_STREAM_VIEW_UNSPECIFIED\x10\x00\x12\t\n\x05\x42\x41SIC\x10\x01\x12\x08\n\x04\x46ULL\x10\x02\x42\xbb\x01\n$com.google.cloud.bigquery.storage.v1B\x0bStreamProtoP\x01Z>cloud.google.com/go/bigquery/storage/apiv1/storagepb;storagepb\xaa\x02 Google.Cloud.BigQuery.Storage.V1\xca\x02 Google\\Cloud\\BigQuery\\Storage\\V1b\x06proto3"
+ descriptor_data = "\n-google/cloud/bigquery/storage/v1/stream.proto\x12 google.cloud.bigquery.storage.v1\x1a\x1fgoogle/api/field_behavior.proto\x1a\x19google/api/resource.proto\x1a,google/cloud/bigquery/storage/v1/arrow.proto\x1a+google/cloud/bigquery/storage/v1/avro.proto\x1a,google/cloud/bigquery/storage/v1/table.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xc3\x0c\n\x0bReadSession\x12\x11\n\x04name\x18\x01 \x01(\tB\x03\xe0\x41\x03\x12\x34\n\x0b\x65xpire_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12\x46\n\x0b\x64\x61ta_format\x18\x03 \x01(\x0e\x32,.google.cloud.bigquery.storage.v1.DataFormatB\x03\xe0\x41\x05\x12H\n\x0b\x61vro_schema\x18\x04 \x01(\x0b\x32,.google.cloud.bigquery.storage.v1.AvroSchemaB\x03\xe0\x41\x03H\x00\x12J\n\x0c\x61rrow_schema\x18\x05 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.ArrowSchemaB\x03\xe0\x41\x03H\x00\x12\x34\n\x05table\x18\x06 \x01(\tB%\xe0\x41\x05\xfa\x41\x1f\n\x1d\x62igquery.googleapis.com/Table\x12Z\n\x0ftable_modifiers\x18\x07 \x01(\x0b\x32<.google.cloud.bigquery.storage.v1.ReadSession.TableModifiersB\x03\xe0\x41\x01\x12Y\n\x0cread_options\x18\x08 \x01(\x0b\x32>.google.cloud.bigquery.storage.v1.ReadSession.TableReadOptionsB\x03\xe0\x41\x01\x12\x42\n\x07streams\x18\n \x03(\x0b\x32,.google.cloud.bigquery.storage.v1.ReadStreamB\x03\xe0\x41\x03\x12*\n\x1d\x65stimated_total_bytes_scanned\x18\x0c \x01(\x03\x42\x03\xe0\x41\x03\x12/\n\"estimated_total_physical_file_size\x18\x0f \x01(\x03\x42\x03\xe0\x41\x03\x12 \n\x13\x65stimated_row_count\x18\x0e \x01(\x03\x42\x03\xe0\x41\x03\x12\x15\n\x08trace_id\x18\r \x01(\tB\x03\xe0\x41\x01\x1a\x43\n\x0eTableModifiers\x12\x31\n\rsnapshot_time\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x1a\x89\x05\n\x10TableReadOptions\x12\x17\n\x0fselected_fields\x18\x01 \x03(\t\x12\x17\n\x0frow_restriction\x18\x02 \x01(\t\x12g\n\x1b\x61rrow_serialization_options\x18\x03 \x01(\x0b\x32;.google.cloud.bigquery.storage.v1.ArrowSerializationOptionsB\x03\xe0\x41\x01H\x00\x12\x65\n\x1a\x61vro_serialization_options\x18\x04 \x01(\x0b\x32:.google.cloud.bigquery.storage.v1.AvroSerializationOptionsB\x03\xe0\x41\x01H\x00\x12#\n\x11sample_percentage\x18\x05 \x01(\x01\x42\x03\xe0\x41\x01H\x01\x88\x01\x01\x12\x85\x01\n\x1aresponse_compression_codec\x18\x06 \x01(\x0e\x32W.google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions.ResponseCompressionCodecB\x03\xe0\x41\x01H\x02\x88\x01\x01\"j\n\x18ResponseCompressionCodec\x12*\n&RESPONSE_COMPRESSION_CODEC_UNSPECIFIED\x10\x00\x12\"\n\x1eRESPONSE_COMPRESSION_CODEC_LZ4\x10\x02\x42%\n#output_format_serialization_optionsB\x14\n\x12_sample_percentageB\x1d\n\x1b_response_compression_codec:k\xea\x41h\n*bigquerystorage.googleapis.com/ReadSession\x12:projects/{project}/locations/{location}/sessions/{session}B\x08\n\x06schema\"\x9c\x01\n\nReadStream\x12\x11\n\x04name\x18\x01 \x01(\tB\x03\xe0\x41\x03:{\xea\x41x\n)bigquerystorage.googleapis.com/ReadStream\x12Kprojects/{project}/locations/{location}/sessions/{session}/streams/{stream}\"\xfb\x04\n\x0bWriteStream\x12\x11\n\x04name\x18\x01 \x01(\tB\x03\xe0\x41\x03\x12\x45\n\x04type\x18\x02 \x01(\x0e\x32\x32.google.cloud.bigquery.storage.v1.WriteStream.TypeB\x03\xe0\x41\x05\x12\x34\n\x0b\x63reate_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12\x34\n\x0b\x63ommit_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12H\n\x0ctable_schema\x18\x05 \x01(\x0b\x32-.google.cloud.bigquery.storage.v1.TableSchemaB\x03\xe0\x41\x03\x12P\n\nwrite_mode\x18\x07 
\x01(\x0e\x32\x37.google.cloud.bigquery.storage.v1.WriteStream.WriteModeB\x03\xe0\x41\x05\x12\x15\n\x08location\x18\x08 \x01(\tB\x03\xe0\x41\x05\"F\n\x04Type\x12\x14\n\x10TYPE_UNSPECIFIED\x10\x00\x12\r\n\tCOMMITTED\x10\x01\x12\x0b\n\x07PENDING\x10\x02\x12\x0c\n\x08\x42UFFERED\x10\x03\"3\n\tWriteMode\x12\x1a\n\x16WRITE_MODE_UNSPECIFIED\x10\x00\x12\n\n\x06INSERT\x10\x01:v\xea\x41s\n*bigquerystorage.googleapis.com/WriteStream\x12\x45projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}*>\n\nDataFormat\x12\x1b\n\x17\x44\x41TA_FORMAT_UNSPECIFIED\x10\x00\x12\x08\n\x04\x41VRO\x10\x01\x12\t\n\x05\x41RROW\x10\x02*I\n\x0fWriteStreamView\x12!\n\x1dWRITE_STREAM_VIEW_UNSPECIFIED\x10\x00\x12\t\n\x05\x42\x41SIC\x10\x01\x12\x08\n\x04\x46ULL\x10\x02\x42\xbb\x01\n$com.google.cloud.bigquery.storage.v1B\x0bStreamProtoP\x01Z>cloud.google.com/go/bigquery/storage/apiv1/storagepb;storagepb\xaa\x02 Google.Cloud.BigQuery.Storage.V1\xca\x02 Google\\Cloud\\BigQuery\\Storage\\V1b\x06proto3"
 
  pool = Google::Protobuf::DescriptorPool.generated_pool
 
@@ -50,6 +50,7 @@ module Google
  ReadSession = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.bigquery.storage.v1.ReadSession").msgclass
  ReadSession::TableModifiers = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.bigquery.storage.v1.ReadSession.TableModifiers").msgclass
  ReadSession::TableReadOptions = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions").msgclass
+ ReadSession::TableReadOptions::ResponseCompressionCodec = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions.ResponseCompressionCodec").enummodule
  ReadStream = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.bigquery.storage.v1.ReadStream").msgclass
  WriteStream = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.bigquery.storage.v1.WriteStream").msgclass
  WriteStream::Type = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.bigquery.storage.v1.WriteStream.Type").enummodule
data/lib/google/cloud/bigquery/storage/v1/version.rb CHANGED
@@ -22,7 +22,7 @@ module Google
  module Bigquery
  module Storage
  module V1
- VERSION = "0.23.0"
+ VERSION = "0.25.0"
  end
  end
  end
data/proto_docs/google/api/client.rb CHANGED
@@ -21,6 +21,7 @@ module Google
  module Api
  # Required information for every language.
  # @!attribute [rw] reference_docs_uri
+ # @deprecated This field is deprecated and may be removed in the next major version update.
  # @return [::String]
  # Link to automatically generated reference documentation. Example:
  # https://cloud.google.com/nodejs/docs/reference/asset/latest
@@ -304,6 +305,19 @@ module Google
  # seconds: 360 # 6 minutes
  # total_poll_timeout:
  # seconds: 54000 # 90 minutes
+ # @!attribute [rw] auto_populated_fields
+ # @return [::Array<::String>]
+ # List of top-level fields of the request message, that should be
+ # automatically populated by the client libraries based on their
+ # (google.api.field_info).format. Currently supported format: UUID4.
+ #
+ # Example of a YAML configuration:
+ #
+ # publishing:
+ # method_settings:
+ # - selector: google.example.v1.ExampleService.CreateExample
+ # auto_populated_fields:
+ # - request_id
  class MethodSettings
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
data/proto_docs/google/cloud/bigquery/storage/v1/arrow.rb CHANGED
@@ -41,6 +41,7 @@ module Google
  # @return [::String]
  # IPC-serialized Arrow RecordBatch.
  # @!attribute [rw] row_count
+ # @deprecated This field is deprecated and may be removed in the next major version update.
  # @return [::Integer]
  # [Deprecated] The count of rows in `serialized_record_batch`.
  # Please use the format-independent ReadRowsResponse.row_count instead.
data/proto_docs/google/cloud/bigquery/storage/v1/avro.rb CHANGED
@@ -37,6 +37,7 @@ module Google
  # @return [::String]
  # Binary serialized rows in a block.
  # @!attribute [rw] row_count
+ # @deprecated This field is deprecated and may be removed in the next major version update.
  # @return [::Integer]
  # [Deprecated] The count of rows in the returning block.
  # Please use the format-independent ReadRowsResponse.row_count instead.
data/proto_docs/google/cloud/bigquery/storage/v1/storage.rb CHANGED
@@ -137,6 +137,22 @@ module Google
  # @!attribute [r] arrow_schema
  # @return [::Google::Cloud::Bigquery::Storage::V1::ArrowSchema]
  # Output only. Arrow schema.
+ # @!attribute [rw] uncompressed_byte_size
+ # @return [::Integer]
+ # Optional. If the row data in this ReadRowsResponse is compressed, then
+ # uncompressed byte size is the original size of the uncompressed row data.
+ # If it is set to a value greater than 0, then decompress into a buffer of
+ # size uncompressed_byte_size using the compression codec that was requested
+ # during session creation time and which is specified in
+ # TableReadOptions.response_compression_codec in ReadSession.
+ # This value is not set if the response_compression_codec was not requested,
+ # and it is -1 if the requested compression would not have reduced the size
+ # of this ReadRowsResponse's row data. This attempts to match Apache Arrow's
+ # behavior described here https://github.com/apache/arrow/issues/15102 where
+ # the uncompressed length may be set to -1 to indicate that the data that
+ # follows is not compressed, which can be useful for cases where compression
+ # does not yield appreciable savings. When uncompressed_byte_size is not
+ # greater than 0, the client should skip decompression.
  class ReadRowsResponse
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
data/proto_docs/google/cloud/bigquery/storage/v1/stream.rb CHANGED
@@ -176,9 +176,28 @@ module Google
  # randomly choose for each data block whether to read the rows in that data
  # block. For more details, see
  # https://cloud.google.com/bigquery/docs/table-sampling)
+ # @!attribute [rw] response_compression_codec
+ # @return [::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions::ResponseCompressionCodec]
+ # Optional. Set response_compression_codec when creating a read session to
+ # enable application-level compression of ReadRows responses.
  class TableReadOptions
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
+
+ # Specifies which compression codec to attempt on the entire serialized
+ # response payload (either Arrow record batch or Avro rows). This is
+ # not to be confused with the Apache Arrow native compression codecs
+ # specified in ArrowSerializationOptions. For performance reasons, when
+ # creating a read session requesting Arrow responses, setting both native
+ # Arrow compression and application-level response compression will not be
+ # allowed - choose, at most, one kind of compression.
+ module ResponseCompressionCodec
+ # Default is no compression.
+ RESPONSE_COMPRESSION_CODEC_UNSPECIFIED = 0
+
+ # Use raw LZ4 compression.
+ RESPONSE_COMPRESSION_CODEC_LZ4 = 2
+ end
  end
  end
 
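Taken together, `response_compression_codec` (set at session creation) and `uncompressed_byte_size` (checked per response) imply a read loop like the sketch below. The project, dataset, and table names are placeholders, and the actual LZ4 decode (for example via an LZ4 gem) is left as a comment:

```ruby
require "google/cloud/bigquery/storage/v1"

client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

session = client.create_read_session(
  parent: "projects/my-project", # placeholder project
  max_stream_count: 1,
  read_session: {
    table: "projects/my-project/datasets/my_dataset/tables/my_table",
    data_format: :ARROW,
    read_options: {
      # Request raw-LZ4 compression of each ReadRowsResponse payload.
      # Per the note above, do not combine this with Arrow-native compression.
      response_compression_codec: :RESPONSE_COMPRESSION_CODEC_LZ4
    }
  }
)

client.read_rows(read_stream: session.streams.first.name).each do |response|
  if response.uncompressed_byte_size.to_i > 0
    # Payload is LZ4-compressed: decompress into a buffer of exactly
    # uncompressed_byte_size bytes, then parse the Arrow record batch.
  else
    # Unset or -1: the payload is not compressed, so skip decompression.
  end
end
```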
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: google-cloud-bigquery-storage-v1
  version: !ruby/object:Gem::Version
- version: 0.23.0
+ version: 0.25.0
  platform: ruby
  authors:
  - Google LLC
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2023-09-12 00:00:00.000000000 Z
+ date: 2024-01-11 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: gapic-common
@@ -16,7 +16,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 0.20.0
+ version: 0.21.1
  - - "<"
  - !ruby/object:Gem::Version
  version: 2.a
@@ -26,7 +26,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 0.20.0
+ version: 0.21.1
  - - "<"
  - !ruby/object:Gem::Version
  version: 2.a
@@ -223,7 +223,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.4.19
+ rubygems_version: 3.5.3
  signing_key:
  specification_version: 4
  summary: API Client library for the BigQuery Storage V1 API