google-apis-spanner_v1 0.1.0

@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+ metadata.gz: 625e4ebcbf0da1d443e489addb6607a46b0dda44997b096f81f78a1b2a66f55e
+ data.tar.gz: 26c8f98f6954b9cf6d8ed56eec3cc58ff07637fe5dc9563c7151799ae64e35e4
+ SHA512:
+ metadata.gz: 0d1137d4f8a15222bdb338a4ab275fe3510af66e805697ffbd0b5f32664f421bb492105aee1fe069b884d0b8ab29c35cdd259e4cb05187331323f2bb64dc16e8
+ data.tar.gz: 57b20be9d4bf8e90d7bbd7787099db71b2b57952abd3eb5ecb18649a106ef7a65090aacc2c01652e7543277a87f3e9cdf8a1af7155201410d9bfecdc262cebda
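Checksums like the ones above let you verify a downloaded package locally before trusting it. A minimal sketch using Ruby's standard `digest` library (the helper name and file paths here are illustrative, not part of the gem):

```ruby
require "digest"

# Compare a file's SHA-256 and SHA-512 digests against the expected hex
# strings, e.g. the metadata.gz / data.tar.gz sums published with a release.
def checksums_match?(path, sha256_hex, sha512_hex)
  Digest::SHA256.file(path).hexdigest == sha256_hex &&
    Digest::SHA512.file(path).hexdigest == sha512_hex
end
```

`Digest::SHA256.file` streams the file from disk, so large archives are hashed without being loaded into memory.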
@@ -0,0 +1,13 @@
+ --hide-void-return
+ --no-private
+ --verbose
+ --title=google-apis-spanner_v1
+ --markup-provider=redcarpet
+ --markup=markdown
+ --main OVERVIEW.md
+ lib/google/apis/spanner_v1/*.rb
+ lib/google/apis/spanner_v1.rb
+ -
+ OVERVIEW.md
+ CHANGELOG.md
+ LICENSE.md
@@ -0,0 +1,7 @@
+ # Release history for google-apis-spanner_v1
+
+ ### v0.1.0 (2021-01-07)
+
+ * Regenerated using generator version 0.1.1
+ * Regenerated from discovery document revision 20201130
+
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
@@ -0,0 +1,96 @@
+ # Simple REST client for version V1 of the Cloud Spanner API
+
+ This is a simple client library for version V1 of the Cloud Spanner API. It provides:
+
+ * A client object that connects to the HTTP/JSON REST endpoint for the service.
+ * Ruby objects for data structures related to the service.
+ * Integration with the googleauth gem for authentication using OAuth, API keys, and service accounts.
+ * Control of retry, pagination, and timeouts.
+
+ Note that although this client library is supported and will continue to be updated to track changes to the service, it is otherwise considered complete and not under active development. Many Google services, especially Google Cloud Platform services, may provide a more modern client that is under more active development and improvement. See the section below titled *Which client should I use?* for more information.
+
+ ## Getting started
+
+ ### Before you begin
+
+ There are a few setup steps you need to complete before you can use this library:
+
+ 1. If you don't already have a Google account, [sign up](https://www.google.com/accounts).
+ 2. If you have never created a Google APIs Console project, read about [Managing Projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects) and create a project in the [Google API Console](https://console.cloud.google.com/).
+ 3. Most APIs need to be enabled for your project. [Enable it](https://console.cloud.google.com/apis/library/spanner.googleapis.com) in the console.
+
+ ### Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'google-apis-spanner_v1', '~> 0.1'
+ ```
+
+ And then execute:
+
+ ```
+ $ bundle
+ ```
+
+ Or install it yourself as:
+
+ ```
+ $ gem install google-apis-spanner_v1
+ ```
+
+ ### Creating a client object
+
+ Once the gem is installed, you can load the client code and instantiate a client.
+
+ ```ruby
+ # Load the client
+ require "google/apis/spanner_v1"
+
+ # Create a client object
+ client = Google::Apis::SpannerV1::SpannerService.new
+
+ # Authenticate calls
+ client.authentication = # ... use the googleauth gem to create credentials
+ ```
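As a concrete (hypothetical) sketch of that credentials step: with Application Default Credentials configured, the googleauth gem can supply a credential object, which is assigned to the client's `authorization` attribute (the setter provided by `Google::Apis::Core::BaseService`). The scope used here is one of the OAuth scopes this gem declares; pick the narrowest scope your calls actually need.

```ruby
require "googleauth"
require "google/apis/spanner_v1"

client = Google::Apis::SpannerV1::SpannerService.new

# Application Default Credentials (e.g. GOOGLE_APPLICATION_CREDENTIALS or
# `gcloud auth application-default login`), scoped to Spanner data access.
client.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/spanner.data"]
)
```

This snippet requires a working Google Cloud credential setup to run; it is a sketch of one common arrangement, not the only way to authenticate.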
+
+ See the class reference docs for information on the methods you can call from a client.
+
+ ## Documentation
+
+ More detailed descriptions of the Google simple REST clients are available in two documents.
+
+ * The [Usage Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/usage-guide.md) discusses how to make API calls, how to use the provided data structures, and how to work with the various features of the client library, including media upload and download, error handling, retries, pagination, and logging.
+ * The [Auth Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/auth-guide.md) discusses authentication in the client libraries, including API keys, OAuth 2.0, service accounts, and environment variables.
+
+ (Note: the above documents are written for the simple REST clients in general, and their examples may not reflect the Spanner service in particular.)
+
+ For reference information on specific calls in the Cloud Spanner API, see the {Google::Apis::SpannerV1::SpannerService class reference docs}.
+
+ ## Which client should I use?
+
+ Google provides two types of Ruby API client libraries: **simple REST clients** and **modern clients**.
+
+ This library, `google-apis-spanner_v1`, is a simple REST client. You can identify these clients by their gem names, which are always in the form `google-apis-<servicename>_<serviceversion>`. The simple REST clients connect to HTTP/JSON REST endpoints and are automatically generated from service discovery documents. They support most API functionality, but their class interfaces are sometimes awkward.
+
+ Modern clients are produced by a modern code generator, sometimes combined with hand-crafted functionality. Most modern clients connect to high-performance gRPC endpoints, although a few are backed by REST services. Modern clients are available for many Google services, especially Google Cloud Platform services, but do not yet support all the services covered by the simple clients.
+
+ Gem names for modern clients are often of the form `google-cloud-<service_name>`. (For example, [google-cloud-pubsub](https://rubygems.org/gems/google-cloud-pubsub).) Note that most modern clients also have corresponding "versioned" gems with names like `google-cloud-<service_name>-<version>`. (For example, [google-cloud-pubsub-v1](https://rubygems.org/gems/google-cloud-pubsub-v1).) The "versioned" gems can be used directly, but often provide lower-level interfaces. In most cases, the main gem is recommended.
+
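The naming forms described above are regular enough to check mechanically. A small illustrative sketch (the helper and its patterns are ours, derived only from the conventions in this section):

```ruby
# Classify a gem name per the conventions above: simple REST clients are
# google-apis-<servicename>_<serviceversion>, modern clients are
# google-cloud-<service_name>, and versioned modern clients append -v<version>.
def client_kind(gem_name)
  case gem_name
  when /\Agoogle-apis-[a-z0-9_]+_v[0-9a-z]+\z/  then :simple_rest
  when /\Agoogle-cloud-[a-z0-9_]+-v[0-9a-z]+\z/ then :versioned_modern
  when /\Agoogle-cloud-[a-z0-9_]+\z/            then :modern
  else :unknown
  end
end
```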
+ **For most users, we recommend the modern client, if one is available.** Compared with simple clients, modern clients are generally much easier to use and more Ruby-like, support more advanced features such as streaming and long-running operations, and often provide much better performance. You may consider using a simple client instead, if a modern client is not yet available for the service you want to use, or if you are not able to use gRPC on your infrastructure.
+
+ The [product documentation](https://cloud.google.com/spanner/) may provide guidance regarding the preferred client library to use.
+
+ ## Supported Ruby versions
+
+ This library is supported on Ruby 2.5+.
+
+ Google provides official support for Ruby versions that are actively supported by Ruby Core -- that is, Ruby versions that are either in normal maintenance or in security maintenance, and not end of life. Currently, this means Ruby 2.5 and later. Older versions of Ruby _may_ still work, but are unsupported and not recommended. See https://www.ruby-lang.org/en/downloads/branches/ for details about the Ruby support schedule.
+
+ ## License
+
+ This library is licensed under Apache 2.0. Full license text is available in the {file:LICENSE.md LICENSE}.
+
+ ## Support
+
+ Please [report bugs at the project on GitHub](https://github.com/google/google-api-ruby-client/issues). Don't hesitate to [ask questions](http://stackoverflow.com/questions/tagged/google-api-ruby-client) about the client or APIs on [StackOverflow](http://stackoverflow.com).
@@ -0,0 +1,15 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require "google/apis/spanner_v1"
@@ -0,0 +1,43 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require 'google/apis/spanner_v1/service.rb'
+ require 'google/apis/spanner_v1/classes.rb'
+ require 'google/apis/spanner_v1/representations.rb'
+ require 'google/apis/spanner_v1/gem_version.rb'
+
+ module Google
+ module Apis
+ # Cloud Spanner API
+ #
+ # Cloud Spanner is a managed, mission-critical, globally consistent and scalable
+ # relational database service.
+ #
+ # @see https://cloud.google.com/spanner/
+ module SpannerV1
+ # Version of the Cloud Spanner API this client connects to.
+ # This is NOT the gem version.
+ VERSION = 'V1'
+
+ # View and manage your data across Google Cloud Platform services
+ AUTH_CLOUD_PLATFORM = 'https://www.googleapis.com/auth/cloud-platform'
+
+ # Administer your Spanner databases
+ AUTH_SPANNER_ADMIN = 'https://www.googleapis.com/auth/spanner.admin'
+
+ # View and manage the contents of your Spanner databases
+ AUTH_SPANNER_DATA = 'https://www.googleapis.com/auth/spanner.data'
+ end
+ end
+ end
1
+ # Copyright 2020 Google LLC
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ require 'date'
16
+ require 'google/apis/core/base_service'
17
+ require 'google/apis/core/json_representation'
18
+ require 'google/apis/core/hashable'
19
+ require 'google/apis/errors'
20
+
21
+ module Google
22
+ module Apis
23
+ module SpannerV1
24
+
25
+ # A backup of a Cloud Spanner database.
26
+ class Backup
27
+ include Google::Apis::Core::Hashable
28
+
29
+ # Output only. The backup will contain an externally consistent copy of the
30
+ # database at the timestamp specified by `create_time`. `create_time` is
31
+ # approximately the time the CreateBackup request is received.
32
+ # Corresponds to the JSON property `createTime`
33
+ # @return [String]
34
+ attr_accessor :create_time
35
+
36
+ # Required for the CreateBackup operation. Name of the database from which this
37
+ # backup was created. This needs to be in the same instance as the backup.
38
+ # Values are of the form `projects//instances//databases/`.
39
+ # Corresponds to the JSON property `database`
40
+ # @return [String]
41
+ attr_accessor :database
42
+
43
+ # Required for the CreateBackup operation. The expiration time of the backup,
44
+ # with microseconds granularity that must be at least 6 hours and at most 366
45
+ # days from the time the CreateBackup request is processed. Once the `
46
+ # expire_time` has passed, the backup is eligible to be automatically deleted by
47
+ # Cloud Spanner to free the resources used by the backup.
48
+ # Corresponds to the JSON property `expireTime`
49
+ # @return [String]
50
+ attr_accessor :expire_time
51
+
52
+ # Output only for the CreateBackup operation. Required for the UpdateBackup
53
+ # operation. A globally unique identifier for the backup which cannot be changed.
54
+ # Values are of the form `projects//instances//backups/a-z*[a-z0-9]` The final
55
+ # segment of the name must be between 2 and 60 characters in length. The backup
56
+ # is stored in the location(s) specified in the instance configuration of the
57
+ # instance containing the backup, identified by the prefix of the backup name of
58
+ # the form `projects//instances/`.
59
+ # Corresponds to the JSON property `name`
60
+ # @return [String]
61
+ attr_accessor :name
62
+
63
+ # Output only. The names of the restored databases that reference the backup.
64
+ # The database names are of the form `projects//instances//databases/`.
65
+ # Referencing databases may exist in different instances. The existence of any
66
+ # referencing database prevents the backup from being deleted. When a restored
67
+ # database from the backup enters the `READY` state, the reference to the backup
68
+ # is removed.
69
+ # Corresponds to the JSON property `referencingDatabases`
70
+ # @return [Array<String>]
71
+ attr_accessor :referencing_databases
72
+
73
+ # Output only. Size of the backup in bytes.
74
+ # Corresponds to the JSON property `sizeBytes`
75
+ # @return [Fixnum]
76
+ attr_accessor :size_bytes
77
+
78
+ # Output only. The current state of the backup.
79
+ # Corresponds to the JSON property `state`
80
+ # @return [String]
81
+ attr_accessor :state
82
+
83
+ def initialize(**args)
84
+ update!(**args)
85
+ end
86
+
87
+ # Update properties of this object
88
+ def update!(**args)
89
+ @create_time = args[:create_time] if args.key?(:create_time)
90
+ @database = args[:database] if args.key?(:database)
91
+ @expire_time = args[:expire_time] if args.key?(:expire_time)
92
+ @name = args[:name] if args.key?(:name)
93
+ @referencing_databases = args[:referencing_databases] if args.key?(:referencing_databases)
94
+ @size_bytes = args[:size_bytes] if args.key?(:size_bytes)
95
+ @state = args[:state] if args.key?(:state)
96
+ end
97
+ end
98
+
99
+ # Information about a backup.
100
+ class BackupInfo
101
+ include Google::Apis::Core::Hashable
102
+
103
+ # Name of the backup.
104
+ # Corresponds to the JSON property `backup`
105
+ # @return [String]
106
+ attr_accessor :backup
107
+
108
+ # The backup contains an externally consistent copy of `source_database` at the
109
+ # timestamp specified by `create_time`.
110
+ # Corresponds to the JSON property `createTime`
111
+ # @return [String]
112
+ attr_accessor :create_time
113
+
114
+ # Name of the database the backup was created from.
115
+ # Corresponds to the JSON property `sourceDatabase`
116
+ # @return [String]
117
+ attr_accessor :source_database
118
+
119
+ def initialize(**args)
120
+ update!(**args)
121
+ end
122
+
123
+ # Update properties of this object
124
+ def update!(**args)
125
+ @backup = args[:backup] if args.key?(:backup)
126
+ @create_time = args[:create_time] if args.key?(:create_time)
127
+ @source_database = args[:source_database] if args.key?(:source_database)
128
+ end
129
+ end
130
+
131
+ # The request for BatchCreateSessions.
132
+ class BatchCreateSessionsRequest
133
+ include Google::Apis::Core::Hashable
134
+
135
+ # Required. The number of sessions to be created in this batch call. The API may
136
+ # return fewer than the requested number of sessions. If a specific number of
137
+ # sessions are desired, the client can make additional calls to
138
+ # BatchCreateSessions (adjusting session_count as necessary).
139
+ # Corresponds to the JSON property `sessionCount`
140
+ # @return [Fixnum]
141
+ attr_accessor :session_count
142
+
143
+ # A session in the Cloud Spanner API.
144
+ # Corresponds to the JSON property `sessionTemplate`
145
+ # @return [Google::Apis::SpannerV1::Session]
146
+ attr_accessor :session_template
147
+
148
+ def initialize(**args)
149
+ update!(**args)
150
+ end
151
+
152
+ # Update properties of this object
153
+ def update!(**args)
154
+ @session_count = args[:session_count] if args.key?(:session_count)
155
+ @session_template = args[:session_template] if args.key?(:session_template)
156
+ end
157
+ end
158
+
159
+ # The response for BatchCreateSessions.
160
+ class BatchCreateSessionsResponse
161
+ include Google::Apis::Core::Hashable
162
+
163
+ # The freshly created sessions.
164
+ # Corresponds to the JSON property `session`
165
+ # @return [Array<Google::Apis::SpannerV1::Session>]
166
+ attr_accessor :session
167
+
168
+ def initialize(**args)
169
+ update!(**args)
170
+ end
171
+
172
+ # Update properties of this object
173
+ def update!(**args)
174
+ @session = args[:session] if args.key?(:session)
175
+ end
176
+ end
177
+
178
+ # The request for BeginTransaction.
179
+ class BeginTransactionRequest
180
+ include Google::Apis::Core::Hashable
181
+
182
+ # # Transactions Each session can have at most one active transaction at a time (
183
+ # note that standalone reads and queries use a transaction internally and do
184
+ # count towards the one transaction limit). After the active transaction is
185
+ # completed, the session can immediately be re-used for the next transaction. It
186
+ # is not necessary to create a new session for each transaction. # Transaction
187
+ # Modes Cloud Spanner supports three transaction modes: 1. Locking read-write.
188
+ # This type of transaction is the only way to write data into Cloud Spanner.
189
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
190
+ # commit. Locking read-write transactions may abort, requiring the application
191
+ # to retry. 2. Snapshot read-only. This transaction type provides guaranteed
192
+ # consistency across several reads, but does not allow writes. Snapshot read-
193
+ # only transactions can be configured to read at timestamps in the past.
194
+ # Snapshot read-only transactions do not need to be committed. 3. Partitioned
195
+ # DML. This type of transaction is used to execute a single Partitioned DML
196
+ # statement. Partitioned DML partitions the key space and runs the DML statement
197
+ # over each partition in parallel using separate, internal transactions that
198
+ # commit independently. Partitioned DML transactions do not need to be committed.
199
+ # For transactions that only read, snapshot read-only transactions provide
200
+ # simpler semantics and are almost always faster. In particular, read-only
201
+ # transactions do not take locks, so they do not conflict with read-write
202
+ # transactions. As a consequence of not taking locks, they also do not abort, so
203
+ # retry loops are not needed. Transactions may only read/write data in a single
204
+ # database. They may, however, read/write data in different tables within that
205
+ # database. ## Locking Read-Write Transactions Locking transactions may be used
206
+ # to atomically read-modify-write data anywhere in a database. This type of
207
+ # transaction is externally consistent. Clients should attempt to minimize the
208
+ # amount of time a transaction is active. Faster transactions commit with higher
209
+ # probability and cause less contention. Cloud Spanner attempts to keep read
210
+ # locks active as long as the transaction continues to do reads, and the
211
+ # transaction has not been terminated by Commit or Rollback. Long periods of
212
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
213
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
214
+ # more reads or SQL statements followed by Commit. At any time before Commit,
215
+ # the client can send a Rollback request to abort the transaction. ### Semantics
216
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
217
+ # still valid at commit time, and it is able to acquire write locks for all
218
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
219
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
220
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
221
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
222
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
223
+ # exclusion other than between Cloud Spanner transactions themselves. ###
224
+ # Retrying Aborted Transactions When a transaction aborts, the application can
225
+ # choose to retry the whole transaction again. To maximize the chances of
226
+ # successfully committing the retry, the client should execute the retry in the
227
+ # same session as the original attempt. The original session's lock priority
228
+ # increases with each consecutive abort, meaning that each attempt has a
229
+ # slightly better chance of success than the previous. Under some circumstances (
230
+ # e.g., many transactions attempting to modify the same row(s)), a transaction
231
+ # can abort many times in a short period before successfully committing. Thus,
232
+ # it is not a good idea to cap the number of retries a transaction can attempt;
233
+ # instead, it is better to limit the total amount of wall time spent retrying. ##
234
+ # # Idle Transactions A transaction is considered idle if it has no outstanding
235
+ # reads or SQL queries and has not started a read or SQL query within the last
236
+ # 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don'
237
+ # t hold on to locks indefinitely. In that case, the commit will fail with error
238
+ # `ABORTED`. If this behavior is undesirable, periodically executing a simple
239
+ # SQL query in the transaction (e.g., `SELECT 1`) prevents the transaction from
240
+ # becoming idle. ## Snapshot Read-Only Transactions Snapshot read-only
241
+ # transactions provide a simpler method than locking read-write transactions
242
+ # for doing several consistent reads. However, this type of transaction does not
243
+ # support writes. Snapshot transactions do not take locks. Instead, they work by
244
+ # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
245
+ # Since they do not acquire locks, they do not block concurrent read-write
246
+ # transactions. Unlike locking read-write transactions, snapshot read-only
247
+ # transactions never abort. They can fail if the chosen read timestamp is
248
+ # garbage collected; however, the default garbage collection policy is generous
249
+ # enough that most applications do not need to worry about this in practice.
250
+ # Snapshot read-only transactions do not need to call Commit or Rollback (and in
251
+ # fact are not permitted to do so). To execute a snapshot transaction, the
252
+ # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
253
+ # read timestamp. The types of timestamp bound are: - Strong (the default). -
254
+ # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
255
+ # is geographically distributed, stale read-only transactions can execute more
256
+ # quickly than strong or read-write transactions, because they are able to
257
+ # execute far from the leader replica. Each type of timestamp bound is discussed
258
+ # in detail below. ### Strong Strong reads are guaranteed to see the effects of
259
+ # all transactions that have committed before the start of the read. Furthermore,
260
+ # all rows yielded by a single read are consistent with each other -- if any
261
+ # part of the read observes a transaction, all parts of the read see the
262
+ # transaction. Strong reads are not repeatable: two consecutive strong read-only
263
+ # transactions might return inconsistent results if there are concurrent writes.
264
+ # If consistency across reads is required, the reads should be executed within a
265
+ # transaction or at an exact read timestamp. See TransactionOptions.ReadOnly.
266
+ # strong. ### Exact Staleness These timestamp bounds execute reads at a user-
267
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
268
+ # prefix of the global transaction history: they observe modifications done by
269
+ # all transactions with a commit timestamp <= the read timestamp, and observe
270
+ # none of the modifications done by transactions with a larger commit timestamp.
271
+ # They will block until all conflicting transactions that may be assigned commit
272
+ # timestamps <= the read timestamp have finished. The timestamp can either be
273
+ # expressed as an absolute Cloud Spanner commit timestamp or a staleness
274
+ # relative to the current time. These modes do not require a "negotiation phase"
275
+ # to pick a timestamp. As a result, they execute slightly faster than the
276
+ # equivalent boundedly stale concurrency modes. On the other hand, boundedly
277
+ # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
278
+ # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. ### Bounded
279
+ # Staleness Bounded staleness modes allow Cloud Spanner to pick the read
280
+ # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
281
+ # the newest timestamp within the staleness bound that allows execution of the
282
+ # reads at the closest available replica without blocking. All rows yielded are
283
+ # consistent with each other -- if any part of the read observes a transaction,
284
+ # all parts of the read see the transaction. Boundedly stale reads are not
285
+ # repeatable: two stale reads, even if they use the same staleness bound, can
286
+ # execute at different timestamps and thus return inconsistent results.
287
+ # Boundedly stale reads execute in two phases: the first phase negotiates a
288
+ # timestamp among all replicas needed to serve the read. In the second phase,
289
+ # reads are executed at the negotiated timestamp. As a result of the two-phase
290
+ # execution, bounded staleness reads are usually a little slower than comparable
291
+ # exact staleness reads. However, they are typically able to return fresher
292
+ # results, and are more likely to execute at the closest replica. Because the
293
+ # timestamp negotiation requires up-front knowledge of which rows will be read,
294
+ # it can only be used with single-use read-only transactions. See
295
+ # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
296
+ # min_read_timestamp. ### Old Read Timestamps and Garbage Collection Cloud
297
+ # Spanner continuously garbage collects deleted and overwritten data in the
298
+ # background to reclaim storage space. This process is known as "version GC". By
299
+ # default, version GC reclaims versions after they are one hour old. Because of
300
+ # this, Cloud Spanner cannot perform reads at read timestamps more than one hour
301
+ # in the past. This restriction also applies to in-progress reads and/or SQL
302
+ # queries whose timestamps become too old while executing. Reads and SQL queries
303
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. ##
304
+ # Partitioned DML Transactions Partitioned DML transactions are used to execute
305
+ # DML statements with a different execution strategy that provides different,
306
+ # and often better, scalability properties for large, table-wide operations than
307
+ # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
308
+ # workload, should prefer using ReadWrite transactions. Partitioned DML
309
+ # partitions the keyspace and runs the DML statement on each partition in
310
+ # separate, internal transactions. These transactions commit automatically when
311
+ # complete, and run independently from one another. To reduce lock contention,
312
+ # this execution strategy only acquires read locks on rows that match the WHERE
313
+ # clause of the statement. Additionally, the smaller per-partition transactions
314
+ # hold locks for less time. That said, Partitioned DML is not a drop-in
315
+ # replacement for standard DML used in ReadWrite transactions. - The DML
316
+ # statement must be fully-partitionable. Specifically, the statement must be
317
+ # expressible as the union of many statements which each access only a single
318
+ # row of the table. - The statement is not applied atomically to all rows of the
319
+ # table. Rather, the statement is applied atomically to partitions of the table,
320
+ # in independent transactions. Secondary index rows are updated atomically with
321
+ # the base table rows. - Partitioned DML does not guarantee exactly-once
322
+ # execution semantics against a partition. The statement will be applied at
323
+ # least once to each partition. It is strongly recommended that the DML
324
+ # statement be idempotent to avoid unexpected results. For instance, it
325
+ # is potentially dangerous to run a statement such as `UPDATE table SET column =
326
+ # column + 1` as it could be run multiple times against some rows. - The
327
+ # partitions are committed automatically - there is no support for Commit or
328
+ # Rollback. If the call returns an error, or if the client issuing the
329
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
330
+ # on them successfully. It is also possible that the statement was never executed
331
+ # against other rows. - Partitioned DML transactions may only contain the
332
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
333
+ # If any error is encountered during the execution of the partitioned DML
334
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
335
+ # value that cannot be stored due to schema constraints), then the operation is
336
+ # stopped at that point and an error is returned. It is possible that at this
337
+ # point, some partitions have been committed (or even committed multiple times),
338
+ # and other partitions have not been run at all. Given the above, Partitioned
339
+ # DML is a good fit for large, database-wide operations that are idempotent, such
340
+ # as deleting old rows from a very large table.
341
+ # Corresponds to the JSON property `options`
342
+ # @return [Google::Apis::SpannerV1::TransactionOptions]
343
+ attr_accessor :options
344
+
345
+ def initialize(**args)
346
+ update!(**args)
347
+ end
348
+
349
+ # Update properties of this object
350
+ def update!(**args)
351
+ @options = args[:options] if args.key?(:options)
352
+ end
353
+ end
354
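The doc comment above recommends retrying an aborted transaction as a whole, in the same session, and bounding the total wall time spent retrying rather than capping the retry count. A minimal plain-Ruby sketch of such a loop; the `Aborted` error class and the transaction body passed as a block are hypothetical stand-ins for the client library's actual error type and calls:

```ruby
# Retry an ABORTED transaction with exponential backoff, bounding total
# wall time instead of the number of attempts. `Aborted` is a hypothetical
# placeholder for the client's real abort error.
class Aborted < StandardError; end

def commit_with_retry(deadline_seconds: 60, base_delay: 0.1)
  start = Time.now
  delay = base_delay
  begin
    yield # run the whole transaction body, including Commit
  rescue Aborted
    elapsed = Time.now - start
    raise if elapsed + delay > deadline_seconds # wall-time budget exhausted
    sleep(delay)
    delay = [delay * 2, 8.0].min # exponential backoff, capped
    retry
  end
end
```

Because the session's lock priority rises with each consecutive abort, retrying the whole block (not just the commit) in the same session gives each attempt a slightly better chance of success.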
+
355
+ # Associates `members` with a `role`.
356
+ class Binding
357
+ include Google::Apis::Core::Hashable
358
+
359
+ # Represents a textual expression in the Common Expression Language (CEL) syntax.
360
+ # CEL is a C-like expression language. The syntax and semantics of CEL are
361
+ # documented at https://github.com/google/cel-spec. Example (Comparison): title:
362
+ # "Summary size limit" description: "Determines if a summary is less than 100
363
+ # chars" expression: "document.summary.size() < 100" Example (Equality): title: "
364
+ # Requestor is owner" description: "Determines if requestor is the document
365
+ # owner" expression: "document.owner == request.auth.claims.email" Example (
366
+ # Logic): title: "Public documents" description: "Determine whether the document
367
+ # should be publicly visible" expression: "document.type != 'private' &&
368
+ # document.type != 'internal'" Example (Data Manipulation): title: "Notification
369
+ # string" description: "Create a notification string with a timestamp."
370
+ # expression: "'New message received at ' + string(document.create_time)" The
371
+ # exact variables and functions that may be referenced within an expression are
372
+ # determined by the service that evaluates it. See the service documentation for
373
+ # additional information.
374
+ # Corresponds to the JSON property `condition`
375
+ # @return [Google::Apis::SpannerV1::Expr]
376
+ attr_accessor :condition
377
+
378
+ # Specifies the identities requesting access for a Cloud Platform resource. `
379
+ # members` can have the following values: * `allUsers`: A special identifier
380
+ # that represents anyone who is on the internet; with or without a Google
381
+ # account. * `allAuthenticatedUsers`: A special identifier that represents
382
+ # anyone who is authenticated with a Google account or a service account. * `
383
+ # user:`emailid``: An email address that represents a specific Google account.
384
+ # For example, `alice@example.com` . * `serviceAccount:`emailid``: An email
385
+ # address that represents a service account. For example, `my-other-app@appspot.
386
+ # gserviceaccount.com`. * `group:`emailid``: An email address that represents a
387
+ # Google group. For example, `admins@example.com`. * `deleted:user:`emailid`?uid=
388
+ # `uniqueid``: An email address (plus unique identifier) representing a user
389
+ # that has been recently deleted. For example, `alice@example.com?uid=
390
+ # 123456789012345678901`. If the user is recovered, this value reverts to `user:`
391
+ # emailid`` and the recovered user retains the role in the binding. * `deleted:
392
+ # serviceAccount:`emailid`?uid=`uniqueid``: An email address (plus unique
393
+ # identifier) representing a service account that has been recently deleted. For
394
+ # example, `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
395
+ # If the service account is undeleted, this value reverts to `serviceAccount:`
396
+ # emailid`` and the undeleted service account retains the role in the binding. *
397
+ # `deleted:group:`emailid`?uid=`uniqueid``: An email address (plus unique
398
+ # identifier) representing a Google group that has been recently deleted. For
399
+ # example, `admins@example.com?uid=123456789012345678901`. If the group is
400
+ # recovered, this value reverts to `group:`emailid`` and the recovered group
401
+ # retains the role in the binding. * `domain:`domain``: The G Suite domain (
402
+ # primary) that represents all the users of that domain. For example, `google.
403
+ # com` or `example.com`.
404
+ # Corresponds to the JSON property `members`
405
+ # @return [Array<String>]
406
+ attr_accessor :members
407
+
408
+ # Role that is assigned to `members`. For example, `roles/viewer`, `roles/editor`
409
+ # , or `roles/owner`.
410
+ # Corresponds to the JSON property `role`
411
+ # @return [String]
412
+ attr_accessor :role
413
+
414
+ def initialize(**args)
415
+ update!(**args)
416
+ end
417
+
418
+ # Update properties of this object
419
+ def update!(**args)
420
+ @condition = args[:condition] if args.key?(:condition)
421
+ @members = args[:members] if args.key?(:members)
422
+ @role = args[:role] if args.key?(:role)
423
+ end
424
+ end
425
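The `members` formats documented above follow a `prefix:value` convention. As an illustration only (this is not a library API), a small classifier for those member strings:

```ruby
# Classify an IAM member string into its principal type, following the
# formats documented for `members`. Illustrative sketch, not a library API.
def member_type(member)
  case member
  when "allUsers", "allAuthenticatedUsers" then :special
  when /\Adeleted:/        then :deleted
  when /\Auser:/           then :user
  when /\AserviceAccount:/ then :service_account
  when /\Agroup:/          then :group
  when /\Adomain:/         then :domain
  else :unknown
  end
end
```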
+
426
+ # Metadata associated with a parent-child relationship appearing in a PlanNode.
427
+ class ChildLink
428
+ include Google::Apis::Core::Hashable
429
+
430
+ # The node to which the link points.
431
+ # Corresponds to the JSON property `childIndex`
432
+ # @return [Fixnum]
433
+ attr_accessor :child_index
434
+
435
+ # The type of the link. For example, in Hash Joins this could be used to
436
+ # distinguish between the build child and the probe child, or in the case of the
437
+ # child being an output variable, to represent the tag associated with the
438
+ # output variable.
439
+ # Corresponds to the JSON property `type`
440
+ # @return [String]
441
+ attr_accessor :type
442
+
443
+ # Only present if the child node is SCALAR and corresponds to an output variable
444
+ # of the parent node. The field carries the name of the output variable. For
445
+ # example, a `TableScan` operator that reads rows from a table will have child
446
+ # links to the `SCALAR` nodes representing the output variables created for each
447
+ # column that is read by the operator. The corresponding `variable` fields will
448
+ # be set to the variable names assigned to the columns.
449
+ # Corresponds to the JSON property `variable`
450
+ # @return [String]
451
+ attr_accessor :variable
452
+
453
+ def initialize(**args)
454
+ update!(**args)
455
+ end
456
+
457
+ # Update properties of this object
458
+ def update!(**args)
459
+ @child_index = args[:child_index] if args.key?(:child_index)
460
+ @type = args[:type] if args.key?(:type)
461
+ @variable = args[:variable] if args.key?(:variable)
462
+ end
463
+ end
464
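The `child_index` field points into the flat node array of the enclosing query plan. A sketch of walking those links to render the operator tree, using plain hashes in place of the generated `PlanNode`/`ChildLink` classes:

```ruby
# Walk a query plan's parent->child links, collecting an indented operator
# tree. Plain hashes stand in for PlanNode/ChildLink objects.
def plan_tree(nodes, index = 0, depth = 0, out = [])
  node = nodes[index]
  out << "#{'  ' * depth}#{node[:display_name]}"
  (node[:child_links] || []).each do |link|
    plan_tree(nodes, link[:child_index], depth + 1, out)
  end
  out
end
```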
+
465
+ # The request for Commit.
466
+ class CommitRequest
467
+ include Google::Apis::Core::Hashable
468
+
469
+ # The mutations to be executed when this transaction commits. All mutations are
470
+ # applied atomically, in the order they appear in this list.
471
+ # Corresponds to the JSON property `mutations`
472
+ # @return [Array<Google::Apis::SpannerV1::Mutation>]
473
+ attr_accessor :mutations
474
+
475
+ # # Transactions Each session can have at most one active transaction at a time (
476
+ # note that standalone reads and queries use a transaction internally and do
477
+ # count towards the one transaction limit). After the active transaction is
478
+ # completed, the session can immediately be re-used for the next transaction. It
479
+ # is not necessary to create a new session for each transaction. # Transaction
480
+ # Modes Cloud Spanner supports three transaction modes: 1. Locking read-write.
481
+ # This type of transaction is the only way to write data into Cloud Spanner.
482
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
483
+ # commit. Locking read-write transactions may abort, requiring the application
484
+ # to retry. 2. Snapshot read-only. This transaction type provides guaranteed
485
+ # consistency across several reads, but does not allow writes. Snapshot read-
486
+ # only transactions can be configured to read at timestamps in the past.
487
+ # Snapshot read-only transactions do not need to be committed. 3. Partitioned
488
+ # DML. This type of transaction is used to execute a single Partitioned DML
489
+ # statement. Partitioned DML partitions the key space and runs the DML statement
490
+ # over each partition in parallel using separate, internal transactions that
491
+ # commit independently. Partitioned DML transactions do not need to be committed.
492
+ # For transactions that only read, snapshot read-only transactions provide
493
+ # simpler semantics and are almost always faster. In particular, read-only
494
+ # transactions do not take locks, so they do not conflict with read-write
495
+ # transactions. As a consequence of not taking locks, they also do not abort, so
496
+ # retry loops are not needed. Transactions may only read/write data in a single
497
+ # database. They may, however, read/write data in different tables within that
498
+ # database. ## Locking Read-Write Transactions Locking transactions may be used
499
+ # to atomically read-modify-write data anywhere in a database. This type of
500
+ # transaction is externally consistent. Clients should attempt to minimize the
501
+ # amount of time a transaction is active. Faster transactions commit with higher
502
+ # probability and cause less contention. Cloud Spanner attempts to keep read
503
+ # locks active as long as the transaction continues to do reads, and the
504
+ # transaction has not been terminated by Commit or Rollback. Long periods of
505
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
506
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
507
+ # more reads or SQL statements followed by Commit. At any time before Commit,
508
+ # the client can send a Rollback request to abort the transaction. ### Semantics
509
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
510
+ # still valid at commit time, and it is able to acquire write locks for all
511
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
512
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
513
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
514
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
515
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
516
+ # exclusion other than between Cloud Spanner transactions themselves. ###
517
+ # Retrying Aborted Transactions When a transaction aborts, the application can
518
+ # choose to retry the whole transaction again. To maximize the chances of
519
+ # successfully committing the retry, the client should execute the retry in the
520
+ # same session as the original attempt. The original session's lock priority
521
+ # increases with each consecutive abort, meaning that each attempt has a
522
+ # slightly better chance of success than the previous. Under some circumstances (
523
+ # e.g., many transactions attempting to modify the same row(s)), a transaction
524
+ # can abort many times in a short period before successfully committing. Thus,
525
+ # it is not a good idea to cap the number of retries a transaction can attempt;
526
+ # instead, it is better to limit the total amount of wall time spent retrying. ##
527
+ # # Idle Transactions A transaction is considered idle if it has no outstanding
528
+ # reads or SQL queries and has not started a read or SQL query within the last
529
+ # 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don'
530
+ # t hold on to locks indefinitely. In that case, the commit will fail with error
531
+ # `ABORTED`. If this behavior is undesirable, periodically executing a simple
532
+ # SQL query in the transaction (e.g., `SELECT 1`) prevents the transaction from
533
+ # becoming idle. ## Snapshot Read-Only Transactions Snapshot read-only
534
+ # transactions provide a simpler method than locking read-write transactions
535
+ # for doing several consistent reads. However, this type of transaction does not
536
+ # support writes. Snapshot transactions do not take locks. Instead, they work by
537
+ # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
538
+ # Since they do not acquire locks, they do not block concurrent read-write
539
+ # transactions. Unlike locking read-write transactions, snapshot read-only
540
+ # transactions never abort. They can fail if the chosen read timestamp is
541
+ # garbage collected; however, the default garbage collection policy is generous
542
+ # enough that most applications do not need to worry about this in practice.
543
+ # Snapshot read-only transactions do not need to call Commit or Rollback (and in
544
+ # fact are not permitted to do so). To execute a snapshot transaction, the
545
+ # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
546
+ # read timestamp. The types of timestamp bound are: - Strong (the default). -
547
+ # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
548
+ # is geographically distributed, stale read-only transactions can execute more
549
+ # quickly than strong or read-write transactions, because they are able to
550
+ # execute far from the leader replica. Each type of timestamp bound is discussed
551
+ # in detail below. ### Strong Strong reads are guaranteed to see the effects of
552
+ # all transactions that have committed before the start of the read. Furthermore,
553
+ # all rows yielded by a single read are consistent with each other -- if any
554
+ # part of the read observes a transaction, all parts of the read see the
555
+ # transaction. Strong reads are not repeatable: two consecutive strong read-only
556
+ # transactions might return inconsistent results if there are concurrent writes.
557
+ # If consistency across reads is required, the reads should be executed within a
558
+ # transaction or at an exact read timestamp. See TransactionOptions.ReadOnly.
559
+ # strong. ### Exact Staleness These timestamp bounds execute reads at a user-
560
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
561
+ # prefix of the global transaction history: they observe modifications done by
562
+ # all transactions with a commit timestamp <= the read timestamp, and observe
563
+ # none of the modifications done by transactions with a larger commit timestamp.
564
+ # They will block until all conflicting transactions that may be assigned commit
565
+ # timestamps <= the read timestamp have finished. The timestamp can either be
566
+ # expressed as an absolute Cloud Spanner commit timestamp or a staleness
567
+ # relative to the current time. These modes do not require a "negotiation phase"
568
+ # to pick a timestamp. As a result, they execute slightly faster than the
569
+ # equivalent boundedly stale concurrency modes. On the other hand, boundedly
570
+ # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
571
+ # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. ### Bounded
572
+ # Staleness Bounded staleness modes allow Cloud Spanner to pick the read
573
+ # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
574
+ # the newest timestamp within the staleness bound that allows execution of the
575
+ # reads at the closest available replica without blocking. All rows yielded are
576
+ # consistent with each other -- if any part of the read observes a transaction,
577
+ # all parts of the read see the transaction. Boundedly stale reads are not
578
+ # repeatable: two stale reads, even if they use the same staleness bound, can
579
+ # execute at different timestamps and thus return inconsistent results.
580
+ # Boundedly stale reads execute in two phases: the first phase negotiates a
581
+ # timestamp among all replicas needed to serve the read. In the second phase,
582
+ # reads are executed at the negotiated timestamp. As a result of the two-phase
583
+ # execution, bounded staleness reads are usually a little slower than comparable
584
+ # exact staleness reads. However, they are typically able to return fresher
585
+ # results, and are more likely to execute at the closest replica. Because the
586
+ # timestamp negotiation requires up-front knowledge of which rows will be read,
587
+ # it can only be used with single-use read-only transactions. See
588
+ # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
589
+ # min_read_timestamp. ### Old Read Timestamps and Garbage Collection Cloud
590
+ # Spanner continuously garbage collects deleted and overwritten data in the
591
+ # background to reclaim storage space. This process is known as "version GC". By
592
+ # default, version GC reclaims versions after they are one hour old. Because of
593
+ # this, Cloud Spanner cannot perform reads at read timestamps more than one hour
594
+ # in the past. This restriction also applies to in-progress reads and/or SQL
595
+ # queries whose timestamps become too old while executing. Reads and SQL queries
596
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. ##
597
+ # Partitioned DML Transactions Partitioned DML transactions are used to execute
598
+ # DML statements with a different execution strategy that provides different,
599
+ # and often better, scalability properties for large, table-wide operations than
600
+ # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
601
+ # workload, should prefer using ReadWrite transactions. Partitioned DML
602
+ # partitions the keyspace and runs the DML statement on each partition in
603
+ # separate, internal transactions. These transactions commit automatically when
604
+ # complete, and run independently from one another. To reduce lock contention,
605
+ # this execution strategy only acquires read locks on rows that match the WHERE
606
+ # clause of the statement. Additionally, the smaller per-partition transactions
607
+ # hold locks for less time. That said, Partitioned DML is not a drop-in
608
+ # replacement for standard DML used in ReadWrite transactions. - The DML
609
+ # statement must be fully-partitionable. Specifically, the statement must be
610
+ # expressible as the union of many statements which each access only a single
611
+ # row of the table. - The statement is not applied atomically to all rows of the
612
+ # table. Rather, the statement is applied atomically to partitions of the table,
613
+ # in independent transactions. Secondary index rows are updated atomically with
614
+ # the base table rows. - Partitioned DML does not guarantee exactly-once
615
+ # execution semantics against a partition. The statement will be applied at
616
+ # least once to each partition. It is strongly recommended that the DML
617
+ # statement be idempotent to avoid unexpected results. For instance, it
618
+ # is potentially dangerous to run a statement such as `UPDATE table SET column =
619
+ # column + 1` as it could be run multiple times against some rows. - The
620
+ # partitions are committed automatically - there is no support for Commit or
621
+ # Rollback. If the call returns an error, or if the client issuing the
622
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
623
+ # on them successfully. It is also possible that the statement was never executed
624
+ # against other rows. - Partitioned DML transactions may only contain the
625
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
626
+ # If any error is encountered during the execution of the partitioned DML
627
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
628
+ # value that cannot be stored due to schema constraints), then the operation is
629
+ # stopped at that point and an error is returned. It is possible that at this
630
+ # point, some partitions have been committed (or even committed multiple times),
631
+ # and other partitions have not been run at all. Given the above, Partitioned
632
+ # DML is a good fit for large, database-wide operations that are idempotent, such
633
+ # as deleting old rows from a very large table.
634
+ # Corresponds to the JSON property `singleUseTransaction`
635
+ # @return [Google::Apis::SpannerV1::TransactionOptions]
636
+ attr_accessor :single_use_transaction
637
+
638
+ # Commit a previously-started transaction.
639
+ # Corresponds to the JSON property `transactionId`
640
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
641
+ # @return [String]
642
+ attr_accessor :transaction_id
643
+
644
+ def initialize(**args)
645
+ update!(**args)
646
+ end
647
+
648
+ # Update properties of this object
649
+ def update!(**args)
650
+ @mutations = args[:mutations] if args.key?(:mutations)
651
+ @single_use_transaction = args[:single_use_transaction] if args.key?(:single_use_transaction)
652
+ @transaction_id = args[:transaction_id] if args.key?(:transaction_id)
653
+ end
654
+ end
655
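As the `transactionId` note says, bytes fields travel over JSON as base64, and the client library performs the encode/decode round trip automatically. A sketch of what that round trip looks like (the id bytes below are made up for illustration):

```ruby
require "base64"

# JSON transports bytes fields such as `transactionId` as base64 strings;
# the client library does this round trip for you. Example bytes are made up.
raw_id  = "\x01\x02txn".b
encoded = Base64.strict_encode64(raw_id)   # what appears on the wire
decoded = Base64.strict_decode64(encoded)  # what the accessor yields
```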
+
656
+ # The response for Commit.
657
+ class CommitResponse
658
+ include Google::Apis::Core::Hashable
659
+
660
+ # The Cloud Spanner timestamp at which the transaction committed.
661
+ # Corresponds to the JSON property `commitTimestamp`
662
+ # @return [String]
663
+ attr_accessor :commit_timestamp
664
+
665
+ def initialize(**args)
666
+ update!(**args)
667
+ end
668
+
669
+ # Update properties of this object
670
+ def update!(**args)
671
+ @commit_timestamp = args[:commit_timestamp] if args.key?(:commit_timestamp)
672
+ end
673
+ end
674
+
+ # Metadata type for the operation returned by CreateBackup.
+ class CreateBackupMetadata
+ include Google::Apis::Core::Hashable
+
+ # The time at which cancellation of this operation was received. Operations.
+ # CancelOperation starts asynchronous cancellation on a long-running operation.
+ # The server makes a best effort to cancel the operation, but success is not
+ # guaranteed. Clients can use Operations.GetOperation or other methods to check
+ # whether the cancellation succeeded or whether the operation completed despite
+ # cancellation. On successful cancellation, the operation is not deleted;
+ # instead, it becomes an operation with an Operation.error value with a google.
+ # rpc.Status.code of 1, corresponding to `Code.CANCELLED`.
+ # Corresponds to the JSON property `cancelTime`
+ # @return [String]
+ attr_accessor :cancel_time
+
+ # The name of the database the backup is created from.
+ # Corresponds to the JSON property `database`
+ # @return [String]
+ attr_accessor :database
+
+ # The name of the backup being created.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Encapsulates progress related information for a Cloud Spanner long running
+ # operation.
+ # Corresponds to the JSON property `progress`
+ # @return [Google::Apis::SpannerV1::OperationProgress]
+ attr_accessor :progress
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @cancel_time = args[:cancel_time] if args.key?(:cancel_time)
+ @database = args[:database] if args.key?(:database)
+ @name = args[:name] if args.key?(:name)
+ @progress = args[:progress] if args.key?(:progress)
+ end
+ end
+
+ # Metadata type for the operation returned by CreateDatabase.
+ class CreateDatabaseMetadata
+ include Google::Apis::Core::Hashable
+
+ # The database being created.
+ # Corresponds to the JSON property `database`
+ # @return [String]
+ attr_accessor :database
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @database = args[:database] if args.key?(:database)
+ end
+ end
+
+ # The request for CreateDatabase.
+ class CreateDatabaseRequest
+ include Google::Apis::Core::Hashable
+
+ # Required. A `CREATE DATABASE` statement, which specifies the ID of the new
+ # database. The database ID must conform to the regular expression `a-z*[a-z0-9]`
+ # and be between 2 and 30 characters in length. If the database ID is a
+ # reserved word or if it contains a hyphen, the database ID must be enclosed in
+ # backticks (`` ` ``).
+ # Corresponds to the JSON property `createStatement`
+ # @return [String]
+ attr_accessor :create_statement
+
+ # Optional. A list of DDL statements to run inside the newly created database.
+ # Statements can create tables, indexes, etc. These statements execute
+ # atomically with the creation of the database: if there is an error in any
+ # statement, the database is not created.
+ # Corresponds to the JSON property `extraStatements`
+ # @return [Array<String>]
+ attr_accessor :extra_statements
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @create_statement = args[:create_statement] if args.key?(:create_statement)
+ @extra_statements = args[:extra_statements] if args.key?(:extra_statements)
+ end
+ end
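The `create_statement` doc above constrains database IDs to the shorthand pattern `a-z*[a-z0-9]` at 2 to 30 characters. A hedged sketch of a client-side pre-check, assuming the usual expansion of that shorthand to "lowercase letter first, then letters, digits, hyphens, or underscores, ending in a letter or digit" (the helper name `valid_database_id?` and the exact character class are our assumptions, and the service remains the authoritative validator):

```ruby
# Sketch: pre-validate a database ID against the documented constraints
# (shorthand regex a-z*[a-z0-9], length 2-30). Illustrative only; the
# exact character class is an assumed expansion of the shorthand.
def valid_database_id?(id)
  id.length.between?(2, 30) && id.match?(/\A[a-z][a-z0-9_-]*[a-z0-9]\z/)
end
```

Note the doc's related rule: IDs containing a hyphen must be backtick-quoted inside the `CREATE DATABASE` statement itself.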
+
+ # Metadata type for the operation returned by CreateInstance.
+ class CreateInstanceMetadata
+ include Google::Apis::Core::Hashable
+
+ # The time at which this operation was cancelled. If set, this operation is in
+ # the process of undoing itself (which is guaranteed to succeed) and cannot be
+ # cancelled again.
+ # Corresponds to the JSON property `cancelTime`
+ # @return [String]
+ attr_accessor :cancel_time
+
+ # The time at which this operation failed or was completed successfully.
+ # Corresponds to the JSON property `endTime`
+ # @return [String]
+ attr_accessor :end_time
+
+ # An isolated set of Cloud Spanner resources on which databases can be hosted.
+ # Corresponds to the JSON property `instance`
+ # @return [Google::Apis::SpannerV1::Instance]
+ attr_accessor :instance
+
+ # The time at which the CreateInstance request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @cancel_time = args[:cancel_time] if args.key?(:cancel_time)
+ @end_time = args[:end_time] if args.key?(:end_time)
+ @instance = args[:instance] if args.key?(:instance)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ end
+ end
+
+ # The request for CreateInstance.
+ class CreateInstanceRequest
+ include Google::Apis::Core::Hashable
+
+ # An isolated set of Cloud Spanner resources on which databases can be hosted.
+ # Corresponds to the JSON property `instance`
+ # @return [Google::Apis::SpannerV1::Instance]
+ attr_accessor :instance
+
+ # Required. The ID of the instance to create. Valid identifiers are of the form `
+ # a-z*[a-z0-9]` and must be between 2 and 64 characters in length.
+ # Corresponds to the JSON property `instanceId`
+ # @return [String]
+ attr_accessor :instance_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @instance = args[:instance] if args.key?(:instance)
+ @instance_id = args[:instance_id] if args.key?(:instance_id)
+ end
+ end
+
+ # The request for CreateSession.
+ class CreateSessionRequest
+ include Google::Apis::Core::Hashable
+
+ # A session in the Cloud Spanner API.
+ # Corresponds to the JSON property `session`
+ # @return [Google::Apis::SpannerV1::Session]
+ attr_accessor :session
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @session = args[:session] if args.key?(:session)
+ end
+ end
+
+ # A Cloud Spanner database.
+ class Database
+ include Google::Apis::Core::Hashable
+
+ # Output only. If exists, the time at which the database creation started.
+ # Corresponds to the JSON property `createTime`
+ # @return [String]
+ attr_accessor :create_time
+
+ # Required. The name of the database. Values are of the form `projects//
+ # instances//databases/`, where `` is as specified in the `CREATE DATABASE`
+ # statement. This name can be passed to other API methods to identify the
+ # database.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Information about the database restore.
+ # Corresponds to the JSON property `restoreInfo`
+ # @return [Google::Apis::SpannerV1::RestoreInfo]
+ attr_accessor :restore_info
+
+ # Output only. The current database state.
+ # Corresponds to the JSON property `state`
+ # @return [String]
+ attr_accessor :state
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @create_time = args[:create_time] if args.key?(:create_time)
+ @name = args[:name] if args.key?(:name)
+ @restore_info = args[:restore_info] if args.key?(:restore_info)
+ @state = args[:state] if args.key?(:state)
+ end
+ end
+
+ # Arguments to delete operations.
+ class Delete
+ include Google::Apis::Core::Hashable
+
+ # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All the
+ # keys are expected to be in the same table or index. The keys need not be
+ # sorted in any particular way. If the same key is specified multiple times in
+ # the set (for example if two ranges, two keys, or a key and a range overlap),
+ # Cloud Spanner behaves as if the key were only specified once.
+ # Corresponds to the JSON property `keySet`
+ # @return [Google::Apis::SpannerV1::KeySet]
+ attr_accessor :key_set
+
+ # Required. The table whose rows will be deleted.
+ # Corresponds to the JSON property `table`
+ # @return [String]
+ attr_accessor :table
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @key_set = args[:key_set] if args.key?(:key_set)
+ @table = args[:table] if args.key?(:table)
+ end
+ end
+
+ # A generic empty message that you can re-use to avoid defining duplicated empty
+ # messages in your APIs. A typical example is to use it as the request or the
+ # response type of an API method. For instance: service Foo ` rpc Bar(google.
+ # protobuf.Empty) returns (google.protobuf.Empty); ` The JSON representation for
+ # `Empty` is empty JSON object ````.
+ class Empty
+ include Google::Apis::Core::Hashable
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ end
+ end
+
+ # The request for ExecuteBatchDml.
+ class ExecuteBatchDmlRequest
+ include Google::Apis::Core::Hashable
+
+ # Required. A per-transaction sequence number used to identify this request.
+ # This field makes each request idempotent such that if the request is received
+ # multiple times, at most one will succeed. The sequence number must be
+ # monotonically increasing within the transaction. If a request arrives for the
+ # first time with an out-of-order sequence number, the transaction may be
+ # aborted. Replays of previously handled requests will yield the same response
+ # as the first execution.
+ # Corresponds to the JSON property `seqno`
+ # @return [Fixnum]
+ attr_accessor :seqno
+
+ # Required. The list of statements to execute in this batch. Statements are
+ # executed serially, such that the effects of statement `i` are visible to
+ # statement `i+1`. Each statement must be a DML statement. Execution stops at
+ # the first failed statement; the remaining statements are not executed. Callers
+ # must provide at least one statement.
+ # Corresponds to the JSON property `statements`
+ # @return [Array<Google::Apis::SpannerV1::Statement>]
+ attr_accessor :statements
+
+ # This message is used to select the transaction in which a Read or ExecuteSql
+ # call runs. See TransactionOptions for more information about transactions.
+ # Corresponds to the JSON property `transaction`
+ # @return [Google::Apis::SpannerV1::TransactionSelector]
+ attr_accessor :transaction
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @seqno = args[:seqno] if args.key?(:seqno)
+ @statements = args[:statements] if args.key?(:statements)
+ @transaction = args[:transaction] if args.key?(:transaction)
+ end
+ end
+
+ # The response for ExecuteBatchDml. Contains a list of ResultSet messages, one
+ # for each DML statement that has successfully executed, in the same order as
+ # the statements in the request. If a statement fails, the status in the
+ # response body identifies the cause of the failure. To check for DML statements
+ # that failed, use the following approach: 1. Check the status in the response
+ # message. The google.rpc.Code enum value `OK` indicates that all statements
+ # were executed successfully. 2. If the status was not `OK`, check the number of
+ # result sets in the response. If the response contains `N` ResultSet messages,
+ # then statement `N+1` in the request failed. Example 1: * Request: 5 DML
+ # statements, all executed successfully. * Response: 5 ResultSet messages, with
+ # the status `OK`. Example 2: * Request: 5 DML statements. The third statement
+ # has a syntax error. * Response: 2 ResultSet messages, and a syntax error (`
+ # INVALID_ARGUMENT`) status. The number of ResultSet messages indicates that the
+ # third statement failed, and the fourth and fifth statements were not executed.
+ class ExecuteBatchDmlResponse
+ include Google::Apis::Core::Hashable
+
+ # One ResultSet for each statement in the request that ran successfully, in the
+ # same order as the statements in the request. Each ResultSet does not contain
+ # any rows. The ResultSetStats in each ResultSet contain the number of rows
+ # modified by the statement. Only the first ResultSet in the response contains
+ # valid ResultSetMetadata.
+ # Corresponds to the JSON property `resultSets`
+ # @return [Array<Google::Apis::SpannerV1::ResultSet>]
+ attr_accessor :result_sets
+
+ # The `Status` type defines a logical error model that is suitable for different
+ # programming environments, including REST APIs and RPC APIs. It is used by [
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
+ # data: error code, error message, and error details. You can find out more
+ # about this error model and how to work with it in the [API Design Guide](https:
+ # //cloud.google.com/apis/design/errors).
+ # Corresponds to the JSON property `status`
+ # @return [Google::Apis::SpannerV1::Status]
+ attr_accessor :status
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @result_sets = args[:result_sets] if args.key?(:result_sets)
+ @status = args[:status] if args.key?(:status)
+ end
+ end
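The ExecuteBatchDml doc above spells out how the count of returned ResultSet messages pinpoints the failed statement. A small standalone sketch of that bookkeeping (plain Ruby, no gem calls; the helper name is ours):

```ruby
# Sketch: given the batch size, the number of ResultSets returned, and
# whether the overall status was OK, report which statement failed.
# Per the doc, N returned ResultSets with a non-OK status mean statement
# N+1 (1-based) was the one that failed; later statements never ran.
def failed_statement_index(statement_count, result_set_count, ok:)
  return nil if ok                 # all statements executed successfully
  failed = result_set_count + 1    # 1-based index of the failing statement
  failed <= statement_count ? failed : nil
end
```

Applied to the doc's Example 2 (5 statements, 2 ResultSets, `INVALID_ARGUMENT` status), this yields statement 3.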
+
+ # The request for ExecuteSql and ExecuteStreamingSql.
+ class ExecuteSqlRequest
+ include Google::Apis::Core::Hashable
+
+ # It is not always possible for Cloud Spanner to infer the right SQL type from a
+ # JSON value. For example, values of type `BYTES` and values of type `STRING`
+ # both appear in params as JSON strings. In these cases, `param_types` can be
+ # used to specify the exact SQL type for some or all of the SQL statement
+ # parameters. See the definition of Type for more information about SQL types.
+ # Corresponds to the JSON property `paramTypes`
+ # @return [Hash<String,Google::Apis::SpannerV1::Type>]
+ attr_accessor :param_types
+
+ # Parameter names and values that bind to placeholders in the SQL string. A
+ # parameter placeholder consists of the `@` character followed by the parameter
+ # name (for example, `@firstName`). Parameter names must conform to the naming
+ # requirements of identifiers as specified at https://cloud.google.com/spanner/
+ # docs/lexical#identifiers. Parameters can appear anywhere that a literal value
+ # is expected. The same parameter name can be used more than once, for example: `
+ # "WHERE id > @msg_id AND id < @msg_id + 100"` It is an error to execute a SQL
+ # statement with unbound parameters.
+ # Corresponds to the JSON property `params`
+ # @return [Hash<String,Object>]
+ attr_accessor :params
+
+ # If present, results will be restricted to the specified partition previously
+ # created using PartitionQuery(). There must be an exact match for the values of
+ # fields common to this message and the PartitionQueryRequest message used to
+ # create this partition_token.
+ # Corresponds to the JSON property `partitionToken`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :partition_token
+
+ # Used to control the amount of debugging information returned in ResultSetStats.
+ # If partition_token is set, query_mode can only be set to QueryMode.NORMAL.
+ # Corresponds to the JSON property `queryMode`
+ # @return [String]
+ attr_accessor :query_mode
+
+ # Query optimizer configuration.
+ # Corresponds to the JSON property `queryOptions`
+ # @return [Google::Apis::SpannerV1::QueryOptions]
+ attr_accessor :query_options
+
+ # If this request is resuming a previously interrupted SQL statement execution, `
+ # resume_token` should be copied from the last PartialResultSet yielded before
+ # the interruption. Doing this enables the new SQL statement execution to resume
+ # where the last one left off. The rest of the request parameters must exactly
+ # match the request that yielded this token.
+ # Corresponds to the JSON property `resumeToken`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :resume_token
+
+ # A per-transaction sequence number used to identify this request. This field
+ # makes each request idempotent such that if the request is received multiple
+ # times, at most one will succeed. The sequence number must be monotonically
+ # increasing within the transaction. If a request arrives for the first time
+ # with an out-of-order sequence number, the transaction may be aborted. Replays
+ # of previously handled requests will yield the same response as the first
+ # execution. Required for DML statements. Ignored for queries.
+ # Corresponds to the JSON property `seqno`
+ # @return [Fixnum]
+ attr_accessor :seqno
+
+ # Required. The SQL string.
+ # Corresponds to the JSON property `sql`
+ # @return [String]
+ attr_accessor :sql
+
+ # This message is used to select the transaction in which a Read or ExecuteSql
+ # call runs. See TransactionOptions for more information about transactions.
+ # Corresponds to the JSON property `transaction`
+ # @return [Google::Apis::SpannerV1::TransactionSelector]
+ attr_accessor :transaction
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @param_types = args[:param_types] if args.key?(:param_types)
+ @params = args[:params] if args.key?(:params)
+ @partition_token = args[:partition_token] if args.key?(:partition_token)
+ @query_mode = args[:query_mode] if args.key?(:query_mode)
+ @query_options = args[:query_options] if args.key?(:query_options)
+ @resume_token = args[:resume_token] if args.key?(:resume_token)
+ @seqno = args[:seqno] if args.key?(:seqno)
+ @sql = args[:sql] if args.key?(:sql)
+ @transaction = args[:transaction] if args.key?(:transaction)
+ end
+ end
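The `params` doc above notes that placeholders are `@` followed by an identifier and that executing with unbound parameters is an error. A rough sketch of a client-side sanity check for unbound placeholders (the helper name and regex are ours; the regex is a simplification that does not skip quoted literals or comments):

```ruby
# Sketch: collect @placeholder names from a SQL string and report those
# with no matching key in params. Simplified on purpose: it also scans
# inside string literals and comments, so treat it as a sanity check only.
def unbound_params(sql, params)
  placeholders = sql.scan(/@(\w+)/).flatten.uniq
  placeholders - params.keys.map(&:to_s)
end
```

Run against the doc's own example, `"WHERE id > @msg_id AND id < @msg_id + 100"`, the repeated `@msg_id` collapses to a single required binding.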
+
+ # Represents a textual expression in the Common Expression Language (CEL) syntax.
+ # CEL is a C-like expression language. The syntax and semantics of CEL are
+ # documented at https://github.com/google/cel-spec. Example (Comparison): title:
+ # "Summary size limit" description: "Determines if a summary is less than 100
+ # chars" expression: "document.summary.size() < 100" Example (Equality): title: "
+ # Requestor is owner" description: "Determines if requestor is the document
+ # owner" expression: "document.owner == request.auth.claims.email" Example (
+ # Logic): title: "Public documents" description: "Determine whether the document
+ # should be publicly visible" expression: "document.type != 'private' &&
+ # document.type != 'internal'" Example (Data Manipulation): title: "Notification
+ # string" description: "Create a notification string with a timestamp."
+ # expression: "'New message received at ' + string(document.create_time)" The
+ # exact variables and functions that may be referenced within an expression are
+ # determined by the service that evaluates it. See the service documentation for
+ # additional information.
+ class Expr
+ include Google::Apis::Core::Hashable
+
+ # Optional. Description of the expression. This is a longer text which describes
+ # the expression, e.g. when hovered over it in a UI.
+ # Corresponds to the JSON property `description`
+ # @return [String]
+ attr_accessor :description
+
+ # Textual representation of an expression in Common Expression Language syntax.
+ # Corresponds to the JSON property `expression`
+ # @return [String]
+ attr_accessor :expression
+
+ # Optional. String indicating the location of the expression for error reporting,
+ # e.g. a file name and a position in the file.
+ # Corresponds to the JSON property `location`
+ # @return [String]
+ attr_accessor :location
+
+ # Optional. Title for the expression, i.e. a short string describing its purpose.
+ # This can be used e.g. in UIs which allow to enter the expression.
+ # Corresponds to the JSON property `title`
+ # @return [String]
+ attr_accessor :title
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @description = args[:description] if args.key?(:description)
+ @expression = args[:expression] if args.key?(:expression)
+ @location = args[:location] if args.key?(:location)
+ @title = args[:title] if args.key?(:title)
+ end
+ end
+
+ # Message representing a single field of a struct.
+ class Field
+ include Google::Apis::Core::Hashable
+
+ # The name of the field. For reads, this is the column name. For SQL queries, it
+ # is the column alias (e.g., `"Word"` in the query `"SELECT 'hello' AS Word"`),
+ # or the column name (e.g., `"ColName"` in the query `"SELECT ColName FROM Table"
+ # `). Some columns might have an empty name (e.g., `"SELECT UPPER(ColName)"`).
+ # Note that a query result can contain multiple fields with the same name.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+ # table cell or returned from an SQL query.
+ # Corresponds to the JSON property `type`
+ # @return [Google::Apis::SpannerV1::Type]
+ attr_accessor :type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ @type = args[:type] if args.key?(:type)
+ end
+ end
+
+ # The response for GetDatabaseDdl.
+ class GetDatabaseDdlResponse
+ include Google::Apis::Core::Hashable
+
+ # A list of formatted DDL statements defining the schema of the database
+ # specified in the request.
+ # Corresponds to the JSON property `statements`
+ # @return [Array<String>]
+ attr_accessor :statements
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @statements = args[:statements] if args.key?(:statements)
+ end
+ end
+
+ # Request message for `GetIamPolicy` method.
+ class GetIamPolicyRequest
+ include Google::Apis::Core::Hashable
+
+ # Encapsulates settings provided to GetIamPolicy.
+ # Corresponds to the JSON property `options`
+ # @return [Google::Apis::SpannerV1::GetPolicyOptions]
+ attr_accessor :options
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @options = args[:options] if args.key?(:options)
+ end
+ end
+
+ # Encapsulates settings provided to GetIamPolicy.
+ class GetPolicyOptions
+ include Google::Apis::Core::Hashable
+
+ # Optional. The policy format version to be returned. Valid values are 0, 1, and
+ # 3. Requests specifying an invalid value will be rejected. Requests for
+ # policies with any conditional bindings must specify version 3. Policies
+ # without any conditional bindings may specify any valid value or leave the
+ # field unset. To learn which resources support conditions in their IAM policies,
+ # see the [IAM documentation](https://cloud.google.com/iam/help/conditions/
+ # resource-policies).
+ # Corresponds to the JSON property `requestedPolicyVersion`
+ # @return [Fixnum]
+ attr_accessor :requested_policy_version
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @requested_policy_version = args[:requested_policy_version] if args.key?(:requested_policy_version)
+ end
+ end
+
+ # An isolated set of Cloud Spanner resources on which databases can be hosted.
+ class Instance
+ include Google::Apis::Core::Hashable
+
+ # Required. The name of the instance's configuration. Values are of the form `
+ # projects//instanceConfigs/`. See also InstanceConfig and ListInstanceConfigs.
+ # Corresponds to the JSON property `config`
+ # @return [String]
+ attr_accessor :config
+
+ # Required. The descriptive name for this instance as it appears in UIs. Must be
+ # unique per project and between 4 and 30 characters in length.
+ # Corresponds to the JSON property `displayName`
+ # @return [String]
+ attr_accessor :display_name
+
+ # Deprecated. This field is not populated.
+ # Corresponds to the JSON property `endpointUris`
+ # @return [Array<String>]
+ attr_accessor :endpoint_uris
+
+ # Cloud Labels are a flexible and lightweight mechanism for organizing cloud
+ # resources into groups that reflect a customer's organizational needs and
+ # deployment strategies. Cloud Labels can be used to filter collections of
+ # resources. They can be used to control how resource metrics are aggregated.
+ # And they can be used as arguments to policy management rules (e.g. route,
+ # firewall, load balancing, etc.). * Label keys must be between 1 and 63
+ # characters long and must conform to the following regular expression: `[a-z]([-
+ # a-z0-9]*[a-z0-9])?`. * Label values must be between 0 and 63 characters long
+ # and must conform to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`. *
+ # No more than 64 labels can be associated with a given resource. See https://
+ # goo.gl/xmQnxf for more information on and examples of labels. If you plan to
+ # use labels in your own code, please note that additional characters may be
+ # allowed in the future. And so you are advised to use an internal label
+ # representation, such as JSON, which doesn't rely upon specific characters
+ # being disallowed. For example, representing labels as the string: name + "_" +
+ # value would prove problematic if we were to allow "_" in a future release.
+ # Corresponds to the JSON property `labels`
+ # @return [Hash<String,String>]
+ attr_accessor :labels
+
+ # Required. A unique identifier for the instance, which cannot be changed after
+ # the instance is created. Values are of the form `projects//instances/a-z*[a-z0-
+ # 9]`. The final segment of the name must be between 2 and 64 characters in
+ # length.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # The number of nodes allocated to this instance. This may be zero in API
+ # responses for instances that are not yet in state `READY`. See [the
+ # documentation](https://cloud.google.com/spanner/docs/instances#node_count) for
+ # more information about nodes.
+ # Corresponds to the JSON property `nodeCount`
+ # @return [Fixnum]
+ attr_accessor :node_count
+
+ # Output only. The current instance state. For CreateInstance, the state must be
+ # either omitted or set to `CREATING`. For UpdateInstance, the state must be
+ # either omitted or set to `READY`.
+ # Corresponds to the JSON property `state`
+ # @return [String]
+ attr_accessor :state
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @config = args[:config] if args.key?(:config)
+ @display_name = args[:display_name] if args.key?(:display_name)
+ @endpoint_uris = args[:endpoint_uris] if args.key?(:endpoint_uris)
+ @labels = args[:labels] if args.key?(:labels)
+ @name = args[:name] if args.key?(:name)
+ @node_count = args[:node_count] if args.key?(:node_count)
+ @state = args[:state] if args.key?(:state)
+ end
+ end
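The `labels` doc above gives exact regular expressions and limits for label keys and values. A hedged sketch of those checks in plain Ruby, using the doc's regexes verbatim (the helper name `valid_labels?` is ours):

```ruby
# Sketch: validate a labels hash against the documented constraints:
#   keys   -> 1-63 chars matching [a-z]([-a-z0-9]*[a-z0-9])?
#   values -> 0-63 chars matching ([a-z]([-a-z0-9]*[a-z0-9])?)?
# and at most 64 labels per resource.
def valid_labels?(labels)
  return false if labels.size > 64
  labels.all? do |k, v|
    k.length.between?(1, 63) && k.match?(/\A[a-z]([-a-z0-9]*[a-z0-9])?\z/) &&
      v.length <= 63 && v.match?(/\A([a-z]([-a-z0-9]*[a-z0-9])?)?\z/)
  end
end
```

Note the asymmetry the doc calls out: values may be empty, keys may not, and neither currently allows underscores or uppercase letters.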
+
1353
+ # A possible configuration for a Cloud Spanner instance. Configurations define
1354
+ # the geographic placement of nodes and their replication.
1355
+ class InstanceConfig
1356
+ include Google::Apis::Core::Hashable
1357
+
1358
+ # The name of this instance configuration as it appears in UIs.
1359
+ # Corresponds to the JSON property `displayName`
1360
+ # @return [String]
1361
+ attr_accessor :display_name
1362
+
1363
+ # A unique identifier for the instance configuration. Values are of the form `
1364
+ # projects//instanceConfigs/a-z*`
1365
+ # Corresponds to the JSON property `name`
1366
+ # @return [String]
1367
+ attr_accessor :name
1368
+
1369
+ # The geographic placement of nodes in this instance configuration and their
1370
+ # replication properties.
1371
+ # Corresponds to the JSON property `replicas`
1372
+ # @return [Array<Google::Apis::SpannerV1::ReplicaInfo>]
1373
+ attr_accessor :replicas
1374
+
1375
+ def initialize(**args)
1376
+ update!(**args)
1377
+ end
1378
+
1379
+ # Update properties of this object
1380
+ def update!(**args)
1381
+ @display_name = args[:display_name] if args.key?(:display_name)
1382
+ @name = args[:name] if args.key?(:name)
1383
+ @replicas = args[:replicas] if args.key?(:replicas)
+ end
+ end
+
+ # KeyRange represents a range of rows in a table or index. A range has a start
+ # key and an end key. These keys can be open or closed, indicating if the range
+ # includes rows with that key. Keys are represented by lists, where the ith
+ # value in the list corresponds to the ith component of the table or index
+ # primary key. Individual values are encoded as described here. For example,
+ # consider the following table definition: CREATE TABLE UserEvents ( UserName
+ # STRING(MAX), EventDate STRING(10) ) PRIMARY KEY(UserName, EventDate); The
+ # following keys name rows in this table: "Bob", "2014-09-23" Since the `
+ # UserEvents` table's `PRIMARY KEY` clause names two columns, each `UserEvents`
+ # key has two elements; the first is the `UserName`, and the second is the `
+ # EventDate`. Key ranges with multiple components are interpreted
+ # lexicographically by component using the table or index key's declared sort
+ # order. For example, the following range returns all events for user `"Bob"`
+ # that occurred in the year 2015: "start_closed": ["Bob", "2015-01-01"] "
+ # end_closed": ["Bob", "2015-12-31"] Start and end keys can omit trailing key
+ # components. This affects the inclusion and exclusion of rows that exactly
+ # match the provided key components: if the key is closed, then rows that
+ # exactly match the provided components are included; if the key is open, then
+ # rows that exactly match are not included. For example, the following range
+ # includes all events for `"Bob"` that occurred during and after the year 2000: "
+ # start_closed": ["Bob", "2000-01-01"] "end_closed": ["Bob"] The next example
+ # retrieves all events for `"Bob"`: "start_closed": ["Bob"] "end_closed": ["Bob"]
+ # To retrieve events before the year 2000: "start_closed": ["Bob"] "end_open": [
+ # "Bob", "2000-01-01"] The following range includes all rows in the table: "
+ # start_closed": [] "end_closed": [] This range returns all users whose `
+ # UserName` begins with any character from A to C: "start_closed": ["A"] "
+ # end_open": ["D"] This range returns all users whose `UserName` begins with B: "
+ # start_closed": ["B"] "end_open": ["C"] Key ranges honor column sort order. For
+ # example, suppose a table is defined as follows: CREATE TABLE
+ # DescendingSortedTable ( Key INT64, ... ) PRIMARY KEY(Key DESC); The following
+ # range retrieves all rows with key values between 1 and 100 inclusive: "
+ # start_closed": ["100"] "end_closed": ["1"] Note that 100 is passed as the
+ # start, and 1 is passed as the end, because `Key` is a descending column in the
+ # schema.
+ class KeyRange
+ include Google::Apis::Core::Hashable
+
+ # If the end is closed, then the range includes all rows whose first `len(
+ # end_closed)` key columns exactly match `end_closed`.
+ # Corresponds to the JSON property `endClosed`
+ # @return [Array<Object>]
+ attr_accessor :end_closed
+
+ # If the end is open, then the range excludes rows whose first `len(end_open)`
+ # key columns exactly match `end_open`.
+ # Corresponds to the JSON property `endOpen`
+ # @return [Array<Object>]
+ attr_accessor :end_open
+
+ # If the start is closed, then the range includes all rows whose first `len(
+ # start_closed)` key columns exactly match `start_closed`.
+ # Corresponds to the JSON property `startClosed`
+ # @return [Array<Object>]
+ attr_accessor :start_closed
+
+ # If the start is open, then the range excludes rows whose first `len(start_open)
+ # ` key columns exactly match `start_open`.
+ # Corresponds to the JSON property `startOpen`
+ # @return [Array<Object>]
+ attr_accessor :start_open
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @end_closed = args[:end_closed] if args.key?(:end_closed)
+ @end_open = args[:end_open] if args.key?(:end_open)
+ @start_closed = args[:start_closed] if args.key?(:start_closed)
+ @start_open = args[:start_open] if args.key?(:start_open)
+ end
+ end
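The range semantics above can be shown concretely. This is a minimal sketch using plain hashes in the JSON wire shape described by the comment; with this library you would set the same values on `Google::Apis::SpannerV1::KeyRange` through its snake_case accessors (`start_closed`, `end_open`, etc.).

```ruby
# Key ranges from the KeyRange documentation, as plain JSON-shaped hashes.

# All events for user "Bob" during 2015 (both endpoints included):
bob_2015 = {
  "startClosed" => ["Bob", "2015-01-01"],
  "endClosed"   => ["Bob", "2015-12-31"]
}

# All events for "Bob" before the year 2000 (the end key is excluded):
bob_before_2000 = {
  "startClosed" => ["Bob"],
  "endOpen"     => ["Bob", "2000-01-01"]
}

# Every row in the table: empty start and end keys, both closed.
all_rows = { "startClosed" => [], "endClosed" => [] }
```

Note that a closed key with trailing components omitted (such as `["Bob"]`) matches every row whose leading key columns equal the given values.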
+
+ # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All the
+ # keys are expected to be in the same table or index. The keys need not be
+ # sorted in any particular way. If the same key is specified multiple times in
+ # the set (for example if two ranges, two keys, or a key and a range overlap),
+ # Cloud Spanner behaves as if the key were only specified once.
+ class KeySet
+ include Google::Apis::Core::Hashable
+
+ # For convenience `all` can be set to `true` to indicate that this `KeySet`
+ # matches all keys in the table or index. Note that any keys specified in `keys`
+ # or `ranges` are only yielded once.
+ # Corresponds to the JSON property `all`
+ # @return [Boolean]
+ attr_accessor :all
+ alias_method :all?, :all
+
+ # A list of specific keys. Entries in `keys` should have exactly as many
+ # elements as there are columns in the primary or index key with which this `
+ # KeySet` is used. Individual key values are encoded as described here.
+ # Corresponds to the JSON property `keys`
+ # @return [Array<Array<Object>>]
+ attr_accessor :keys
+
+ # A list of key ranges. See KeyRange for more information about key range
+ # specifications.
+ # Corresponds to the JSON property `ranges`
+ # @return [Array<Google::Apis::SpannerV1::KeyRange>]
+ attr_accessor :ranges
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @all = args[:all] if args.key?(:all)
+ @keys = args[:keys] if args.key?(:keys)
+ @ranges = args[:ranges] if args.key?(:ranges)
+ end
+ end
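Three ways of targeting rows with a `KeySet`, again sketched as plain JSON-shaped hashes (the library's `KeySet` class exposes the same three fields as `all`, `keys`, and `ranges`); the table and key values are illustrative only.

```ruby
# Two specific rows: each key lists every primary-key column, in order.
two_rows = { "keys" => [["Alice", "2021-01-01"], ["Bob", "2021-01-02"]] }

# A range plus an extra key; any overlap would be yielded only once.
mixed = {
  "ranges" => [{ "startClosed" => ["Bob"], "endClosed" => ["Bob"] }],
  "keys"   => [["Alice", "2021-01-01"]]
}

# Everything in the table or index.
everything = { "all" => true }
```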
+
+ # The response for ListBackupOperations.
+ class ListBackupOperationsResponse
+ include Google::Apis::Core::Hashable
+
+ # `next_page_token` can be sent in a subsequent ListBackupOperations call to
+ # fetch more of the matching metadata.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ # The list of matching backup long-running operations. Each operation's name
+ # will be prefixed by the backup's name and the operation's metadata will be of
+ # type CreateBackupMetadata. Operations returned include those that are pending
+ # or have completed/failed/canceled within the last 7 days. Operations returned
+ # are ordered by `operation.metadata.value.progress.start_time` in descending
+ # order starting from the most recently started operation.
+ # Corresponds to the JSON property `operations`
+ # @return [Array<Google::Apis::SpannerV1::Operation>]
+ attr_accessor :operations
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ @operations = args[:operations] if args.key?(:operations)
+ end
+ end
+
+ # The response for ListBackups.
+ class ListBackupsResponse
+ include Google::Apis::Core::Hashable
+
+ # The list of matching backups. Backups returned are ordered by `create_time` in
+ # descending order, starting from the most recent `create_time`.
+ # Corresponds to the JSON property `backups`
+ # @return [Array<Google::Apis::SpannerV1::Backup>]
+ attr_accessor :backups
+
+ # `next_page_token` can be sent in a subsequent ListBackups call to fetch more
+ # of the matching backups.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @backups = args[:backups] if args.key?(:backups)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ end
+ end
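All of the `List*Response` classes in this file share the same `next_page_token` contract: keep re-issuing the request with the previous response's token until the token comes back nil or empty. A minimal sketch of that loop, where `fetch_page` is a hypothetical stand-in for a service call that accepts a `page_token` argument:

```ruby
# Simulated pages, shaped like successive List* responses.
pages = [
  { items: ["backup-1", "backup-2"], next_page_token: "tok-1" },
  { items: ["backup-3"],             next_page_token: nil }
]
# Stand-in for a paginated service call: nil token means "first page".
fetch_page = ->(token) { token.nil? ? pages[0] : pages[1] }

results = []
token = nil
loop do
  page = fetch_page.call(token)
  results.concat(page[:items])
  token = page[:next_page_token]
  break if token.nil? || token.empty?
end
# results now holds every item across all pages
```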
+
+ # The response for ListDatabaseOperations.
+ class ListDatabaseOperationsResponse
+ include Google::Apis::Core::Hashable
+
+ # `next_page_token` can be sent in a subsequent ListDatabaseOperations call to
+ # fetch more of the matching metadata.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ # The list of matching database long-running operations. Each operation's name
+ # will be prefixed by the database's name. The operation's metadata field type `
+ # metadata.type_url` describes the type of the metadata.
+ # Corresponds to the JSON property `operations`
+ # @return [Array<Google::Apis::SpannerV1::Operation>]
+ attr_accessor :operations
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ @operations = args[:operations] if args.key?(:operations)
+ end
+ end
+
+ # The response for ListDatabases.
+ class ListDatabasesResponse
+ include Google::Apis::Core::Hashable
+
+ # Databases that matched the request.
+ # Corresponds to the JSON property `databases`
+ # @return [Array<Google::Apis::SpannerV1::Database>]
+ attr_accessor :databases
+
+ # `next_page_token` can be sent in a subsequent ListDatabases call to fetch more
+ # of the matching databases.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @databases = args[:databases] if args.key?(:databases)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ end
+ end
+
+ # The response for ListInstanceConfigs.
+ class ListInstanceConfigsResponse
+ include Google::Apis::Core::Hashable
+
+ # The list of requested instance configurations.
+ # Corresponds to the JSON property `instanceConfigs`
+ # @return [Array<Google::Apis::SpannerV1::InstanceConfig>]
+ attr_accessor :instance_configs
+
+ # `next_page_token` can be sent in a subsequent ListInstanceConfigs call to
+ # fetch more of the matching instance configurations.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @instance_configs = args[:instance_configs] if args.key?(:instance_configs)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ end
+ end
+
+ # The response for ListInstances.
+ class ListInstancesResponse
+ include Google::Apis::Core::Hashable
+
+ # The list of requested instances.
+ # Corresponds to the JSON property `instances`
+ # @return [Array<Google::Apis::SpannerV1::Instance>]
+ attr_accessor :instances
+
+ # `next_page_token` can be sent in a subsequent ListInstances call to fetch more
+ # of the matching instances.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @instances = args[:instances] if args.key?(:instances)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ end
+ end
+
+ # The response message for Operations.ListOperations.
+ class ListOperationsResponse
+ include Google::Apis::Core::Hashable
+
+ # The standard List next-page token.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ # A list of operations that matches the specified filter in the request.
+ # Corresponds to the JSON property `operations`
+ # @return [Array<Google::Apis::SpannerV1::Operation>]
+ attr_accessor :operations
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ @operations = args[:operations] if args.key?(:operations)
+ end
+ end
+
+ # The response for ListSessions.
+ class ListSessionsResponse
+ include Google::Apis::Core::Hashable
+
+ # `next_page_token` can be sent in a subsequent ListSessions call to fetch more
+ # of the matching sessions.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ # The list of requested sessions.
+ # Corresponds to the JSON property `sessions`
+ # @return [Array<Google::Apis::SpannerV1::Session>]
+ attr_accessor :sessions
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ @sessions = args[:sessions] if args.key?(:sessions)
+ end
+ end
+
+ # A modification to one or more Cloud Spanner rows. Mutations can be applied to
+ # a Cloud Spanner database by sending them in a Commit call.
+ class Mutation
+ include Google::Apis::Core::Hashable
+
+ # Arguments to delete operations.
+ # Corresponds to the JSON property `delete`
+ # @return [Google::Apis::SpannerV1::Delete]
+ attr_accessor :delete
+
+ # Arguments to insert, update, insert_or_update, and replace operations.
+ # Corresponds to the JSON property `insert`
+ # @return [Google::Apis::SpannerV1::Write]
+ attr_accessor :insert
+
+ # Arguments to insert, update, insert_or_update, and replace operations.
+ # Corresponds to the JSON property `insertOrUpdate`
+ # @return [Google::Apis::SpannerV1::Write]
+ attr_accessor :insert_or_update
+
+ # Arguments to insert, update, insert_or_update, and replace operations.
+ # Corresponds to the JSON property `replace`
+ # @return [Google::Apis::SpannerV1::Write]
+ attr_accessor :replace
+
+ # Arguments to insert, update, insert_or_update, and replace operations.
+ # Corresponds to the JSON property `update`
+ # @return [Google::Apis::SpannerV1::Write]
+ attr_accessor :update
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @delete = args[:delete] if args.key?(:delete)
+ @insert = args[:insert] if args.key?(:insert)
+ @insert_or_update = args[:insert_or_update] if args.key?(:insert_or_update)
+ @replace = args[:replace] if args.key?(:replace)
+ @update = args[:update] if args.key?(:update)
+ end
+ end
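A `Mutation` sets exactly one of its five fields. This sketch shows the JSON shape of a commit's mutation list using plain hashes, with table and column names borrowed from the `KeyRange` example above; the write payload names a table, the columns being written, and one row of values per entry in `values`, while a delete names a table and a `KeySet`.

```ruby
# An insert of one row followed by a delete of another row.
mutations = [
  { "insert" => {
      "table"   => "UserEvents",
      "columns" => ["UserName", "EventDate"],
      "values"  => [["Alice", "2021-01-01"]]
    } },
  { "delete" => {
      "table"  => "UserEvents",
      "keySet" => { "keys" => [["Bob", "2014-09-23"]] }
    } }
]
```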
+
+ # This resource represents a long-running operation that is the result of a
+ # network API call.
+ class Operation
+ include Google::Apis::Core::Hashable
+
+ # If the value is `false`, it means the operation is still in progress. If `true`
+ # , the operation is completed, and either `error` or `response` is available.
+ # Corresponds to the JSON property `done`
+ # @return [Boolean]
+ attr_accessor :done
+ alias_method :done?, :done
+
+ # The `Status` type defines a logical error model that is suitable for different
+ # programming environments, including REST APIs and RPC APIs. It is used by [
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
+ # data: error code, error message, and error details. You can find out more
+ # about this error model and how to work with it in the [API Design Guide](https:
+ # //cloud.google.com/apis/design/errors).
+ # Corresponds to the JSON property `error`
+ # @return [Google::Apis::SpannerV1::Status]
+ attr_accessor :error
+
+ # Service-specific metadata associated with the operation. It typically contains
+ # progress information and common metadata such as create time. Some services
+ # might not provide such metadata. Any method that returns a long-running
+ # operation should document the metadata type, if any.
+ # Corresponds to the JSON property `metadata`
+ # @return [Hash<String,Object>]
+ attr_accessor :metadata
+
+ # The server-assigned name, which is only unique within the same service that
+ # originally returns it. If you use the default HTTP mapping, the `name` should
+ # be a resource name ending with `operations/`unique_id``.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # The normal response of the operation in case of success. If the original
+ # method returns no data on success, such as `Delete`, the response is `google.
+ # protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`,
+ # the response should be the resource. For other methods, the response should
+ # have the type `XxxResponse`, where `Xxx` is the original method name. For
+ # example, if the original method name is `TakeSnapshot()`, the inferred
+ # response type is `TakeSnapshotResponse`.
+ # Corresponds to the JSON property `response`
+ # @return [Hash<String,Object>]
+ attr_accessor :response
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @done = args[:done] if args.key?(:done)
+ @error = args[:error] if args.key?(:error)
+ @metadata = args[:metadata] if args.key?(:metadata)
+ @name = args[:name] if args.key?(:name)
+ @response = args[:response] if args.key?(:response)
+ end
+ end
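The `done`/`error`/`response` contract above implies a simple three-state check: an operation is terminal only once `done` is true, and a terminal operation carries exactly one of `error` or `response`. The `check_lro` helper below is a hypothetical name, operating on a plain hash in the Operation JSON shape rather than on the library class.

```ruby
# Classify a long-running operation by its done/error/response fields.
def check_lro(op)
  return :running unless op["done"]
  op.key?("error") ? :failed : :succeeded
end

pending  = { "name" => "operations/op-1", "done" => false }
finished = { "name" => "operations/op-1", "done" => true,
             "response" => { "@type" => "type.googleapis.com/google.protobuf.Empty" } }
failed   = { "name" => "operations/op-2", "done" => true,
             "error" => { "code" => 8, "message" => "Resource exhausted" } }
```

A polling client would repeat the fetch while `check_lro` returns `:running`, typically with backoff between attempts.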
+
+ # Encapsulates progress related information for a Cloud Spanner long running
+ # operation.
+ class OperationProgress
+ include Google::Apis::Core::Hashable
+
+ # If set, the time at which this operation failed or was completed successfully.
+ # Corresponds to the JSON property `endTime`
+ # @return [String]
+ attr_accessor :end_time
+
+ # Percent completion of the operation. Values are between 0 and 100 inclusive.
+ # Corresponds to the JSON property `progressPercent`
+ # @return [Fixnum]
+ attr_accessor :progress_percent
+
+ # Time the request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @end_time = args[:end_time] if args.key?(:end_time)
+ @progress_percent = args[:progress_percent] if args.key?(:progress_percent)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ end
+ end
+
+ # Metadata type for the long-running operation used to track the progress of
+ # optimizations performed on a newly restored database. This long-running
+ # operation is automatically created by the system after the successful
+ # completion of a database restore, and cannot be cancelled.
+ class OptimizeRestoredDatabaseMetadata
+ include Google::Apis::Core::Hashable
+
+ # Name of the restored database being optimized.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Encapsulates progress related information for a Cloud Spanner long running
+ # operation.
+ # Corresponds to the JSON property `progress`
+ # @return [Google::Apis::SpannerV1::OperationProgress]
+ attr_accessor :progress
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ @progress = args[:progress] if args.key?(:progress)
+ end
+ end
+
+ # Partial results from a streaming read or SQL query. Streaming reads and SQL
+ # queries better tolerate large result sets, large rows, and large values, but
+ # are a little trickier to consume.
+ class PartialResultSet
+ include Google::Apis::Core::Hashable
+
+ # If true, then the final value in values is chunked, and must be combined with
+ # more values from subsequent `PartialResultSet`s to obtain a complete field
+ # value.
+ # Corresponds to the JSON property `chunkedValue`
+ # @return [Boolean]
+ attr_accessor :chunked_value
+ alias_method :chunked_value?, :chunked_value
+
+ # Metadata about a ResultSet or PartialResultSet.
+ # Corresponds to the JSON property `metadata`
+ # @return [Google::Apis::SpannerV1::ResultSetMetadata]
+ attr_accessor :metadata
+
+ # Streaming calls might be interrupted for a variety of reasons, such as TCP
+ # connection loss. If this occurs, the stream of results can be resumed by re-
+ # sending the original request and including `resume_token`. Note that executing
+ # any other transaction in the same session invalidates the token.
+ # Corresponds to the JSON property `resumeToken`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :resume_token
+
+ # Additional statistics about a ResultSet or PartialResultSet.
+ # Corresponds to the JSON property `stats`
+ # @return [Google::Apis::SpannerV1::ResultSetStats]
+ attr_accessor :stats
+
+ # A streamed result set consists of a stream of values, which might be split
+ # into many `PartialResultSet` messages to accommodate large rows and/or large
+ # values. Every N complete values defines a row, where N is equal to the number
+ # of entries in metadata.row_type.fields. Most values are encoded based on type
+ # as described here. It is possible that the last value in values is "chunked",
+ # meaning that the rest of the value is sent in subsequent `PartialResultSet`(s).
+ # This is denoted by the chunked_value field. Two or more chunked values can be
+ # merged to form a complete value as follows: * `bool/number/null`: cannot be
+ # chunked * `string`: concatenate the strings * `list`: concatenate the lists.
+ # If the last element in a list is a `string`, `list`, or `object`, merge it
+ # with the first element in the next list by applying these rules recursively. *
+ # `object`: concatenate the (field name, field value) pairs. If a field name is
+ # duplicated, then apply these rules recursively to merge the field values. Some
+ # examples of merging: # Strings are concatenated. "foo", "bar" => "foobar" #
+ # Lists of non-strings are concatenated. [2, 3], [4] => [2, 3, 4] # Lists are
+ # concatenated, but the last and first elements are merged # because they are
+ # strings. ["a", "b"], ["c", "d"] => ["a", "bc", "d"] # Lists are concatenated,
+ # but the last and first elements are merged # because they are lists.
+ # Recursively, the last and first elements # of the inner lists are merged
+ # because they are strings. ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"],
+ # "e"] # Non-overlapping object fields are combined. `"a": "1"`, `"b": "2"` => `
+ # "a": "1", "b": "2"` # Overlapping object fields are merged. `"a": "1"`, `"a": "
+ # 2"` => `"a": "12"` # Examples of merging objects containing lists of strings. `
+ # "a": ["1"]`, `"a": ["2"]` => `"a": ["12"]` For a more complete example,
+ # suppose a streaming SQL query is yielding a result set whose rows contain a
+ # single string field. The following `PartialResultSet`s might be yielded: ` "
+ # metadata": ` ... ` "values": ["Hello", "W"] "chunked_value": true "
+ # resume_token": "Af65..." ` ` "values": ["orl"] "chunked_value": true "
+ # resume_token": "Bqp2..." ` ` "values": ["d"] "resume_token": "Zx1B..." ` This
+ # sequence of `PartialResultSet`s encodes two rows, one containing the field
+ # value `"Hello"`, and a second containing the field value `"World" = "W" + "orl"
+ # + "d"`.
+ # Corresponds to the JSON property `values`
+ # @return [Array<Object>]
+ attr_accessor :values
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @chunked_value = args[:chunked_value] if args.key?(:chunked_value)
+ @metadata = args[:metadata] if args.key?(:metadata)
+ @resume_token = args[:resume_token] if args.key?(:resume_token)
+ @stats = args[:stats] if args.key?(:stats)
+ @values = args[:values] if args.key?(:values)
+ end
+ end
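The string case of the chunked-value merging rule can be exercised directly. This sketch reproduces the "Hello" / "World" example from the comment above with plain hashes: while a message arrives with `chunkedValue` set, its final value is a prefix that must be concatenated with the first value of the next message.

```ruby
# The three PartialResultSets from the documentation example.
chunks = [
  { "values" => ["Hello", "W"], "chunkedValue" => true },
  { "values" => ["orl"],        "chunkedValue" => true },
  { "values" => ["d"] }
]

merged = []
pending = nil
chunks.each do |part|
  values = part["values"].dup
  # Prepend any carried-over prefix to the first value of this message.
  values[0] = pending + values[0] if pending
  # A chunked final value is held back and carried into the next message.
  pending = part["chunkedValue"] ? values.pop : nil
  merged.concat(values)
end
merged << pending if pending
# merged == ["Hello", "World"]
```

Lists and objects merge recursively by the same idea; only bool, number, and null values can never be chunked.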
+
+ # Information returned for each partition returned in a PartitionResponse.
+ class Partition
+ include Google::Apis::Core::Hashable
+
+ # This token can be passed to Read, StreamingRead, ExecuteSql, or
+ # ExecuteStreamingSql requests to restrict the results to those identified by
+ # this partition token.
+ # Corresponds to the JSON property `partitionToken`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :partition_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @partition_token = args[:partition_token] if args.key?(:partition_token)
+ end
+ end
+
+ # Options for a PartitionQueryRequest and PartitionReadRequest.
+ class PartitionOptions
+ include Google::Apis::Core::Hashable
+
+ # **Note:** This hint is currently ignored by PartitionQuery and PartitionRead
+ # requests. The desired maximum number of partitions to return. For example,
+ # this may be set to the number of workers available. The default for this
+ # option is currently 10,000. The maximum value is currently 200,000. This is
+ # only a hint. The actual number of partitions returned may be smaller or larger
+ # than this maximum count request.
+ # Corresponds to the JSON property `maxPartitions`
+ # @return [Fixnum]
+ attr_accessor :max_partitions
+
+ # **Note:** This hint is currently ignored by PartitionQuery and PartitionRead
+ # requests. The desired data size for each partition generated. The default for
+ # this option is currently 1 GiB. This is only a hint. The actual size of each
+ # partition may be smaller or larger than this size request.
+ # Corresponds to the JSON property `partitionSizeBytes`
+ # @return [Fixnum]
+ attr_accessor :partition_size_bytes
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @max_partitions = args[:max_partitions] if args.key?(:max_partitions)
+ @partition_size_bytes = args[:partition_size_bytes] if args.key?(:partition_size_bytes)
+ end
+ end
+
+ # The request for PartitionQuery
+ class PartitionQueryRequest
+ include Google::Apis::Core::Hashable
+
+ # It is not always possible for Cloud Spanner to infer the right SQL type from a
+ # JSON value. For example, values of type `BYTES` and values of type `STRING`
+ # both appear in params as JSON strings. In these cases, `param_types` can be
+ # used to specify the exact SQL type for some or all of the SQL query parameters.
+ # See the definition of Type for more information about SQL types.
+ # Corresponds to the JSON property `paramTypes`
+ # @return [Hash<String,Google::Apis::SpannerV1::Type>]
+ attr_accessor :param_types
+
+ # Parameter names and values that bind to placeholders in the SQL string. A
+ # parameter placeholder consists of the `@` character followed by the parameter
+ # name (for example, `@firstName`). Parameter names can contain letters, numbers,
+ # and underscores. Parameters can appear anywhere that a literal value is
+ # expected. The same parameter name can be used more than once, for example: `"
+ # WHERE id > @msg_id AND id < @msg_id + 100"` It is an error to execute a SQL
+ # statement with unbound parameters.
+ # Corresponds to the JSON property `params`
+ # @return [Hash<String,Object>]
+ attr_accessor :params
+
+ # Options for a PartitionQueryRequest and PartitionReadRequest.
+ # Corresponds to the JSON property `partitionOptions`
+ # @return [Google::Apis::SpannerV1::PartitionOptions]
+ attr_accessor :partition_options
+
+ # Required. The query request to generate partitions for. The request will fail
+ # if the query is not root partitionable. The query plan of a root partitionable
+ # query has a single distributed union operator. A distributed union operator
+ # conceptually divides one or more tables into multiple splits, remotely
+ # evaluates a subquery independently on each split, and then unions all results.
+ # This must not contain DML commands, such as INSERT, UPDATE, or DELETE. Use
+ # ExecuteStreamingSql with a PartitionedDml transaction for large, partition-
+ # friendly DML operations.
+ # Corresponds to the JSON property `sql`
+ # @return [String]
+ attr_accessor :sql
+
+ # This message is used to select the transaction in which a Read or ExecuteSql
+ # call runs. See TransactionOptions for more information about transactions.
+ # Corresponds to the JSON property `transaction`
+ # @return [Google::Apis::SpannerV1::TransactionSelector]
+ attr_accessor :transaction
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @param_types = args[:param_types] if args.key?(:param_types)
+ @params = args[:params] if args.key?(:params)
+ @partition_options = args[:partition_options] if args.key?(:partition_options)
+ @sql = args[:sql] if args.key?(:sql)
+ @transaction = args[:transaction] if args.key?(:transaction)
+ end
+ end
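Putting the fields of `PartitionQueryRequest` together: a sketch of one request body as a plain JSON-shaped hash. The query, the `@cutoff` parameter, and the partition limit are all illustrative; the `paramTypes` entry pins the parameter to the SQL `STRING` type, which a bare JSON string could not convey on its own for types such as `BYTES`.

```ruby
# A PartitionQuery request body in JSON shape. Every placeholder in the
# SQL string must be bound in "params".
request = {
  "sql" => "SELECT UserName FROM UserEvents WHERE EventDate > @cutoff",
  "params" => { "cutoff" => "2021-01-01" },
  "paramTypes" => { "cutoff" => { "code" => "STRING" } },
  "partitionOptions" => { "maxPartitions" => 100 }
}
```

Each `Partition` in the response carries a `partitionToken` that is then passed to `ExecuteStreamingSql` with the same SQL and parameters, typically one token per worker.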
+
+ # The request for PartitionRead
+ class PartitionReadRequest
+ include Google::Apis::Core::Hashable
+
+ # The columns of table to be returned for each row matching this request.
+ # Corresponds to the JSON property `columns`
+ # @return [Array<String>]
+ attr_accessor :columns
+
+ # If non-empty, the name of an index on table. This index is used instead of the
+ # table primary key when interpreting key_set and sorting result rows. See
+ # key_set for further information.
+ # Corresponds to the JSON property `index`
+ # @return [String]
+ attr_accessor :index
+
+ # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All the
+ # keys are expected to be in the same table or index. The keys need not be
+ # sorted in any particular way. If the same key is specified multiple times in
+ # the set (for example if two ranges, two keys, or a key and a range overlap),
+ # Cloud Spanner behaves as if the key were only specified once.
+ # Corresponds to the JSON property `keySet`
+ # @return [Google::Apis::SpannerV1::KeySet]
+ attr_accessor :key_set
+
+ # Options for a PartitionQueryRequest and PartitionReadRequest.
+ # Corresponds to the JSON property `partitionOptions`
+ # @return [Google::Apis::SpannerV1::PartitionOptions]
+ attr_accessor :partition_options
+
+ # Required. The name of the table in the database to be read.
+ # Corresponds to the JSON property `table`
+ # @return [String]
+ attr_accessor :table
+
+ # This message is used to select the transaction in which a Read or ExecuteSql
+ # call runs. See TransactionOptions for more information about transactions.
+ # Corresponds to the JSON property `transaction`
+ # @return [Google::Apis::SpannerV1::TransactionSelector]
+ attr_accessor :transaction
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @columns = args[:columns] if args.key?(:columns)
+ @index = args[:index] if args.key?(:index)
+ @key_set = args[:key_set] if args.key?(:key_set)
+ @partition_options = args[:partition_options] if args.key?(:partition_options)
+ @table = args[:table] if args.key?(:table)
+ @transaction = args[:transaction] if args.key?(:transaction)
+ end
+ end
+
+ # The response for PartitionQuery or PartitionRead
+ class PartitionResponse
+ include Google::Apis::Core::Hashable
+
+ # Partitions created by this request.
+ # Corresponds to the JSON property `partitions`
+ # @return [Array<Google::Apis::SpannerV1::Partition>]
+ attr_accessor :partitions
+
+ # A transaction.
+ # Corresponds to the JSON property `transaction`
+ # @return [Google::Apis::SpannerV1::Transaction]
+ attr_accessor :transaction
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @partitions = args[:partitions] if args.key?(:partitions)
+ @transaction = args[:transaction] if args.key?(:transaction)
+ end
+ end
+
+ # Message type to initiate a Partitioned DML transaction.
+ class PartitionedDml
+ include Google::Apis::Core::Hashable
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ end
+ end
2176
+
+ # Node information for nodes appearing in a QueryPlan.plan_nodes.
+ class PlanNode
+ include Google::Apis::Core::Hashable
+
+ # List of child node `index`es and their relationship to this parent.
+ # Corresponds to the JSON property `childLinks`
+ # @return [Array<Google::Apis::SpannerV1::ChildLink>]
+ attr_accessor :child_links
+
+ # The display name for the node.
+ # Corresponds to the JSON property `displayName`
+ # @return [String]
+ attr_accessor :display_name
+
+ # The execution statistics associated with the node, contained in a group of key-
+ # value pairs. Only present if the plan was returned as a result of a profile
+ # query. For example, number of executions, number of rows/time per execution
+ # etc.
+ # Corresponds to the JSON property `executionStats`
+ # @return [Hash<String,Object>]
+ attr_accessor :execution_stats
+
+ # The `PlanNode`'s index in node list.
+ # Corresponds to the JSON property `index`
+ # @return [Fixnum]
+ attr_accessor :index
+
+ # Used to determine the type of node. May be needed for visualizing different
+ # kinds of nodes differently. For example, if the node is a SCALAR node, it will
+ # have a condensed representation which can be used to directly embed a
+ # description of the node in its parent.
+ # Corresponds to the JSON property `kind`
+ # @return [String]
+ attr_accessor :kind
+
+ # Attributes relevant to the node contained in a group of key-value pairs. For
+ # example, a Parameter Reference node could have the following information in
+ # its metadata: ` "parameter_reference": "param1", "parameter_type": "array" `
+ # Corresponds to the JSON property `metadata`
+ # @return [Hash<String,Object>]
+ attr_accessor :metadata
+
+ # Condensed representation of a node and its subtree. Only present for `SCALAR`
+ # PlanNode(s).
+ # Corresponds to the JSON property `shortRepresentation`
+ # @return [Google::Apis::SpannerV1::ShortRepresentation]
+ attr_accessor :short_representation
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @child_links = args[:child_links] if args.key?(:child_links)
+ @display_name = args[:display_name] if args.key?(:display_name)
+ @execution_stats = args[:execution_stats] if args.key?(:execution_stats)
+ @index = args[:index] if args.key?(:index)
+ @kind = args[:kind] if args.key?(:kind)
+ @metadata = args[:metadata] if args.key?(:metadata)
+ @short_representation = args[:short_representation] if args.key?(:short_representation)
+ end
+ end
+
+ # An Identity and Access Management (IAM) policy, which specifies access
+ # controls for Google Cloud resources. A `Policy` is a collection of `bindings`.
+ # A `binding` binds one or more `members` to a single `role`. Members can be
+ # user accounts, service accounts, Google groups, and domains (such as G Suite).
+ # A `role` is a named list of permissions; each `role` can be an IAM predefined
+ # role or a user-created custom role. For some types of Google Cloud resources,
+ # a `binding` can also specify a `condition`, which is a logical expression that
+ # allows access to a resource only if the expression evaluates to `true`. A
+ # condition can add constraints based on attributes of the request, the resource,
+ # or both. To learn which resources support conditions in their IAM policies,
+ # see the [IAM documentation](https://cloud.google.com/iam/help/conditions/
+ # resource-policies). **JSON example:** ` "bindings": [ ` "role": "roles/
+ # resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "
+ # group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@
+ # appspot.gserviceaccount.com" ] `, ` "role": "roles/resourcemanager.
+ # organizationViewer", "members": [ "user:eve@example.com" ], "condition": ` "
+ # title": "expirable access", "description": "Does not grant access after Sep
+ # 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", `
+ # ` ], "etag": "BwWWja0YfJA=", "version": 3 ` **YAML example:** bindings: -
+ # members: - user:mike@example.com - group:admins@example.com - domain:google.
+ # com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/
+ # resourcemanager.organizationAdmin - members: - user:eve@example.com role:
+ # roles/resourcemanager.organizationViewer condition: title: expirable access
+ # description: Does not grant access after Sep 2020 expression: request.time <
+ # timestamp('2020-10-01T00:00:00.000Z') - etag: BwWWja0YfJA= - version: 3 For a
+ # description of IAM and its features, see the [IAM documentation](https://cloud.
+ # google.com/iam/docs/).
+ class Policy
+ include Google::Apis::Core::Hashable
+
+ # Associates a list of `members` to a `role`. Optionally, may specify a `
+ # condition` that determines how and when the `bindings` are applied. Each of
+ # the `bindings` must contain at least one member.
+ # Corresponds to the JSON property `bindings`
+ # @return [Array<Google::Apis::SpannerV1::Binding>]
+ attr_accessor :bindings
+
+ # `etag` is used for optimistic concurrency control as a way to help prevent
+ # simultaneous updates of a policy from overwriting each other. It is strongly
+ # suggested that systems make use of the `etag` in the read-modify-write cycle
+ # to perform policy updates in order to avoid race conditions: An `etag` is
+ # returned in the response to `getIamPolicy`, and systems are expected to put
+ # that etag in the request to `setIamPolicy` to ensure that their change will be
+ # applied to the same version of the policy. **Important:** If you use IAM
+ # Conditions, you must include the `etag` field whenever you call `setIamPolicy`.
+ # If you omit this field, then IAM allows you to overwrite a version `3` policy
+ # with a version `1` policy, and all of the conditions in the version `3` policy
+ # are lost.
+ # Corresponds to the JSON property `etag`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :etag
+
+ # Specifies the format of the policy. Valid values are `0`, `1`, and `3`.
+ # Requests that specify an invalid value are rejected. Any operation that
+ # affects conditional role bindings must specify version `3`. This requirement
+ # applies to the following operations: * Getting a policy that includes a
+ # conditional role binding * Adding a conditional role binding to a policy *
+ # Changing a conditional role binding in a policy * Removing any role binding,
+ # with or without a condition, from a policy that includes conditions **
+ # Important:** If you use IAM Conditions, you must include the `etag` field
+ # whenever you call `setIamPolicy`. If you omit this field, then IAM allows you
+ # to overwrite a version `3` policy with a version `1` policy, and all of the
+ # conditions in the version `3` policy are lost. If a policy does not include
+ # any conditions, operations on that policy may specify any valid version or
+ # leave the field unset. To learn which resources support conditions in their
+ # IAM policies, see the [IAM documentation](https://cloud.google.com/iam/help/
+ # conditions/resource-policies).
+ # Corresponds to the JSON property `version`
+ # @return [Fixnum]
+ attr_accessor :version
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @bindings = args[:bindings] if args.key?(:bindings)
+ @etag = args[:etag] if args.key?(:etag)
+ @version = args[:version] if args.key?(:version)
+ end
+ end
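The `etag` comment above describes a read-modify-write cycle. A minimal sketch (not from this gem) of that cycle, using a plain hash in the same shape as the Policy JSON; with the client library you would use `Google::Apis::SpannerV1::Policy` objects instead of raw hashes, and the role and member names here are hypothetical.

```ruby
# Policy as it might be returned by a getIamPolicy call.
policy = {
  'version' => 3,
  'etag'    => 'BwWWja0YfJA=',
  'bindings' => [
    { 'role'    => 'roles/spanner.databaseReader',
      'members' => ['user:alice@example.com'] }
  ]
}

# Modify locally: grant the role to an additional member.
policy['bindings'][0]['members'] << 'user:bob@example.com'

# Send the policy back with the *unchanged* etag; the server can then
# reject the write if the policy was modified concurrently.
set_request = { 'policy' => policy }
```

Because the etag travels round-trip untouched, two concurrent writers cannot silently overwrite each other's bindings.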
+
+ # Query optimizer configuration.
+ class QueryOptions
+ include Google::Apis::Core::Hashable
+
+ # An option to control the selection of optimizer version. This parameter allows
+ # individual queries to pick different query optimizer versions. Specifying "
+ # latest" as a value instructs Cloud Spanner to use the latest supported query
+ # optimizer version. If not specified, Cloud Spanner uses optimizer version set
+ # at the database level options. Any other positive integer (from the list of
+ # supported optimizer versions) overrides the default optimizer version for
+ # query execution. The list of supported optimizer versions can be queried from
+ # SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement with an
+ # invalid optimizer version will fail with a syntax error (`INVALID_ARGUMENT`)
+ # status. See https://cloud.google.com/spanner/docs/query-optimizer/manage-query-
+ # optimizer for more information on managing the query optimizer. The `
+ # optimizer_version` statement hint has precedence over this setting.
+ # Corresponds to the JSON property `optimizerVersion`
+ # @return [String]
+ attr_accessor :optimizer_version
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @optimizer_version = args[:optimizer_version] if args.key?(:optimizer_version)
+ end
+ end
+
+ # Contains an ordered list of nodes appearing in the query plan.
+ class QueryPlan
+ include Google::Apis::Core::Hashable
+
+ # The nodes in the query plan. Plan nodes are returned in pre-order starting
+ # with the plan root. Each PlanNode's `id` corresponds to its index in `
+ # plan_nodes`.
+ # Corresponds to the JSON property `planNodes`
+ # @return [Array<Google::Apis::SpannerV1::PlanNode>]
+ attr_accessor :plan_nodes
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @plan_nodes = args[:plan_nodes] if args.key?(:plan_nodes)
+ end
+ end
+
+ # Message type to initiate a read-only transaction.
+ class ReadOnly
+ include Google::Apis::Core::Hashable
+
+ # Executes all reads at a timestamp that is `exact_staleness` old. The timestamp
+ # is chosen soon after the read is started. Guarantees that all writes that have
+ # committed more than the specified number of seconds ago are visible. Because
+ # Cloud Spanner chooses the exact timestamp, this mode works even if the client'
+ # s local clock is substantially skewed from Cloud Spanner commit timestamps.
+ # Useful for reading at nearby replicas without the distributed timestamp
+ # negotiation overhead of `max_staleness`.
+ # Corresponds to the JSON property `exactStaleness`
+ # @return [String]
+ attr_accessor :exact_staleness
+
+ # Read data at a timestamp >= `NOW - max_staleness` seconds. Guarantees that all
+ # writes that have committed more than the specified number of seconds ago are
+ # visible. Because Cloud Spanner chooses the exact timestamp, this mode works
+ # even if the client's local clock is substantially skewed from Cloud Spanner
+ # commit timestamps. Useful for reading the freshest data available at a nearby
+ # replica, while bounding the possible staleness if the local replica has fallen
+ # behind. Note that this option can only be used in single-use transactions.
+ # Corresponds to the JSON property `maxStaleness`
+ # @return [String]
+ attr_accessor :max_staleness
+
+ # Executes all reads at a timestamp >= `min_read_timestamp`. This is useful for
+ # requesting fresher data than some previous read, or data that is fresh enough
+ # to observe the effects of some previously committed transaction whose
+ # timestamp is known. Note that this option can only be used in single-use
+ # transactions. A timestamp in RFC3339 UTC \"Zulu\" format, accurate to
+ # nanoseconds. Example: `"2014-10-02T15:01:23.045123456Z"`.
+ # Corresponds to the JSON property `minReadTimestamp`
+ # @return [String]
+ attr_accessor :min_read_timestamp
+
+ # Executes all reads at the given timestamp. Unlike other modes, reads at a
+ # specific timestamp are repeatable; the same read at the same timestamp always
+ # returns the same data. If the timestamp is in the future, the read will block
+ # until the specified timestamp, modulo the read's deadline. Useful for large
+ # scale consistent reads such as mapreduces, or for coordinating many reads
+ # against a consistent snapshot of the data. A timestamp in RFC3339 UTC \"Zulu\"
+ # format, accurate to nanoseconds. Example: `"2014-10-02T15:01:23.045123456Z"`.
+ # Corresponds to the JSON property `readTimestamp`
+ # @return [String]
+ attr_accessor :read_timestamp
+
+ # If true, the Cloud Spanner-selected read timestamp is included in the
+ # Transaction message that describes the transaction.
+ # Corresponds to the JSON property `returnReadTimestamp`
+ # @return [Boolean]
+ attr_accessor :return_read_timestamp
+ alias_method :return_read_timestamp?, :return_read_timestamp
+
+ # Read at a timestamp where all previously committed transactions are visible.
+ # Corresponds to the JSON property `strong`
+ # @return [Boolean]
+ attr_accessor :strong
+ alias_method :strong?, :strong
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @exact_staleness = args[:exact_staleness] if args.key?(:exact_staleness)
+ @max_staleness = args[:max_staleness] if args.key?(:max_staleness)
+ @min_read_timestamp = args[:min_read_timestamp] if args.key?(:min_read_timestamp)
+ @read_timestamp = args[:read_timestamp] if args.key?(:read_timestamp)
+ @return_read_timestamp = args[:return_read_timestamp] if args.key?(:return_read_timestamp)
+ @strong = args[:strong] if args.key?(:strong)
+ end
+ end
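The fields above are alternative timestamp bounds for a read-only transaction, and exactly one of them should be set per transaction. A small illustrative sketch (an assumption, not from this gem) of the three common bounds in their JSON shape; the staleness values shown are arbitrary examples:

```ruby
# Strong read: see all previously committed writes.
strong_read = { 'readOnly' => { 'strong' => true } }

# Bounded staleness: data at most 10 seconds old (single-use only).
bounded_stale = { 'readOnly' => { 'maxStaleness' => '10s' } }

# Exact staleness, asking the server to report the timestamp it picked.
exact_stale = { 'readOnly' => { 'exactStaleness'      => '15s',
                                'returnReadTimestamp' => true } }
```

Bounded and exact staleness trade freshness for the ability to serve the read from a nearby replica without timestamp negotiation.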
+
+ # The request for Read and StreamingRead.
+ class ReadRequest
+ include Google::Apis::Core::Hashable
+
+ # Required. The columns of table to be returned for each row matching this
+ # request.
+ # Corresponds to the JSON property `columns`
+ # @return [Array<String>]
+ attr_accessor :columns
+
+ # If non-empty, the name of an index on table. This index is used instead of the
+ # table primary key when interpreting key_set and sorting result rows. See
+ # key_set for further information.
+ # Corresponds to the JSON property `index`
+ # @return [String]
+ attr_accessor :index
+
+ # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All the
+ # keys are expected to be in the same table or index. The keys need not be
+ # sorted in any particular way. If the same key is specified multiple times in
+ # the set (for example if two ranges, two keys, or a key and a range overlap),
+ # Cloud Spanner behaves as if the key were only specified once.
+ # Corresponds to the JSON property `keySet`
+ # @return [Google::Apis::SpannerV1::KeySet]
+ attr_accessor :key_set
+
+ # If greater than zero, only the first `limit` rows are yielded. If `limit` is
+ # zero, the default is no limit. A limit cannot be specified if `partition_token`
+ # is set.
+ # Corresponds to the JSON property `limit`
+ # @return [Fixnum]
+ attr_accessor :limit
+
+ # If present, results will be restricted to the specified partition previously
+ # created using PartitionRead(). There must be an exact match for the values of
+ # fields common to this message and the PartitionReadRequest message used to
+ # create this partition_token.
+ # Corresponds to the JSON property `partitionToken`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :partition_token
+
+ # If this request is resuming a previously interrupted read, `resume_token`
+ # should be copied from the last PartialResultSet yielded before the
+ # interruption. Doing this enables the new read to resume where the last read
+ # left off. The rest of the request parameters must exactly match the request
+ # that yielded this token.
+ # Corresponds to the JSON property `resumeToken`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :resume_token
+
+ # Required. The name of the table in the database to be read.
+ # Corresponds to the JSON property `table`
+ # @return [String]
+ attr_accessor :table
+
+ # This message is used to select the transaction in which a Read or ExecuteSql
+ # call runs. See TransactionOptions for more information about transactions.
+ # Corresponds to the JSON property `transaction`
+ # @return [Google::Apis::SpannerV1::TransactionSelector]
+ attr_accessor :transaction
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @columns = args[:columns] if args.key?(:columns)
+ @index = args[:index] if args.key?(:index)
+ @key_set = args[:key_set] if args.key?(:key_set)
+ @limit = args[:limit] if args.key?(:limit)
+ @partition_token = args[:partition_token] if args.key?(:partition_token)
+ @resume_token = args[:resume_token] if args.key?(:resume_token)
+ @table = args[:table] if args.key?(:table)
+ @transaction = args[:transaction] if args.key?(:transaction)
+ end
+ end
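For orientation, a hedged sketch of a read request in its JSON shape, reading two columns from a hypothetical `Singers` table by primary key. The table and column names are invented for illustration; with the client library you would construct a `Google::Apis::SpannerV1::ReadRequest` with the same fields.

```ruby
# Required fields: table, columns, keySet. limit is optional and cannot
# be combined with partitionToken.
read_request = {
  'table'   => 'Singers',
  'columns' => ['SingerId', 'SingerName'],
  # Each entry in 'keys' is one primary-key value list.
  'keySet'  => { 'keys' => [[1], [2]] },
  'limit'   => 100
}
```

On a stream interruption, the same request would be resent with a `resumeToken` copied from the last PartialResultSet, with every other field unchanged.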
+
+ # Message type to initiate a read-write transaction. Currently this transaction
+ # type has no options.
+ class ReadWrite
+ include Google::Apis::Core::Hashable
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ end
+ end
+
+ #
+ class ReplicaInfo
+ include Google::Apis::Core::Hashable
+
+ # If true, this location is designated as the default leader location where
+ # leader replicas are placed. See the [region types documentation](https://cloud.
+ # google.com/spanner/docs/instances#region_types) for more details.
+ # Corresponds to the JSON property `defaultLeaderLocation`
+ # @return [Boolean]
+ attr_accessor :default_leader_location
+ alias_method :default_leader_location?, :default_leader_location
+
+ # The location of the serving resources, e.g. "us-central1".
+ # Corresponds to the JSON property `location`
+ # @return [String]
+ attr_accessor :location
+
+ # The type of replica.
+ # Corresponds to the JSON property `type`
+ # @return [String]
+ attr_accessor :type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @default_leader_location = args[:default_leader_location] if args.key?(:default_leader_location)
+ @location = args[:location] if args.key?(:location)
+ @type = args[:type] if args.key?(:type)
+ end
+ end
+
+ # Metadata type for the long-running operation returned by RestoreDatabase.
+ class RestoreDatabaseMetadata
+ include Google::Apis::Core::Hashable
+
+ # Information about a backup.
+ # Corresponds to the JSON property `backupInfo`
+ # @return [Google::Apis::SpannerV1::BackupInfo]
+ attr_accessor :backup_info
+
+ # The time at which cancellation of this operation was received. Operations.
+ # CancelOperation starts asynchronous cancellation on a long-running operation.
+ # The server makes a best effort to cancel the operation, but success is not
+ # guaranteed. Clients can use Operations.GetOperation or other methods to check
+ # whether the cancellation succeeded or whether the operation completed despite
+ # cancellation. On successful cancellation, the operation is not deleted;
+ # instead, it becomes an operation with an Operation.error value with a google.
+ # rpc.Status.code of 1, corresponding to `Code.CANCELLED`.
+ # Corresponds to the JSON property `cancelTime`
+ # @return [String]
+ attr_accessor :cancel_time
+
+ # Name of the database being created and restored to.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # If exists, the name of the long-running operation that will be used to track
+ # the post-restore optimization process to optimize the performance of the
+ # restored database, and remove the dependency on the restore source. The name
+ # is of the form `projects//instances//databases//operations/` where the is the
+ # name of database being created and restored to. The metadata type of the long-
+ # running operation is OptimizeRestoredDatabaseMetadata. This long-running
+ # operation will be automatically created by the system after the
+ # RestoreDatabase long-running operation completes successfully. This operation
+ # will not be created if the restore was not successful.
+ # Corresponds to the JSON property `optimizeDatabaseOperationName`
+ # @return [String]
+ attr_accessor :optimize_database_operation_name
+
+ # Encapsulates progress related information for a Cloud Spanner long running
+ # operation.
+ # Corresponds to the JSON property `progress`
+ # @return [Google::Apis::SpannerV1::OperationProgress]
+ attr_accessor :progress
+
+ # The type of the restore source.
+ # Corresponds to the JSON property `sourceType`
+ # @return [String]
+ attr_accessor :source_type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @backup_info = args[:backup_info] if args.key?(:backup_info)
+ @cancel_time = args[:cancel_time] if args.key?(:cancel_time)
+ @name = args[:name] if args.key?(:name)
+ @optimize_database_operation_name = args[:optimize_database_operation_name] if args.key?(:optimize_database_operation_name)
+ @progress = args[:progress] if args.key?(:progress)
+ @source_type = args[:source_type] if args.key?(:source_type)
+ end
+ end
+
+ # The request for RestoreDatabase.
+ class RestoreDatabaseRequest
+ include Google::Apis::Core::Hashable
+
+ # Name of the backup from which to restore. Values are of the form `projects//
+ # instances//backups/`.
+ # Corresponds to the JSON property `backup`
+ # @return [String]
+ attr_accessor :backup
+
+ # Required. The id of the database to create and restore to. This database must
+ # not already exist. The `database_id` appended to `parent` forms the full
+ # database name of the form `projects//instances//databases/`.
+ # Corresponds to the JSON property `databaseId`
+ # @return [String]
+ attr_accessor :database_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @backup = args[:backup] if args.key?(:backup)
+ @database_id = args[:database_id] if args.key?(:database_id)
+ end
+ end
+
+ # Information about the database restore.
+ class RestoreInfo
+ include Google::Apis::Core::Hashable
+
+ # Information about a backup.
+ # Corresponds to the JSON property `backupInfo`
+ # @return [Google::Apis::SpannerV1::BackupInfo]
+ attr_accessor :backup_info
+
+ # The type of the restore source.
+ # Corresponds to the JSON property `sourceType`
+ # @return [String]
+ attr_accessor :source_type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @backup_info = args[:backup_info] if args.key?(:backup_info)
+ @source_type = args[:source_type] if args.key?(:source_type)
+ end
+ end
+
+ # Results from Read or ExecuteSql.
+ class ResultSet
+ include Google::Apis::Core::Hashable
+
+ # Metadata about a ResultSet or PartialResultSet.
+ # Corresponds to the JSON property `metadata`
+ # @return [Google::Apis::SpannerV1::ResultSetMetadata]
+ attr_accessor :metadata
+
+ # Each element in `rows` is a row whose format is defined by metadata.row_type.
+ # The ith element in each row matches the ith field in metadata.row_type.
+ # Elements are encoded based on type as described here.
+ # Corresponds to the JSON property `rows`
+ # @return [Array<Array<Object>>]
+ attr_accessor :rows
+
+ # Additional statistics about a ResultSet or PartialResultSet.
+ # Corresponds to the JSON property `stats`
+ # @return [Google::Apis::SpannerV1::ResultSetStats]
+ attr_accessor :stats
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @metadata = args[:metadata] if args.key?(:metadata)
+ @rows = args[:rows] if args.key?(:rows)
+ @stats = args[:stats] if args.key?(:stats)
+ end
+ end
+
+ # Metadata about a ResultSet or PartialResultSet.
+ class ResultSetMetadata
+ include Google::Apis::Core::Hashable
+
+ # `StructType` defines the fields of a STRUCT type.
+ # Corresponds to the JSON property `rowType`
+ # @return [Google::Apis::SpannerV1::StructType]
+ attr_accessor :row_type
+
+ # A transaction.
+ # Corresponds to the JSON property `transaction`
+ # @return [Google::Apis::SpannerV1::Transaction]
+ attr_accessor :transaction
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @row_type = args[:row_type] if args.key?(:row_type)
+ @transaction = args[:transaction] if args.key?(:transaction)
+ end
+ end
+
2755
+ # Additional statistics about a ResultSet or PartialResultSet.
2756
+ class ResultSetStats
2757
+ include Google::Apis::Core::Hashable
2758
+
2759
+ # Contains an ordered list of nodes appearing in the query plan.
2760
+ # Corresponds to the JSON property `queryPlan`
2761
+ # @return [Google::Apis::SpannerV1::QueryPlan]
2762
+ attr_accessor :query_plan
2763
+
2764
+ # Aggregated statistics from the execution of the query. Only present when the
2765
+ # query is profiled. For example, a query could return the statistics as follows:
2766
+ # ` "rows_returned": "3", "elapsed_time": "1.22 secs", "cpu_time": "1.19 secs" `
2767
+ # Corresponds to the JSON property `queryStats`
2768
+ # @return [Hash<String,Object>]
2769
+ attr_accessor :query_stats
2770
+
2771
+ # Standard DML returns an exact count of rows that were modified.
2772
+ # Corresponds to the JSON property `rowCountExact`
2773
+ # @return [Fixnum]
2774
+ attr_accessor :row_count_exact
2775
+
2776
+ # Partitioned DML does not offer exactly-once semantics, so it returns a lower
2777
+ # bound of the rows modified.
2778
+ # Corresponds to the JSON property `rowCountLowerBound`
2779
+ # @return [Fixnum]
2780
+ attr_accessor :row_count_lower_bound
2781
+
2782
+ def initialize(**args)
2783
+ update!(**args)
2784
+ end
2785
+
2786
+ # Update properties of this object
2787
+ def update!(**args)
2788
+ @query_plan = args[:query_plan] if args.key?(:query_plan)
2789
+ @query_stats = args[:query_stats] if args.key?(:query_stats)
2790
+ @row_count_exact = args[:row_count_exact] if args.key?(:row_count_exact)
2791
+ @row_count_lower_bound = args[:row_count_lower_bound] if args.key?(:row_count_lower_bound)
2792
+ end
2793
+ end
2794
+
2795
+ # The request for Rollback.
2796
+ class RollbackRequest
2797
+ include Google::Apis::Core::Hashable
2798
+
2799
+ # Required. The transaction to roll back.
2800
+ # Corresponds to the JSON property `transactionId`
2801
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
2802
+ # @return [String]
2803
+ attr_accessor :transaction_id
2804
+
2805
+ def initialize(**args)
2806
+ update!(**args)
2807
+ end
2808
+
2809
+ # Update properties of this object
2810
+ def update!(**args)
2811
+ @transaction_id = args[:transaction_id] if args.key?(:transaction_id)
2812
+ end
2813
+ end
2814
+
2815
+ # A session in the Cloud Spanner API.
2816
+ class Session
2817
+ include Google::Apis::Core::Hashable
2818
+
2819
+ # Output only. The approximate timestamp when the session is last used. It is
2820
+ # typically earlier than the actual last use time.
2821
+ # Corresponds to the JSON property `approximateLastUseTime`
2822
+ # @return [String]
2823
+ attr_accessor :approximate_last_use_time
2824
+
2825
+ # Output only. The timestamp when the session is created.
2826
+ # Corresponds to the JSON property `createTime`
2827
+ # @return [String]
2828
+ attr_accessor :create_time
2829
+
2830
+ # The labels for the session. * Label keys must be between 1 and 63 characters
2831
+ # long and must conform to the following regular expression: `[a-z]([-a-z0-9]*[a-
2832
+ # z0-9])?`. * Label values must be between 0 and 63 characters long and must
2833
+ # conform to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`. * No more
2834
+ # than 64 labels can be associated with a given session. See https://goo.gl/
2835
+ # xmQnxf for more information on and examples of labels.
2836
+ # Corresponds to the JSON property `labels`
2837
+ # @return [Hash<String,String>]
2838
+ attr_accessor :labels
2839
+
2840
+ # Output only. The name of the session. This is always system-assigned.
2841
+ # Corresponds to the JSON property `name`
2842
+ # @return [String]
2843
+ attr_accessor :name
2844
+
2845
+ def initialize(**args)
2846
+ update!(**args)
2847
+ end
2848
+
2849
+ # Update properties of this object
2850
+ def update!(**args)
2851
+ @approximate_last_use_time = args[:approximate_last_use_time] if args.key?(:approximate_last_use_time)
2852
+ @create_time = args[:create_time] if args.key?(:create_time)
2853
+ @labels = args[:labels] if args.key?(:labels)
2854
+ @name = args[:name] if args.key?(:name)
2855
+ end
2856
+ end
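The label constraints quoted in the `labels` comment (key/value regular expressions, 63-character limits, at most 64 labels) can be checked client-side before creating a session. A sketch, not part of the generated library:

```ruby
# Regexes taken verbatim from the Session#labels documentation above.
KEY_RE   = /\A[a-z]([-a-z0-9]*[a-z0-9])?\z/
VALUE_RE = /\A([a-z]([-a-z0-9]*[a-z0-9])?)?\z/

def valid_session_labels?(labels)
  return false if labels.size > 64            # at most 64 labels per session
  labels.all? do |key, value|
    key.length.between?(1, 63) && key.match?(KEY_RE) &&
      value.length <= 63 && value.match?(VALUE_RE)
  end
end
```

Keys must be non-empty and start with a lowercase letter; values may be empty.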
2857
+
2858
+ # Request message for `SetIamPolicy` method.
2859
+ class SetIamPolicyRequest
2860
+ include Google::Apis::Core::Hashable
2861
+
2862
+ # An Identity and Access Management (IAM) policy, which specifies access
2863
+ # controls for Google Cloud resources. A `Policy` is a collection of `bindings`.
2864
+ # A `binding` binds one or more `members` to a single `role`. Members can be
2865
+ # user accounts, service accounts, Google groups, and domains (such as G Suite).
2866
+ # A `role` is a named list of permissions; each `role` can be an IAM predefined
2867
+ # role or a user-created custom role. For some types of Google Cloud resources,
2868
+ # a `binding` can also specify a `condition`, which is a logical expression that
2869
+ # allows access to a resource only if the expression evaluates to `true`. A
2870
+ # condition can add constraints based on attributes of the request, the resource,
2871
+ # or both. To learn which resources support conditions in their IAM policies,
2872
+ # see the [IAM documentation](https://cloud.google.com/iam/help/conditions/
2873
+ # resource-policies). **JSON example:** ` "bindings": [ ` "role": "roles/
2874
+ # resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "
2875
+ # group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@
2876
+ # appspot.gserviceaccount.com" ] `, ` "role": "roles/resourcemanager.
2877
+ # organizationViewer", "members": [ "user:eve@example.com" ], "condition": ` "
2878
+ # title": "expirable access", "description": "Does not grant access after Sep
2879
+ # 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", `
2880
+ # ` ], "etag": "BwWWja0YfJA=", "version": 3 ` **YAML example:** bindings: -
2881
+ # members: - user:mike@example.com - group:admins@example.com - domain:google.
2882
+ # com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/
2883
+ # resourcemanager.organizationAdmin - members: - user:eve@example.com role:
2884
+ # roles/resourcemanager.organizationViewer condition: title: expirable access
2885
+ # description: Does not grant access after Sep 2020 expression: request.time <
2886
+ # timestamp('2020-10-01T00:00:00.000Z') - etag: BwWWja0YfJA= - version: 3 For a
2887
+ # description of IAM and its features, see the [IAM documentation](https://cloud.
2888
+ # google.com/iam/docs/).
2889
+ # Corresponds to the JSON property `policy`
2890
+ # @return [Google::Apis::SpannerV1::Policy]
2891
+ attr_accessor :policy
2892
+
2893
+ def initialize(**args)
2894
+ update!(**args)
2895
+ end
2896
+
2897
+ # Update properties of this object
2898
+ def update!(**args)
2899
+ @policy = args[:policy] if args.key?(:policy)
2900
+ end
2901
+ end
2902
+
2903
+ # Condensed representation of a node and its subtree. Only present for `SCALAR`
2904
+ # PlanNode(s).
2905
+ class ShortRepresentation
2906
+ include Google::Apis::Core::Hashable
2907
+
2908
+ # A string representation of the expression subtree rooted at this node.
2909
+ # Corresponds to the JSON property `description`
2910
+ # @return [String]
2911
+ attr_accessor :description
2912
+
2913
+ # A mapping of (subquery variable name) -> (subquery node id) for cases where
2914
+ # the `description` string of this node references a `SCALAR` subquery contained
2915
+ # in the expression subtree rooted at this node. The referenced `SCALAR`
2916
+ # subquery may not necessarily be a direct child of this node.
2917
+ # Corresponds to the JSON property `subqueries`
2918
+ # @return [Hash<String,Fixnum>]
2919
+ attr_accessor :subqueries
2920
+
2921
+ def initialize(**args)
2922
+ update!(**args)
2923
+ end
2924
+
2925
+ # Update properties of this object
2926
+ def update!(**args)
2927
+ @description = args[:description] if args.key?(:description)
2928
+ @subqueries = args[:subqueries] if args.key?(:subqueries)
2929
+ end
2930
+ end
2931
+
2932
+ # A single DML statement.
2933
+ class Statement
2934
+ include Google::Apis::Core::Hashable
2935
+
2936
+ # It is not always possible for Cloud Spanner to infer the right SQL type from a
2937
+ # JSON value. For example, values of type `BYTES` and values of type `STRING`
2938
+ # both appear in params as JSON strings. In these cases, `param_types` can be
2939
+ # used to specify the exact SQL type for some or all of the SQL statement
2940
+ # parameters. See the definition of Type for more information about SQL types.
2941
+ # Corresponds to the JSON property `paramTypes`
2942
+ # @return [Hash<String,Google::Apis::SpannerV1::Type>]
2943
+ attr_accessor :param_types
2944
+
2945
+ # Parameter names and values that bind to placeholders in the DML string. A
2946
+ # parameter placeholder consists of the `@` character followed by the parameter
2947
+ # name (for example, `@firstName`). Parameter names can contain letters, numbers,
2948
+ # and underscores. Parameters can appear anywhere that a literal value is
2949
+ # expected. The same parameter name can be used more than once, for example: `"
2950
+ # WHERE id > @msg_id AND id < @msg_id + 100"` It is an error to execute a SQL
2951
+ # statement with unbound parameters.
2952
+ # Corresponds to the JSON property `params`
2953
+ # @return [Hash<String,Object>]
2954
+ attr_accessor :params
2955
+
2956
+ # Required. The DML string.
2957
+ # Corresponds to the JSON property `sql`
2958
+ # @return [String]
2959
+ attr_accessor :sql
2960
+
2961
+ def initialize(**args)
2962
+ update!(**args)
2963
+ end
2964
+
2965
+ # Update properties of this object
2966
+ def update!(**args)
2967
+ @param_types = args[:param_types] if args.key?(:param_types)
2968
+ @params = args[:params] if args.key?(:params)
2969
+ @sql = args[:sql] if args.key?(:sql)
2970
+ end
2971
+ end
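A sketch of how a `Statement`'s three fields fit together for a parameterized DML string, plus a naive check that every placeholder is bound (the table and parameter names here are invented for illustration):

```ruby
sql = "UPDATE Singers SET MarketingBudget = @budget WHERE SingerId = @id"

# Values that bind to the @placeholders in the DML string.
params = { "budget" => 300_000, "id" => 1 }

# BYTES and STRING (and large INT64) values all arrive as JSON strings,
# so param_types can pin the exact SQL type where inference is ambiguous.
param_types = { "budget" => "INT64", "id" => "INT64" }

# Naive client-side check: executing with unbound parameters is an error.
placeholders = sql.scan(/@(\w+)/).flatten.uniq
unbound = placeholders - params.keys
```

The real service performs this validation server-side; the scan above only illustrates the placeholder syntax described in the comment.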
2972
+
2973
+ # The `Status` type defines a logical error model that is suitable for different
2974
+ # programming environments, including REST APIs and RPC APIs. It is used by [
2975
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
2976
+ # data: error code, error message, and error details. You can find out more
2977
+ # about this error model and how to work with it in the [API Design Guide](https:
2978
+ # //cloud.google.com/apis/design/errors).
2979
+ class Status
2980
+ include Google::Apis::Core::Hashable
2981
+
2982
+ # The status code, which should be an enum value of google.rpc.Code.
2983
+ # Corresponds to the JSON property `code`
2984
+ # @return [Fixnum]
2985
+ attr_accessor :code
2986
+
2987
+ # A list of messages that carry the error details. There is a common set of
2988
+ # message types for APIs to use.
2989
+ # Corresponds to the JSON property `details`
2990
+ # @return [Array<Hash<String,Object>>]
2991
+ attr_accessor :details
2992
+
2993
+ # A developer-facing error message, which should be in English. Any user-facing
2994
+ # error message should be localized and sent in the google.rpc.Status.details
2995
+ # field, or localized by the client.
2996
+ # Corresponds to the JSON property `message`
2997
+ # @return [String]
2998
+ attr_accessor :message
2999
+
3000
+ def initialize(**args)
3001
+ update!(**args)
3002
+ end
3003
+
3004
+ # Update properties of this object
3005
+ def update!(**args)
3006
+ @code = args[:code] if args.key?(:code)
3007
+ @details = args[:details] if args.key?(:details)
3008
+ @message = args[:message] if args.key?(:message)
3009
+ end
3010
+ end
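Since `Status#code` is a numeric google.rpc.Code value, callers often map it back to a name before deciding how to react. The code table below is an assumption drawn from google.rpc.Code (only a few common codes shown), not from this file:

```ruby
# Subset of google.rpc.Code numeric values (assumed from the gRPC spec).
RPC_CODES = {
  0 => "OK", 5 => "NOT_FOUND", 9 => "FAILED_PRECONDITION", 10 => "ABORTED"
}.freeze

# A Status as it might appear after JSON parsing.
status = { "code" => 10, "message" => "Transaction aborted.", "details" => [] }

code_name = RPC_CODES.fetch(status["code"], "UNKNOWN_CODE")
retryable = code_name == "ABORTED" # ABORTED commits are safe to retry
```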
3011
+
3012
+ # `StructType` defines the fields of a STRUCT type.
3013
+ class StructType
3014
+ include Google::Apis::Core::Hashable
3015
+
3016
+ # The list of fields that make up this struct. Order is significant, because
3017
+ # values of this struct type are represented as lists, where the order of field
3018
+ # values matches the order of fields in the StructType. In turn, the order of
3019
+ # fields matches the order of columns in a read request, or the order of fields
3020
+ # in the `SELECT` clause of a query.
3021
+ # Corresponds to the JSON property `fields`
3022
+ # @return [Array<Google::Apis::SpannerV1::Field>]
3023
+ attr_accessor :fields
3024
+
3025
+ def initialize(**args)
3026
+ update!(**args)
3027
+ end
3028
+
3029
+ # Update properties of this object
3030
+ def update!(**args)
3031
+ @fields = args[:fields] if args.key?(:fields)
3032
+ end
3033
+ end
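Because struct values are represented as lists whose order matches `fields`, recovering a name-to-value mapping is a positional zip. The field names and row below are made up:

```ruby
# Field names in the order they appeared in the SELECT clause (hypothetical).
field_names = ["SingerId", "FirstName", "LastName"]

# A struct value: a list ordered exactly like the StructType fields.
row = [1, "Marc", "Richards"]

# Pair each field name with its value by position.
record = field_names.zip(row).to_h
```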
3034
+
3035
+ # Request message for `TestIamPermissions` method.
3036
+ class TestIamPermissionsRequest
3037
+ include Google::Apis::Core::Hashable
3038
+
3039
+ # REQUIRED: The set of permissions to check for 'resource'. Permissions with
3040
+ # wildcards (such as '*', 'spanner.*', 'spanner.instances.*') are not allowed.
3041
+ # Corresponds to the JSON property `permissions`
3042
+ # @return [Array<String>]
3043
+ attr_accessor :permissions
3044
+
3045
+ def initialize(**args)
3046
+ update!(**args)
3047
+ end
3048
+
3049
+ # Update properties of this object
3050
+ def update!(**args)
3051
+ @permissions = args[:permissions] if args.key?(:permissions)
3052
+ end
3053
+ end
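The comment above forbids wildcard permissions in a `TestIamPermissions` request. A trivial client-side guard, sketched here rather than taken from the library:

```ruby
# Wildcards such as '*', 'spanner.*', 'spanner.instances.*' are rejected
# by the service, so filter them out before building the request.
def wildcard_permission?(perm)
  perm.include?("*")
end

requested = ["spanner.databases.read", "spanner.databases.write"]
rejected  = ["spanner.*", "*"]
```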
3054
+
3055
+ # Response message for `TestIamPermissions` method.
3056
+ class TestIamPermissionsResponse
3057
+ include Google::Apis::Core::Hashable
3058
+
3059
+ # A subset of `TestPermissionsRequest.permissions` that the caller is allowed.
3060
+ # Corresponds to the JSON property `permissions`
3061
+ # @return [Array<String>]
3062
+ attr_accessor :permissions
3063
+
3064
+ def initialize(**args)
3065
+ update!(**args)
3066
+ end
3067
+
3068
+ # Update properties of this object
3069
+ def update!(**args)
3070
+ @permissions = args[:permissions] if args.key?(:permissions)
3071
+ end
3072
+ end
3073
+
3074
+ # A transaction.
3075
+ class Transaction
3076
+ include Google::Apis::Core::Hashable
3077
+
3078
+ # `id` may be used to identify the transaction in subsequent Read, ExecuteSql,
3079
+ # Commit, or Rollback calls. Single-use read-only transactions do not have IDs,
3080
+ # because single-use transactions do not support multiple requests.
3081
+ # Corresponds to the JSON property `id`
3082
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
3083
+ # @return [String]
3084
+ attr_accessor :id
3085
+
3086
+ # For snapshot read-only transactions, the read timestamp chosen for the
3087
+ # transaction. Not returned by default: see TransactionOptions.ReadOnly.
3088
+ # return_read_timestamp. A timestamp in RFC3339 UTC \"Zulu\" format, accurate to
3089
+ # nanoseconds. Example: `"2014-10-02T15:01:23.045123456Z"`.
3090
+ # Corresponds to the JSON property `readTimestamp`
3091
+ # @return [String]
3092
+ attr_accessor :read_timestamp
3093
+
3094
+ def initialize(**args)
3095
+ update!(**args)
3096
+ end
3097
+
3098
+ # Update properties of this object
3099
+ def update!(**args)
3100
+ @id = args[:id] if args.key?(:id)
3101
+ @read_timestamp = args[:read_timestamp] if args.key?(:read_timestamp)
3102
+ end
3103
+ end
3104
+
3105
+ # # Transactions Each session can have at most one active transaction at a time (
3106
+ # note that standalone reads and queries use a transaction internally and do
3107
+ # count towards the one transaction limit). After the active transaction is
3108
+ # completed, the session can immediately be re-used for the next transaction. It
3109
+ # is not necessary to create a new session for each transaction. # Transaction
3110
+ # Modes Cloud Spanner supports three transaction modes: 1. Locking read-write.
3111
+ # This type of transaction is the only way to write data into Cloud Spanner.
3112
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
3113
+ # commit. Locking read-write transactions may abort, requiring the application
3114
+ # to retry. 2. Snapshot read-only. This transaction type provides guaranteed
3115
+ # consistency across several reads, but does not allow writes. Snapshot read-
3116
+ # only transactions can be configured to read at timestamps in the past.
3117
+ # Snapshot read-only transactions do not need to be committed. 3. Partitioned
3118
+ # DML. This type of transaction is used to execute a single Partitioned DML
3119
+ # statement. Partitioned DML partitions the key space and runs the DML statement
3120
+ # over each partition in parallel using separate, internal transactions that
3121
+ # commit independently. Partitioned DML transactions do not need to be committed.
3122
+ # For transactions that only read, snapshot read-only transactions provide
3123
+ # simpler semantics and are almost always faster. In particular, read-only
3124
+ # transactions do not take locks, so they do not conflict with read-write
3125
+ # transactions. As a consequence of not taking locks, they also do not abort, so
3126
+ # retry loops are not needed. Transactions may only read/write data in a single
3127
+ # database. They may, however, read/write data in different tables within that
3128
+ # database. ## Locking Read-Write Transactions Locking transactions may be used
3129
+ # to atomically read-modify-write data anywhere in a database. This type of
3130
+ # transaction is externally consistent. Clients should attempt to minimize the
3131
+ # amount of time a transaction is active. Faster transactions commit with higher
3132
+ # probability and cause less contention. Cloud Spanner attempts to keep read
3133
+ # locks active as long as the transaction continues to do reads, and the
3134
+ # transaction has not been terminated by Commit or Rollback. Long periods of
3135
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
3136
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
3137
+ # more reads or SQL statements followed by Commit. At any time before Commit,
3138
+ # the client can send a Rollback request to abort the transaction. ### Semantics
3139
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
3140
+ # still valid at commit time, and it is able to acquire write locks for all
3141
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
3142
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
3143
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
3144
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
3145
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
3146
+ # exclusion other than between Cloud Spanner transactions themselves. ###
3147
+ # Retrying Aborted Transactions When a transaction aborts, the application can
3148
+ # choose to retry the whole transaction again. To maximize the chances of
3149
+ # successfully committing the retry, the client should execute the retry in the
3150
+ # same session as the original attempt. The original session's lock priority
3151
+ # increases with each consecutive abort, meaning that each attempt has a
3152
+ # slightly better chance of success than the previous. Under some circumstances (
3153
+ # e.g., many transactions attempting to modify the same row(s)), a transaction
3154
+ # can abort many times in a short period before successfully committing. Thus,
3155
+ # it is not a good idea to cap the number of retries a transaction can attempt;
3156
+ # instead, it is better to limit the total amount of wall time spent retrying. ##
3157
+ # # Idle Transactions A transaction is considered idle if it has no outstanding
3158
+ # reads or SQL queries and has not started a read or SQL query within the last
3159
+ # 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don'
3160
+ # t hold on to locks indefinitely. In that case, the commit will fail with error
3161
+ # `ABORTED`. If this behavior is undesirable, periodically executing a simple
3162
+ # SQL query in the transaction (e.g., `SELECT 1`) prevents the transaction from
3163
+ # becoming idle. ## Snapshot Read-Only Transactions Snapshot read-only
3164
+ # transactions provide a simpler method than locking read-write transactions
3165
+ # for doing several consistent reads. However, this type of transaction does not
3166
+ # support writes. Snapshot transactions do not take locks. Instead, they work by
3167
+ # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
3168
+ # Since they do not acquire locks, they do not block concurrent read-write
3169
+ # transactions. Unlike locking read-write transactions, snapshot read-only
3170
+ # transactions never abort. They can fail if the chosen read timestamp is
3171
+ # garbage collected; however, the default garbage collection policy is generous
3172
+ # enough that most applications do not need to worry about this in practice.
3173
+ # Snapshot read-only transactions do not need to call Commit or Rollback (and in
3174
+ # fact are not permitted to do so). To execute a snapshot transaction, the
3175
+ # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
3176
+ # read timestamp. The types of timestamp bound are: - Strong (the default). -
3177
+ # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
3178
+ # is geographically distributed, stale read-only transactions can execute more
3179
+ # quickly than strong or read-write transactions, because they are able to
3180
+ # execute far from the leader replica. Each type of timestamp bound is discussed
3181
+ # in detail below. ### Strong Strong reads are guaranteed to see the effects of
3182
+ # all transactions that have committed before the start of the read. Furthermore,
3183
+ # all rows yielded by a single read are consistent with each other -- if any
3184
+ # part of the read observes a transaction, all parts of the read see the
3185
+ # transaction. Strong reads are not repeatable: two consecutive strong read-only
3186
+ # transactions might return inconsistent results if there are concurrent writes.
3187
+ # If consistency across reads is required, the reads should be executed within a
3188
+ # transaction or at an exact read timestamp. See TransactionOptions.ReadOnly.
3189
+ # strong. ### Exact Staleness These timestamp bounds execute reads at a user-
3190
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
3191
+ # prefix of the global transaction history: they observe modifications done by
3192
+ # all transactions with a commit timestamp <= the read timestamp, and observe
3193
+ # none of the modifications done by transactions with a larger commit timestamp.
3194
+ # They will block until all conflicting transactions that may be assigned commit
3195
+ # timestamps <= the read timestamp have finished. The timestamp can either be
3196
+ # expressed as an absolute Cloud Spanner commit timestamp or a staleness
3197
+ # relative to the current time. These modes do not require a "negotiation phase"
3198
+ # to pick a timestamp. As a result, they execute slightly faster than the
3199
+ # equivalent boundedly stale concurrency modes. On the other hand, boundedly
3200
+ # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
3201
+ # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. ### Bounded
3202
+ # Staleness Bounded staleness modes allow Cloud Spanner to pick the read
3203
+ # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
3204
+ # the newest timestamp within the staleness bound that allows execution of the
3205
+ # reads at the closest available replica without blocking. All rows yielded are
3206
+ # consistent with each other -- if any part of the read observes a transaction,
3207
+ # all parts of the read see the transaction. Boundedly stale reads are not
3208
+ # repeatable: two stale reads, even if they use the same staleness bound, can
3209
+ # execute at different timestamps and thus return inconsistent results.
3210
+ # Boundedly stale reads execute in two phases: the first phase negotiates a
3211
+ # timestamp among all replicas needed to serve the read. In the second phase,
3212
+ # reads are executed at the negotiated timestamp. As a result of the two phase
3213
+ # execution, bounded staleness reads are usually a little slower than comparable
3214
+ # exact staleness reads. However, they are typically able to return fresher
3215
+ # results, and are more likely to execute at the closest replica. Because the
3216
+ # timestamp negotiation requires up-front knowledge of which rows will be read,
3217
+ # it can only be used with single-use read-only transactions. See
3218
+ # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
3219
+ # min_read_timestamp. ### Old Read Timestamps and Garbage Collection Cloud
3220
+ # Spanner continuously garbage collects deleted and overwritten data in the
3221
+ # background to reclaim storage space. This process is known as "version GC". By
3222
+ # default, version GC reclaims versions after they are one hour old. Because of
3223
+ # this, Cloud Spanner cannot perform reads at read timestamps more than one hour
3224
+ # in the past. This restriction also applies to in-progress reads and/or SQL
3225
+ # queries whose timestamp become too old while executing. Reads and SQL queries
3226
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. ##
3227
+ # Partitioned DML Transactions Partitioned DML transactions are used to execute
3228
+ # DML statements with a different execution strategy that provides different,
3229
+ # and often better, scalability properties for large, table-wide operations than
3230
+ # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
3231
+ # workload, should prefer using ReadWrite transactions. Partitioned DML
3232
+ # partitions the keyspace and runs the DML statement on each partition in
3233
+ # separate, internal transactions. These transactions commit automatically when
3234
+ # complete, and run independently from one another. To reduce lock contention,
3235
+ # this execution strategy only acquires read locks on rows that match the WHERE
3236
+ # clause of the statement. Additionally, the smaller per-partition transactions
3237
+ # hold locks for less time. That said, Partitioned DML is not a drop-in
3238
+ # replacement for standard DML used in ReadWrite transactions. - The DML
3239
+ # statement must be fully-partitionable. Specifically, the statement must be
3240
+ # expressible as the union of many statements which each access only a single
3241
+ # row of the table. - The statement is not applied atomically to all rows of the
3242
+ # table. Rather, the statement is applied atomically to partitions of the table,
3243
+ # in independent transactions. Secondary index rows are updated atomically with
3244
+ # the base table rows. - Partitioned DML does not guarantee exactly-once
3245
+ # execution semantics against a partition. The statement will be applied at
3246
+ # least once to each partition. It is strongly recommended that the DML
3247
+ # statement be idempotent to avoid unexpected results. For instance, it
3248
+ # is potentially dangerous to run a statement such as `UPDATE table SET column =
3249
+ # column + 1` as it could be run multiple times against some rows. - The
3250
+ # partitions are committed automatically - there is no support for Commit or
3251
+ # Rollback. If the call returns an error, or if the client issuing the
3252
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
3253
+ # on them successfully. It is also possible that statement was never executed
3254
+ # against other rows. - Partitioned DML transactions may only contain the
3255
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
3256
+ # If any error is encountered during the execution of the partitioned DML
3257
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
3258
+ # value that cannot be stored due to schema constraints), then the operation is
3259
+ # stopped at that point and an error is returned. It is possible that at this
3260
+ # point, some partitions have been committed (or even committed multiple times),
3261
+ # and other partitions have not been run at all. Given the above, Partitioned
3262
+ # DML is a good fit for large, database-wide operations that are idempotent, such
3263
+ # as deleting old rows from a very large table.
3264
+ class TransactionOptions
3265
+ include Google::Apis::Core::Hashable
3266
+
3267
+ # Message type to initiate a Partitioned DML transaction.
3268
+ # Corresponds to the JSON property `partitionedDml`
3269
+ # @return [Google::Apis::SpannerV1::PartitionedDml]
3270
+ attr_accessor :partitioned_dml
3271
+
3272
+ # Message type to initiate a read-only transaction.
3273
+ # Corresponds to the JSON property `readOnly`
3274
+ # @return [Google::Apis::SpannerV1::ReadOnly]
3275
+ attr_accessor :read_only
3276
+
3277
+ # Message type to initiate a read-write transaction. Currently this transaction
3278
+ # type has no options.
3279
+ # Corresponds to the JSON property `readWrite`
3280
+ # @return [Google::Apis::SpannerV1::ReadWrite]
3281
+ attr_accessor :read_write
3282
+
3283
+ def initialize(**args)
3284
+ update!(**args)
3285
+ end
3286
+
3287
+ # Update properties of this object
3288
+ def update!(**args)
3289
+ @partitioned_dml = args[:partitioned_dml] if args.key?(:partitioned_dml)
3290
+ @read_only = args[:read_only] if args.key?(:read_only)
3291
+ @read_write = args[:read_write] if args.key?(:read_write)
3292
+ end
3293
+ end
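The Transactions documentation above recommends bounding the total wall time spent retrying an `ABORTED` read-write transaction rather than capping the attempt count. A schematic loop under that guidance, with the transaction work and abort detection stubbed out (the error class and timings are invented):

```ruby
# Hypothetical error standing in for an ABORTED commit result.
class TransactionAbortedError < StandardError; end

# Retry until success or until the wall-time budget is spent.
def run_with_retries(deadline_seconds = 30)
  start = Time.now
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue TransactionAbortedError
    raise if Time.now - start > deadline_seconds
    sleep(0.01 * attempts) # simple backoff; real clients add jitter
    retry                   # re-run in the same session for lock priority
  end
end

# Stubbed "transaction" that aborts twice, then commits.
result = run_with_retries do |attempt|
  raise TransactionAbortedError if attempt < 3
  :committed
end
```

Retrying in the same session matters because, per the comment above, the session's lock priority increases with each consecutive abort.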
3294
+
3295
+ # This message is used to select the transaction in which a Read or ExecuteSql
3296
+ # call runs. See TransactionOptions for more information about transactions.
3297
+ class TransactionSelector
3298
+ include Google::Apis::Core::Hashable
3299
+
3300
+ # # Transactions Each session can have at most one active transaction at a time (
3301
+ # note that standalone reads and queries use a transaction internally and do
3302
+ # count towards the one transaction limit). After the active transaction is
3303
+ # completed, the session can immediately be re-used for the next transaction. It
3304
+ # is not necessary to create a new session for each transaction. # Transaction
3305
+ # Modes Cloud Spanner supports three transaction modes: 1. Locking read-write.
3306
+ # This type of transaction is the only way to write data into Cloud Spanner.
3307
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
3308
+ # commit. Locking read-write transactions may abort, requiring the application
3309
+ # to retry. 2. Snapshot read-only. This transaction type provides guaranteed
3310
+ # consistency across several reads, but does not allow writes. Snapshot read-
3311
+ # only transactions can be configured to read at timestamps in the past.
3312
+ # Snapshot read-only transactions do not need to be committed. 3. Partitioned
3313
+ # DML. This type of transaction is used to execute a single Partitioned DML
3314
+ # statement. Partitioned DML partitions the key space and runs the DML statement
3315
+ # over each partition in parallel using separate, internal transactions that
3316
+ # commit independently. Partitioned DML transactions do not need to be committed.
3317
+ # For transactions that only read, snapshot read-only transactions provide
3318
+ # simpler semantics and are almost always faster. In particular, read-only
3319
+ # transactions do not take locks, so they do not conflict with read-write
3320
+ # transactions. As a consequence of not taking locks, they also do not abort, so
3321
+ # retry loops are not needed. Transactions may only read/write data in a single
3322
+ # database. They may, however, read/write data in different tables within that
3323
+ # database. ## Locking Read-Write Transactions Locking transactions may be used
3324
+ # to atomically read-modify-write data anywhere in a database. This type of
3325
+ # transaction is externally consistent. Clients should attempt to minimize the
3326
+ # amount of time a transaction is active. Faster transactions commit with higher
3327
+ # probability and cause less contention. Cloud Spanner attempts to keep read
3328
+ # locks active as long as the transaction continues to do reads, and the
3329
+ # transaction has not been terminated by Commit or Rollback. Long periods of
3330
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
3331
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
3332
+ # more reads or SQL statements followed by Commit. At any time before Commit,
3333
+ # the client can send a Rollback request to abort the transaction. ### Semantics
3334
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
3335
+ # still valid at commit time, and it is able to acquire write locks for all
3336
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
3337
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
3338
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
3339
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
3340
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
3341
+ # exclusion other than between Cloud Spanner transactions themselves. ###
3342
+ # Retrying Aborted Transactions When a transaction aborts, the application can
3343
+ # choose to retry the whole transaction again. To maximize the chances of
3344
+ # successfully committing the retry, the client should execute the retry in the
3345
+ # same session as the original attempt. The original session's lock priority
3346
+ # increases with each consecutive abort, meaning that each attempt has a
3347
+ # slightly better chance of success than the previous. Under some circumstances (
3348
+ # e.g., many transactions attempting to modify the same row(s)), a transaction
3349
+ # can abort many times in a short period before successfully committing. Thus,
3350
+ # it is not a good idea to cap the number of retries a transaction can attempt;
3351
+ # instead, it is better to limit the total amount of wall time spent retrying. ##
3352
+ # # Idle Transactions A transaction is considered idle if it has no outstanding
3353
+ # reads or SQL queries and has not started a read or SQL query within the last
3354
+ # 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don'
3355
+ # t hold on to locks indefinitely. In that case, the commit will fail with error
3356
+ # `ABORTED`. If this behavior is undesirable, periodically executing a simple
3357
+ # SQL query in the transaction (e.g., `SELECT 1`) prevents the transaction from
3358
+ # becoming idle. ## Snapshot Read-Only Transactions Snapshot read-only
3359
+ # transactions provide a simpler method than locking read-write transactions
3360
+ # for doing several consistent reads. However, this type of transaction does not
3361
+ # support writes. Snapshot transactions do not take locks. Instead, they work by
3362
+ # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
3363
+ # Since they do not acquire locks, they do not block concurrent read-write
3364
+ # transactions. Unlike locking read-write transactions, snapshot read-only
3365
+ # transactions never abort. They can fail if the chosen read timestamp is
3366
+ # garbage collected; however, the default garbage collection policy is generous
3367
+ # enough that most applications do not need to worry about this in practice.
3368
+ # Snapshot read-only transactions do not need to call Commit or Rollback (and in
3369
+ # fact are not permitted to do so). To execute a snapshot transaction, the
3370
+ # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
3371
+ # read timestamp. The types of timestamp bound are: - Strong (the default). -
3372
+ # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
3373
+ # is geographically distributed, stale read-only transactions can execute more
3374
+ # quickly than strong or read-write transaction, because they are able to
3375
+ # execute far from the leader replica. Each type of timestamp bound is discussed
3376
+ # in detail below. ### Strong Strong reads are guaranteed to see the effects of
3377
+ # all transactions that have committed before the start of the read. Furthermore,
3378
+ # all rows yielded by a single read are consistent with each other -- if any
3379
+ # part of the read observes a transaction, all parts of the read see the
3380
+ # transaction. Strong reads are not repeatable: two consecutive strong read-only
3381
+ # transactions might return inconsistent results if there are concurrent writes.
3382
+ # If consistency across reads is required, the reads should be executed within a
3383
+ # transaction or at an exact read timestamp. See TransactionOptions.ReadOnly.
3384
+ # strong. ### Exact Staleness These timestamp bounds execute reads at a user-
3385
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
3386
+ # prefix of the global transaction history: they observe modifications done by
3387
+ # all transactions with a commit timestamp <= the read timestamp, and observe
3388
+ # none of the modifications done by transactions with a larger commit timestamp.
3389
+ # They will block until all conflicting transactions that may be assigned commit
3390
+ # timestamps <= the read timestamp have finished. The timestamp can either be
3391
+ # expressed as an absolute Cloud Spanner commit timestamp or a staleness
3392
+ # relative to the current time. These modes do not require a "negotiation phase"
3393
+ # to pick a timestamp. As a result, they execute slightly faster than the
3394
+ # equivalent boundedly stale concurrency modes. On the other hand, boundedly
3395
+ # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
3396
+ # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. ### Bounded
3397
+ # Staleness Bounded staleness modes allow Cloud Spanner to pick the read
3398
+ # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
3399
+ # the newest timestamp within the staleness bound that allows execution of the
3400
+ # reads at the closest available replica without blocking. All rows yielded are
3401
+ # consistent with each other -- if any part of the read observes a transaction,
3402
+ # all parts of the read see the transaction. Boundedly stale reads are not
3403
+ # repeatable: two stale reads, even if they use the same staleness bound, can
3404
+ # execute at different timestamps and thus return inconsistent results.
3405
+ # Boundedly stale reads execute in two phases: the first phase negotiates a
3406
+ # timestamp among all replicas needed to serve the read. In the second phase,
3407
+ # reads are executed at the negotiated timestamp. As a result of the two phase
3408
+ # execution, bounded staleness reads are usually a little slower than comparable
3409
+ # exact staleness reads. However, they are typically able to return fresher
3410
+ # results, and are more likely to execute at the closest replica. Because the
3411
+ # timestamp negotiation requires up-front knowledge of which rows will be read,
3412
+ # it can only be used with single-use read-only transactions. See
3413
+ # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
3414
+ # min_read_timestamp. ### Old Read Timestamps and Garbage Collection Cloud
3415
+ # Spanner continuously garbage collects deleted and overwritten data in the
3416
+ # background to reclaim storage space. This process is known as "version GC". By
3417
+ # default, version GC reclaims versions after they are one hour old. Because of
3418
+ # this, Cloud Spanner cannot perform reads at read timestamps more than one hour
3419
+ # in the past. This restriction also applies to in-progress reads and/or SQL
3420
+ # queries whose timestamp become too old while executing. Reads and SQL queries
3421
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. ##
3422
+ # Partitioned DML Transactions Partitioned DML transactions are used to execute
3423
+ # DML statements with a different execution strategy that provides different,
3424
+ # and often better, scalability properties for large, table-wide operations than
3425
+ # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
3426
+ # workload, should prefer using ReadWrite transactions. Partitioned DML
3427
+ # partitions the keyspace and runs the DML statement on each partition in
3428
+ # separate, internal transactions. These transactions commit automatically when
3429
+ # complete, and run independently from one another. To reduce lock contention,
3430
+ # this execution strategy only acquires read locks on rows that match the WHERE
3431
+ # clause of the statement. Additionally, the smaller per-partition transactions
3432
+ # hold locks for less time. That said, Partitioned DML is not a drop-in
3433
+ # replacement for standard DML used in ReadWrite transactions. - The DML
3434
+ # statement must be fully-partitionable. Specifically, the statement must be
3435
+ # expressible as the union of many statements which each access only a single
3436
+ # row of the table. - The statement is not applied atomically to all rows of the
3437
+ # table. Rather, the statement is applied atomically to partitions of the table,
3438
+ # in independent transactions. Secondary index rows are updated atomically with
3439
+ # the base table rows. - Partitioned DML does not guarantee exactly-once
3440
+ # execution semantics against a partition. The statement will be applied at
3441
+ # least once to each partition. It is strongly recommended that the DML
3442
+ # statement should be idempotent to avoid unexpected results. For instance, it
3443
+ # is potentially dangerous to run a statement such as `UPDATE table SET column =
3444
+ # column + 1` as it could be run multiple times against some rows. - The
3445
+ # partitions are committed automatically - there is no support for Commit or
3446
+ # Rollback. If the call returns an error, or if the client issuing the
3447
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
3448
+ # on them successfully. It is also possible that statement was never executed
3449
+ # against other rows. - Partitioned DML transactions may only contain the
3450
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
3451
+ # If any error is encountered during the execution of the partitioned DML
3452
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
3453
+ # value that cannot be stored due to schema constraints), then the operation is
3454
+ # stopped at that point and an error is returned. It is possible that at this
3455
+ # point, some partitions have been committed (or even committed multiple times),
3456
+ # and other partitions have not been run at all. Given the above, Partitioned
3457
+ # DML is good fit for large, database-wide, operations that are idempotent, such
3458
+ # as deleting old rows from a very large table.
3459
+ # Corresponds to the JSON property `begin`
3460
+ # @return [Google::Apis::SpannerV1::TransactionOptions]
3461
+ attr_accessor :begin
3462
+
3463
+ # Execute the read or SQL query in a previously-started transaction.
3464
+ # Corresponds to the JSON property `id`
3465
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
3466
+ # @return [String]
3467
+ attr_accessor :id
3468
+
+ # # Transactions Each session can have at most one active transaction at a time (
+ # note that standalone reads and queries use a transaction internally and do
+ # count towards the one transaction limit). After the active transaction is
+ # completed, the session can immediately be re-used for the next transaction. It
+ # is not necessary to create a new session for each transaction. # Transaction
+ # Modes Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # This type of transaction is the only way to write data into Cloud Spanner.
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
+ # commit. Locking read-write transactions may abort, requiring the application
+ # to retry. 2. Snapshot read-only. This transaction type provides guaranteed
+ # consistency across several reads, but does not allow writes. Snapshot read-
+ # only transactions can be configured to read at timestamps in the past.
+ # Snapshot read-only transactions do not need to be committed. 3. Partitioned
+ # DML. This type of transaction is used to execute a single Partitioned DML
+ # statement. Partitioned DML partitions the key space and runs the DML statement
+ # over each partition in parallel using separate, internal transactions that
+ # commit independently. Partitioned DML transactions do not need to be committed.
+ # For transactions that only read, snapshot read-only transactions provide
+ # simpler semantics and are almost always faster. In particular, read-only
+ # transactions do not take locks, so they do not conflict with read-write
+ # transactions. As a consequence of not taking locks, they also do not abort, so
+ # retry loops are not needed. Transactions may only read/write data in a single
+ # database. They may, however, read/write data in different tables within that
+ # database. ## Locking Read-Write Transactions Locking transactions may be used
+ # to atomically read-modify-write data anywhere in a database. This type of
+ # transaction is externally consistent. Clients should attempt to minimize the
+ # amount of time a transaction is active. Faster transactions commit with higher
+ # probability and cause less contention. Cloud Spanner attempts to keep read
+ # locks active as long as the transaction continues to do reads, and the
+ # transaction has not been terminated by Commit or Rollback. Long periods of
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
+ # more reads or SQL statements followed by Commit. At any time before Commit,
+ # the client can send a Rollback request to abort the transaction. ### Semantics
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
+ # still valid at commit time, and it is able to acquire write locks for all
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
+ # exclusion other than between Cloud Spanner transactions themselves. ###
+ # Retrying Aborted Transactions When a transaction aborts, the application can
+ # choose to retry the whole transaction again. To maximize the chances of
+ # successfully committing the retry, the client should execute the retry in the
+ # same session as the original attempt. The original session's lock priority
+ # increases with each consecutive abort, meaning that each attempt has a
+ # slightly better chance of success than the previous. Under some circumstances (
+ # e.g., many transactions attempting to modify the same row(s)), a transaction
+ # can abort many times in a short period before successfully committing. Thus,
+ # it is not a good idea to cap the number of retries a transaction can attempt;
+ # instead, it is better to limit the total amount of wall time spent retrying.
+ # ### Idle Transactions A transaction is considered idle if it has no outstanding
+ # reads or SQL queries and has not started a read or SQL query within the last
+ # 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't
+ # hold on to locks indefinitely. In that case, the commit will fail with error
+ # `ABORTED`. If this behavior is undesirable, periodically executing a simple
+ # SQL query in the transaction (e.g., `SELECT 1`) prevents the transaction from
+ # becoming idle. ## Snapshot Read-Only Transactions Snapshot read-only
+ # transactions provide a simpler method than locking read-write transactions
+ # for doing several consistent reads. However, this type of transaction does not
+ # support writes. Snapshot transactions do not take locks. Instead, they work by
+ # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
+ # Since they do not acquire locks, they do not block concurrent read-write
+ # transactions. Unlike locking read-write transactions, snapshot read-only
+ # transactions never abort. They can fail if the chosen read timestamp is
+ # garbage collected; however, the default garbage collection policy is generous
+ # enough that most applications do not need to worry about this in practice.
+ # Snapshot read-only transactions do not need to call Commit or Rollback (and in
+ # fact are not permitted to do so). To execute a snapshot transaction, the
+ # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
+ # read timestamp. The types of timestamp bound are: - Strong (the default). -
+ # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
+ # is geographically distributed, stale read-only transactions can execute more
+ # quickly than strong or read-write transactions, because they are able to
+ # execute far from the leader replica. Each type of timestamp bound is discussed
+ # in detail below. ### Strong Strong reads are guaranteed to see the effects of
+ # all transactions that have committed before the start of the read. Furthermore,
+ # all rows yielded by a single read are consistent with each other -- if any
+ # part of the read observes a transaction, all parts of the read see the
+ # transaction. Strong reads are not repeatable: two consecutive strong read-only
+ # transactions might return inconsistent results if there are concurrent writes.
+ # If consistency across reads is required, the reads should be executed within a
+ # transaction or at an exact read timestamp. See TransactionOptions.ReadOnly.
+ # strong. ### Exact Staleness These timestamp bounds execute reads at a user-
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
+ # prefix of the global transaction history: they observe modifications done by
+ # all transactions with a commit timestamp <= the read timestamp, and observe
+ # none of the modifications done by transactions with a larger commit timestamp.
+ # They will block until all conflicting transactions that may be assigned commit
+ # timestamps <= the read timestamp have finished. The timestamp can either be
+ # expressed as an absolute Cloud Spanner commit timestamp or a staleness
+ # relative to the current time. These modes do not require a "negotiation phase"
+ # to pick a timestamp. As a result, they execute slightly faster than the
+ # equivalent boundedly stale concurrency modes. On the other hand, boundedly
+ # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
+ # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. ### Bounded
+ # Staleness Bounded staleness modes allow Cloud Spanner to pick the read
+ # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
+ # the newest timestamp within the staleness bound that allows execution of the
+ # reads at the closest available replica without blocking. All rows yielded are
+ # consistent with each other -- if any part of the read observes a transaction,
+ # all parts of the read see the transaction. Boundedly stale reads are not
+ # repeatable: two stale reads, even if they use the same staleness bound, can
+ # execute at different timestamps and thus return inconsistent results.
+ # Boundedly stale reads execute in two phases: the first phase negotiates a
+ # timestamp among all replicas needed to serve the read. In the second phase,
+ # reads are executed at the negotiated timestamp. As a result of the two phase
+ # execution, bounded staleness reads are usually a little slower than comparable
+ # exact staleness reads. However, they are typically able to return fresher
+ # results, and are more likely to execute at the closest replica. Because the
+ # timestamp negotiation requires up-front knowledge of which rows will be read,
+ # it can only be used with single-use read-only transactions. See
+ # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
+ # min_read_timestamp. ### Old Read Timestamps and Garbage Collection Cloud
+ # Spanner continuously garbage collects deleted and overwritten data in the
+ # background to reclaim storage space. This process is known as "version GC". By
+ # default, version GC reclaims versions after they are one hour old. Because of
+ # this, Cloud Spanner cannot perform reads at read timestamps more than one hour
+ # in the past. This restriction also applies to in-progress reads and/or SQL
+ # queries whose timestamp becomes too old while executing. Reads and SQL queries
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. ##
+ # Partitioned DML Transactions Partitioned DML transactions are used to execute
+ # DML statements with a different execution strategy that provides different,
+ # and often better, scalability properties for large, table-wide operations than
+ # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
+ # workload, should prefer using ReadWrite transactions. Partitioned DML
+ # partitions the keyspace and runs the DML statement on each partition in
+ # separate, internal transactions. These transactions commit automatically when
+ # complete, and run independently from one another. To reduce lock contention,
+ # this execution strategy only acquires read locks on rows that match the WHERE
+ # clause of the statement. Additionally, the smaller per-partition transactions
+ # hold locks for less time. That said, Partitioned DML is not a drop-in
+ # replacement for standard DML used in ReadWrite transactions. - The DML
+ # statement must be fully-partitionable. Specifically, the statement must be
+ # expressible as the union of many statements which each access only a single
+ # row of the table. - The statement is not applied atomically to all rows of the
+ # table. Rather, the statement is applied atomically to partitions of the table,
+ # in independent transactions. Secondary index rows are updated atomically with
+ # the base table rows. - Partitioned DML does not guarantee exactly-once
+ # execution semantics against a partition. The statement will be applied at
+ # least once to each partition. It is strongly recommended that the DML
+ # statement should be idempotent to avoid unexpected results. For instance, it
+ # is potentially dangerous to run a statement such as `UPDATE table SET column =
+ # column + 1` as it could be run multiple times against some rows. - The
+ # partitions are committed automatically - there is no support for Commit or
+ # Rollback. If the call returns an error, or if the client issuing the
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
+ # on them successfully. It is also possible that the statement was never executed
+ # against other rows. - Partitioned DML transactions may only contain the
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
+ # If any error is encountered during the execution of the partitioned DML
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+ # value that cannot be stored due to schema constraints), then the operation is
+ # stopped at that point and an error is returned. It is possible that at this
+ # point, some partitions have been committed (or even committed multiple times),
+ # and other partitions have not been run at all. Given the above, Partitioned
+ # DML is a good fit for large, database-wide operations that are idempotent, such
+ # as deleting old rows from a very large table.
+ # Corresponds to the JSON property `singleUse`
+ # @return [Google::Apis::SpannerV1::TransactionOptions]
+ attr_accessor :single_use
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @begin = args[:begin] if args.key?(:begin)
+ @id = args[:id] if args.key?(:id)
+ @single_use = args[:single_use] if args.key?(:single_use)
+ end
+ end
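The retry guidance in the comment above (retry `ABORTED` in the same session, bounded by wall time rather than a retry count) can be sketched in plain Ruby. This is a minimal illustration, not part of the generated client: `with_abort_retries` and the `Aborted` error class are hypothetical stand-ins for whatever your application layer uses to surface an aborted transaction.

```ruby
# Hypothetical error a transaction runner might raise when Spanner aborts.
class Aborted < StandardError; end

# Runs the block, retrying on Aborted with exponential backoff until a
# wall-clock deadline is exceeded (per the doc: cap time, not retry count).
def with_abort_retries(deadline_seconds: 10.0, base_delay: 0.01)
  start = Time.now
  delay = base_delay
  begin
    yield
  rescue Aborted
    raise if Time.now - start > deadline_seconds # give up: re-raise Aborted
    sleep(delay)
    delay *= 2 # back off; the session's lock priority also rises on each abort
    retry
  end
end

attempts = 0
result = with_abort_retries do
  attempts += 1
  raise Aborted if attempts < 3 # simulate two aborts before success
  :committed
end
```

Running the whole retried block again (not just the Commit) mirrors the doc's advice that the application retries the entire transaction.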
+
+ # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+ # table cell or returned from an SQL query.
+ class Type
+ include Google::Apis::Core::Hashable
+
+ # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+ # table cell or returned from an SQL query.
+ # Corresponds to the JSON property `arrayElementType`
+ # @return [Google::Apis::SpannerV1::Type]
+ attr_accessor :array_element_type
+
+ # Required. The TypeCode for this type.
+ # Corresponds to the JSON property `code`
+ # @return [String]
+ attr_accessor :code
+
+ # `StructType` defines the fields of a STRUCT type.
+ # Corresponds to the JSON property `structType`
+ # @return [Google::Apis::SpannerV1::StructType]
+ attr_accessor :struct_type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @array_element_type = args[:array_element_type] if args.key?(:array_element_type)
+ @code = args[:code] if args.key?(:code)
+ @struct_type = args[:struct_type] if args.key?(:struct_type)
+ end
+ end
+
+ # Metadata type for the operation returned by UpdateDatabaseDdl.
+ class UpdateDatabaseDdlMetadata
+ include Google::Apis::Core::Hashable
+
+ # Reports the commit timestamps of all statements that have succeeded so far,
+ # where `commit_timestamps[i]` is the commit timestamp for the statement `
+ # statements[i]`.
+ # Corresponds to the JSON property `commitTimestamps`
+ # @return [Array<String>]
+ attr_accessor :commit_timestamps
+
+ # The database being modified.
+ # Corresponds to the JSON property `database`
+ # @return [String]
+ attr_accessor :database
+
+ # For an update, this list contains all the statements. For an individual
+ # statement, this list contains only that statement.
+ # Corresponds to the JSON property `statements`
+ # @return [Array<String>]
+ attr_accessor :statements
+
+ # Output only. When true, indicates that the operation is throttled, e.g. due to
+ # resource constraints. When resources become available the operation will
+ # resume and this field will be false again.
+ # Corresponds to the JSON property `throttled`
+ # @return [Boolean]
+ attr_accessor :throttled
+ alias_method :throttled?, :throttled
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @commit_timestamps = args[:commit_timestamps] if args.key?(:commit_timestamps)
+ @database = args[:database] if args.key?(:database)
+ @statements = args[:statements] if args.key?(:statements)
+ @throttled = args[:throttled] if args.key?(:throttled)
+ end
+ end
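The index pairing documented above (`commit_timestamps[i]` belongs to `statements[i]`, with timestamps present only for statements that have succeeded so far) can be read back as pairs with plain Ruby. The statements and timestamp below are made up for illustration:

```ruby
# Values as they might appear in UpdateDatabaseDdlMetadata (illustrative only).
statements = [
  'CREATE TABLE Users (Id INT64) PRIMARY KEY (Id)',
  'CREATE INDEX UsersById ON Users(Id)'
]
commit_timestamps = ['2021-01-07T00:00:00Z'] # only the first statement finished

# Pair each completed statement with its commit timestamp.
completed = statements.take(commit_timestamps.length).zip(commit_timestamps)
```

Statements beyond `commit_timestamps.length` are still pending (or failed), which is why the pairing is truncated to the timestamps list.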
+
+ # Enqueues the given DDL statements to be applied, in order but not necessarily
+ # all at once, to the database schema at some point (or points) in the future.
+ # The server checks that the statements are executable (syntactically valid,
+ # name tables that exist, etc.) before enqueueing them, but they may still fail
+ # upon later execution (e.g., if a statement from another batch of statements is
+ # applied first and it conflicts in some way, or if there is some data-related
+ # problem like a `NULL` value in a column to which `NOT NULL` would be added).
+ # If a statement fails, all subsequent statements in the batch are automatically
+ # cancelled. Each batch of statements is assigned a name which can be used with
+ # the Operations API to monitor progress. See the operation_id field for more
+ # details.
+ class UpdateDatabaseDdlRequest
+ include Google::Apis::Core::Hashable
+
+ # If empty, the new update request is assigned an automatically-generated
+ # operation ID. Otherwise, `operation_id` is used to construct the name of the
+ # resulting Operation. Specifying an explicit operation ID simplifies
+ # determining whether the statements were executed in the event that the
+ # UpdateDatabaseDdl call is replayed, or the return value is otherwise lost: the
+ # database and `operation_id` fields can be combined to form the name of the
+ # resulting longrunning.Operation: `/operations/`. `operation_id` should be
+ # unique within the database, and must be a valid identifier: `a-z*`. Note that
+ # automatically-generated operation IDs always begin with an underscore. If the
+ # named operation already exists, UpdateDatabaseDdl returns `ALREADY_EXISTS`.
+ # Corresponds to the JSON property `operationId`
+ # @return [String]
+ attr_accessor :operation_id
+
+ # Required. DDL statements to be applied to the database.
+ # Corresponds to the JSON property `statements`
+ # @return [Array<String>]
+ attr_accessor :statements
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @operation_id = args[:operation_id] if args.key?(:operation_id)
+ @statements = args[:statements] if args.key?(:statements)
+ end
+ end
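A small sketch of supplying an explicit `operation_id` so the resulting operation name is predictable if the UpdateDatabaseDdl call is replayed. The validity check below is an assumption: the doc abbreviates the pattern as `a-z*`, so the regex here (lowercase start, then lowercase letters, digits, or underscores, with a leading underscore reserved for auto-generated IDs) is illustrative rather than the exact server-side rule, and plain hashes stand in for the generated class:

```ruby
# Assumed shape of a valid user-supplied operation ID (see caveat above).
def valid_operation_id?(id)
  # Leading '_' is reserved for automatically-generated operation IDs.
  !!(id =~ /\A[a-z][a-z0-9_]*\z/)
end

request = {
  operation_id: 'add_users_index',
  statements: ['CREATE INDEX UsersByEmail ON Users(Email)']
}

ok = valid_operation_id?(request[:operation_id])
```

Validating client-side avoids enqueueing a batch whose name the server would reject, and a stable ID lets the client look the operation up after a lost response.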
+
+ # Metadata type for the operation returned by UpdateInstance.
+ class UpdateInstanceMetadata
+ include Google::Apis::Core::Hashable
+
+ # The time at which this operation was cancelled. If set, this operation is in
+ # the process of undoing itself (which is guaranteed to succeed) and cannot be
+ # cancelled again.
+ # Corresponds to the JSON property `cancelTime`
+ # @return [String]
+ attr_accessor :cancel_time
+
+ # The time at which this operation failed or was completed successfully.
+ # Corresponds to the JSON property `endTime`
+ # @return [String]
+ attr_accessor :end_time
+
+ # An isolated set of Cloud Spanner resources on which databases can be hosted.
+ # Corresponds to the JSON property `instance`
+ # @return [Google::Apis::SpannerV1::Instance]
+ attr_accessor :instance
+
+ # The time at which the UpdateInstance request was received.
+ # Corresponds to the JSON property `startTime`
+ # @return [String]
+ attr_accessor :start_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @cancel_time = args[:cancel_time] if args.key?(:cancel_time)
+ @end_time = args[:end_time] if args.key?(:end_time)
+ @instance = args[:instance] if args.key?(:instance)
+ @start_time = args[:start_time] if args.key?(:start_time)
+ end
+ end
+
+ # The request for UpdateInstance.
+ class UpdateInstanceRequest
+ include Google::Apis::Core::Hashable
+
+ # Required. A mask specifying which fields in Instance should be updated. The
+ # field mask must always be specified; this prevents any future fields in
+ # Instance from being erased accidentally by clients that do not know about them.
+ # Corresponds to the JSON property `fieldMask`
+ # @return [String]
+ attr_accessor :field_mask
+
+ # An isolated set of Cloud Spanner resources on which databases can be hosted.
+ # Corresponds to the JSON property `instance`
+ # @return [Google::Apis::SpannerV1::Instance]
+ attr_accessor :instance
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @field_mask = args[:field_mask] if args.key?(:field_mask)
+ @instance = args[:instance] if args.key?(:instance)
+ end
+ end
+
+ # Arguments to insert, update, insert_or_update, and replace operations.
+ class Write
+ include Google::Apis::Core::Hashable
+
+ # The names of the columns in table to be written. The list of columns must
+ # contain enough columns to allow Cloud Spanner to derive values for all primary
+ # key columns in the row(s) to be modified.
+ # Corresponds to the JSON property `columns`
+ # @return [Array<String>]
+ attr_accessor :columns
+
+ # Required. The table whose rows will be written.
+ # Corresponds to the JSON property `table`
+ # @return [String]
+ attr_accessor :table
+
+ # The values to be written. `values` can contain more than one list of values.
+ # If it does, then multiple rows are written, one for each entry in `values`.
+ # Each list in `values` must have exactly as many entries as there are entries
+ # in columns above. Sending multiple lists is equivalent to sending multiple `
+ # Mutation`s, each containing one `values` entry and repeating table and columns.
+ # Individual values in each list are encoded as described here.
+ # Corresponds to the JSON property `values`
+ # @return [Array<Array<Object>>]
+ attr_accessor :values
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @columns = args[:columns] if args.key?(:columns)
+ @table = args[:table] if args.key?(:table)
+ @values = args[:values] if args.key?(:values)
+ end
+ end
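The `values` shape rule documented above (each row list must have exactly as many entries as `columns`) can be checked client-side before sending. A minimal sketch with plain hashes rather than the generated class; it only checks row length, not the separate requirement that the columns cover all primary key values:

```ruby
# Raises if any row in values does not match columns in length.
def check_write_shape(columns:, values:)
  values.each_with_index do |row, i|
    unless row.length == columns.length
      raise ArgumentError,
            "row #{i} has #{row.length} values for #{columns.length} columns"
    end
  end
  true
end

# Two rows for one table/columns pair: equivalent to two single-row mutations.
mutation = {
  table: 'Singers',
  columns: %w[SingerId FirstName LastName],
  values: [[1, 'Marc', 'Richards'], [2, 'Catalina', 'Smith']]
}

ok = check_write_shape(columns: mutation[:columns], values: mutation[:values])
```

Catching a shape mismatch locally gives a clearer error than the server-side `INVALID_ARGUMENT` that the malformed mutation would otherwise produce.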
+ end
+ end
+ end