google-apis-remotebuildexecution_v2 0.1.0

@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+ metadata.gz: cc91f89d33fd9ee728a1d0e3d08ba53932f98acc4232bbde11f0c7fb29999d85
+ data.tar.gz: dde16c694f3595f25f52c1195a887083281e69c2afef8c15025102781ad97ad7
+ SHA512:
+ metadata.gz: e73ddb2c051aa20d7e90eef0ceaeedae241f59ecd69309e1bfd0fef1ccab32412b0ac1ece7cc880d939a20eccec25d2803da83b73f3c38e7ef981186bc63b659
+ data.tar.gz: 15a8d954d028edddadd1142457fbb26bbc3920468a1720cd531328414c904c826eaae9c755409f9808dc1c83ada9a17770650961114ea5541bf844bc8646cb06
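The SHA256 entries above are hex digests of the two archives inside the gem file. As a self-contained sketch of how such an entry is checked (a local sample file stands in for the gem's actual `metadata.gz`/`data.tar.gz`, and the filename is a placeholder):

```ruby
require "digest"

# Hash a file's bytes and compare against a recorded hex digest, the same
# check implied by the SHA256 table above. A sample file stands in here so
# the sketch runs without downloading the gem.
File.binwrite("data.tar.gz.sample", "example payload")

expected = Digest::SHA256.hexdigest("example payload")
actual   = Digest::SHA256.file("data.tar.gz.sample").hexdigest

puts(actual == expected ? "checksum OK" : "checksum MISMATCH")
File.delete("data.tar.gz.sample")
```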
@@ -0,0 +1,13 @@
+ --hide-void-return
+ --no-private
+ --verbose
+ --title=google-apis-remotebuildexecution_v2
+ --markup-provider=redcarpet
+ --markup=markdown
+ --main OVERVIEW.md
+ lib/google/apis/remotebuildexecution_v2/*.rb
+ lib/google/apis/remotebuildexecution_v2.rb
+ -
+ OVERVIEW.md
+ CHANGELOG.md
+ LICENSE.md
@@ -0,0 +1,7 @@
+ # Release history for google-apis-remotebuildexecution_v2
+
+ ### v0.1.0 (2021-01-07)
+
+ * Regenerated using generator version 0.1.1
+ * Regenerated from discovery document revision 20201201
+
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
@@ -0,0 +1,96 @@
+ # Simple REST client for version V2 of the Remote Build Execution API
+
+ This is a simple client library for version V2 of the Remote Build Execution API. It provides:
+
+ * A client object that connects to the HTTP/JSON REST endpoint for the service.
+ * Ruby objects for data structures related to the service.
+ * Integration with the googleauth gem for authentication using OAuth, API keys, and service accounts.
+ * Control of retry, pagination, and timeouts.
+
+ Note that although this client library is supported and will continue to be updated to track changes to the service, it is otherwise considered complete and not under active development. Many Google services, especially Google Cloud Platform services, may provide a more modern client that is under more active development and improvement. See the section below titled *Which client should I use?* for more information.
+
+ ## Getting started
+
+ ### Before you begin
+
+ There are a few setup steps you need to complete before you can use this library:
+
+ 1. If you don't already have a Google account, [sign up](https://www.google.com/accounts).
+ 2. If you have never created a Google APIs Console project, read about [Managing Projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects) and create a project in the [Google API Console](https://console.cloud.google.com/).
+ 3. Most APIs need to be enabled for your project. [Enable it](https://console.cloud.google.com/apis/library/remotebuildexecution.googleapis.com) in the console.
+
+ ### Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'google-apis-remotebuildexecution_v2', '~> 0.1'
+ ```
+
+ And then execute:
+
+ ```
+ $ bundle
+ ```
+
+ Or install it yourself as:
+
+ ```
+ $ gem install google-apis-remotebuildexecution_v2
+ ```
+
+ ### Creating a client object
+
+ Once the gem is installed, you can load the client code and instantiate a client.
+
+ ```ruby
+ # Load the client
+ require "google/apis/remotebuildexecution_v2"
+
+ # Create a client object
+ client = Google::Apis::RemotebuildexecutionV2::RemoteBuildExecutionService.new
+
+ # Authenticate calls
+ client.authorization = # ... use the googleauth gem to create credentials
+ ```
+
+ See the class reference docs for information on the methods you can call from a client.
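As a hedged sketch of the credentials-assignment pattern: generated service classes expose an `authorization` accessor, and the googleauth gem supplies the credentials object (typically via `Google::Auth.get_application_default`). The stand-in classes below are hypothetical, used only so the snippet runs without the gems installed:

```ruby
# Stand-ins for the generated service class and a googleauth credentials
# object; with the real gems you would use RemoteBuildExecutionService and
# Google::Auth.get_application_default(scopes) instead.
StubCredentials = Struct.new(:scope)

class StubService
  # Generated services hold their credentials in an `authorization` accessor.
  attr_accessor :authorization
end

client = StubService.new
client.authorization = StubCredentials.new(
  "https://www.googleapis.com/auth/cloud-platform" # the scope this API uses
)

puts client.authorization.scope
```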
+
+ ## Documentation
+
+ More detailed descriptions of the Google simple REST clients are available in two documents.
+
+ * The [Usage Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/usage-guide.md) discusses how to make API calls, how to use the provided data structures, and how to work with the various features of the client library, including media upload and download, error handling, retries, pagination, and logging.
+ * The [Auth Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/auth-guide.md) discusses authentication in the client libraries, including API keys, OAuth 2.0, service accounts, and environment variables.
+
+ (Note: the above documents are written for the simple REST clients in general, and their examples may not reflect the Remote Build Execution service in particular.)
+
+ For reference information on specific calls in the Remote Build Execution API, see the {Google::Apis::RemotebuildexecutionV2::RemoteBuildExecutionService class reference docs}.
+
+ ## Which client should I use?
+
+ Google provides two types of Ruby API client libraries: **simple REST clients** and **modern clients**.
+
+ This library, `google-apis-remotebuildexecution_v2`, is a simple REST client. You can identify these clients by their gem names, which are always in the form `google-apis-<servicename>_<serviceversion>`. The simple REST clients connect to HTTP/JSON REST endpoints and are automatically generated from service discovery documents. They support most API functionality, but their class interfaces are sometimes awkward.
+
+ Modern clients are produced by a modern code generator, sometimes combined with hand-crafted functionality. Most modern clients connect to high-performance gRPC endpoints, although a few are backed by REST services. Modern clients are available for many Google services, especially Google Cloud Platform services, but do not yet support all the services covered by the simple clients.
+
+ Gem names for modern clients are often of the form `google-cloud-<service_name>`. (For example, [google-cloud-pubsub](https://rubygems.org/gems/google-cloud-pubsub).) Note that most modern clients also have corresponding "versioned" gems with names like `google-cloud-<service_name>-<version>`. (For example, [google-cloud-pubsub-v1](https://rubygems.org/gems/google-cloud-pubsub-v1).) The "versioned" gems can be used directly, but often provide lower-level interfaces. In most cases, the main gem is recommended.
+
+ **For most users, we recommend the modern client, if one is available.** Compared with simple clients, modern clients are generally much easier to use and more Ruby-like, support more advanced features such as streaming and long-running operations, and often provide much better performance. You may consider using a simple client instead, if a modern client is not yet available for the service you want to use, or if you are not able to use gRPC on your infrastructure.
+
+ The [product documentation](https://cloud.google.com/remote-build-execution/docs/) may provide guidance regarding the preferred client library to use.
+
+ ## Supported Ruby versions
+
+ This library is supported on Ruby 2.5+.
+
+ Google provides official support for Ruby versions that are actively supported by Ruby Core -- that is, Ruby versions that are either in normal maintenance or in security maintenance, and not end of life. Currently, this means Ruby 2.5 and later. Older versions of Ruby _may_ still work, but are unsupported and not recommended. See https://www.ruby-lang.org/en/downloads/branches/ for details about the Ruby support schedule.
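The Ruby 2.5+ policy above can be checked at runtime with a small guard. Gems normally declare this constraint in the gemspec (`required_ruby_version`), so the sketch below is purely illustrative:

```ruby
# Illustrative runtime check for the stated Ruby 2.5+ support policy.
# Gem::Version handles multi-segment comparisons correctly (e.g. 2.10 > 2.5).
MINIMUM_RUBY = Gem::Version.new("2.5")

def supported_ruby?(version_string = RUBY_VERSION)
  Gem::Version.new(version_string) >= MINIMUM_RUBY
end

unless supported_ruby?
  warn "Ruby #{RUBY_VERSION} is below the supported 2.5 minimum"
end
```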
+
+ ## License
+
+ This library is licensed under Apache 2.0. Full license text is available in the {file:LICENSE.md LICENSE}.
+
+ ## Support
+
+ Please [report bugs at the project on GitHub](https://github.com/google/google-api-ruby-client/issues). Don't hesitate to [ask questions](http://stackoverflow.com/questions/tagged/google-api-ruby-client) about the client or APIs on [StackOverflow](http://stackoverflow.com).
@@ -0,0 +1,15 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require "google/apis/remotebuildexecution_v2"
@@ -0,0 +1,36 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require 'google/apis/remotebuildexecution_v2/service.rb'
+ require 'google/apis/remotebuildexecution_v2/classes.rb'
+ require 'google/apis/remotebuildexecution_v2/representations.rb'
+ require 'google/apis/remotebuildexecution_v2/gem_version.rb'
+
+ module Google
+ module Apis
+ # Remote Build Execution API
+ #
+ # Supplies a Remote Execution API service for tools such as bazel.
+ #
+ # @see https://cloud.google.com/remote-build-execution/docs/
+ module RemotebuildexecutionV2
+ # Version of the Remote Build Execution API this client connects to.
+ # This is NOT the gem version.
+ VERSION = 'V2'
+
+ # View and manage your data across Google Cloud Platform services
+ AUTH_CLOUD_PLATFORM = 'https://www.googleapis.com/auth/cloud-platform'
+ end
+ end
+ end
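The constants declared in the module above are referenced directly once the gem is loaded. The sketch below mirrors that module so the lookup pattern runs without the gem installed; with the gem, the real `Google::Apis::RemotebuildexecutionV2` constants are used instead:

```ruby
# Mirror of the gem's module, defined locally only so this sketch is
# self-contained. The values match the source above.
module Google
  module Apis
    module RemotebuildexecutionV2
      VERSION = "V2" # API version, not the gem version
      AUTH_CLOUD_PLATFORM = "https://www.googleapis.com/auth/cloud-platform"
    end
  end
end

# Typical use: pass the OAuth scope constant when requesting credentials.
scopes = [Google::Apis::RemotebuildexecutionV2::AUTH_CLOUD_PLATFORM]
puts "requesting #{scopes.first} for API #{Google::Apis::RemotebuildexecutionV2::VERSION}"
```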
@@ -0,0 +1,3577 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require 'date'
+ require 'google/apis/core/base_service'
+ require 'google/apis/core/json_representation'
+ require 'google/apis/core/hashable'
+ require 'google/apis/errors'
+
+ module Google
+ module Apis
+ module RemotebuildexecutionV2
+
+ # An `Action` captures all the information about an execution which is required
+ # to reproduce it. `Action`s are the core component of the [Execution] service.
+ # A single `Action` represents a repeatable action that can be performed by the
+ # execution service. `Action`s can be succinctly identified by the digest of
+ # their wire format encoding and, once an `Action` has been executed, will be
+ # cached in the action cache. Future requests can then use the cached result
+ # rather than needing to run afresh. When a server completes execution of an
+ # Action, it MAY choose to cache the result in the ActionCache unless `
+ # do_not_cache` is `true`. Clients SHOULD expect the server to do so. By default,
+ # future calls to Execute the same `Action` will also serve their results from
+ # the cache. Clients must take care to understand the caching behaviour. Ideally,
+ # all `Action`s will be reproducible so that serving a result from cache is
+ # always desirable and correct.
+ class BuildBazelRemoteExecutionV2Action
+ include Google::Apis::Core::Hashable
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `commandDigest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :command_digest
+
+ # If true, then the `Action`'s result cannot be cached, and in-flight requests
+ # for the same `Action` may not be merged.
+ # Corresponds to the JSON property `doNotCache`
+ # @return [Boolean]
+ attr_accessor :do_not_cache
+ alias_method :do_not_cache?, :do_not_cache
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `inputRootDigest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :input_root_digest
+
+ # List of required supported NodeProperty keys. In order to ensure that
+ # equivalent `Action`s always hash to the same value, the supported node
+ # properties MUST be lexicographically sorted by name. Sorting of strings is
+ # done by code point, equivalently, by the UTF-8 bytes. The interpretation of
+ # these properties is server-dependent. If a property is not recognized by the
+ # server, the server will return an `INVALID_ARGUMENT` error.
+ # Corresponds to the JSON property `outputNodeProperties`
+ # @return [Array<String>]
+ attr_accessor :output_node_properties
+
+ # A timeout after which the execution should be killed. If the timeout is absent,
+ # then the client is specifying that the execution should continue as long as
+ # the server will let it. The server SHOULD impose a timeout if the client does
+ # not specify one, however, if the client does specify a timeout that is longer
+ # than the server's maximum timeout, the server MUST reject the request. The
+ # timeout is a part of the Action message, and therefore two `Actions` with
+ # different timeouts are different, even if they are otherwise identical. This
+ # is because, if they were not, running an `Action` with a lower timeout than is
+ # required might result in a cache hit from an execution run with a longer
+ # timeout, hiding the fact that the timeout is too short. By encoding it
+ # directly in the `Action`, a lower timeout will result in a cache miss and the
+ # execution timeout will fail immediately, rather than whenever the cache entry
+ # gets evicted.
+ # Corresponds to the JSON property `timeout`
+ # @return [String]
+ attr_accessor :timeout
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @command_digest = args[:command_digest] if args.key?(:command_digest)
+ @do_not_cache = args[:do_not_cache] if args.key?(:do_not_cache)
+ @input_root_digest = args[:input_root_digest] if args.key?(:input_root_digest)
+ @output_node_properties = args[:output_node_properties] if args.key?(:output_node_properties)
+ @timeout = args[:timeout] if args.key?(:timeout)
+ end
+ end
+
+ # Describes the server/instance capabilities for updating the action cache.
+ class BuildBazelRemoteExecutionV2ActionCacheUpdateCapabilities
+ include Google::Apis::Core::Hashable
+
+ #
+ # Corresponds to the JSON property `updateEnabled`
+ # @return [Boolean]
+ attr_accessor :update_enabled
+ alias_method :update_enabled?, :update_enabled
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @update_enabled = args[:update_enabled] if args.key?(:update_enabled)
+ end
+ end
+
+ # An ActionResult represents the result of an Action being run.
+ class BuildBazelRemoteExecutionV2ActionResult
+ include Google::Apis::Core::Hashable
+
+ # ExecutedActionMetadata contains details about a completed execution.
+ # Corresponds to the JSON property `executionMetadata`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2ExecutedActionMetadata]
+ attr_accessor :execution_metadata
+
+ # The exit code of the command.
+ # Corresponds to the JSON property `exitCode`
+ # @return [Fixnum]
+ attr_accessor :exit_code
+
+ # The output directories of the action. For each output directory requested in
+ # the `output_directories` or `output_paths` field of the Action, if the
+ # corresponding directory existed after the action completed, a single entry
+ # will be present in the output list, which will contain the digest of a Tree
+ # message containing the directory tree, and the path equal exactly to the
+ # corresponding Action output_directories member. As an example, suppose the
+ # Action had an output directory `a/b/dir` and the execution produced the
+ # following contents in `a/b/dir`: a file named `bar` and a directory named `foo`
+ # with an executable file named `baz`. Then, output_directory will contain (
+ # hashes shortened for readability): ```json // OutputDirectory proto: ` path: "
+ # a/b/dir" tree_digest: ` hash: "4a73bc9d03...", size: 55 ` ` // Tree proto with
+ # hash "4a73bc9d03..." and size 55: ` root: ` files: [ ` name: "bar", digest: `
+ # hash: "4a73bc9d03...", size: 65534 ` ` ], directories: [ ` name: "foo", digest:
+ # ` hash: "4cf2eda940...", size: 43 ` ` ] ` children : ` // (Directory proto
+ # with hash "4cf2eda940..." and size 43) files: [ ` name: "baz", digest: ` hash:
+ # "b2c941073e...", size: 1294, `, is_executable: true ` ] ` ` ``` If an output
+ # of the same name as listed in `output_files` of the Command was found in `
+ # output_directories`, but was not a directory, the server will return a
+ # FAILED_PRECONDITION.
+ # Corresponds to the JSON property `outputDirectories`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2OutputDirectory>]
+ attr_accessor :output_directories
+
+ # The output directories of the action that are symbolic links to other
+ # directories. Those may be links to other output directories, or input
+ # directories, or even absolute paths outside of the working directory, if the
+ # server supports SymlinkAbsolutePathStrategy.ALLOWED. For each output directory
+ # requested in the `output_directories` field of the Action, if the directory
+ # existed after the action completed, a single entry will be present either in
+ # this field, or in the `output_directories` field, if the directory was not a
+ # symbolic link. If an output of the same name was found, but was a symbolic
+ # link to a file instead of a directory, the server will return a
+ # FAILED_PRECONDITION. If the action does not produce the requested output, then
212
+ # that output will be omitted from the list. The server is free to arrange the
213
+ # output list as desired; clients MUST NOT assume that the output list is sorted.
214
+ # DEPRECATED as of v2.1. Servers that wish to be compatible with v2.0 API
215
+ # should still populate this field in addition to `output_symlinks`.
216
+ # Corresponds to the JSON property `outputDirectorySymlinks`
217
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2OutputSymlink>]
218
+ attr_accessor :output_directory_symlinks
219
+
220
+ # The output files of the action that are symbolic links to other files. Those
221
+ # may be links to other output files, or input files, or even absolute paths
222
+ # outside of the working directory, if the server supports
223
+ # SymlinkAbsolutePathStrategy.ALLOWED. For each output file requested in the `
224
+ # output_files` or `output_paths` field of the Action, if the corresponding file
225
+ # existed after the action completed, a single entry will be present either in
226
+ # this field, or in the `output_files` field, if the file was not a symbolic
227
+ # link. If an output symbolic link of the same name as listed in `output_files`
228
+ # of the Command was found, but its target type was not a regular file, the
229
+ # server will return a FAILED_PRECONDITION. If the action does not produce the
230
+ # requested output, then that output will be omitted from the list. The server
231
+ # is free to arrange the output list as desired; clients MUST NOT assume that
232
+ # the output list is sorted. DEPRECATED as of v2.1. Servers that wish to be
233
+ # compatible with v2.0 API should still populate this field in addition to `
234
+ # output_symlinks`.
235
+ # Corresponds to the JSON property `outputFileSymlinks`
236
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2OutputSymlink>]
237
+ attr_accessor :output_file_symlinks
238
+
239
+ # The output files of the action. For each output file requested in the `
240
+ # output_files` or `output_paths` field of the Action, if the corresponding file
241
+ # existed after the action completed, a single entry will be present either in
242
+ # this field, or the `output_file_symlinks` field if the file was a symbolic
243
+ # link to another file (`output_symlinks` field after v2.1). If an output listed
244
+ # in `output_files` was found, but was a directory rather than a regular file,
245
+ # the server will return a FAILED_PRECONDITION. If the action does not produce
246
+ # the requested output, then that output will be omitted from the list. The
247
+ # server is free to arrange the output list as desired; clients MUST NOT assume
248
+ # that the output list is sorted.
249
+ # Corresponds to the JSON property `outputFiles`
250
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2OutputFile>]
251
+ attr_accessor :output_files
252
+
253
+ # New in v2.1: this field will only be populated if the command `output_paths`
254
+ # field was used, and not the pre v2.1 `output_files` or `output_directories`
255
+ # fields. The output paths of the action that are symbolic links to other paths.
256
+ # Those may be links to other outputs, or inputs, or even absolute paths outside
257
+ # of the working directory, if the server supports SymlinkAbsolutePathStrategy.
258
+ # ALLOWED. A single entry for each output requested in `output_paths` field of
259
+ # the Action, if the corresponding path existed after the action completed and
260
+ # was a symbolic link. If the action does not produce a requested output, then
261
+ # that output will be omitted from the list. The server is free to arrange the
262
+ # output list as desired; clients MUST NOT assume that the output list is sorted.
263
+ # Corresponds to the JSON property `outputSymlinks`
264
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2OutputSymlink>]
265
+ attr_accessor :output_symlinks
266
+
267
+ # A content digest. A digest for a given blob consists of the size of the blob
268
+ # and its hash. The hash algorithm to use is defined by the server. The size is
269
+ # considered to be an integral part of the digest and cannot be separated. That
270
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
271
+ # the server MUST reject the request. The reason for including the size in the
272
+ # digest is as follows: in a great many cases, the server needs to know the size
273
+ # of the blob it is about to work with prior to starting an operation with it,
274
+ # such as flattening Merkle tree structures or streaming it to a worker.
275
+ # Technically, the server could implement a separate metadata store, but this
276
+ # results in a significantly more complicated implementation as opposed to
277
+ # having the client specify the size up-front (or storing the size along with
278
+ # the digest in every message where digests are embedded). This does mean that
279
+ # the API leaks some implementation details of (what we consider to be) a
280
+ # reasonable server implementation, but we consider this to be a worthwhile
281
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
282
+ # refers to the message in binary encoded form. To ensure consistent hashing,
283
+ # clients and servers MUST ensure that they serialize messages according to the
284
+ # following rules, even if there are alternate valid encodings for the same
285
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
286
+ # There are no duplicate fields. * Fields are serialized according to the
287
+ # default semantics for their type. Most protocol buffer implementations will
288
+ # always follow these rules when serializing, but care should be taken to avoid
289
+ # shortcuts. For instance, concatenating two messages to merge them may produce
290
+ # duplicate fields.
291
+ # Corresponds to the JSON property `stderrDigest`
292
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
293
+ attr_accessor :stderr_digest
294
+
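The `Digest` comment above boils down to: a digest is always the pair (hash, size in bytes), and the two travel together. A minimal sketch with Ruby's standard `Digest` library, assuming SHA-256 as the server's hash function; `blob_digest` is a hypothetical helper, not part of this client:

```ruby
require "digest"

# Build a (hash, size_bytes) pair for a blob, mirroring the Digest message
# documented above. The hash algorithm is assumed to be SHA-256 here; the
# actual algorithm is defined by the server's capabilities.
def blob_digest(blob)
  { hash: Digest::SHA256.hexdigest(blob), size_bytes: blob.bytesize }
end
```

Note that per the comment, a server MUST reject a request whose `size_bytes` is missing or wrong even when `hash` is correct, so a client cannot compute one field without the other.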
+ # The standard error buffer of the action. The server SHOULD NOT inline stderr
+ # unless requested by the client in the GetActionResultRequest message. The
+ # server MAY omit inlining, even if requested, and MUST do so if inlining would
+ # cause the response to exceed message size limits.
+ # Corresponds to the JSON property `stderrRaw`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :stderr_raw
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `stdoutDigest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :stdout_digest
+
+ # The standard output buffer of the action. The server SHOULD NOT inline stdout
+ # unless requested by the client in the GetActionResultRequest message. The
+ # server MAY omit inlining, even if requested, and MUST do so if inlining would
+ # cause the response to exceed message size limits.
+ # Corresponds to the JSON property `stdoutRaw`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :stdout_raw
+
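The stdout/stderr comments above describe a simple rule: inline the raw bytes only when the client asked for them and the payload fits; otherwise return only the digest. A hypothetical server-side sketch of that decision (`inline_or_digest` and its return shape are illustrative, not part of this library):

```ruby
# Decide between inlining raw output bytes and returning only a digest,
# following the SHOULD/MUST language above: inlining is optional even when
# requested, and forbidden when it would exceed the message size limit.
def inline_or_digest(raw, inline_requested:, limit_bytes:)
  if inline_requested && raw.bytesize <= limit_bytes
    { raw: raw }          # small enough and requested: inline
  else
    { digest_only: true } # fall back to the *_digest field
  end
end
```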
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @execution_metadata = args[:execution_metadata] if args.key?(:execution_metadata)
+ @exit_code = args[:exit_code] if args.key?(:exit_code)
+ @output_directories = args[:output_directories] if args.key?(:output_directories)
+ @output_directory_symlinks = args[:output_directory_symlinks] if args.key?(:output_directory_symlinks)
+ @output_file_symlinks = args[:output_file_symlinks] if args.key?(:output_file_symlinks)
+ @output_files = args[:output_files] if args.key?(:output_files)
+ @output_symlinks = args[:output_symlinks] if args.key?(:output_symlinks)
+ @stderr_digest = args[:stderr_digest] if args.key?(:stderr_digest)
+ @stderr_raw = args[:stderr_raw] if args.key?(:stderr_raw)
+ @stdout_digest = args[:stdout_digest] if args.key?(:stdout_digest)
+ @stdout_raw = args[:stdout_raw] if args.key?(:stdout_raw)
+ end
+ end
+
+ # A request message for ContentAddressableStorage.BatchReadBlobs.
+ class BuildBazelRemoteExecutionV2BatchReadBlobsRequest
+ include Google::Apis::Core::Hashable
+
+ # The individual blob digests.
+ # Corresponds to the JSON property `digests`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest>]
+ attr_accessor :digests
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @digests = args[:digests] if args.key?(:digests)
+ end
+ end
+
+ # A response message for ContentAddressableStorage.BatchReadBlobs.
+ class BuildBazelRemoteExecutionV2BatchReadBlobsResponse
+ include Google::Apis::Core::Hashable
+
+ # The responses to the requests.
+ # Corresponds to the JSON property `responses`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2BatchReadBlobsResponseResponse>]
+ attr_accessor :responses
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @responses = args[:responses] if args.key?(:responses)
+ end
+ end
+
+ # A response corresponding to a single blob that the client tried to download.
+ class BuildBazelRemoteExecutionV2BatchReadBlobsResponseResponse
+ include Google::Apis::Core::Hashable
+
+ # The raw binary data.
+ # Corresponds to the JSON property `data`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :data
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :digest
+
+ # The `Status` type defines a logical error model that is suitable for different
+ # programming environments, including REST APIs and RPC APIs. It is used by [
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
+ # data: error code, error message, and error details. You can find out more
+ # about this error model and how to work with it in the [API Design Guide](https:
+ # //cloud.google.com/apis/design/errors).
+ # Corresponds to the JSON property `status`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus]
+ attr_accessor :status
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @data = args[:data] if args.key?(:data)
+ @digest = args[:digest] if args.key?(:digest)
+ @status = args[:status] if args.key?(:status)
+ end
+ end
+
+ # A request message for ContentAddressableStorage.BatchUpdateBlobs.
+ class BuildBazelRemoteExecutionV2BatchUpdateBlobsRequest
+ include Google::Apis::Core::Hashable
+
+ # The individual upload requests.
+ # Corresponds to the JSON property `requests`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2BatchUpdateBlobsRequestRequest>]
+ attr_accessor :requests
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @requests = args[:requests] if args.key?(:requests)
+ end
+ end
+
+ # A request corresponding to a single blob that the client wants to upload.
+ class BuildBazelRemoteExecutionV2BatchUpdateBlobsRequestRequest
+ include Google::Apis::Core::Hashable
+
+ # The raw binary data.
+ # Corresponds to the JSON property `data`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :data
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :digest
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @data = args[:data] if args.key?(:data)
+ @digest = args[:digest] if args.key?(:digest)
+ end
+ end
+
+ # A response message for ContentAddressableStorage.BatchUpdateBlobs.
+ class BuildBazelRemoteExecutionV2BatchUpdateBlobsResponse
+ include Google::Apis::Core::Hashable
+
+ # The responses to the requests.
+ # Corresponds to the JSON property `responses`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse>]
+ attr_accessor :responses
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @responses = args[:responses] if args.key?(:responses)
+ end
+ end
+
+ # A response corresponding to a single blob that the client tried to upload.
+ class BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse
+ include Google::Apis::Core::Hashable
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :digest
+
+ # The `Status` type defines a logical error model that is suitable for different
+ # programming environments, including REST APIs and RPC APIs. It is used by [
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
+ # data: error code, error message, and error details. You can find out more
+ # about this error model and how to work with it in the [API Design Guide](https:
+ # //cloud.google.com/apis/design/errors).
+ # Corresponds to the JSON property `status`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus]
+ attr_accessor :status
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @digest = args[:digest] if args.key?(:digest)
+ @status = args[:status] if args.key?(:status)
+ end
+ end
+
+ # Capabilities of the remote cache system.
+ class BuildBazelRemoteExecutionV2CacheCapabilities
+ include Google::Apis::Core::Hashable
+
+ # Describes the server/instance capabilities for updating the action cache.
+ # Corresponds to the JSON property `actionCacheUpdateCapabilities`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2ActionCacheUpdateCapabilities]
+ attr_accessor :action_cache_update_capabilities
+
+ # Allowed values for priority in ResultsCachePolicy. Used for querying both
+ # cache and execution valid priority ranges.
+ # Corresponds to the JSON property `cachePriorityCapabilities`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2PriorityCapabilities]
+ attr_accessor :cache_priority_capabilities
+
+ # All the digest functions supported by the remote cache. Remote cache may
+ # support multiple digest functions simultaneously.
+ # Corresponds to the JSON property `digestFunction`
+ # @return [Array<String>]
+ attr_accessor :digest_function
+
+ # Maximum total size of blobs to be uploaded/downloaded using batch methods. A
+ # value of 0 means no limit is set, although in practice there will always be a
+ # message size limitation of the protocol in use, e.g. GRPC.
+ # Corresponds to the JSON property `maxBatchTotalSizeBytes`
+ # @return [Fixnum]
+ attr_accessor :max_batch_total_size_bytes
+
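Per the comment above, a `max_batch_total_size_bytes` of 0 means "no server-imposed limit", so a client-side check needs a special case. A hedged sketch (`batch_fits?` is a hypothetical helper, not part of this client, and transport-level limits such as gRPC message size may still apply):

```ruby
# True when a batch of the given total size is allowed under the server's
# advertised max_batch_total_size_bytes, treating 0 as "no limit set".
def batch_fits?(total_size_bytes, max_batch_total_size_bytes)
  max_batch_total_size_bytes.zero? || total_size_bytes <= max_batch_total_size_bytes
end
```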
+ # Whether absolute symlink targets are supported.
+ # Corresponds to the JSON property `symlinkAbsolutePathStrategy`
+ # @return [String]
+ attr_accessor :symlink_absolute_path_strategy
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @action_cache_update_capabilities = args[:action_cache_update_capabilities] if args.key?(:action_cache_update_capabilities)
+ @cache_priority_capabilities = args[:cache_priority_capabilities] if args.key?(:cache_priority_capabilities)
+ @digest_function = args[:digest_function] if args.key?(:digest_function)
+ @max_batch_total_size_bytes = args[:max_batch_total_size_bytes] if args.key?(:max_batch_total_size_bytes)
+ @symlink_absolute_path_strategy = args[:symlink_absolute_path_strategy] if args.key?(:symlink_absolute_path_strategy)
+ end
+ end
+
+ # A `Command` is the actual command executed by a worker running an Action and
+ # specifications of its environment. Except as otherwise required, the
+ # environment (such as which system libraries or binaries are available, and
+ # what filesystems are mounted where) is defined by and specific to the
+ # implementation of the remote execution API.
+ class BuildBazelRemoteExecutionV2Command
+ include Google::Apis::Core::Hashable
+
+ # The arguments to the command. The first argument must be the path to the
+ # executable, which must be either a relative path, in which case it is
+ # evaluated with respect to the input root, or an absolute path.
+ # Corresponds to the JSON property `arguments`
+ # @return [Array<String>]
+ attr_accessor :arguments
+
+ # The environment variables to set when running the program. The worker may
+ # provide its own default environment variables; these defaults can be
+ # overridden using this field. Additional variables can also be specified. In
+ # order to ensure that equivalent Commands always hash to the same value, the
+ # environment variables MUST be lexicographically sorted by name. Sorting of
+ # strings is done by code point, equivalently, by the UTF-8 bytes.
+ # Corresponds to the JSON property `environmentVariables`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2CommandEnvironmentVariable>]
+ attr_accessor :environment_variables
+
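The sorting requirement above exists so that equivalent Commands always hash to the same digest. A minimal sketch of producing that order: Ruby's `String#<=>` compares byte-by-byte, which for UTF-8 strings matches the code-point order the comment requires (the name/value pairs here are illustrative, not from this library):

```ruby
# Environment variables as [name, value] pairs, in arbitrary order.
env = [["PATH", "/usr/bin"], ["HOME", "/home/worker"], ["LANG", "C"]]

# Sort lexicographically by name; byte-wise comparison == code-point order
# for UTF-8, as required for stable Command hashing.
sorted_env = env.sort_by { |name, _value| name }
```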
+ # A list of the output directories that the client expects to retrieve from the
+ # action. Only the listed directories will be returned (an entire directory
+ # structure will be returned as a Tree message digest, see OutputDirectory), as
+ # well as files listed in `output_files`. Other files or directories that may be
+ # created during command execution are discarded. The paths are relative to the
+ # working directory of the action execution. The paths are specified using a
+ # single forward slash (`/`) as a path separator, even if the execution platform
+ # natively uses a different separator. The path MUST NOT include a trailing
+ # slash, nor a leading slash, being a relative path. The special value of empty
+ # string is allowed, although not recommended, and can be used to capture the
+ # entire working directory tree, including inputs. In order to ensure consistent
+ # hashing of the same Action, the output paths MUST be sorted lexicographically
+ # by code point (or, equivalently, by UTF-8 bytes). An output directory cannot
+ # be duplicated or have the same path as any of the listed output files. An
+ # output directory is allowed to be a parent of another output directory.
+ # Directories leading up to the output directories (but not the output
+ # directories themselves) are created by the worker prior to execution, even if
+ # they are not explicitly part of the input root. DEPRECATED since v2.1: Use `
+ # output_paths` instead.
+ # Corresponds to the JSON property `outputDirectories`
+ # @return [Array<String>]
+ attr_accessor :output_directories
+
+ # A list of the output files that the client expects to retrieve from the action.
+ # Only the listed files, as well as directories listed in `output_directories`,
+ # will be returned to the client as output. Other files or directories that may
+ # be created during command execution are discarded. The paths are relative to
+ # the working directory of the action execution. The paths are specified using a
+ # single forward slash (`/`) as a path separator, even if the execution platform
+ # natively uses a different separator. The path MUST NOT include a trailing
+ # slash, nor a leading slash, being a relative path. In order to ensure
+ # consistent hashing of the same Action, the output paths MUST be sorted
+ # lexicographically by code point (or, equivalently, by UTF-8 bytes). An output
+ # file cannot be duplicated, be a parent of another output file, or have the
+ # same path as any of the listed output directories. Directories leading up to
+ # the output files are created by the worker prior to execution, even if they
+ # are not explicitly part of the input root. DEPRECATED since v2.1: Use `
+ # output_paths` instead.
+ # Corresponds to the JSON property `outputFiles`
+ # @return [Array<String>]
+ attr_accessor :output_files
+
+ # A list of the output paths that the client expects to retrieve from the action.
+ # Only the listed paths will be returned to the client as output. The type of
+ # the output (file or directory) is not specified, and will be determined by the
+ # server after action execution. If the resulting path is a file, it will be
+ # returned in an OutputFile typed field. If the path is a directory, the entire
+ # directory structure will be returned as a Tree message digest, see
+ # OutputDirectory. Other files or directories that may be created during command
+ # execution are discarded. The paths are relative to the working directory of
+ # the action execution. The paths are specified using a single forward slash (`/`
+ # ) as a path separator, even if the execution platform natively uses a
+ # different separator. The path MUST NOT include a trailing slash, nor a leading
+ # slash, being a relative path. In order to ensure consistent hashing of the
+ # same Action, the output paths MUST be deduplicated and sorted
+ # lexicographically by code point (or, equivalently, by UTF-8 bytes).
+ # Directories leading up to the output paths are created by the worker prior to
+ # execution, even if they are not explicitly part of the input root. New in v2.1:
+ # this field supersedes the DEPRECATED `output_files` and `output_directories`
+ # fields. If `output_paths` is used, `output_files` and `output_directories`
+ # will be ignored!
+ # Corresponds to the JSON property `outputPaths`
+ # @return [Array<String>]
+ attr_accessor :output_paths
+
736
+ # A `Platform` is a set of requirements, such as hardware, operating system, or
737
+ # compiler toolchain, for an Action's execution environment. A `Platform` is
738
+ # represented as a series of key-value pairs representing the properties that
739
+ # are required of the platform.
740
+ # Corresponds to the JSON property `platform`
741
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Platform]
742
+ attr_accessor :platform
743
+
744
+ # The working directory, relative to the input root, for the command to run in.
745
+ # It must be a directory which exists in the input tree. If it is left empty,
746
+ # then the action is run in the input root.
747
+ # Corresponds to the JSON property `workingDirectory`
748
+ # @return [String]
749
+ attr_accessor :working_directory
750
+
751
+ def initialize(**args)
752
+ update!(**args)
753
+ end
754
+
755
+ # Update properties of this object
756
+ def update!(**args)
757
+ @arguments = args[:arguments] if args.key?(:arguments)
758
+ @environment_variables = args[:environment_variables] if args.key?(:environment_variables)
759
+ @output_directories = args[:output_directories] if args.key?(:output_directories)
760
+ @output_files = args[:output_files] if args.key?(:output_files)
761
+ @output_paths = args[:output_paths] if args.key?(:output_paths)
762
+ @platform = args[:platform] if args.key?(:platform)
763
+ @working_directory = args[:working_directory] if args.key?(:working_directory)
764
+ end
765
+ end
766
+
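The deduplicate-and-sort rule for output paths above can be sketched in plain Ruby (a sketch; the path list is made up). Ruby compares same-encoding strings bytewise, which for UTF-8 strings is equivalent to code point order:

```ruby
# Canonicalize an output path list as required for consistent Action
# hashing: deduplicated and sorted lexicographically by code point.
# Bytewise comparison of UTF-8 strings gives exactly that order.
output_paths = ["obj/b.o", "obj/a.o", "obj/a.o", "bin/app"]
canonical = output_paths.uniq.sort
# canonical == ["bin/app", "obj/a.o", "obj/b.o"]
```

The canonical list is what would be assigned to `output_paths` so that two clients describing the same action produce the same hash.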
767
+ # An `EnvironmentVariable` is one variable to set in the running program's
768
+ # environment.
769
+ class BuildBazelRemoteExecutionV2CommandEnvironmentVariable
770
+ include Google::Apis::Core::Hashable
771
+
772
+ # The variable name.
773
+ # Corresponds to the JSON property `name`
774
+ # @return [String]
775
+ attr_accessor :name
776
+
777
+ # The variable value.
778
+ # Corresponds to the JSON property `value`
779
+ # @return [String]
780
+ attr_accessor :value
781
+
782
+ def initialize(**args)
783
+ update!(**args)
784
+ end
785
+
786
+ # Update properties of this object
787
+ def update!(**args)
788
+ @name = args[:name] if args.key?(:name)
789
+ @value = args[:value] if args.key?(:value)
790
+ end
791
+ end
792
+
793
+ # A content digest. A digest for a given blob consists of the size of the blob
794
+ # and its hash. The hash algorithm to use is defined by the server. The size is
795
+ # considered to be an integral part of the digest and cannot be separated. That
796
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
797
+ # the server MUST reject the request. The reason for including the size in the
798
+ # digest is as follows: in a great many cases, the server needs to know the size
799
+ # of the blob it is about to work with prior to starting an operation with it,
800
+ # such as flattening Merkle tree structures or streaming it to a worker.
801
+ # Technically, the server could implement a separate metadata store, but this
802
+ # results in a significantly more complicated implementation as opposed to
803
+ # having the client specify the size up-front (or storing the size along with
804
+ # the digest in every message where digests are embedded). This does mean that
805
+ # the API leaks some implementation details of (what we consider to be) a
806
+ # reasonable server implementation, but we consider this to be a worthwhile
807
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
808
+ # refers to the message in binary encoded form. To ensure consistent hashing,
809
+ # clients and servers MUST ensure that they serialize messages according to the
810
+ # following rules, even if there are alternate valid encodings for the same
811
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
812
+ # There are no duplicate fields. * Fields are serialized according to the
813
+ # default semantics for their type. Most protocol buffer implementations will
814
+ # always follow these rules when serializing, but care should be taken to avoid
815
+ # shortcuts. For instance, concatenating two messages to merge them may produce
816
+ # duplicate fields.
817
+ class BuildBazelRemoteExecutionV2Digest
818
+ include Google::Apis::Core::Hashable
819
+
820
+ # The hash. In the case of SHA-256, it will always be a lowercase hex string
821
+ # exactly 64 characters long.
822
+ # Corresponds to the JSON property `hash`
823
+ # @return [String]
824
+ attr_accessor :hash_prop
825
+
826
+ # The size of the blob, in bytes.
827
+ # Corresponds to the JSON property `sizeBytes`
828
+ # @return [Fixnum]
829
+ attr_accessor :size_bytes
830
+
831
+ def initialize(**args)
832
+ update!(**args)
833
+ end
834
+
835
+ # Update properties of this object
836
+ def update!(**args)
837
+ @hash_prop = args[:hash_prop] if args.key?(:hash_prop)
838
+ @size_bytes = args[:size_bytes] if args.key?(:size_bytes)
839
+ end
840
+ end
841
+
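The digest rules above can be illustrated with the Ruby standard library (a sketch assuming the server's digest function is SHA-256; the blob content is made up). The point is that the hash and the size come from the same blob and travel together:

```ruby
require "digest"

# A content digest pairs the blob's hash with its exact size in bytes;
# per the comment above, a server MUST reject a request where either
# half is missing, so both are computed from the same blob up front.
blob = "echo hello"
blob_hash = Digest::SHA256.hexdigest(blob) # lowercase hex, 64 chars
blob_size = blob.bytesize
```

These two values would populate `hash_prop` and `size_bytes` on a `BuildBazelRemoteExecutionV2Digest`.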
842
+ # A `Directory` represents a directory node in a file tree, containing zero or
843
+ # more children FileNodes, DirectoryNodes and SymlinkNodes. Each `Node` contains
844
+ # its name in the directory, either the digest of its content (either a file
845
+ # blob or a `Directory` proto) or a symlink target, as well as possibly some
846
+ # metadata about the file or directory. In order to ensure that two equivalent
847
+ # directory trees hash to the same value, the following restrictions MUST be
848
+ # obeyed when constructing a `Directory`: * Every child in the directory must
849
+ # have a path of exactly one segment. Multiple levels of directory hierarchy may
850
+ # not be collapsed. * Each child in the directory must have a unique path
851
+ # segment (file name). Note that while the API itself is case-sensitive, the
852
+ # environment where the Action is executed may or may not be case-sensitive.
853
+ # That is, it is legal to call the API with a Directory that has both "Foo" and "
854
+ # foo" as children, but the Action may be rejected by the remote system upon
855
+ # execution. * The files, directories and symlinks in the directory must each be
856
+ # sorted in lexicographical order by path. The path strings must be sorted by
857
+ # code point, equivalently, by UTF-8 bytes. * The NodeProperties of files,
858
+ # directories, and symlinks must be sorted in lexicographical order by property
859
+ # name. A `Directory` that obeys the restrictions is said to be in canonical
860
+ # form. As an example, the following could be used for a file named `bar` and a
861
+ # directory named `foo` with an executable file named `baz` (hashes shortened
862
+ # for readability): ```json // (Directory proto) { files: [ { name: "bar",
863
+ # digest: { hash: "4a73bc9d03...", size: 65534 }, node_properties: [ { "name": "
864
+ # MTime", "value": "2017-01-15T01:30:15.01Z" } ] } ], directories: [ { name: "
865
+ # foo", digest: { hash: "4cf2eda940...", size: 43 } } ] } // (Directory proto
866
+ # with hash "4cf2eda940..." and size 43) { files: [ { name: "baz", digest: {
867
+ # hash: "b2c941073e...", size: 1294, }, is_executable: true } ] } ```
868
+ class BuildBazelRemoteExecutionV2Directory
869
+ include Google::Apis::Core::Hashable
870
+
871
+ # The subdirectories in the directory.
872
+ # Corresponds to the JSON property `directories`
873
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2DirectoryNode>]
874
+ attr_accessor :directories
875
+
876
+ # The files in the directory.
877
+ # Corresponds to the JSON property `files`
878
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2FileNode>]
879
+ attr_accessor :files
880
+
881
+ # The node properties of the Directory.
882
+ # Corresponds to the JSON property `nodeProperties`
883
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2NodeProperty>]
884
+ attr_accessor :node_properties
885
+
886
+ # The symlinks in the directory.
887
+ # Corresponds to the JSON property `symlinks`
888
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2SymlinkNode>]
889
+ attr_accessor :symlinks
890
+
891
+ def initialize(**args)
892
+ update!(**args)
893
+ end
894
+
895
+ # Update properties of this object
896
+ def update!(**args)
897
+ @directories = args[:directories] if args.key?(:directories)
898
+ @files = args[:files] if args.key?(:files)
899
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
900
+ @symlinks = args[:symlinks] if args.key?(:symlinks)
901
+ end
902
+ end
903
+
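The canonical-form restrictions above amount to sorting each child list by its single-segment name (a sketch with hypothetical names; real children would be `FileNode`, `DirectoryNode`, and `SymlinkNode` objects rather than plain hashes):

```ruby
# Canonical form: the files, directories and symlinks of a Directory
# are each sorted in lexicographical (code point) order by name, so
# that two equivalent trees serialize and hash identically.
files = [{ name: "baz", is_executable: true }, { name: "bar" }]
canonical_files = files.sort_by { |f| f[:name] }
# canonical_files.map { |f| f[:name] } == ["bar", "baz"]
```

The same sort would be applied to the `directories` and `symlinks` arrays before computing the `Directory` digest.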
904
+ # A `DirectoryNode` represents a child of a Directory which is itself a `
905
+ # Directory` and its associated metadata.
906
+ class BuildBazelRemoteExecutionV2DirectoryNode
907
+ include Google::Apis::Core::Hashable
908
+
909
+ # A content digest. A digest for a given blob consists of the size of the blob
910
+ # and its hash. The hash algorithm to use is defined by the server. The size is
911
+ # considered to be an integral part of the digest and cannot be separated. That
912
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
913
+ # the server MUST reject the request. The reason for including the size in the
914
+ # digest is as follows: in a great many cases, the server needs to know the size
915
+ # of the blob it is about to work with prior to starting an operation with it,
916
+ # such as flattening Merkle tree structures or streaming it to a worker.
917
+ # Technically, the server could implement a separate metadata store, but this
918
+ # results in a significantly more complicated implementation as opposed to
919
+ # having the client specify the size up-front (or storing the size along with
920
+ # the digest in every message where digests are embedded). This does mean that
921
+ # the API leaks some implementation details of (what we consider to be) a
922
+ # reasonable server implementation, but we consider this to be a worthwhile
923
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
924
+ # refers to the message in binary encoded form. To ensure consistent hashing,
925
+ # clients and servers MUST ensure that they serialize messages according to the
926
+ # following rules, even if there are alternate valid encodings for the same
927
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
928
+ # There are no duplicate fields. * Fields are serialized according to the
929
+ # default semantics for their type. Most protocol buffer implementations will
930
+ # always follow these rules when serializing, but care should be taken to avoid
931
+ # shortcuts. For instance, concatenating two messages to merge them may produce
932
+ # duplicate fields.
933
+ # Corresponds to the JSON property `digest`
934
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
935
+ attr_accessor :digest
936
+
937
+ # The name of the directory.
938
+ # Corresponds to the JSON property `name`
939
+ # @return [String]
940
+ attr_accessor :name
941
+
942
+ def initialize(**args)
943
+ update!(**args)
944
+ end
945
+
946
+ # Update properties of this object
947
+ def update!(**args)
948
+ @digest = args[:digest] if args.key?(:digest)
949
+ @name = args[:name] if args.key?(:name)
950
+ end
951
+ end
952
+
953
+ # Metadata about an ongoing execution, which will be contained in the metadata
954
+ # field of the Operation.
955
+ class BuildBazelRemoteExecutionV2ExecuteOperationMetadata
956
+ include Google::Apis::Core::Hashable
957
+
958
+ # A content digest. A digest for a given blob consists of the size of the blob
959
+ # and its hash. The hash algorithm to use is defined by the server. The size is
960
+ # considered to be an integral part of the digest and cannot be separated. That
961
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
962
+ # the server MUST reject the request. The reason for including the size in the
963
+ # digest is as follows: in a great many cases, the server needs to know the size
964
+ # of the blob it is about to work with prior to starting an operation with it,
965
+ # such as flattening Merkle tree structures or streaming it to a worker.
966
+ # Technically, the server could implement a separate metadata store, but this
967
+ # results in a significantly more complicated implementation as opposed to
968
+ # having the client specify the size up-front (or storing the size along with
969
+ # the digest in every message where digests are embedded). This does mean that
970
+ # the API leaks some implementation details of (what we consider to be) a
971
+ # reasonable server implementation, but we consider this to be a worthwhile
972
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
973
+ # refers to the message in binary encoded form. To ensure consistent hashing,
974
+ # clients and servers MUST ensure that they serialize messages according to the
975
+ # following rules, even if there are alternate valid encodings for the same
976
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
977
+ # There are no duplicate fields. * Fields are serialized according to the
978
+ # default semantics for their type. Most protocol buffer implementations will
979
+ # always follow these rules when serializing, but care should be taken to avoid
980
+ # shortcuts. For instance, concatenating two messages to merge them may produce
981
+ # duplicate fields.
982
+ # Corresponds to the JSON property `actionDigest`
983
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
984
+ attr_accessor :action_digest
985
+
986
+ # The current stage of execution.
987
+ # Corresponds to the JSON property `stage`
988
+ # @return [String]
989
+ attr_accessor :stage
990
+
991
+ # If set, the client can use this name with ByteStream.Read to stream the
992
+ # standard error.
993
+ # Corresponds to the JSON property `stderrStreamName`
994
+ # @return [String]
995
+ attr_accessor :stderr_stream_name
996
+
997
+ # If set, the client can use this name with ByteStream.Read to stream the
998
+ # standard output.
999
+ # Corresponds to the JSON property `stdoutStreamName`
1000
+ # @return [String]
1001
+ attr_accessor :stdout_stream_name
1002
+
1003
+ def initialize(**args)
1004
+ update!(**args)
1005
+ end
1006
+
1007
+ # Update properties of this object
1008
+ def update!(**args)
1009
+ @action_digest = args[:action_digest] if args.key?(:action_digest)
1010
+ @stage = args[:stage] if args.key?(:stage)
1011
+ @stderr_stream_name = args[:stderr_stream_name] if args.key?(:stderr_stream_name)
1012
+ @stdout_stream_name = args[:stdout_stream_name] if args.key?(:stdout_stream_name)
1013
+ end
1014
+ end
1015
+
1016
+ # A request message for Execution.Execute.
1017
+ class BuildBazelRemoteExecutionV2ExecuteRequest
1018
+ include Google::Apis::Core::Hashable
1019
+
1020
+ # A content digest. A digest for a given blob consists of the size of the blob
1021
+ # and its hash. The hash algorithm to use is defined by the server. The size is
1022
+ # considered to be an integral part of the digest and cannot be separated. That
1023
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
1024
+ # the server MUST reject the request. The reason for including the size in the
1025
+ # digest is as follows: in a great many cases, the server needs to know the size
1026
+ # of the blob it is about to work with prior to starting an operation with it,
1027
+ # such as flattening Merkle tree structures or streaming it to a worker.
1028
+ # Technically, the server could implement a separate metadata store, but this
1029
+ # results in a significantly more complicated implementation as opposed to
1030
+ # having the client specify the size up-front (or storing the size along with
1031
+ # the digest in every message where digests are embedded). This does mean that
1032
+ # the API leaks some implementation details of (what we consider to be) a
1033
+ # reasonable server implementation, but we consider this to be a worthwhile
1034
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
1035
+ # refers to the message in binary encoded form. To ensure consistent hashing,
1036
+ # clients and servers MUST ensure that they serialize messages according to the
1037
+ # following rules, even if there are alternate valid encodings for the same
1038
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
1039
+ # There are no duplicate fields. * Fields are serialized according to the
1040
+ # default semantics for their type. Most protocol buffer implementations will
1041
+ # always follow these rules when serializing, but care should be taken to avoid
1042
+ # shortcuts. For instance, concatenating two messages to merge them may produce
1043
+ # duplicate fields.
1044
+ # Corresponds to the JSON property `actionDigest`
1045
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
1046
+ attr_accessor :action_digest
1047
+
1048
+ # An `ExecutionPolicy` can be used to control the scheduling of the action.
1049
+ # Corresponds to the JSON property `executionPolicy`
1050
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2ExecutionPolicy]
1051
+ attr_accessor :execution_policy
1052
+
1053
+ # A `ResultsCachePolicy` is used for fine-grained control over how action
1054
+ # outputs are stored in the CAS and Action Cache.
1055
+ # Corresponds to the JSON property `resultsCachePolicy`
1056
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2ResultsCachePolicy]
1057
+ attr_accessor :results_cache_policy
1058
+
1059
+ # If true, the action will be executed even if its result is already present in
1060
+ # the ActionCache. The execution is still allowed to be merged with other in-
1061
+ # flight executions of the same action, however - semantically, the service MUST
1062
+ # only guarantee that the results of an execution with this field set were not
1063
+ # visible before the corresponding execution request was sent. Note that actions
1064
+ # from execution requests with this field set are still eligible to be
1065
+ # entered into the action cache upon completion, and services SHOULD overwrite
1066
+ # any existing entries that may exist. This allows skip_cache_lookup requests to
1067
+ # be used as a mechanism for replacing action cache entries that reference
1068
+ # outputs no longer available or that are poisoned in any way. If false, the
1069
+ # result may be served from the action cache.
1070
+ # Corresponds to the JSON property `skipCacheLookup`
1071
+ # @return [Boolean]
1072
+ attr_accessor :skip_cache_lookup
1073
+ alias_method :skip_cache_lookup?, :skip_cache_lookup
1074
+
1075
+ def initialize(**args)
1076
+ update!(**args)
1077
+ end
1078
+
1079
+ # Update properties of this object
1080
+ def update!(**args)
1081
+ @action_digest = args[:action_digest] if args.key?(:action_digest)
1082
+ @execution_policy = args[:execution_policy] if args.key?(:execution_policy)
1083
+ @results_cache_policy = args[:results_cache_policy] if args.key?(:results_cache_policy)
1084
+ @skip_cache_lookup = args[:skip_cache_lookup] if args.key?(:skip_cache_lookup)
1085
+ end
1086
+ end
1087
+
1088
+ # The response message for Execution.Execute, which will be contained in the
1089
+ # response field of the Operation.
1090
+ class BuildBazelRemoteExecutionV2ExecuteResponse
1091
+ include Google::Apis::Core::Hashable
1092
+
1093
+ # True if the result was served from cache, false if it was executed.
1094
+ # Corresponds to the JSON property `cachedResult`
1095
+ # @return [Boolean]
1096
+ attr_accessor :cached_result
1097
+ alias_method :cached_result?, :cached_result
1098
+
1099
+ # Freeform informational message with details on the execution of the action
1100
+ # that may be displayed to the user upon failure or when requested explicitly.
1101
+ # Corresponds to the JSON property `message`
1102
+ # @return [String]
1103
+ attr_accessor :message
1104
+
1105
+ # An ActionResult represents the result of an Action being run.
1106
+ # Corresponds to the JSON property `result`
1107
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2ActionResult]
1108
+ attr_accessor :result
1109
+
1110
+ # An optional list of additional log outputs the server wishes to provide. A
1111
+ # server can use this to return execution-specific logs however it wishes. This
1112
+ # is intended primarily to make it easier for users to debug issues that may be
1113
+ # outside of the actual job execution, such as by identifying the worker
1114
+ # executing the action or by providing logs from the worker's setup phase. The
1115
+ # keys SHOULD be human readable so that a client can display them to a user.
1116
+ # Corresponds to the JSON property `serverLogs`
1117
+ # @return [Hash<String,Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2LogFile>]
1118
+ attr_accessor :server_logs
1119
+
1120
+ # The `Status` type defines a logical error model that is suitable for different
1121
+ # programming environments, including REST APIs and RPC APIs. It is used by [
1122
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
1123
+ # data: error code, error message, and error details. You can find out more
1124
+ # about this error model and how to work with it in the [API Design Guide](https:
1125
+ # //cloud.google.com/apis/design/errors).
1126
+ # Corresponds to the JSON property `status`
1127
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus]
1128
+ attr_accessor :status
1129
+
1130
+ def initialize(**args)
1131
+ update!(**args)
1132
+ end
1133
+
1134
+ # Update properties of this object
1135
+ def update!(**args)
1136
+ @cached_result = args[:cached_result] if args.key?(:cached_result)
1137
+ @message = args[:message] if args.key?(:message)
1138
+ @result = args[:result] if args.key?(:result)
1139
+ @server_logs = args[:server_logs] if args.key?(:server_logs)
1140
+ @status = args[:status] if args.key?(:status)
1141
+ end
1142
+ end
1143
+
1144
+ # ExecutedActionMetadata contains details about a completed execution.
1145
+ class BuildBazelRemoteExecutionV2ExecutedActionMetadata
1146
+ include Google::Apis::Core::Hashable
1147
+
1148
+ # When the worker completed executing the action command.
1149
+ # Corresponds to the JSON property `executionCompletedTimestamp`
1150
+ # @return [String]
1151
+ attr_accessor :execution_completed_timestamp
1152
+
1153
+ # When the worker started executing the action command.
1154
+ # Corresponds to the JSON property `executionStartTimestamp`
1155
+ # @return [String]
1156
+ attr_accessor :execution_start_timestamp
1157
+
1158
+ # When the worker finished fetching action inputs.
1159
+ # Corresponds to the JSON property `inputFetchCompletedTimestamp`
1160
+ # @return [String]
1161
+ attr_accessor :input_fetch_completed_timestamp
1162
+
1163
+ # When the worker started fetching action inputs.
1164
+ # Corresponds to the JSON property `inputFetchStartTimestamp`
1165
+ # @return [String]
1166
+ attr_accessor :input_fetch_start_timestamp
1167
+
1168
+ # When the worker finished uploading action outputs.
1169
+ # Corresponds to the JSON property `outputUploadCompletedTimestamp`
1170
+ # @return [String]
1171
+ attr_accessor :output_upload_completed_timestamp
1172
+
1173
+ # When the worker started uploading action outputs.
1174
+ # Corresponds to the JSON property `outputUploadStartTimestamp`
1175
+ # @return [String]
1176
+ attr_accessor :output_upload_start_timestamp
1177
+
1178
+ # When the action was added to the queue.
1179
+ # Corresponds to the JSON property `queuedTimestamp`
1180
+ # @return [String]
1181
+ attr_accessor :queued_timestamp
1182
+
1183
+ # The name of the worker which ran the execution.
1184
+ # Corresponds to the JSON property `worker`
1185
+ # @return [String]
1186
+ attr_accessor :worker
1187
+
1188
+ # When the worker completed the action, including all stages.
1189
+ # Corresponds to the JSON property `workerCompletedTimestamp`
1190
+ # @return [String]
1191
+ attr_accessor :worker_completed_timestamp
1192
+
1193
+ # When the worker received the action.
1194
+ # Corresponds to the JSON property `workerStartTimestamp`
1195
+ # @return [String]
1196
+ attr_accessor :worker_start_timestamp
1197
+
1198
+ def initialize(**args)
1199
+ update!(**args)
1200
+ end
1201
+
1202
+ # Update properties of this object
1203
+ def update!(**args)
1204
+ @execution_completed_timestamp = args[:execution_completed_timestamp] if args.key?(:execution_completed_timestamp)
1205
+ @execution_start_timestamp = args[:execution_start_timestamp] if args.key?(:execution_start_timestamp)
1206
+ @input_fetch_completed_timestamp = args[:input_fetch_completed_timestamp] if args.key?(:input_fetch_completed_timestamp)
1207
+ @input_fetch_start_timestamp = args[:input_fetch_start_timestamp] if args.key?(:input_fetch_start_timestamp)
1208
+ @output_upload_completed_timestamp = args[:output_upload_completed_timestamp] if args.key?(:output_upload_completed_timestamp)
1209
+ @output_upload_start_timestamp = args[:output_upload_start_timestamp] if args.key?(:output_upload_start_timestamp)
1210
+ @queued_timestamp = args[:queued_timestamp] if args.key?(:queued_timestamp)
1211
+ @worker = args[:worker] if args.key?(:worker)
1212
+ @worker_completed_timestamp = args[:worker_completed_timestamp] if args.key?(:worker_completed_timestamp)
1213
+ @worker_start_timestamp = args[:worker_start_timestamp] if args.key?(:worker_start_timestamp)
1214
+ end
1215
+ end
1216
+
1217
+ # Capabilities of the remote execution system.
1218
+ class BuildBazelRemoteExecutionV2ExecutionCapabilities
1219
+ include Google::Apis::Core::Hashable
1220
+
1221
+ # Remote execution may only support a single digest function.
1222
+ # Corresponds to the JSON property `digestFunction`
1223
+ # @return [String]
1224
+ attr_accessor :digest_function
1225
+
1226
+ # Whether remote execution is enabled for the particular server/instance.
1227
+ # Corresponds to the JSON property `execEnabled`
1228
+ # @return [Boolean]
1229
+ attr_accessor :exec_enabled
1230
+ alias_method :exec_enabled?, :exec_enabled
1231
+
1232
+ # Allowed values for priority in ResultsCachePolicy. Used for querying both cache
1233
+ # and execution valid priority ranges.
1234
+ # Corresponds to the JSON property `executionPriorityCapabilities`
1235
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2PriorityCapabilities]
1236
+ attr_accessor :execution_priority_capabilities
1237
+
1238
+ # Supported node properties.
1239
+ # Corresponds to the JSON property `supportedNodeProperties`
1240
+ # @return [Array<String>]
1241
+ attr_accessor :supported_node_properties
1242
+
1243
+ def initialize(**args)
1244
+ update!(**args)
1245
+ end
1246
+
1247
+ # Update properties of this object
1248
+ def update!(**args)
1249
+ @digest_function = args[:digest_function] if args.key?(:digest_function)
1250
+ @exec_enabled = args[:exec_enabled] if args.key?(:exec_enabled)
1251
+ @execution_priority_capabilities = args[:execution_priority_capabilities] if args.key?(:execution_priority_capabilities)
1252
+ @supported_node_properties = args[:supported_node_properties] if args.key?(:supported_node_properties)
1253
+ end
1254
+ end
1255
+
1256
+ # An `ExecutionPolicy` can be used to control the scheduling of the action.
1257
+ class BuildBazelRemoteExecutionV2ExecutionPolicy
1258
+ include Google::Apis::Core::Hashable
1259
+
1260
+ # The priority (relative importance) of this action. Generally, a lower value
1261
+ # means that the action should be run sooner than actions having a greater
1262
+ # priority value, but the interpretation of a given value is server-dependent.
1263
+ # A priority of 0 means the *default* priority. Priorities may be positive or
1264
+ # negative, and such actions should run later or sooner than actions having the
1265
+ # default priority, respectively. The particular semantics of this field is up
1266
+ # to the server. In particular, every server will have their own supported range
1267
+ # of priorities, and will decide how these map into scheduling policy.
1268
+ # Corresponds to the JSON property `priority`
1269
+ # @return [Fixnum]
1270
+ attr_accessor :priority
1271
+
1272
+ def initialize(**args)
1273
+ update!(**args)
1274
+ end
1275
+
1276
+ # Update properties of this object
1277
+ def update!(**args)
1278
+ @priority = args[:priority] if args.key?(:priority)
1279
+ end
1280
+ end
1281
+
1282
+ # A `FileNode` represents a single file and associated metadata.
1283
+ class BuildBazelRemoteExecutionV2FileNode
1284
+ include Google::Apis::Core::Hashable
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :digest
+
+ # True if file is executable, false otherwise.
+ # Corresponds to the JSON property `isExecutable`
+ # @return [Boolean]
+ attr_accessor :is_executable
+ alias_method :is_executable?, :is_executable
+
+ # The name of the file.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # The node properties of the FileNode.
+ # Corresponds to the JSON property `nodeProperties`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2NodeProperty>]
+ attr_accessor :node_properties
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @digest = args[:digest] if args.key?(:digest)
+ @is_executable = args[:is_executable] if args.key?(:is_executable)
+ @name = args[:name] if args.key?(:name)
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
+ end
+ end
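The `Digest` documentation above pairs a hash with the blob's size, and insists both travel together. A minimal sketch of computing such a pair in plain Ruby; SHA-256 is an assumption here, since the actual hash algorithm is defined by the server:

```ruby
require 'digest'

# Compute a {hash, size_bytes} pair for a blob, as described above.
# Digest::SHA256 is an assumption: the hash function is server-defined.
def compute_digest(blob)
  {
    hash: Digest::SHA256.hexdigest(blob),
    size_bytes: blob.bytesize
  }
end

d = compute_digest("hello")
# Both fields must be sent together: per the comment above, a request with
# a correct `hash` but a missing `size_bytes` MUST be rejected.
```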
+
+ # A request message for ContentAddressableStorage.FindMissingBlobs.
+ class BuildBazelRemoteExecutionV2FindMissingBlobsRequest
+ include Google::Apis::Core::Hashable
+
+ # A list of the blobs to check.
+ # Corresponds to the JSON property `blobDigests`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest>]
+ attr_accessor :blob_digests
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @blob_digests = args[:blob_digests] if args.key?(:blob_digests)
+ end
+ end
+
+ # A response message for ContentAddressableStorage.FindMissingBlobs.
+ class BuildBazelRemoteExecutionV2FindMissingBlobsResponse
+ include Google::Apis::Core::Hashable
+
+ # A list of the requested blobs that are *not* present in the storage.
+ # Corresponds to the JSON property `missingBlobDigests`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest>]
+ attr_accessor :missing_blob_digests
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @missing_blob_digests = args[:missing_blob_digests] if args.key?(:missing_blob_digests)
+ end
+ end
+
+ # A response message for ContentAddressableStorage.GetTree.
+ class BuildBazelRemoteExecutionV2GetTreeResponse
+ include Google::Apis::Core::Hashable
+
+ # The directories descended from the requested root.
+ # Corresponds to the JSON property `directories`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Directory>]
+ attr_accessor :directories
+
+ # If present, signifies that there are more results which the client can
+ # retrieve by passing this as the page_token in a subsequent request. If empty,
+ # signifies that this is the last page of results.
+ # Corresponds to the JSON property `nextPageToken`
+ # @return [String]
+ attr_accessor :next_page_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @directories = args[:directories] if args.key?(:directories)
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+ end
+ end
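`next_page_token` above drives the standard pagination loop: keep passing the returned token back until it comes back empty. A runnable sketch of draining all pages, with a stand-in `fetch_page` lambda in place of the real `GetTree` call (the lambda and its page layout are illustrative assumptions, not this gem's service API):

```ruby
# Stand-in for ContentAddressableStorage.GetTree: maps a page token to a
# page of directories plus the token for the next page ("" on the last).
pages = {
  nil  => { directories: [:root_dir],  next_page_token: "p2" },
  "p2" => { directories: [:child_dir], next_page_token: "" }
}
fetch_page = ->(token) { pages[token] }

directories = []
token = nil
loop do
  page = fetch_page.call(token)
  directories.concat(page[:directories])
  token = page[:next_page_token]
  break if token.nil? || token.empty? # empty token signals the last page
end
# directories now holds every directory descended from the requested root
```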
+
+ # A `LogFile` is a log stored in the CAS.
+ class BuildBazelRemoteExecutionV2LogFile
+ include Google::Apis::Core::Hashable
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :digest
+
+ # This is a hint as to the purpose of the log, and is set to true if the log is
+ # human-readable text that can be usefully displayed to a user, and false
+ # otherwise. For instance, if a command-line client wishes to print the server
+ # logs to the terminal for a failed action, this allows it to avoid displaying a
+ # binary file.
+ # Corresponds to the JSON property `humanReadable`
+ # @return [Boolean]
+ attr_accessor :human_readable
+ alias_method :human_readable?, :human_readable
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @digest = args[:digest] if args.key?(:digest)
+ @human_readable = args[:human_readable] if args.key?(:human_readable)
+ end
+ end
+
+ # A single property for FileNodes, DirectoryNodes, and SymlinkNodes. The server
+ # is responsible for specifying the property `name`s that it accepts. If
+ # permitted by the server, the same `name` may occur multiple times.
+ class BuildBazelRemoteExecutionV2NodeProperty
+ include Google::Apis::Core::Hashable
+
+ # The property name.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # The property value.
+ # Corresponds to the JSON property `value`
+ # @return [String]
+ attr_accessor :value
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ @value = args[:value] if args.key?(:value)
+ end
+ end
+
+ # An `OutputDirectory` is the output in an `ActionResult` corresponding to a
+ # directory's full contents rather than a single file.
+ class BuildBazelRemoteExecutionV2OutputDirectory
+ include Google::Apis::Core::Hashable
+
+ # The full path of the directory relative to the working directory. The path
+ # separator is a forward slash `/`. Since this is a relative path, it MUST NOT
+ # begin with a leading forward slash. The empty string value is allowed, and it
+ # denotes the entire working directory.
+ # Corresponds to the JSON property `path`
+ # @return [String]
+ attr_accessor :path
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `treeDigest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :tree_digest
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @path = args[:path] if args.key?(:path)
+ @tree_digest = args[:tree_digest] if args.key?(:tree_digest)
+ end
+ end
+
+ # An `OutputFile` is similar to a FileNode, but it is used as an output in an `
+ # ActionResult`. It allows a full file path rather than only a name.
+ class BuildBazelRemoteExecutionV2OutputFile
+ include Google::Apis::Core::Hashable
+
+ # The contents of the file if inlining was requested. The server SHOULD NOT
+ # inline file contents unless requested by the client in the
+ # GetActionResultRequest message. The server MAY omit inlining, even if
+ # requested, and MUST do so if inlining would cause the response to exceed
+ # message size limits.
+ # Corresponds to the JSON property `contents`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :contents
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :digest
+
+ # True if file is executable, false otherwise.
+ # Corresponds to the JSON property `isExecutable`
+ # @return [Boolean]
+ attr_accessor :is_executable
+ alias_method :is_executable?, :is_executable
+
+ # The supported node properties of the OutputFile, if requested by the Action.
+ # Corresponds to the JSON property `nodeProperties`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2NodeProperty>]
+ attr_accessor :node_properties
+
+ # The full path of the file relative to the working directory, including the
+ # filename. The path separator is a forward slash `/`. Since this is a relative
+ # path, it MUST NOT begin with a leading forward slash.
+ # Corresponds to the JSON property `path`
+ # @return [String]
+ attr_accessor :path
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @contents = args[:contents] if args.key?(:contents)
+ @digest = args[:digest] if args.key?(:digest)
+ @is_executable = args[:is_executable] if args.key?(:is_executable)
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
+ @path = args[:path] if args.key?(:path)
+ end
+ end
+
+ # An `OutputSymlink` is similar to a Symlink, but it is used as an output in an `
+ # ActionResult`. `OutputSymlink` is binary-compatible with `SymlinkNode`.
+ class BuildBazelRemoteExecutionV2OutputSymlink
+ include Google::Apis::Core::Hashable
+
+ # The supported node properties of the OutputSymlink, if requested by the Action.
+ # Corresponds to the JSON property `nodeProperties`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2NodeProperty>]
+ attr_accessor :node_properties
+
+ # The full path of the symlink relative to the working directory, including the
+ # filename. The path separator is a forward slash `/`. Since this is a relative
+ # path, it MUST NOT begin with a leading forward slash.
+ # Corresponds to the JSON property `path`
+ # @return [String]
+ attr_accessor :path
+
+ # The target path of the symlink. The path separator is a forward slash `/`. The
+ # target path can be relative to the parent directory of the symlink or it can
+ # be an absolute path starting with `/`. Support for absolute paths can be
+ # checked using the Capabilities API. The canonical form forbids the substrings `
+ # /./` and `//` in the target path. `..` components are allowed anywhere in the
+ # target path.
+ # Corresponds to the JSON property `target`
+ # @return [String]
+ attr_accessor :target
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
+ @path = args[:path] if args.key?(:path)
+ @target = args[:target] if args.key?(:target)
+ end
+ end
+
+ # A `Platform` is a set of requirements, such as hardware, operating system, or
+ # compiler toolchain, for an Action's execution environment. A `Platform` is
+ # represented as a series of key-value pairs representing the properties that
+ # are required of the platform.
+ class BuildBazelRemoteExecutionV2Platform
+ include Google::Apis::Core::Hashable
+
+ # The properties that make up this platform. In order to ensure that equivalent `
+ # Platform`s always hash to the same value, the properties MUST be
+ # lexicographically sorted by name, and then by value. Sorting of strings is
+ # done by code point, equivalently, by the UTF-8 bytes.
+ # Corresponds to the JSON property `properties`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2PlatformProperty>]
+ attr_accessor :properties
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @properties = args[:properties] if args.key?(:properties)
+ end
+ end
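The `properties` comment above requires lexicographic sorting by name, then value, by code point (equivalently, UTF-8 bytes), so that equivalent `Platform`s hash identically. A sketch of canonicalizing a property list, with plain hashes standing in for `PlatformProperty` objects (an illustrative assumption):

```ruby
# Each property is a {name:, value:} pair. Sorting by the raw bytes of
# name, then value (via String#b), matches the code-point ordering the
# comment above requires for canonical Platforms.
props = [
  { name: "OSFamily", value: "linux"  },
  { name: "ISA",      value: "x86-64" },
  { name: "ISA",      value: "avx2"   }
]

canonical = props.sort_by { |p| [p[:name].b, p[:value].b] }
# => ISA/avx2, ISA/x86-64, OSFamily/linux
```

Two equivalent unsorted lists now serialize to the same canonical sequence, which is what makes the `Platform` hash stable.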
+
+ # A single property for the environment. The server is responsible for
+ # specifying the property `name`s that it accepts. If an unknown `name` is
+ # provided in the requirements for an Action, the server SHOULD reject the
+ # execution request. If permitted by the server, the same `name` may occur
+ # multiple times. The server is also responsible for specifying the
+ # interpretation of property `value`s. For instance, a property describing how
+ # much RAM must be available may be interpreted as allowing a worker with 16GB
+ # to fulfill a request for 8GB, while a property describing the OS environment
+ # on which the action must be performed may require an exact match with the
+ # worker's OS. The server MAY use the `value` of one or more properties to
+ # determine how it sets up the execution environment, such as by making specific
+ # system files available to the worker.
+ class BuildBazelRemoteExecutionV2PlatformProperty
+ include Google::Apis::Core::Hashable
+
+ # The property name.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # The property value.
+ # Corresponds to the JSON property `value`
+ # @return [String]
+ attr_accessor :value
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ @value = args[:value] if args.key?(:value)
+ end
+ end
+
+ # Allowed values for priority in ResultsCachePolicy. Used for querying both
+ # cache and execution valid priority ranges.
+ class BuildBazelRemoteExecutionV2PriorityCapabilities
+ include Google::Apis::Core::Hashable
+
+ # The supported priority ranges.
+ # Corresponds to the JSON property `priorities`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2PriorityCapabilitiesPriorityRange>]
+ attr_accessor :priorities
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @priorities = args[:priorities] if args.key?(:priorities)
+ end
+ end
+
+ # Supported range of priorities, including boundaries.
+ class BuildBazelRemoteExecutionV2PriorityCapabilitiesPriorityRange
+ include Google::Apis::Core::Hashable
+
+ # The maximum priority value, inclusive.
+ # Corresponds to the JSON property `maxPriority`
+ # @return [Fixnum]
+ attr_accessor :max_priority
+
+ # The minimum priority value, inclusive.
+ # Corresponds to the JSON property `minPriority`
+ # @return [Fixnum]
+ attr_accessor :min_priority
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @max_priority = args[:max_priority] if args.key?(:max_priority)
+ @min_priority = args[:min_priority] if args.key?(:min_priority)
+ end
+ end
+
+ # An optional Metadata to attach to any RPC request to tell the server about an
+ # external context of the request. The server may use this for logging or other
+ # purposes. To use it, the client attaches the header to the call using the
+ # canonical proto serialization: * name: `build.bazel.remote.execution.v2.
+ # requestmetadata-bin` * contents: the base64 encoded binary `RequestMetadata`
+ # message. Note: the gRPC library serializes binary headers encoded in base 64
+ # by default (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#
+ # requests). Therefore, if the gRPC library is used to pass/retrieve this
+ # metadata, the user may ignore the base64 encoding and assume it is simply
+ # serialized as a binary message.
+ class BuildBazelRemoteExecutionV2RequestMetadata
+ include Google::Apis::Core::Hashable
+
+ # An identifier that ties multiple requests to the same action. For example,
+ # multiple requests to the CAS, Action Cache, and Execution API are used in
+ # order to compile foo.cc.
+ # Corresponds to the JSON property `actionId`
+ # @return [String]
+ attr_accessor :action_id
+
+ # An identifier to tie multiple tool invocations together. For example, runs of
+ # foo_test, bar_test and baz_test on a post-submit of a given patch.
+ # Corresponds to the JSON property `correlatedInvocationsId`
+ # @return [String]
+ attr_accessor :correlated_invocations_id
+
+ # Details for the tool used to call the API.
+ # Corresponds to the JSON property `toolDetails`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2ToolDetails]
+ attr_accessor :tool_details
+
+ # An identifier that ties multiple actions together to a final result. For
+ # example, multiple actions are required to build and run foo_test.
+ # Corresponds to the JSON property `toolInvocationId`
+ # @return [String]
+ attr_accessor :tool_invocation_id
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @action_id = args[:action_id] if args.key?(:action_id)
+ @correlated_invocations_id = args[:correlated_invocations_id] if args.key?(:correlated_invocations_id)
+ @tool_details = args[:tool_details] if args.key?(:tool_details)
+ @tool_invocation_id = args[:tool_invocation_id] if args.key?(:tool_invocation_id)
+ end
+ end
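As the comment above notes, `RequestMetadata` travels as a binary gRPC header named `build.bazel.remote.execution.v2.requestmetadata-bin`, base64 encoded when sent outside a gRPC library. A sketch of preparing that header by hand; the placeholder byte string stands in for the binary proto serialization of a real `RequestMetadata` message (an illustrative assumption):

```ruby
require 'base64'

HEADER = 'build.bazel.remote.execution.v2.requestmetadata-bin'

# Placeholder bytes standing in for a binary-serialized RequestMetadata
# message; a real client would serialize an actual message here.
serialized = "\x0a\x03foo".b

# Outside gRPC, `-bin` headers must be base64 encoded by hand. gRPC
# libraries base64-encode binary metadata automatically, so with them the
# raw bytes can be attached directly.
headers = { HEADER => Base64.strict_encode64(serialized) }
```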
1809
+
1810
+ # A `ResultsCachePolicy` is used for fine-grained control over how action
1811
+ # outputs are stored in the CAS and Action Cache.
1812
+ class BuildBazelRemoteExecutionV2ResultsCachePolicy
1813
+ include Google::Apis::Core::Hashable
1814
+
1815
+ # The priority (relative importance) of this content in the overall cache.
1816
+ # Generally, a lower value means a longer retention time or other advantage, but
1817
+ # the interpretation of a given value is server-dependent. A priority of 0 means
1818
+ # a *default* value, decided by the server. The particular semantics of this
1819
+ # field is up to the server. In particular, every server will have their own
1820
+ # supported range of priorities, and will decide how these map into retention/
1821
+ # eviction policy.
1822
+ # Corresponds to the JSON property `priority`
1823
+ # @return [Fixnum]
1824
+ attr_accessor :priority
1825
+
1826
+ def initialize(**args)
1827
+ update!(**args)
1828
+ end
1829
+
1830
+ # Update properties of this object
1831
+ def update!(**args)
1832
+ @priority = args[:priority] if args.key?(:priority)
1833
+ end
1834
+ end
1835
+
1836
+ # A response message for Capabilities.GetCapabilities.
1837
+ class BuildBazelRemoteExecutionV2ServerCapabilities
1838
+ include Google::Apis::Core::Hashable
1839
+
1840
+ # Capabilities of the remote cache system.
1841
+ # Corresponds to the JSON property `cacheCapabilities`
1842
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2CacheCapabilities]
1843
+ attr_accessor :cache_capabilities
1844
+
1845
+ # The full version of a given tool.
1846
+ # Corresponds to the JSON property `deprecatedApiVersion`
1847
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelSemverSemVer]
1848
+ attr_accessor :deprecated_api_version
1849
+
1850
+ # Capabilities of the remote execution system.
1851
+ # Corresponds to the JSON property `executionCapabilities`
1852
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2ExecutionCapabilities]
1853
+ attr_accessor :execution_capabilities
1854
+
1855
+ # The full version of a given tool.
1856
+ # Corresponds to the JSON property `highApiVersion`
1857
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelSemverSemVer]
1858
+ attr_accessor :high_api_version
1859
+
1860
+ # The full version of a given tool.
1861
+ # Corresponds to the JSON property `lowApiVersion`
1862
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelSemverSemVer]
1863
+ attr_accessor :low_api_version
1864
+
1865
+ def initialize(**args)
1866
+ update!(**args)
1867
+ end
1868
+
1869
+ # Update properties of this object
1870
+ def update!(**args)
1871
+ @cache_capabilities = args[:cache_capabilities] if args.key?(:cache_capabilities)
1872
+ @deprecated_api_version = args[:deprecated_api_version] if args.key?(:deprecated_api_version)
1873
+ @execution_capabilities = args[:execution_capabilities] if args.key?(:execution_capabilities)
1874
+ @high_api_version = args[:high_api_version] if args.key?(:high_api_version)
1875
+ @low_api_version = args[:low_api_version] if args.key?(:low_api_version)
1876
+ end
1877
+ end
1878
+
1879
+ # A `SymlinkNode` represents a symbolic link.
1880
+ class BuildBazelRemoteExecutionV2SymlinkNode
1881
+ include Google::Apis::Core::Hashable
1882
+
1883
+ # The name of the symlink.
1884
+ # Corresponds to the JSON property `name`
1885
+ # @return [String]
1886
+ attr_accessor :name
1887
+
1888
+ # The node properties of the SymlinkNode.
1889
+ # Corresponds to the JSON property `nodeProperties`
1890
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2NodeProperty>]
1891
+ attr_accessor :node_properties
1892
+
1893
+ # The target path of the symlink. The path separator is a forward slash `/`. The
1894
+ # target path can be relative to the parent directory of the symlink or it can
1895
+ # be an absolute path starting with `/`. Support for absolute paths can be
1896
+ # checked using the Capabilities API. The canonical form forbids the substrings `
1897
+ # /./` and `//` in the target path. `..` components are allowed anywhere in the
1898
+ # target path.
1899
+ # Corresponds to the JSON property `target`
1900
+ # @return [String]
1901
+ attr_accessor :target
1902
+
1903
+ def initialize(**args)
1904
+ update!(**args)
1905
+ end
1906
+
1907
+ # Update properties of this object
1908
+ def update!(**args)
1909
+ @name = args[:name] if args.key?(:name)
1910
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
1911
+ @target = args[:target] if args.key?(:target)
1912
+ end
1913
+ end
1914
+
1915
+ # Details for the tool used to call the API.
1916
+ class BuildBazelRemoteExecutionV2ToolDetails
1917
+ include Google::Apis::Core::Hashable
1918
+
1919
+ # Name of the tool, e.g. bazel.
1920
+ # Corresponds to the JSON property `toolName`
1921
+ # @return [String]
1922
+ attr_accessor :tool_name
1923
+
1924
+ # Version of the tool used for the request, e.g. 5.0.3.
1925
+ # Corresponds to the JSON property `toolVersion`
1926
+ # @return [String]
1927
+ attr_accessor :tool_version
1928
+
1929
+ def initialize(**args)
1930
+ update!(**args)
1931
+ end
1932
+
1933
+ # Update properties of this object
1934
+ def update!(**args)
1935
+ @tool_name = args[:tool_name] if args.key?(:tool_name)
1936
+ @tool_version = args[:tool_version] if args.key?(:tool_version)
1937
+ end
1938
+ end
1939
+
1940
+ # A `Tree` contains all the Directory protos in a single directory Merkle tree,
1941
+ # compressed into one message.
1942
+ class BuildBazelRemoteExecutionV2Tree
1943
+ include Google::Apis::Core::Hashable
1944
+
1945
+ # All the child directories: the directories referred to by the root and,
1946
+ # recursively, all its children. In order to reconstruct the directory tree, the
1947
+ # client must take the digests of each of the child directories and then build
1948
+ # up a tree starting from the `root`.
1949
+ # Corresponds to the JSON property `children`
1950
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Directory>]
1951
+ attr_accessor :children
1952
+
1953
+ # A `Directory` represents a directory node in a file tree, containing zero or
+ # more children FileNodes, DirectoryNodes and SymlinkNodes. Each `Node` contains
+ # its name in the directory, either the digest of its content (either a file
+ # blob or a `Directory` proto) or a symlink target, as well as possibly some
+ # metadata about the file or directory. In order to ensure that two equivalent
+ # directory trees hash to the same value, the following restrictions MUST be
+ # obeyed when constructing a `Directory`: * Every child in the directory must
+ # have a path of exactly one segment. Multiple levels of directory hierarchy may
+ # not be collapsed. * Each child in the directory must have a unique path
+ # segment (file name). Note that while the API itself is case-sensitive, the
+ # environment where the Action is executed may or may not be case-sensitive.
+ # That is, it is legal to call the API with a Directory that has both "Foo" and "
+ # foo" as children, but the Action may be rejected by the remote system upon
+ # execution. * The files, directories and symlinks in the directory must each be
+ # sorted in lexicographical order by path. The path strings must be sorted by
+ # code point, equivalently, by UTF-8 bytes. * The NodeProperties of files,
+ # directories, and symlinks must be sorted in lexicographical order by property
+ # name. A `Directory` that obeys the restrictions is said to be in canonical
+ # form. As an example, the following could be used for a file named `bar` and a
+ # directory named `foo` with an executable file named `baz` (hashes shortened
+ # for readability): ```json // (Directory proto) ` files: [ ` name: "bar",
+ # digest: ` hash: "4a73bc9d03...", size: 65534 `, node_properties: [ ` "name": "
+ # MTime", "value": "2017-01-15T01:30:15.01Z" ` ] ` ], directories: [ ` name: "
+ # foo", digest: ` hash: "4cf2eda940...", size: 43 ` ` ] ` // (Directory proto
+ # with hash "4cf2eda940..." and size 43) ` files: [ ` name: "baz", digest: `
+ # hash: "b2c941073e...", size: 1294, `, is_executable: true ` ] ` ```
+ # Corresponds to the JSON property `root`
+ # @return [Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Directory]
+ attr_accessor :root
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @children = args[:children] if args.key?(:children)
+ @root = args[:root] if args.key?(:root)
+ end
+ end
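The canonical-form rules above require children to be sorted by code point, i.e. by UTF-8 bytes. In Ruby, the default `String` comparison is byte-wise, so a plain sort already produces the required order. A minimal sketch (the helper name is illustrative, not part of the gem):

```ruby
# Sort child names into the canonical (UTF-8 byte / code point) order that the
# Directory restrictions require. Ruby compares strings byte-by-byte, so the
# default sort yields the required case-sensitive ordering.
def canonical_order(names)
  names.sort
end

# "Foo" (0x46...) sorts before "bar" (0x62...), which sorts before "foo" (0x66...),
# because the comparison is case-sensitive and byte-wise.
puts canonical_order(["foo", "bar", "Foo"]).inspect
```

This is also why "Foo" and "foo" are distinct children as far as the API is concerned, even though a case-insensitive execution environment may later reject such a Directory.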
+
+ # A request message for WaitExecution.
+ class BuildBazelRemoteExecutionV2WaitExecutionRequest
+ include Google::Apis::Core::Hashable
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ end
+ end
+
+ # The full version of a given tool.
+ class BuildBazelSemverSemVer
+ include Google::Apis::Core::Hashable
+
+ # The major version, e.g. 10 for 10.2.3.
+ # Corresponds to the JSON property `major`
+ # @return [Fixnum]
+ attr_accessor :major
+
+ # The minor version, e.g. 2 for 10.2.3.
+ # Corresponds to the JSON property `minor`
+ # @return [Fixnum]
+ attr_accessor :minor
+
+ # The patch version, e.g. 3 for 10.2.3.
+ # Corresponds to the JSON property `patch`
+ # @return [Fixnum]
+ attr_accessor :patch
+
+ # The pre-release version. Either this field or the major/minor/patch fields must
+ # be filled. They are mutually exclusive. Pre-release versions are assumed to be
+ # earlier than any released versions.
+ # Corresponds to the JSON property `prerelease`
+ # @return [String]
+ attr_accessor :prerelease
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @major = args[:major] if args.key?(:major)
+ @minor = args[:minor] if args.key?(:minor)
+ @patch = args[:patch] if args.key?(:patch)
+ @prerelease = args[:prerelease] if args.key?(:prerelease)
+ end
+ end
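All of these generated model classes share the same keyword-argument pattern: `initialize` forwards to `update!`, which copies only the keys that were actually passed, so omitted fields are left untouched. A self-contained sketch of that pattern (a plain stand-in class, not the gem's own):

```ruby
# Minimal stand-in mirroring the generated classes' initialize/update! pattern:
# only keys present in **args are assigned, so a partial update! preserves the
# other fields rather than resetting them to nil.
class SemVerSketch
  attr_accessor :major, :minor, :patch

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @major = args[:major] if args.key?(:major)
    @minor = args[:minor] if args.key?(:minor)
    @patch = args[:patch] if args.key?(:patch)
  end
end

v = SemVerSketch.new(major: 10, minor: 2, patch: 3)
v.update!(patch: 4) # partial update: major and minor are preserved
puts "#{v.major}.#{v.minor}.#{v.patch}"
```

The `args.key?` guard is what distinguishes "field not supplied" from "field explicitly set to nil/false/zero" in these classes.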
+
+ # CommandDuration contains the various duration metrics tracked when a bot
+ # performs a command.
+ class GoogleDevtoolsRemotebuildbotCommandDurations
+ include Google::Apis::Core::Hashable
+
+ # The time spent preparing the command to be run in a Docker container (includes
+ # pulling the Docker image, if necessary).
+ # Corresponds to the JSON property `dockerPrep`
+ # @return [String]
+ attr_accessor :docker_prep
+
+ # The timestamp when Docker preparation begins.
+ # Corresponds to the JSON property `dockerPrepStartTime`
+ # @return [String]
+ attr_accessor :docker_prep_start_time
+
+ # The time spent downloading the input files and constructing the working
+ # directory.
+ # Corresponds to the JSON property `download`
+ # @return [String]
+ attr_accessor :download
+
+ # The timestamp when downloading the input files begins.
+ # Corresponds to the JSON property `downloadStartTime`
+ # @return [String]
+ attr_accessor :download_start_time
+
+ # The timestamp when execution begins.
+ # Corresponds to the JSON property `execStartTime`
+ # @return [String]
+ attr_accessor :exec_start_time
+
+ # The time spent executing the command (i.e., doing useful work).
+ # Corresponds to the JSON property `execution`
+ # @return [String]
+ attr_accessor :execution
+
+ # The timestamp when preparation is done and the bot starts downloading files.
+ # Corresponds to the JSON property `isoPrepDone`
+ # @return [String]
+ attr_accessor :iso_prep_done
+
+ # The time spent completing the command, in total.
+ # Corresponds to the JSON property `overall`
+ # @return [String]
+ attr_accessor :overall
+
+ # The time spent uploading the stdout logs.
+ # Corresponds to the JSON property `stdout`
+ # @return [String]
+ attr_accessor :stdout
+
+ # The time spent uploading the output files.
+ # Corresponds to the JSON property `upload`
+ # @return [String]
+ attr_accessor :upload
+
+ # The timestamp when uploading the output files begins.
+ # Corresponds to the JSON property `uploadStartTime`
+ # @return [String]
+ attr_accessor :upload_start_time
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @docker_prep = args[:docker_prep] if args.key?(:docker_prep)
+ @docker_prep_start_time = args[:docker_prep_start_time] if args.key?(:docker_prep_start_time)
+ @download = args[:download] if args.key?(:download)
+ @download_start_time = args[:download_start_time] if args.key?(:download_start_time)
+ @exec_start_time = args[:exec_start_time] if args.key?(:exec_start_time)
+ @execution = args[:execution] if args.key?(:execution)
+ @iso_prep_done = args[:iso_prep_done] if args.key?(:iso_prep_done)
+ @overall = args[:overall] if args.key?(:overall)
+ @stdout = args[:stdout] if args.key?(:stdout)
+ @upload = args[:upload] if args.key?(:upload)
+ @upload_start_time = args[:upload_start_time] if args.key?(:upload_start_time)
+ end
+ end
+
+ # CommandEvents contains counters for the number of warnings and errors that
+ # occurred during the execution of a command.
+ class GoogleDevtoolsRemotebuildbotCommandEvents
+ include Google::Apis::Core::Hashable
+
+ # Indicates whether we are using a cached Docker image (true) or had to pull the
+ # Docker image (false) for this command.
+ # Corresponds to the JSON property `dockerCacheHit`
+ # @return [Boolean]
+ attr_accessor :docker_cache_hit
+ alias_method :docker_cache_hit?, :docker_cache_hit
+
+ # The Docker image name.
+ # Corresponds to the JSON property `dockerImageName`
+ # @return [String]
+ attr_accessor :docker_image_name
+
+ # The input cache miss ratio.
+ # Corresponds to the JSON property `inputCacheMiss`
+ # @return [Float]
+ attr_accessor :input_cache_miss
+
+ # The number of errors reported.
+ # Corresponds to the JSON property `numErrors`
+ # @return [Fixnum]
+ attr_accessor :num_errors
+
+ # The number of warnings reported.
+ # Corresponds to the JSON property `numWarnings`
+ # @return [Fixnum]
+ attr_accessor :num_warnings
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @docker_cache_hit = args[:docker_cache_hit] if args.key?(:docker_cache_hit)
+ @docker_image_name = args[:docker_image_name] if args.key?(:docker_image_name)
+ @input_cache_miss = args[:input_cache_miss] if args.key?(:input_cache_miss)
+ @num_errors = args[:num_errors] if args.key?(:num_errors)
+ @num_warnings = args[:num_warnings] if args.key?(:num_warnings)
+ end
+ end
+
+ # The internal status of the command result.
+ class GoogleDevtoolsRemotebuildbotCommandStatus
+ include Google::Apis::Core::Hashable
+
+ # The status code.
+ # Corresponds to the JSON property `code`
+ # @return [String]
+ attr_accessor :code
+
+ # The error message.
+ # Corresponds to the JSON property `message`
+ # @return [String]
+ attr_accessor :message
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @code = args[:code] if args.key?(:code)
+ @message = args[:message] if args.key?(:message)
+ end
+ end
+
+ # ResourceUsage is the system resource usage of the host machine.
+ class GoogleDevtoolsRemotebuildbotResourceUsage
+ include Google::Apis::Core::Hashable
+
+ # The CPU usage, as a percentage.
+ # Corresponds to the JSON property `cpuUsedPercent`
+ # @return [Float]
+ attr_accessor :cpu_used_percent
+
+ # Disk usage statistics.
+ # Corresponds to the JSON property `diskUsage`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildbotResourceUsageStat]
+ attr_accessor :disk_usage
+
+ # Memory usage statistics.
+ # Corresponds to the JSON property `memoryUsage`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildbotResourceUsageStat]
+ attr_accessor :memory_usage
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @cpu_used_percent = args[:cpu_used_percent] if args.key?(:cpu_used_percent)
+ @disk_usage = args[:disk_usage] if args.key?(:disk_usage)
+ @memory_usage = args[:memory_usage] if args.key?(:memory_usage)
+ end
+ end
+
+ # A total/used statistic for a single resource, such as disk or memory.
+ class GoogleDevtoolsRemotebuildbotResourceUsageStat
+ include Google::Apis::Core::Hashable
+
+ # The total amount of the resource available.
+ # Corresponds to the JSON property `total`
+ # @return [Fixnum]
+ attr_accessor :total
+
+ # The amount of the resource currently in use.
+ # Corresponds to the JSON property `used`
+ # @return [Fixnum]
+ attr_accessor :used
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @total = args[:total] if args.key?(:total)
+ @used = args[:used] if args.key?(:used)
+ end
+ end
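A `ResourceUsageStat` pair is easy to turn into a utilization figure on the client side. A hypothetical helper (not part of the gem) that guards against a zero or missing total:

```ruby
# Compute percent utilization from a total/used stat pair, returning 0.0 when
# the total is missing or zero. The helper name is illustrative only.
def used_percent(total, used)
  return 0.0 if total.nil? || total.zero?
  (used.to_f / total) * 100.0
end

puts used_percent(200, 50)  # 25.0
```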
+
+ # AcceleratorConfig defines the accelerator cards to attach to the VM.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaAcceleratorConfig
+ include Google::Apis::Core::Hashable
+
+ # The number of guest accelerator cards exposed to each VM.
+ # Corresponds to the JSON property `acceleratorCount`
+ # @return [Fixnum]
+ attr_accessor :accelerator_count
+
+ # The type of accelerator to attach to each VM, e.g. "nvidia-tesla-k80" for
+ # NVIDIA Tesla K80.
+ # Corresponds to the JSON property `acceleratorType`
+ # @return [String]
+ attr_accessor :accelerator_type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @accelerator_count = args[:accelerator_count] if args.key?(:accelerator_count)
+ @accelerator_type = args[:accelerator_type] if args.key?(:accelerator_type)
+ end
+ end
+
+ # Autoscale defines the autoscaling policy of a worker pool.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaAutoscale
+ include Google::Apis::Core::Hashable
+
+ # The maximal number of workers. Must be equal to or greater than min_size.
+ # Corresponds to the JSON property `maxSize`
+ # @return [Fixnum]
+ attr_accessor :max_size
+
+ # The minimal number of workers. Must be greater than 0.
+ # Corresponds to the JSON property `minSize`
+ # @return [Fixnum]
+ attr_accessor :min_size
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @max_size = args[:max_size] if args.key?(:max_size)
+ @min_size = args[:min_size] if args.key?(:min_size)
+ end
+ end
+
+ # The request used for `CreateInstance`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaCreateInstanceRequest
+ include Google::Apis::Core::Hashable
+
+ # Instance conceptually encapsulates all Remote Build Execution resources for
+ # remote builds. An instance consists of storage and compute resources (for
+ # example, `ContentAddressableStorage`, `ActionCache`, `WorkerPools`) used for
+ # running remote builds. All Remote Build Execution API calls are scoped to an
+ # instance.
+ # Corresponds to the JSON property `instance`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance]
+ attr_accessor :instance
+
+ # ID of the created instance. A valid `instance_id` must: be 6-50 characters
+ # long, contain only lowercase letters, digits, hyphens and underscores, start
+ # with a lowercase letter, and end with a lowercase letter or a digit.
+ # Corresponds to the JSON property `instanceId`
+ # @return [String]
+ attr_accessor :instance_id
+
+ # Resource name of the project containing the instance. Format: `projects/[
+ # PROJECT_ID]`.
+ # Corresponds to the JSON property `parent`
+ # @return [String]
+ attr_accessor :parent
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @instance = args[:instance] if args.key?(:instance)
+ @instance_id = args[:instance_id] if args.key?(:instance_id)
+ @parent = args[:parent] if args.key?(:parent)
+ end
+ end
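The `instance_id` rules above (6-50 characters; lowercase letters, digits, hyphens, underscores; must start with a lowercase letter and end with a lowercase letter or digit) can be checked client-side before calling the API. A hypothetical validator, illustrative only; the service performs the authoritative validation:

```ruby
# Anchored regex for the documented ID rules: one lowercase letter, then 4-48
# characters from [a-z0-9_-], then a trailing lowercase letter or digit,
# giving a total length of 6-50.
VALID_ID = /\A[a-z][a-z0-9_-]{4,48}[a-z0-9]\z/

def valid_id?(id)
  !!(id =~ VALID_ID)
end

puts valid_id?("default-instance") # true
puts valid_id?("Bad_ID")           # false: must start with a lowercase letter
```

The same rules apply to worker pool IDs in `CreateWorkerPool` below.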
+
+ # The request used for `CreateWorkerPool`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaCreateWorkerPoolRequest
+ include Google::Apis::Core::Hashable
+
+ # Resource name of the instance in which to create the new worker pool. Format: `
+ # projects/[PROJECT_ID]/instances/[INSTANCE_ID]`.
+ # Corresponds to the JSON property `parent`
+ # @return [String]
+ attr_accessor :parent
+
+ # ID of the created worker pool. A valid pool ID must: be 6-50 characters long,
+ # contain only lowercase letters, digits, hyphens and underscores, start with a
+ # lowercase letter, and end with a lowercase letter or a digit.
+ # Corresponds to the JSON property `poolId`
+ # @return [String]
+ attr_accessor :pool_id
+
+ # A worker pool resource in the Remote Build Execution API.
+ # Corresponds to the JSON property `workerPool`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool]
+ attr_accessor :worker_pool
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @parent = args[:parent] if args.key?(:parent)
+ @pool_id = args[:pool_id] if args.key?(:pool_id)
+ @worker_pool = args[:worker_pool] if args.key?(:worker_pool)
+ end
+ end
+
+ # The request used for `DeleteInstance`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaDeleteInstanceRequest
+ include Google::Apis::Core::Hashable
+
+ # Name of the instance to delete. Format: `projects/[PROJECT_ID]/instances/[
+ # INSTANCE_ID]`.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ end
+ end
+
+ # The request used for `DeleteWorkerPool`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaDeleteWorkerPoolRequest
+ include Google::Apis::Core::Hashable
+
+ # Name of the worker pool to delete. Format: `projects/[PROJECT_ID]/instances/[
+ # INSTANCE_ID]/workerpools/[POOL_ID]`.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ end
+ end
+
+ # FeaturePolicy defines features allowed to be used on RBE instances, as well as
+ # instance-wide behavior changes that take effect without opt-in or opt-out at
+ # usage time.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicy
+ include Google::Apis::Core::Hashable
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `containerImageSources`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :container_image_sources
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `dockerAddCapabilities`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :docker_add_capabilities
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `dockerChrootPath`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :docker_chroot_path
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `dockerNetwork`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :docker_network
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `dockerPrivileged`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :docker_privileged
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `dockerRunAsRoot`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :docker_run_as_root
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `dockerRuntime`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :docker_runtime
+
+ # Defines whether a feature can be used or what values are accepted.
+ # Corresponds to the JSON property `dockerSiblingContainers`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
+ attr_accessor :docker_sibling_containers
+
+ # linux_isolation allows overriding the Docker runtime used for containers
+ # started on Linux.
+ # Corresponds to the JSON property `linuxIsolation`
+ # @return [String]
+ attr_accessor :linux_isolation
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @container_image_sources = args[:container_image_sources] if args.key?(:container_image_sources)
+ @docker_add_capabilities = args[:docker_add_capabilities] if args.key?(:docker_add_capabilities)
+ @docker_chroot_path = args[:docker_chroot_path] if args.key?(:docker_chroot_path)
+ @docker_network = args[:docker_network] if args.key?(:docker_network)
+ @docker_privileged = args[:docker_privileged] if args.key?(:docker_privileged)
+ @docker_run_as_root = args[:docker_run_as_root] if args.key?(:docker_run_as_root)
+ @docker_runtime = args[:docker_runtime] if args.key?(:docker_runtime)
+ @docker_sibling_containers = args[:docker_sibling_containers] if args.key?(:docker_sibling_containers)
+ @linux_isolation = args[:linux_isolation] if args.key?(:linux_isolation)
+ end
+ end
+
+ # Defines whether a feature can be used or what values are accepted.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature
+ include Google::Apis::Core::Hashable
+
+ # A list of acceptable values. Only effective when the policy is `RESTRICTED`.
+ # Corresponds to the JSON property `allowedValues`
+ # @return [Array<String>]
+ attr_accessor :allowed_values
+
+ # The policy of the feature.
+ # Corresponds to the JSON property `policy`
+ # @return [String]
+ attr_accessor :policy
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @allowed_values = args[:allowed_values] if args.key?(:allowed_values)
+ @policy = args[:policy] if args.key?(:policy)
+ end
+ end
+
+ # The request used for `GetInstance`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaGetInstanceRequest
+ include Google::Apis::Core::Hashable
+
+ # Name of the instance to retrieve. Format: `projects/[PROJECT_ID]/instances/[
+ # INSTANCE_ID]`.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ end
+ end
+
+ # The request used for `GetWorkerPool`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaGetWorkerPoolRequest
+ include Google::Apis::Core::Hashable
+
+ # Name of the worker pool to retrieve. Format: `projects/[PROJECT_ID]/instances/[
+ # INSTANCE_ID]/workerpools/[POOL_ID]`.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ end
+ end
+
+ # Instance conceptually encapsulates all Remote Build Execution resources for
+ # remote builds. An instance consists of storage and compute resources (for
+ # example, `ContentAddressableStorage`, `ActionCache`, `WorkerPools`) used for
+ # running remote builds. All Remote Build Execution API calls are scoped to an
+ # instance.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance
+ include Google::Apis::Core::Hashable
+
+ # FeaturePolicy defines features allowed to be used on RBE instances, as well as
+ # instance-wide behavior changes that take effect without opt-in or opt-out at
+ # usage time.
+ # Corresponds to the JSON property `featurePolicy`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicy]
+ attr_accessor :feature_policy
+
+ # The location is a GCP region. Currently only `us-central1` is supported.
+ # Corresponds to the JSON property `location`
+ # @return [String]
+ attr_accessor :location
+
+ # Output only. Whether Stackdriver logging is enabled for the instance.
+ # Corresponds to the JSON property `loggingEnabled`
+ # @return [Boolean]
+ attr_accessor :logging_enabled
+ alias_method :logging_enabled?, :logging_enabled
+
+ # Output only. Instance resource name formatted as: `projects/[PROJECT_ID]/
+ # instances/[INSTANCE_ID]`. Name should not be populated when creating an
+ # instance since it is provided in the `instance_id` field.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Output only. State of the instance.
+ # Corresponds to the JSON property `state`
+ # @return [String]
+ attr_accessor :state
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @feature_policy = args[:feature_policy] if args.key?(:feature_policy)
+ @location = args[:location] if args.key?(:location)
+ @logging_enabled = args[:logging_enabled] if args.key?(:logging_enabled)
+ @name = args[:name] if args.key?(:name)
+ @state = args[:state] if args.key?(:state)
+ end
+ end
+
+ # The request used for `ListInstances`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListInstancesRequest
+ include Google::Apis::Core::Hashable
+
+ # Resource name of the project. Format: `projects/[PROJECT_ID]`.
+ # Corresponds to the JSON property `parent`
+ # @return [String]
+ attr_accessor :parent
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @parent = args[:parent] if args.key?(:parent)
+ end
+ end
+
+ # The response for `ListInstances`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListInstancesResponse
+ include Google::Apis::Core::Hashable
+
+ # The list of instances in a given project.
+ # Corresponds to the JSON property `instances`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance>]
+ attr_accessor :instances
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @instances = args[:instances] if args.key?(:instances)
+ end
+ end
+
+ # The request used for `ListWorkerPools`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListWorkerPoolsRequest
+ include Google::Apis::Core::Hashable
+
+ # Optional. A filter expression that filters resources listed in the response.
+ # The expression must specify the field name, a comparison operator, and the
+ # value that you want to use for filtering. The value must be a string, a number,
+ # or a boolean. String values are case-insensitive. The comparison operator
+ # must be either `:`, `=`, `!=`, `>`, `>=`, `<=` or `<`. The `:` operator can be
+ # used with string fields to match substrings. For non-string fields it is
+ # equivalent to the `=` operator. The `:*` comparison can be used to test
+ # whether a key has been defined. You can also filter on nested fields. To
+ # filter on multiple expressions, you can separate expressions using `AND` and `
+ # OR` operators, using parentheses to specify precedence. If neither operator is
+ # specified, `AND` is assumed. Examples: Include only pools with more than 100
+ # reserved workers: `(worker_count > 100) (worker_config.reserved = true)`
+ # Include only pools with a certain label or machines of the n1-standard family:
+ # `worker_config.labels.key1 : * OR worker_config.machine_type: n1-standard`
+ # Corresponds to the JSON property `filter`
+ # @return [String]
+ attr_accessor :filter
+
+ # Resource name of the instance. Format: `projects/[PROJECT_ID]/instances/[
+ # INSTANCE_ID]`.
+ # Corresponds to the JSON property `parent`
+ # @return [String]
+ attr_accessor :parent
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @filter = args[:filter] if args.key?(:filter)
+ @parent = args[:parent] if args.key?(:parent)
+ end
+ end
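The filter grammar described above is passed as a plain string. A hypothetical helper bundling the two documented examples, useful as a starting point for building filters programmatically:

```ruby
# Assemble the two example filters from the documentation as plain strings.
# Note that AND is implicit: two juxtaposed parenthesized expressions are
# conjoined, while OR must be written explicitly.
def example_filters
  {
    reserved_big_pools: '(worker_count > 100) (worker_config.reserved = true)',
    labeled_or_n1: 'worker_config.labels.key1:* OR worker_config.machine_type: n1-standard'
  }
end

puts example_filters[:reserved_big_pools]
```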
+
+ # The response for `ListWorkerPools`.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListWorkerPoolsResponse
+ include Google::Apis::Core::Hashable
+
+ # The list of worker pools in a given instance.
+ # Corresponds to the JSON property `workerPools`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool>]
+ attr_accessor :worker_pools
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @worker_pools = args[:worker_pools] if args.key?(:worker_pools)
+ end
+ end
2700
+
2701
+ # The request used for `UpdateInstance`.
2702
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaUpdateInstanceRequest
2703
+ include Google::Apis::Core::Hashable
2704
+
2705
+ # Instance conceptually encapsulates all Remote Build Execution resources for
2706
+ # remote builds. An instance consists of storage and compute resources (for
2707
+ # example, `ContentAddressableStorage`, `ActionCache`, `WorkerPools`) used for
2708
+ # running remote builds. All Remote Build Execution API calls are scoped to an
2709
+ # instance.
2710
+ # Corresponds to the JSON property `instance`
2711
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance]
2712
+ attr_accessor :instance
2713
+
2714
+ # Deprecated, use instance.logging_enabled instead. Whether to enable
2715
+ # Stackdriver logging for this instance.
2716
+ # Corresponds to the JSON property `loggingEnabled`
2717
+ # @return [Boolean]
2718
+ attr_accessor :logging_enabled
2719
+ alias_method :logging_enabled?, :logging_enabled
2720
+
2721
+ # Deprecated, use instance.Name instead. Name of the instance to update. Format:
2722
+ # `projects/[PROJECT_ID]/instances/[INSTANCE_ID]`.
2723
+ # Corresponds to the JSON property `name`
2724
+ # @return [String]
2725
+ attr_accessor :name
2726
+
2727
+ # The update mask applies to instance. For the `FieldMask` definition, see https:
2728
+ # //developers.google.com/protocol-buffers/docs/reference/google.protobuf#
2729
+ # fieldmask If an empty update_mask is provided, only the non-default valued
2730
+ # field in the worker pool field will be updated. Note that in order to update a
2731
+ # field to the default value (zero, false, empty string) an explicit update_mask
2732
+ # must be provided.
2733
+ # Corresponds to the JSON property `updateMask`
2734
+ # @return [String]
2735
+ attr_accessor :update_mask
2736
+
2737
+ def initialize(**args)
2738
+ update!(**args)
2739
+ end
2740
+
2741
+ # Update properties of this object
2742
+ def update!(**args)
2743
+ @instance = args[:instance] if args.key?(:instance)
2744
+ @logging_enabled = args[:logging_enabled] if args.key?(:logging_enabled)
2745
+ @name = args[:name] if args.key?(:name)
2746
+ @update_mask = args[:update_mask] if args.key?(:update_mask)
2747
+ end
2748
+ end
2749
+
2750
+ # The request used for UpdateWorkerPool.
2751
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaUpdateWorkerPoolRequest
2752
+ include Google::Apis::Core::Hashable
2753
+
2754
+ # The update mask applies to worker_pool. For the `FieldMask` definition, see
2755
+ # https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#
2756
+ # fieldmask If an empty update_mask is provided, only the non-default valued
2757
+ # field in the worker pool field will be updated. Note that in order to update a
2758
+ # field to the default value (zero, false, empty string) an explicit update_mask
2759
+ # must be provided.
2760
+ # Corresponds to the JSON property `updateMask`
2761
+ # @return [String]
2762
+ attr_accessor :update_mask
2763
+
2764
+ # A worker pool resource in the Remote Build Execution API.
2765
+ # Corresponds to the JSON property `workerPool`
2766
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool]
2767
+ attr_accessor :worker_pool
2768
+
2769
+ def initialize(**args)
2770
+ update!(**args)
2771
+ end
2772
+
2773
+ # Update properties of this object
2774
+ def update!(**args)
2775
+ @update_mask = args[:update_mask] if args.key?(:update_mask)
2776
+ @worker_pool = args[:worker_pool] if args.key?(:worker_pool)
2777
+ end
2778
+ end
2779
+
2780
+ # Defines the configuration to be used for creating workers in the worker pool.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerConfig
+ include Google::Apis::Core::Hashable
+
+ # AcceleratorConfig defines the accelerator cards to attach to the VM.
+ # Corresponds to the JSON property `accelerator`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaAcceleratorConfig]
+ attr_accessor :accelerator
+
+ # Required. Size of the disk attached to the worker, in GB. See https://cloud.
+ # google.com/compute/docs/disks/
+ # Corresponds to the JSON property `diskSizeGb`
+ # @return [Fixnum]
+ attr_accessor :disk_size_gb
+
+ # Required. Disk Type to use for the worker. See [Storage options](https://cloud.
+ # google.com/compute/docs/disks/#introduction). Currently only `pd-standard` and
+ # `pd-ssd` are supported.
+ # Corresponds to the JSON property `diskType`
+ # @return [String]
+ attr_accessor :disk_type
+
+ # Labels associated with the workers. Label keys and values can be no longer
+ # than 63 characters, can only contain lowercase letters, numeric characters,
+ # underscores and dashes. International letters are permitted. Label keys must
+ # start with a letter. Label values are optional. There can not be more than 64
+ # labels per resource.
+ # Corresponds to the JSON property `labels`
+ # @return [Hash<String,String>]
+ attr_accessor :labels
+
+ # Required. Machine type of the worker, such as `n1-standard-2`. See https://
+ # cloud.google.com/compute/docs/machine-types for a list of supported machine
+ # types. Note that `f1-micro` and `g1-small` are not yet supported.
+ # Corresponds to the JSON property `machineType`
+ # @return [String]
+ attr_accessor :machine_type
+
+ # The maximum number of actions a worker can execute concurrently.
+ # Corresponds to the JSON property `maxConcurrentActions`
+ # @return [Fixnum]
+ attr_accessor :max_concurrent_actions
+
+ # Minimum CPU platform to use when creating the worker. See [CPU Platforms](
+ # https://cloud.google.com/compute/docs/cpu-platforms).
+ # Corresponds to the JSON property `minCpuPlatform`
+ # @return [String]
+ attr_accessor :min_cpu_platform
+
+ # Determines the type of network access granted to workers. Possible values: - "
+ # public": Workers can connect to the public internet. - "private": Workers can
+ # only connect to Google APIs and services. - "restricted-private": Workers can
+ # only connect to Google APIs that are reachable through `restricted.googleapis.
+ # com` (`199.36.153.4/30`).
+ # Corresponds to the JSON property `networkAccess`
+ # @return [String]
+ attr_accessor :network_access
+
+ # Determines whether the worker is reserved (equivalent to a Compute Engine on-
+ # demand VM and therefore won't be preempted). See [Preemptible VMs](https://
+ # cloud.google.com/preemptible-vms/) for more details.
+ # Corresponds to the JSON property `reserved`
+ # @return [Boolean]
+ attr_accessor :reserved
+ alias_method :reserved?, :reserved
+
+ # The node type name to be used for sole-tenant nodes.
+ # Corresponds to the JSON property `soleTenantNodeType`
+ # @return [String]
+ attr_accessor :sole_tenant_node_type
+
+ # The name of the image used by each VM.
+ # Corresponds to the JSON property `vmImage`
+ # @return [String]
+ attr_accessor :vm_image
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @accelerator = args[:accelerator] if args.key?(:accelerator)
+ @disk_size_gb = args[:disk_size_gb] if args.key?(:disk_size_gb)
+ @disk_type = args[:disk_type] if args.key?(:disk_type)
+ @labels = args[:labels] if args.key?(:labels)
+ @machine_type = args[:machine_type] if args.key?(:machine_type)
+ @max_concurrent_actions = args[:max_concurrent_actions] if args.key?(:max_concurrent_actions)
+ @min_cpu_platform = args[:min_cpu_platform] if args.key?(:min_cpu_platform)
+ @network_access = args[:network_access] if args.key?(:network_access)
+ @reserved = args[:reserved] if args.key?(:reserved)
+ @sole_tenant_node_type = args[:sole_tenant_node_type] if args.key?(:sole_tenant_node_type)
+ @vm_image = args[:vm_image] if args.key?(:vm_image)
+ end
+ end
+
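The `labels` field documents several server-side constraints (at most 64 labels, keys and values no longer than 63 characters, keys starting with a letter and limited to lowercase letters, digits, underscores and dashes). A client-side pre-check along these lines can catch bad labels before the API rejects them; the helper below is a hypothetical sketch, not part of the generated client, and it only checks the ASCII subset of the rules (the documented "international letters" allowance is not modeled):

```ruby
# Hypothetical client-side validator for WorkerConfig labels, based on the
# constraints documented above: <= 64 labels, key/value <= 63 chars, key
# starts with a letter and uses lowercase letters, digits, '_' and '-'.
# ASCII-only sketch; international letters are not handled here.
def valid_worker_labels?(labels)
  return false if labels.size > 64
  labels.all? do |key, value|
    key.length.between?(1, 63) &&
      key.match?(/\A[a-z][a-z0-9_-]*\z/) &&
      value.length <= 63
  end
end

valid_worker_labels?('team' => 'build', 'tier' => '')  # label values may be empty
valid_worker_labels?('9abc' => 'x')                    # rejected: key must start with a letter
```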
+ # A worker pool resource in the Remote Build Execution API.
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool
+ include Google::Apis::Core::Hashable
+
+ # Autoscale defines the autoscaling policy of a worker pool.
+ # Corresponds to the JSON property `autoscale`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaAutoscale]
+ attr_accessor :autoscale
+
+ # Channel specifies the release channel of the pool.
+ # Corresponds to the JSON property `channel`
+ # @return [String]
+ attr_accessor :channel
+
+ # WorkerPool resource name formatted as: `projects/[PROJECT_ID]/instances/[
+ # INSTANCE_ID]/workerpools/[POOL_ID]`. name should not be populated when
+ # creating a worker pool since it is provided in the `poolId` field.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Output only. State of the worker pool.
+ # Corresponds to the JSON property `state`
+ # @return [String]
+ attr_accessor :state
+
+ # Defines the configuration to be used for creating workers in the worker pool.
+ # Corresponds to the JSON property `workerConfig`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerConfig]
+ attr_accessor :worker_config
+
+ # The desired number of workers in the worker pool. Must be a value between 0
+ # and 15000.
+ # Corresponds to the JSON property `workerCount`
+ # @return [Fixnum]
+ attr_accessor :worker_count
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @autoscale = args[:autoscale] if args.key?(:autoscale)
+ @channel = args[:channel] if args.key?(:channel)
+ @name = args[:name] if args.key?(:name)
+ @state = args[:state] if args.key?(:state)
+ @worker_config = args[:worker_config] if args.key?(:worker_config)
+ @worker_count = args[:worker_count] if args.key?(:worker_count)
+ end
+ end
+
+ # AdminTemp is a preliminary set of administration tasks. It's called "Temp"
+ # because we do not yet know the best way to represent admin tasks; it's
+ # possible that this will be entirely replaced in later versions of this API. If
+ # this message proves to be sufficient, it will be renamed in the alpha or beta
+ # release of this API. This message (suitably marshalled into a protobuf.Any)
+ # can be used as the inline_assignment field in a lease; the lease assignment
+ # field should simply be `"admin"` in these cases. This message is heavily based
+ # on Swarming administration tasks from the LUCI project (http://github.com/luci/
+ # luci-py/appengine/swarming).
+ class GoogleDevtoolsRemoteworkersV1test2AdminTemp
+ include Google::Apis::Core::Hashable
+
+ # The argument to the admin action; see `Command` for semantics.
+ # Corresponds to the JSON property `arg`
+ # @return [String]
+ attr_accessor :arg
+
+ # The admin action; see `Command` for legal values.
+ # Corresponds to the JSON property `command`
+ # @return [String]
+ attr_accessor :command
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @arg = args[:arg] if args.key?(:arg)
+ @command = args[:command] if args.key?(:command)
+ end
+ end
+
+ # Describes a blob of binary content with its digest.
+ class GoogleDevtoolsRemoteworkersV1test2Blob
+ include Google::Apis::Core::Hashable
+
+ # The contents of the blob.
+ # Corresponds to the JSON property `contents`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :contents
+
+ # The CommandTask and CommandResult messages assume the existence of a service
+ # that can serve blobs of content, identified by a hash and size known as a "
+ # digest." The method by which these blobs may be retrieved is not specified
+ # here, but a model implementation is in the Remote Execution API's "
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
+ # will virtually always refer to the contents of a file or a directory. The
+ # latter is represented by the byte-encoded Directory message.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2Digest]
+ attr_accessor :digest
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @contents = args[:contents] if args.key?(:contents)
+ @digest = args[:digest] if args.key?(:digest)
+ end
+ end
+
+ # DEPRECATED - use CommandResult instead. Describes the actual outputs from the
+ # task.
+ class GoogleDevtoolsRemoteworkersV1test2CommandOutputs
+ include Google::Apis::Core::Hashable
+
+ # exit_code is only fully reliable if the status' code is OK. If the task
+ # exceeded its deadline or was cancelled, the process may still produce an exit
+ # code as it is cancelled, and this will be populated, but a successful (zero)
+ # exit code is unlikely to be correct unless the status code is OK.
+ # Corresponds to the JSON property `exitCode`
+ # @return [Fixnum]
+ attr_accessor :exit_code
+
+ # The CommandTask and CommandResult messages assume the existence of a service
+ # that can serve blobs of content, identified by a hash and size known as a "
+ # digest." The method by which these blobs may be retrieved is not specified
+ # here, but a model implementation is in the Remote Execution API's "
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
+ # will virtually always refer to the contents of a file or a directory. The
+ # latter is represented by the byte-encoded Directory message.
+ # Corresponds to the JSON property `outputs`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2Digest]
+ attr_accessor :outputs
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @exit_code = args[:exit_code] if args.key?(:exit_code)
+ @outputs = args[:outputs] if args.key?(:outputs)
+ end
+ end
+
+ # DEPRECATED - use CommandResult instead. Can be used as part of CompleteRequest.
+ # metadata, or as part of a more sophisticated message.
+ class GoogleDevtoolsRemoteworkersV1test2CommandOverhead
+ include Google::Apis::Core::Hashable
+
+ # The elapsed time between calling Accept and Complete. The server will also
+ # have its own idea of what this should be, but this excludes the overhead of
+ # the RPCs and the bot response time.
+ # Corresponds to the JSON property `duration`
+ # @return [String]
+ attr_accessor :duration
+
+ # The amount of time *not* spent executing the command (ie uploading/downloading
+ # files).
+ # Corresponds to the JSON property `overhead`
+ # @return [String]
+ attr_accessor :overhead
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @duration = args[:duration] if args.key?(:duration)
+ @overhead = args[:overhead] if args.key?(:overhead)
+ end
+ end
+
+ # All information about the execution of a command, suitable for providing as
+ # the Bots interface's `Lease.result` field.
+ class GoogleDevtoolsRemoteworkersV1test2CommandResult
+ include Google::Apis::Core::Hashable
+
+ # The elapsed time between calling Accept and Complete. The server will also
+ # have its own idea of what this should be, but this excludes the overhead of
+ # the RPCs and the bot response time.
+ # Corresponds to the JSON property `duration`
+ # @return [String]
+ attr_accessor :duration
+
+ # The exit code of the process. An exit code of "0" should only be trusted if `
+ # status` has a code of OK (otherwise it may simply be unset).
+ # Corresponds to the JSON property `exitCode`
+ # @return [Fixnum]
+ attr_accessor :exit_code
+
+ # Implementation-dependent metadata about the task. Both servers and bots may
+ # define messages which can be encoded here; bots are free to provide metadata
+ # in multiple formats, and servers are free to choose one or more of the values
+ # to process and ignore others. In particular, it is *not* considered an error
+ # for the bot to provide the server with a field that it doesn't know about.
+ # Corresponds to the JSON property `metadata`
+ # @return [Array<Hash<String,Object>>]
+ attr_accessor :metadata
+
+ # The CommandTask and CommandResult messages assume the existence of a service
+ # that can serve blobs of content, identified by a hash and size known as a "
+ # digest." The method by which these blobs may be retrieved is not specified
+ # here, but a model implementation is in the Remote Execution API's "
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
+ # will virtually always refer to the contents of a file or a directory. The
+ # latter is represented by the byte-encoded Directory message.
+ # Corresponds to the JSON property `outputs`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2Digest]
+ attr_accessor :outputs
+
+ # The amount of time *not* spent executing the command (ie uploading/downloading
+ # files).
+ # Corresponds to the JSON property `overhead`
+ # @return [String]
+ attr_accessor :overhead
+
+ # The `Status` type defines a logical error model that is suitable for different
+ # programming environments, including REST APIs and RPC APIs. It is used by [
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
+ # data: error code, error message, and error details. You can find out more
+ # about this error model and how to work with it in the [API Design Guide](https:
+ # //cloud.google.com/apis/design/errors).
+ # Corresponds to the JSON property `status`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus]
+ attr_accessor :status
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @duration = args[:duration] if args.key?(:duration)
+ @exit_code = args[:exit_code] if args.key?(:exit_code)
+ @metadata = args[:metadata] if args.key?(:metadata)
+ @outputs = args[:outputs] if args.key?(:outputs)
+ @overhead = args[:overhead] if args.key?(:overhead)
+ @status = args[:status] if args.key?(:status)
+ end
+ end
+
+ # Describes a shell-style task to execute, suitable for providing as the Bots
+ # interface's `Lease.payload` field.
+ class GoogleDevtoolsRemoteworkersV1test2CommandTask
+ include Google::Apis::Core::Hashable
+
+ # Describes the expected outputs of the command.
+ # Corresponds to the JSON property `expectedOutputs`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2CommandTaskOutputs]
+ attr_accessor :expected_outputs
+
+ # Describes the inputs to a shell-style task.
+ # Corresponds to the JSON property `inputs`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2CommandTaskInputs]
+ attr_accessor :inputs
+
+ # Describes the timeouts associated with this task.
+ # Corresponds to the JSON property `timeouts`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2CommandTaskTimeouts]
+ attr_accessor :timeouts
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @expected_outputs = args[:expected_outputs] if args.key?(:expected_outputs)
+ @inputs = args[:inputs] if args.key?(:inputs)
+ @timeouts = args[:timeouts] if args.key?(:timeouts)
+ end
+ end
+
+ # Describes the inputs to a shell-style task.
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskInputs
+ include Google::Apis::Core::Hashable
+
+ # The command itself to run (e.g., argv). This field should be passed directly
+ # to the underlying operating system, and so it must be sensible to that
+ # operating system. For example, on Windows, the first argument might be "C:\
+ # Windows\System32\ping.exe" - that is, using drive letters and backslashes. A
+ # command for a *nix system, on the other hand, would use forward slashes. All
+ # other fields in the RWAPI must consistently use forward slashes, since those
+ # fields may be interpreted by both the service and the bot.
+ # Corresponds to the JSON property `arguments`
+ # @return [Array<String>]
+ attr_accessor :arguments
+
+ # All environment variables required by the task.
+ # Corresponds to the JSON property `environmentVariables`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2CommandTaskInputsEnvironmentVariable>]
+ attr_accessor :environment_variables
+
+ # The input filesystem to be set up prior to the task beginning. The contents
+ # should be a repeated set of FileMetadata messages though other formats are
+ # allowed if better for the implementation (eg, a LUCI-style .isolated file).
+ # This field is repeated since implementations might want to cache the metadata,
+ # in which case it may be useful to break up portions of the filesystem that
+ # change frequently (eg, specific input files) from those that don't (eg,
+ # standard header files).
+ # Corresponds to the JSON property `files`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2Digest>]
+ attr_accessor :files
+
+ # Inline contents for blobs expected to be needed by the bot to execute the task.
+ # For example, contents of entries in `files` or blobs that are indirectly
+ # referenced by an entry there. The bot should check against this list before
+ # downloading required task inputs to reduce the number of communications
+ # between itself and the remote CAS server.
+ # Corresponds to the JSON property `inlineBlobs`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2Blob>]
+ attr_accessor :inline_blobs
+
+ # Directory from which a command is executed. It is a relative directory with
+ # respect to the bot's working directory (i.e., "./"). If it is non-empty, then
+ # it must exist under "./". Otherwise, "./" will be used.
+ # Corresponds to the JSON property `workingDirectory`
+ # @return [String]
+ attr_accessor :working_directory
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @arguments = args[:arguments] if args.key?(:arguments)
+ @environment_variables = args[:environment_variables] if args.key?(:environment_variables)
+ @files = args[:files] if args.key?(:files)
+ @inline_blobs = args[:inline_blobs] if args.key?(:inline_blobs)
+ @working_directory = args[:working_directory] if args.key?(:working_directory)
+ end
+ end
+
+ # An environment variable required by this task.
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskInputsEnvironmentVariable
+ include Google::Apis::Core::Hashable
+
+ # The envvar name.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # The envvar value.
+ # Corresponds to the JSON property `value`
+ # @return [String]
+ attr_accessor :value
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @name = args[:name] if args.key?(:name)
+ @value = args[:value] if args.key?(:value)
+ end
+ end
+
+ # Describes the expected outputs of the command.
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskOutputs
+ include Google::Apis::Core::Hashable
+
+ # A list of expected directories, relative to the execution root. All paths MUST
+ # be delimited by forward slashes.
+ # Corresponds to the JSON property `directories`
+ # @return [Array<String>]
+ attr_accessor :directories
+
+ # A list of expected files, relative to the execution root. All paths MUST be
+ # delimited by forward slashes.
+ # Corresponds to the JSON property `files`
+ # @return [Array<String>]
+ attr_accessor :files
+
+ # The destination to which any stderr should be sent. The method by which the
+ # bot should send the stream contents to that destination is not defined in this
+ # API. As examples, the destination could be a file referenced in the `files`
+ # field in this message, or it could be a URI that must be written via the
+ # ByteStream API.
+ # Corresponds to the JSON property `stderrDestination`
+ # @return [String]
+ attr_accessor :stderr_destination
+
+ # The destination to which any stdout should be sent. The method by which the
+ # bot should send the stream contents to that destination is not defined in this
+ # API. As examples, the destination could be a file referenced in the `files`
+ # field in this message, or it could be a URI that must be written via the
+ # ByteStream API.
+ # Corresponds to the JSON property `stdoutDestination`
+ # @return [String]
+ attr_accessor :stdout_destination
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @directories = args[:directories] if args.key?(:directories)
+ @files = args[:files] if args.key?(:files)
+ @stderr_destination = args[:stderr_destination] if args.key?(:stderr_destination)
+ @stdout_destination = args[:stdout_destination] if args.key?(:stdout_destination)
+ end
+ end
+
+ # Describes the timeouts associated with this task.
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskTimeouts
+ include Google::Apis::Core::Hashable
+
+ # This specifies the maximum time that the task can run, excluding the time
+ # required to download inputs or upload outputs. That is, the worker will
+ # terminate the task if it runs longer than this.
+ # Corresponds to the JSON property `execution`
+ # @return [String]
+ attr_accessor :execution
+
+ # This specifies the maximum amount of time the task can be idle - that is, go
+ # without generating some output in either stdout or stderr. If the process is
+ # silent for more than the specified time, the worker will terminate the task.
+ # Corresponds to the JSON property `idle`
+ # @return [String]
+ attr_accessor :idle
+
+ # If the execution or IO timeouts are exceeded, the worker will try to
+ # gracefully terminate the task and return any existing logs. However, tasks may
+ # be hard-frozen in which case this process will fail. This timeout specifies
+ # how long to wait for a terminated task to shut down gracefully (e.g. via
+ # SIGTERM) before we bring down the hammer (e.g. SIGKILL on *nix,
+ # CTRL_BREAK_EVENT on Windows).
+ # Corresponds to the JSON property `shutdown`
+ # @return [String]
+ attr_accessor :shutdown
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @execution = args[:execution] if args.key?(:execution)
+ @idle = args[:idle] if args.key?(:idle)
+ @shutdown = args[:shutdown] if args.key?(:shutdown)
+ end
+ end
+
+ # The CommandTask and CommandResult messages assume the existence of a service
+ # that can serve blobs of content, identified by a hash and size known as a "
+ # digest." The method by which these blobs may be retrieved is not specified
+ # here, but a model implementation is in the Remote Execution API's "
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
+ # will virtually always refer to the contents of a file or a directory. The
+ # latter is represented by the byte-encoded Directory message.
+ class GoogleDevtoolsRemoteworkersV1test2Digest
+ include Google::Apis::Core::Hashable
+
+ # A string-encoded hash (eg "1a2b3c", not the byte array [0x1a, 0x2b, 0x3c])
+ # using an implementation-defined hash algorithm (eg SHA-256).
+ # Corresponds to the JSON property `hash`
+ # @return [String]
+ attr_accessor :hash_prop
+
+ # The size of the contents. While this is not strictly required as part of an
+ # identifier (after all, any given hash will have exactly one canonical size),
+ # it's useful in almost all cases when one might want to send or retrieve blobs
+ # of content and is included here for this reason.
+ # Corresponds to the JSON property `sizeBytes`
+ # @return [Fixnum]
+ attr_accessor :size_bytes
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @hash_prop = args[:hash_prop] if args.key?(:hash_prop)
+ @size_bytes = args[:size_bytes] if args.key?(:size_bytes)
+ end
+ end
+
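A Digest pairs a string-encoded hash with the content size in bytes. The snippet below sketches how a client might compute both components for a blob using only the Ruby standard library; `digest_for` is a hypothetical helper, not part of the generated client, and SHA-256 is used only as an example since the RWAPI leaves the hash algorithm implementation-defined:

```ruby
require 'digest'

# Hypothetical helper: compute the two Digest components for a blob.
# The hash is hex-encoded (a string like "2cf24d...", not raw bytes),
# matching the "string-encoded hash" convention documented above.
def digest_for(contents)
  {
    hash: Digest::SHA256.hexdigest(contents),  # implementation-defined algorithm; SHA-256 here
    size_bytes: contents.bytesize              # size of the contents in bytes
  }
end

d = digest_for('hello')
# d[:size_bytes] is 5; d[:hash] is the 64-char hex SHA-256 of "hello"
```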
+ # The contents of a directory. Similar to the equivalent message in the Remote
+ # Execution API.
+ class GoogleDevtoolsRemoteworkersV1test2Directory
+ include Google::Apis::Core::Hashable
+
+ # Any subdirectories
+ # Corresponds to the JSON property `directories`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2DirectoryMetadata>]
+ attr_accessor :directories
+
+ # The files in this directory
+ # Corresponds to the JSON property `files`
+ # @return [Array<Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2FileMetadata>]
+ attr_accessor :files
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @directories = args[:directories] if args.key?(:directories)
+ @files = args[:files] if args.key?(:files)
+ end
+ end
+
+ # The metadata for a directory. Similar to the equivalent message in the Remote
+ # Execution API.
+ class GoogleDevtoolsRemoteworkersV1test2DirectoryMetadata
+ include Google::Apis::Core::Hashable
+
+ # The CommandTask and CommandResult messages assume the existence of a service
+ # that can serve blobs of content, identified by a hash and size known as a "
+ # digest." The method by which these blobs may be retrieved is not specified
+ # here, but a model implementation is in the Remote Execution API's "
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
+ # will virtually always refer to the contents of a file or a directory. The
+ # latter is represented by the byte-encoded Directory message.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2Digest]
+ attr_accessor :digest
+
+ # The path of the directory, as in FileMetadata.path.
+ # Corresponds to the JSON property `path`
+ # @return [String]
+ attr_accessor :path
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @digest = args[:digest] if args.key?(:digest)
+ @path = args[:path] if args.key?(:path)
+ end
+ end
+
+ # The metadata for a file. Similar to the equivalent message in the Remote
+ # Execution API.
+ class GoogleDevtoolsRemoteworkersV1test2FileMetadata
+ include Google::Apis::Core::Hashable
+
+ # If the file is small enough, its contents may also or alternatively be listed
+ # here.
+ # Corresponds to the JSON property `contents`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :contents
+
+ # The CommandTask and CommandResult messages assume the existence of a service
+ # that can serve blobs of content, identified by a hash and size known as a "
+ # digest." The method by which these blobs may be retrieved is not specified
+ # here, but a model implementation is in the Remote Execution API's "
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
+ # will virtually always refer to the contents of a file or a directory. The
+ # latter is represented by the byte-encoded Directory message.
+ # Corresponds to the JSON property `digest`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleDevtoolsRemoteworkersV1test2Digest]
+ attr_accessor :digest
+
+ # Properties of the file
+ # Corresponds to the JSON property `isExecutable`
+ # @return [Boolean]
+ attr_accessor :is_executable
+ alias_method :is_executable?, :is_executable
+
+ # The path of this file. If this message is part of the CommandOutputs.outputs
+ # fields, the path is relative to the execution root and must correspond to an
+ # entry in CommandTask.outputs.files. If this message is part of a Directory
+ # message, then the path is relative to the root of that directory. All paths
+ # MUST be delimited by forward slashes.
+ # Corresponds to the JSON property `path`
+ # @return [String]
+ attr_accessor :path
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @contents = args[:contents] if args.key?(:contents)
+ @digest = args[:digest] if args.key?(:digest)
+ @is_executable = args[:is_executable] if args.key?(:is_executable)
+ @path = args[:path] if args.key?(:path)
+ end
+ end
+
+ # This resource represents a long-running operation that is the result of a
+ # network API call.
+ class GoogleLongrunningOperation
+ include Google::Apis::Core::Hashable
+
+ # If the value is `false`, it means the operation is still in progress. If `true`
+ # , the operation is completed, and either `error` or `response` is available.
+ # Corresponds to the JSON property `done`
+ # @return [Boolean]
+ attr_accessor :done
+ alias_method :done?, :done
+
+ # The `Status` type defines a logical error model that is suitable for different
+ # programming environments, including REST APIs and RPC APIs. It is used by [
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
+ # data: error code, error message, and error details. You can find out more
+ # about this error model and how to work with it in the [API Design Guide](https:
+ # //cloud.google.com/apis/design/errors).
+ # Corresponds to the JSON property `error`
+ # @return [Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus]
+ attr_accessor :error
+
+ # Service-specific metadata associated with the operation. It typically contains
+ # progress information and common metadata such as create time. Some services
+ # might not provide such metadata. Any method that returns a long-running
+ # operation should document the metadata type, if any.
+ # Corresponds to the JSON property `metadata`
+ # @return [Hash<String,Object>]
+ attr_accessor :metadata
+
+ # The server-assigned name, which is only unique within the same service that
+ # originally returns it. If you use the default HTTP mapping, the `name` should
+ # be a resource name ending with `operations/`unique_id``.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # The normal response of the operation in case of success. If the original
+ # method returns no data on success, such as `Delete`, the response is `google.
3514
+ # protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`,
3515
+ # the response should be the resource. For other methods, the response should
3516
+ # have the type `XxxResponse`, where `Xxx` is the original method name. For
3517
+ # example, if the original method name is `TakeSnapshot()`, the inferred
3518
+ # response type is `TakeSnapshotResponse`.
3519
+ # Corresponds to the JSON property `response`
3520
+ # @return [Hash<String,Object>]
3521
+ attr_accessor :response
3522
+
3523
+ def initialize(**args)
3524
+ update!(**args)
3525
+ end
3526
+
3527
+ # Update properties of this object
3528
+ def update!(**args)
3529
+ @done = args[:done] if args.key?(:done)
3530
+ @error = args[:error] if args.key?(:error)
3531
+ @metadata = args[:metadata] if args.key?(:metadata)
3532
+ @name = args[:name] if args.key?(:name)
3533
+ @response = args[:response] if args.key?(:response)
3534
+ end
3535
+ end
3536
+
3537
+ # The `Status` type defines a logical error model that is suitable for different
3538
+ # programming environments, including REST APIs and RPC APIs. It is used by [
3539
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
3540
+ # data: error code, error message, and error details. You can find out more
3541
+ # about this error model and how to work with it in the [API Design Guide](https:
3542
+ # //cloud.google.com/apis/design/errors).
3543
+ class GoogleRpcStatus
3544
+ include Google::Apis::Core::Hashable
3545
+
3546
+ # The status code, which should be an enum value of google.rpc.Code.
3547
+ # Corresponds to the JSON property `code`
3548
+ # @return [Fixnum]
3549
+ attr_accessor :code
3550
+
3551
+ # A list of messages that carry the error details. There is a common set of
3552
+ # message types for APIs to use.
3553
+ # Corresponds to the JSON property `details`
3554
+ # @return [Array<Hash<String,Object>>]
3555
+ attr_accessor :details
3556
+
3557
+ # A developer-facing error message, which should be in English. Any user-facing
3558
+ # error message should be localized and sent in the google.rpc.Status.details
3559
+ # field, or localized by the client.
3560
+ # Corresponds to the JSON property `message`
3561
+ # @return [String]
3562
+ attr_accessor :message
3563
+
3564
+ def initialize(**args)
3565
+ update!(**args)
3566
+ end
3567
+
3568
+ # Update properties of this object
3569
+ def update!(**args)
3570
+ @code = args[:code] if args.key?(:code)
3571
+ @details = args[:details] if args.key?(:details)
3572
+ @message = args[:message] if args.key?(:message)
3573
+ end
3574
+ end
3575
+ end
3576
+ end
3577
+ end
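Every generated model class in this file follows the same keyword-argument `initialize`/`update!` pattern shown above: the constructor forwards to `update!`, which assigns only the attributes that were explicitly passed, so omitted fields stay `nil` and repeated `update!` calls merge rather than reset. A minimal standalone sketch of that pattern in plain Ruby (not using the gem; `SketchStatus` is a hypothetical stand-in for `GoogleRpcStatus`):

```ruby
# Standalone sketch of the initialize/update! pattern used by the generated
# model classes above. No dependency on the google-apis gem.
class SketchStatus
  attr_accessor :code, :message, :details

  def initialize(**args)
    update!(**args)
  end

  # Assign only the attributes that were explicitly passed, mirroring the
  # generated update! methods: args.key? distinguishes "omitted" from "nil".
  def update!(**args)
    @code = args[:code] if args.key?(:code)
    @message = args[:message] if args.key?(:message)
    @details = args[:details] if args.key?(:details)
  end
end

status = SketchStatus.new(code: 5, message: "not found")
puts status.code             # 5
puts status.details.inspect  # nil (omitted, so never assigned)

# A later update! merges into the existing object without clearing
# the fields it does not mention.
status.update!(details: [{ "reason" => "RESOURCE_MISSING" }])
puts status.message          # not found
```

The `args.key?` guard is what lets `update!` behave as a partial merge: passing `message: nil` explicitly clears the field, while leaving `message` out of the call leaves it untouched.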