google-apis-remotebuildexecution_v1 0.1.0

@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+ metadata.gz: 3fa937a09388d2f721c0b6b3d5c4599b50c8af356c248cb0d388bbf4975734ba
+ data.tar.gz: dbed44d2204bed05ec7345083fff60ba151d0c4d82412c5e4d9fb9649dd68e26
+ SHA512:
+ metadata.gz: 47a02daf2779b0e873f1a3c3b77fe6329bf3f1a4f1a9cdf406e170b3923e2c4f9a2c996cb7171bab1900e2f61742202f6aedfa230f121b3e2f71c336dc9d59d7
+ data.tar.gz: fa15eaa6c85d79a5ef376e0e0e6aa8dd01789b67a25ab85c1a6eb97680dd529a2d6fe21906267f2090110d1ffeb7312d72736cfedff903b7bf3dfda943786920
@@ -0,0 +1,13 @@
+ --hide-void-return
+ --no-private
+ --verbose
+ --title=google-apis-remotebuildexecution_v1
+ --markup-provider=redcarpet
+ --markup=markdown
+ --main OVERVIEW.md
+ lib/google/apis/remotebuildexecution_v1/*.rb
+ lib/google/apis/remotebuildexecution_v1.rb
+ -
+ OVERVIEW.md
+ CHANGELOG.md
+ LICENSE.md
@@ -0,0 +1,7 @@
+ # Release history for google-apis-remotebuildexecution_v1
+
+ ### v0.1.0 (2021-01-07)
+
+ * Regenerated using generator version 0.1.1
+ * Regenerated from discovery document revision 20201201
+
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
@@ -0,0 +1,96 @@
+ # Simple REST client for version V1 of the Remote Build Execution API
+
+ This is a simple client library for version V1 of the Remote Build Execution API. It provides:
+
+ * A client object that connects to the HTTP/JSON REST endpoint for the service.
+ * Ruby objects for data structures related to the service.
+ * Integration with the googleauth gem for authentication using OAuth, API keys, and service accounts.
+ * Control of retry, pagination, and timeouts.
+
+ Note that although this client library is supported and will continue to be updated to track changes to the service, it is otherwise considered complete and not under active development. Many Google services, especially Google Cloud Platform services, may provide a more modern client that is under more active development and improvement. See the section below titled *Which client should I use?* for more information.
+
+ ## Getting started
+
+ ### Before you begin
+
+ There are a few setup steps you need to complete before you can use this library:
+
+ 1. If you don't already have a Google account, [sign up](https://www.google.com/accounts).
+ 2. If you have never created a Google APIs Console project, read about [Managing Projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects) and create a project in the [Google API Console](https://console.cloud.google.com/).
+ 3. Most APIs need to be enabled for your project. [Enable it](https://console.cloud.google.com/apis/library/remotebuildexecution.googleapis.com) in the console.
+
+ ### Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'google-apis-remotebuildexecution_v1', '~> 0.1'
+ ```
+
+ And then execute:
+
+ ```
+ $ bundle
+ ```
+
+ Or install it yourself as:
+
+ ```
+ $ gem install google-apis-remotebuildexecution_v1
+ ```
+
+ ### Creating a client object
+
+ Once the gem is installed, you can load the client code and instantiate a client.
+
+ ```ruby
+ # Load the client
+ require "google/apis/remotebuildexecution_v1"
+
+ # Create a client object
+ client = Google::Apis::RemotebuildexecutionV1::RemoteBuildExecutionService.new
+
+ # Authenticate calls
+ client.authorization = # ... use the googleauth gem to create credentials
+ ```
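+
+ As a minimal, illustrative sketch of that last step, the credentials can come from the googleauth gem's Application Default Credentials. This assumes ADC is already configured in your environment (for example via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable) and uses the Cloud Platform scope constant that this gem defines:
+
+ ```ruby
+ require "google/apis/remotebuildexecution_v1"
+ require "googleauth"
+
+ client = Google::Apis::RemotebuildexecutionV1::RemoteBuildExecutionService.new
+
+ # Build Application Default Credentials scoped to the cloud-platform scope
+ # declared by this client (AUTH_CLOUD_PLATFORM).
+ scopes = [Google::Apis::RemotebuildexecutionV1::AUTH_CLOUD_PLATFORM]
+ client.authorization = Google::Auth.get_application_default(scopes)
+ ```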
+
+ See the class reference docs for information on the methods you can call from a client.
+
+ ## Documentation
+
+ More detailed descriptions of the Google simple REST clients are available in two documents.
+
+ * The [Usage Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/usage-guide.md) discusses how to make API calls, how to use the provided data structures, and how to work with the various features of the client library, including media upload and download, error handling, retries, pagination, and logging.
+ * The [Auth Guide](https://github.com/googleapis/google-api-ruby-client/blob/master/docs/auth-guide.md) discusses authentication in the client libraries, including API keys, OAuth 2.0, service accounts, and environment variables.
+
+ (Note: the above documents are written for the simple REST clients in general, and their examples may not reflect the Remotebuildexecution service in particular.)
+
+ For reference information on specific calls in the Remote Build Execution API, see the {Google::Apis::RemotebuildexecutionV1::RemoteBuildExecutionService class reference docs}.
+
+ ## Which client should I use?
+
+ Google provides two types of Ruby API client libraries: **simple REST clients** and **modern clients**.
+
+ This library, `google-apis-remotebuildexecution_v1`, is a simple REST client. You can identify these clients by their gem names, which are always in the form `google-apis-<servicename>_<serviceversion>`. The simple REST clients connect to HTTP/JSON REST endpoints and are automatically generated from service discovery documents. They support most API functionality, but their class interfaces are sometimes awkward.
+
+ Modern clients are produced by a modern code generator, sometimes combined with hand-crafted functionality. Most modern clients connect to high-performance gRPC endpoints, although a few are backed by REST services. Modern clients are available for many Google services, especially Google Cloud Platform services, but do not yet support all the services covered by the simple clients.
+
+ Gem names for modern clients are often of the form `google-cloud-<service_name>`. (For example, [google-cloud-pubsub](https://rubygems.org/gems/google-cloud-pubsub).) Note that most modern clients also have corresponding "versioned" gems with names like `google-cloud-<service_name>-<version>`. (For example, [google-cloud-pubsub-v1](https://rubygems.org/gems/google-cloud-pubsub-v1).) The "versioned" gems can be used directly, but often provide lower-level interfaces. In most cases, the main gem is recommended.
+
+ **For most users, we recommend the modern client, if one is available.** Compared with simple clients, modern clients are generally much easier to use and more Ruby-like, support more advanced features such as streaming and long-running operations, and often provide much better performance. You may consider using a simple client instead, if a modern client is not yet available for the service you want to use, or if you are not able to use gRPC on your infrastructure.
+
+ The [product documentation](https://cloud.google.com/remote-build-execution/docs/) may provide guidance regarding the preferred client library to use.
+
+ ## Supported Ruby versions
+
+ This library is supported on Ruby 2.5+.
+
+ Google provides official support for Ruby versions that are actively supported by Ruby Core -- that is, Ruby versions that are either in normal maintenance or in security maintenance, and not end of life. Currently, this means Ruby 2.5 and later. Older versions of Ruby _may_ still work, but are unsupported and not recommended. See https://www.ruby-lang.org/en/downloads/branches/ for details about the Ruby support schedule.
+
+ ## License
+
+ This library is licensed under Apache 2.0. Full license text is available in the {file:LICENSE.md LICENSE}.
+
+ ## Support
+
+ Please [report bugs at the project on GitHub](https://github.com/google/google-api-ruby-client/issues). Don't hesitate to [ask questions](http://stackoverflow.com/questions/tagged/google-api-ruby-client) about the client or APIs on [Stack Overflow](http://stackoverflow.com).
@@ -0,0 +1,15 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require "google/apis/remotebuildexecution_v1"
@@ -0,0 +1,36 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require 'google/apis/remotebuildexecution_v1/service.rb'
+ require 'google/apis/remotebuildexecution_v1/classes.rb'
+ require 'google/apis/remotebuildexecution_v1/representations.rb'
+ require 'google/apis/remotebuildexecution_v1/gem_version.rb'
+
+ module Google
+ module Apis
+ # Remote Build Execution API
+ #
+ # Supplies a Remote Execution API service for tools such as bazel.
+ #
+ # @see https://cloud.google.com/remote-build-execution/docs/
+ module RemotebuildexecutionV1
+ # Version of the Remote Build Execution API this client connects to.
+ # This is NOT the gem version.
+ VERSION = 'V1'
+
+ # View and manage your data across Google Cloud Platform services
+ AUTH_CLOUD_PLATFORM = 'https://www.googleapis.com/auth/cloud-platform'
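+
+ # Illustrative sketch only: this scope constant is what you would typically
+ # pass to the googleauth gem when building credentials for this service,
+ # assuming Application Default Credentials are configured in the environment.
+ #
+ #   scopes = [Google::Apis::RemotebuildexecutionV1::AUTH_CLOUD_PLATFORM]
+ #   credentials = Google::Auth.get_application_default(scopes)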
+ end
+ end
+ end
@@ -0,0 +1,2978 @@
+ # Copyright 2020 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ require 'date'
+ require 'google/apis/core/base_service'
+ require 'google/apis/core/json_representation'
+ require 'google/apis/core/hashable'
+ require 'google/apis/errors'
+
+ module Google
+ module Apis
+ module RemotebuildexecutionV1
+
+ # An `Action` captures all the information about an execution which is required
+ # to reproduce it. `Action`s are the core component of the [Execution] service.
+ # A single `Action` represents a repeatable action that can be performed by the
+ # execution service. `Action`s can be succinctly identified by the digest of
+ # their wire format encoding and, once an `Action` has been executed, will be
+ # cached in the action cache. Future requests can then use the cached result
+ # rather than needing to run afresh. When a server completes execution of an
+ # Action, it MAY choose to cache the result in the ActionCache unless `
+ # do_not_cache` is `true`. Clients SHOULD expect the server to do so. By default,
+ # future calls to Execute the same `Action` will also serve their results from
+ # the cache. Clients must take care to understand the caching behaviour. Ideally,
+ # all `Action`s will be reproducible so that serving a result from cache is
+ # always desirable and correct.
+ class BuildBazelRemoteExecutionV2Action
+ include Google::Apis::Core::Hashable
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `commandDigest`
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :command_digest
+
+ # If true, then the `Action`'s result cannot be cached, and in-flight requests
+ # for the same `Action` may not be merged.
+ # Corresponds to the JSON property `doNotCache`
+ # @return [Boolean]
+ attr_accessor :do_not_cache
+ alias_method :do_not_cache?, :do_not_cache
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `inputRootDigest`
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :input_root_digest
+
+ # List of required supported NodeProperty keys. In order to ensure that
+ # equivalent `Action`s always hash to the same value, the supported node
+ # properties MUST be lexicographically sorted by name. Sorting of strings is
+ # done by code point, equivalently, by the UTF-8 bytes. The interpretation of
+ # these properties is server-dependent. If a property is not recognized by the
+ # server, the server will return an `INVALID_ARGUMENT` error.
+ # Corresponds to the JSON property `outputNodeProperties`
+ # @return [Array<String>]
+ attr_accessor :output_node_properties
+
+ # A timeout after which the execution should be killed. If the timeout is absent,
+ # then the client is specifying that the execution should continue as long as
+ # the server will let it. The server SHOULD impose a timeout if the client does
+ # not specify one, however, if the client does specify a timeout that is longer
+ # than the server's maximum timeout, the server MUST reject the request. The
+ # timeout is a part of the Action message, and therefore two `Actions` with
+ # different timeouts are different, even if they are otherwise identical. This
+ # is because, if they were not, running an `Action` with a lower timeout than is
+ # required might result in a cache hit from an execution run with a longer
+ # timeout, hiding the fact that the timeout is too short. By encoding it
+ # directly in the `Action`, a lower timeout will result in a cache miss and the
+ # execution timeout will fail immediately, rather than whenever the cache entry
+ # gets evicted.
+ # Corresponds to the JSON property `timeout`
+ # @return [String]
+ attr_accessor :timeout
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @command_digest = args[:command_digest] if args.key?(:command_digest)
+ @do_not_cache = args[:do_not_cache] if args.key?(:do_not_cache)
+ @input_root_digest = args[:input_root_digest] if args.key?(:input_root_digest)
+ @output_node_properties = args[:output_node_properties] if args.key?(:output_node_properties)
+ @timeout = args[:timeout] if args.key?(:timeout)
+ end
+ end
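+
+ # Illustrative sketch only: constructing an `Action` in Ruby. The digest
+ # values below are hypothetical placeholders, not real content hashes, and
+ # the timeout string assumes the JSON duration encoding (e.g. "600s").
+ #
+ #   command_digest = Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest.new(
+ #     hash_prop: "4a73bc9d03...", size_bytes: 55)
+ #   input_root_digest = Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest.new(
+ #     hash_prop: "4cf2eda940...", size_bytes: 43)
+ #   action = Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Action.new(
+ #     command_digest: command_digest,
+ #     input_root_digest: input_root_digest,
+ #     do_not_cache: false,
+ #     timeout: "600s")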
+
+ # An ActionResult represents the result of an Action being run.
+ class BuildBazelRemoteExecutionV2ActionResult
+ include Google::Apis::Core::Hashable
+
+ # ExecutedActionMetadata contains details about a completed execution.
+ # Corresponds to the JSON property `executionMetadata`
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2ExecutedActionMetadata]
+ attr_accessor :execution_metadata
+
+ # The exit code of the command.
+ # Corresponds to the JSON property `exitCode`
+ # @return [Fixnum]
+ attr_accessor :exit_code
+
+ # The output directories of the action. For each output directory requested in
+ # the `output_directories` or `output_paths` field of the Action, if the
+ # corresponding directory existed after the action completed, a single entry
+ # will be present in the output list, which will contain the digest of a Tree
+ # message containing the directory tree, and the path equal exactly to the
+ # corresponding Action output_directories member. As an example, suppose the
+ # Action had an output directory `a/b/dir` and the execution produced the
+ # following contents in `a/b/dir`: a file named `bar` and a directory named `foo`
+ # with an executable file named `baz`. Then, output_directory will contain (
+ # hashes shortened for readability): ```json // OutputDirectory proto: ` path: "
+ # a/b/dir" tree_digest: ` hash: "4a73bc9d03...", size: 55 ` ` // Tree proto with
+ # hash "4a73bc9d03..." and size 55: ` root: ` files: [ ` name: "bar", digest: `
+ # hash: "4a73bc9d03...", size: 65534 ` ` ], directories: [ ` name: "foo", digest:
+ # ` hash: "4cf2eda940...", size: 43 ` ` ] ` children : ` // (Directory proto
+ # with hash "4cf2eda940..." and size 43) files: [ ` name: "baz", digest: ` hash:
+ # "b2c941073e...", size: 1294, `, is_executable: true ` ] ` ` ``` If an output
+ # of the same name as listed in `output_files` of the Command was found in `
+ # output_directories`, but was not a directory, the server will return a
+ # FAILED_PRECONDITION.
+ # Corresponds to the JSON property `outputDirectories`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2OutputDirectory>]
+ attr_accessor :output_directories
+
+ # The output directories of the action that are symbolic links to other
+ # directories. Those may be links to other output directories, or input
+ # directories, or even absolute paths outside of the working directory, if the
+ # server supports SymlinkAbsolutePathStrategy.ALLOWED. For each output directory
+ # requested in the `output_directories` field of the Action, if the directory
+ # existed after the action completed, a single entry will be present either in
+ # this field, or in the `output_directories` field, if the directory was not a
+ # symbolic link. If an output of the same name was found, but was a symbolic
+ # link to a file instead of a directory, the server will return a
+ # FAILED_PRECONDITION. If the action does not produce the requested output, then
+ # that output will be omitted from the list. The server is free to arrange the
+ # output list as desired; clients MUST NOT assume that the output list is sorted.
+ # DEPRECATED as of v2.1. Servers that wish to be compatible with v2.0 API
+ # should still populate this field in addition to `output_symlinks`.
+ # Corresponds to the JSON property `outputDirectorySymlinks`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2OutputSymlink>]
+ attr_accessor :output_directory_symlinks
+
+ # The output files of the action that are symbolic links to other files. Those
+ # may be links to other output files, or input files, or even absolute paths
+ # outside of the working directory, if the server supports
+ # SymlinkAbsolutePathStrategy.ALLOWED. For each output file requested in the `
+ # output_files` or `output_paths` field of the Action, if the corresponding file
+ # existed after the action completed, a single entry will be present either in
+ # this field, or in the `output_files` field, if the file was not a symbolic
+ # link. If an output symbolic link of the same name as listed in `output_files`
+ # of the Command was found, but its target type was not a regular file, the
+ # server will return a FAILED_PRECONDITION. If the action does not produce the
+ # requested output, then that output will be omitted from the list. The server
+ # is free to arrange the output list as desired; clients MUST NOT assume that
+ # the output list is sorted. DEPRECATED as of v2.1. Servers that wish to be
+ # compatible with v2.0 API should still populate this field in addition to `
+ # output_symlinks`.
+ # Corresponds to the JSON property `outputFileSymlinks`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2OutputSymlink>]
+ attr_accessor :output_file_symlinks
+
+ # The output files of the action. For each output file requested in the `
+ # output_files` or `output_paths` field of the Action, if the corresponding file
+ # existed after the action completed, a single entry will be present either in
+ # this field, or the `output_file_symlinks` field if the file was a symbolic
+ # link to another file (`output_symlinks` field after v2.1). If an output listed
+ # in `output_files` was found, but was a directory rather than a regular file,
+ # the server will return a FAILED_PRECONDITION. If the action does not produce
+ # the requested output, then that output will be omitted from the list. The
+ # server is free to arrange the output list as desired; clients MUST NOT assume
+ # that the output list is sorted.
+ # Corresponds to the JSON property `outputFiles`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2OutputFile>]
+ attr_accessor :output_files
+
+ # New in v2.1: this field will only be populated if the command `output_paths`
+ # field was used, and not the pre v2.1 `output_files` or `output_directories`
+ # fields. The output paths of the action that are symbolic links to other paths.
+ # Those may be links to other outputs, or inputs, or even absolute paths outside
+ # of the working directory, if the server supports SymlinkAbsolutePathStrategy.
+ # ALLOWED. A single entry for each output requested in `output_paths` field of
+ # the Action, if the corresponding path existed after the action completed and
+ # was a symbolic link. If the action does not produce a requested output, then
+ # that output will be omitted from the list. The server is free to arrange the
+ # output list as desired; clients MUST NOT assume that the output list is sorted.
+ # Corresponds to the JSON property `outputSymlinks`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2OutputSymlink>]
+ attr_accessor :output_symlinks
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `stderrDigest`
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :stderr_digest
+
+ # The standard error buffer of the action. The server SHOULD NOT inline stderr
+ # unless requested by the client in the GetActionResultRequest message. The
+ # server MAY omit inlining, even if requested, and MUST do so if inlining would
+ # cause the response to exceed message size limits.
+ # Corresponds to the JSON property `stderrRaw`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :stderr_raw
+
+ # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server. The size is
+ # considered to be an integral part of the digest and cannot be separated. That
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
+ # the server MUST reject the request. The reason for including the size in the
+ # digest is as follows: in a great many cases, the server needs to know the size
+ # of the blob it is about to work with prior to starting an operation with it,
+ # such as flattening Merkle tree structures or streaming it to a worker.
+ # Technically, the server could implement a separate metadata store, but this
+ # results in a significantly more complicated implementation as opposed to
+ # having the client specify the size up-front (or storing the size along with
+ # the digest in every message where digests are embedded). This does mean that
+ # the API leaks some implementation details of (what we consider to be) a
+ # reasonable server implementation, but we consider this to be a worthwhile
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
+ # refers to the message in binary encoded form. To ensure consistent hashing,
+ # clients and servers MUST ensure that they serialize messages according to the
+ # following rules, even if there are alternate valid encodings for the same
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
+ # There are no duplicate fields. * Fields are serialized according to the
+ # default semantics for their type. Most protocol buffer implementations will
+ # always follow these rules when serializing, but care should be taken to avoid
+ # shortcuts. For instance, concatenating two messages to merge them may produce
+ # duplicate fields.
+ # Corresponds to the JSON property `stdoutDigest`
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
+ attr_accessor :stdout_digest
+
+ # The standard output buffer of the action. The server SHOULD NOT inline stdout
+ # unless requested by the client in the GetActionResultRequest message. The
+ # server MAY omit inlining, even if requested, and MUST do so if inlining would
+ # cause the response to exceed message size limits.
+ # Corresponds to the JSON property `stdoutRaw`
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
+ # @return [String]
+ attr_accessor :stdout_raw
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @execution_metadata = args[:execution_metadata] if args.key?(:execution_metadata)
+ @exit_code = args[:exit_code] if args.key?(:exit_code)
+ @output_directories = args[:output_directories] if args.key?(:output_directories)
+ @output_directory_symlinks = args[:output_directory_symlinks] if args.key?(:output_directory_symlinks)
+ @output_file_symlinks = args[:output_file_symlinks] if args.key?(:output_file_symlinks)
+ @output_files = args[:output_files] if args.key?(:output_files)
+ @output_symlinks = args[:output_symlinks] if args.key?(:output_symlinks)
+ @stderr_digest = args[:stderr_digest] if args.key?(:stderr_digest)
+ @stderr_raw = args[:stderr_raw] if args.key?(:stderr_raw)
+ @stdout_digest = args[:stdout_digest] if args.key?(:stdout_digest)
+ @stdout_raw = args[:stdout_raw] if args.key?(:stdout_raw)
+ end
+ end
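+
+ # Illustrative sketch only: inspecting an `ActionResult` returned by the
+ # service. The `result` variable is assumed to come from an API call; only
+ # fields defined above (`exit_code`, `stdout_raw`) are used.
+ #
+ #   if result.exit_code == 0 && result.stdout_raw
+ #     puts result.stdout_raw  # already base64-decoded by the client library
+ #   end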
+
+ # A `Command` is the actual command executed by a worker running an Action and
+ # specifications of its environment. Except as otherwise required, the
+ # environment (such as which system libraries or binaries are available, and
+ # what filesystems are mounted where) is defined by and specific to the
+ # implementation of the remote execution API.
+ class BuildBazelRemoteExecutionV2Command
+ include Google::Apis::Core::Hashable
+
+ # The arguments to the command. The first argument must be the path to the
+ # executable, which must be either a relative path, in which case it is
+ # evaluated with respect to the input root, or an absolute path.
+ # Corresponds to the JSON property `arguments`
+ # @return [Array<String>]
+ attr_accessor :arguments
+
+ # The environment variables to set when running the program. The worker may
+ # provide its own default environment variables; these defaults can be
+ # overridden using this field. Additional variables can also be specified. In
+ # order to ensure that equivalent Commands always hash to the same value, the
+ # environment variables MUST be lexicographically sorted by name. Sorting of
+ # strings is done by code point, equivalently, by the UTF-8 bytes.
+ # Corresponds to the JSON property `environmentVariables`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2CommandEnvironmentVariable>]
+ attr_accessor :environment_variables
+
+ # A list of the output directories that the client expects to retrieve from the
+ # action. Only the listed directories will be returned (an entire directory
+ # structure will be returned as a Tree message digest, see OutputDirectory), as
+ # well as files listed in `output_files`. Other files or directories that may be
+ # created during command execution are discarded. The paths are relative to the
+ # working directory of the action execution. The paths are specified using a
+ # single forward slash (`/`) as a path separator, even if the execution platform
+ # natively uses a different separator. The path MUST NOT include a trailing
+ # slash, nor a leading slash, being a relative path. The special value of empty
+ # string is allowed, although not recommended, and can be used to capture the
+ # entire working directory tree, including inputs. In order to ensure consistent
+ # hashing of the same Action, the output paths MUST be sorted lexicographically
+ # by code point (or, equivalently, by UTF-8 bytes). An output directory cannot
+ # be duplicated or have the same path as any of the listed output files. An
+ # output directory is allowed to be a parent of another output directory.
+ # Directories leading up to the output directories (but not the output
+ # directories themselves) are created by the worker prior to execution, even if
+ # they are not explicitly part of the input root. DEPRECATED since 2.1: Use `
+ # output_paths` instead.
+ # Corresponds to the JSON property `outputDirectories`
+ # @return [Array<String>]
+ attr_accessor :output_directories
+
+ # A list of the output files that the client expects to retrieve from the action.
+ # Only the listed files, as well as directories listed in `output_directories`,
+ # will be returned to the client as output. Other files or directories that may
+ # be created during command execution are discarded. The paths are relative to
+ # the working directory of the action execution. The paths are specified using a
+ # single forward slash (`/`) as a path separator, even if the execution platform
+ # natively uses a different separator. The path MUST NOT include a trailing
+ # slash, nor a leading slash, being a relative path. In order to ensure
+ # consistent hashing of the same Action, the output paths MUST be sorted
+ # lexicographically by code point (or, equivalently, by UTF-8 bytes). An output
+ # file cannot be duplicated, be a parent of another output file, or have the
+ # same path as any of the listed output directories. Directories leading up to
+ # the output files are created by the worker prior to execution, even if they
+ # are not explicitly part of the input root. DEPRECATED since v2.1: Use `
+ # output_paths` instead.
+ # Corresponds to the JSON property `outputFiles`
+ # @return [Array<String>]
+ attr_accessor :output_files
+
+ # A list of the output paths that the client expects to retrieve from the action.
+ # Only the listed paths will be returned to the client as output. The type of
+ # the output (file or directory) is not specified, and will be determined by the
+ # server after action execution. If the resulting path is a file, it will be
+ # returned in an OutputFile typed field. If the path is a directory, the entire
+ # directory structure will be returned as a Tree message digest, see
+ # OutputDirectory. Other files or directories that may be created during command
+ # execution are discarded. The paths are relative to the working directory of
+ # the action execution. The paths are specified using a single forward slash (`/`
+ # ) as a path separator, even if the execution platform natively uses a
+ # different separator. The path MUST NOT include a trailing slash, nor a leading
+ # slash, being a relative path. In order to ensure consistent hashing of the
+ # same Action, the output paths MUST be deduplicated and sorted
+ # lexicographically by code point (or, equivalently, by UTF-8 bytes).
+ # Directories leading up to the output paths are created by the worker prior to
+ # execution, even if they are not explicitly part of the input root. New in v2.1:
+ # this field supersedes the DEPRECATED `output_files` and `output_directories`
+ # fields. If `output_paths` is used, `output_files` and `output_directories`
+ # will be ignored!
+ # Corresponds to the JSON property `outputPaths`
+ # @return [Array<String>]
+ attr_accessor :output_paths
+
+ # A `Platform` is a set of requirements, such as hardware, operating system, or
+ # compiler toolchain, for an Action's execution environment. A `Platform` is
+ # represented as a series of key-value pairs representing the properties that
+ # are required of the platform.
+ # Corresponds to the JSON property `platform`
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Platform]
+ attr_accessor :platform
+
+ # The working directory, relative to the input root, for the command to run in.
+ # It must be a directory which exists in the input tree. If it is left empty,
+ # then the action is run in the input root.
+ # Corresponds to the JSON property `workingDirectory`
+ # @return [String]
+ attr_accessor :working_directory
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @arguments = args[:arguments] if args.key?(:arguments)
+ @environment_variables = args[:environment_variables] if args.key?(:environment_variables)
+ @output_directories = args[:output_directories] if args.key?(:output_directories)
+ @output_files = args[:output_files] if args.key?(:output_files)
+ @output_paths = args[:output_paths] if args.key?(:output_paths)
+ @platform = args[:platform] if args.key?(:platform)
+ @working_directory = args[:working_directory] if args.key?(:working_directory)
+ end
+ end
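+
+ # Illustrative sketch only: a `Command` using the v2.1-style `output_paths`
+ # field, with environment variables sorted by name as the comment above
+ # requires. All values are hypothetical.
+ #
+ #   command = Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Command.new(
+ #     arguments: ["bin/compile", "--out", "a/b/dir"],
+ #     environment_variables: [
+ #       Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2CommandEnvironmentVariable.new(
+ #         name: "PATH", value: "/usr/bin")
+ #     ],
+ #     output_paths: ["a/b/dir"],
+ #     working_directory: "")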
461
+
462
+ # An `EnvironmentVariable` is one variable to set in the running program's
463
+ # environment.
464
+ class BuildBazelRemoteExecutionV2CommandEnvironmentVariable
465
+ include Google::Apis::Core::Hashable
466
+
467
+ # The variable name.
468
+ # Corresponds to the JSON property `name`
469
+ # @return [String]
470
+ attr_accessor :name
471
+
472
+ # The variable value.
473
+ # Corresponds to the JSON property `value`
474
+ # @return [String]
475
+ attr_accessor :value
476
+
477
+ def initialize(**args)
478
+ update!(**args)
479
+ end
480
+
481
+ # Update properties of this object
482
+ def update!(**args)
483
+ @name = args[:name] if args.key?(:name)
484
+ @value = args[:value] if args.key?(:value)
485
+ end
486
+ end
487
+
488
+ # A content digest. A digest for a given blob consists of the size of the blob
489
+ # and its hash. The hash algorithm to use is defined by the server. The size is
490
+ # considered to be an integral part of the digest and cannot be separated. That
491
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
492
+ # the server MUST reject the request. The reason for including the size in the
493
+ # digest is as follows: in a great many cases, the server needs to know the size
494
+ # of the blob it is about to work with prior to starting an operation with it,
495
+ # such as flattening Merkle tree structures or streaming it to a worker.
496
+ # Technically, the server could implement a separate metadata store, but this
497
+ # results in a significantly more complicated implementation as opposed to
498
+ # having the client specify the size up-front (or storing the size along with
499
+ # the digest in every message where digests are embedded). This does mean that
500
+ # the API leaks some implementation details of (what we consider to be) a
501
+ # reasonable server implementation, but we consider this to be a worthwhile
502
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
503
+ # refers to the message in binary encoded form. To ensure consistent hashing,
504
+ # clients and servers MUST ensure that they serialize messages according to the
505
+ # following rules, even if there are alternate valid encodings for the same
506
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
507
+ # There are no duplicate fields. * Fields are serialized according to the
508
+ # default semantics for their type. Most protocol buffer implementations will
509
+ # always follow these rules when serializing, but care should be taken to avoid
510
+ # shortcuts. For instance, concatenating two messages to merge them may produce
511
+ # duplicate fields.
512
+ class BuildBazelRemoteExecutionV2Digest
513
+ include Google::Apis::Core::Hashable
514
+
515
+ # The hash. In the case of SHA-256, it will always be a lowercase hex string
516
+ # exactly 64 characters long.
517
+ # Corresponds to the JSON property `hash`
518
+ # @return [String]
519
+ attr_accessor :hash_prop
520
+
521
+ # The size of the blob, in bytes.
522
+ # Corresponds to the JSON property `sizeBytes`
523
+ # @return [Fixnum]
524
+ attr_accessor :size_bytes
525
+
526
+ def initialize(**args)
527
+ update!(**args)
528
+ end
529
+
530
+ # Update properties of this object
531
+ def update!(**args)
532
+ @hash_prop = args[:hash_prop] if args.key?(:hash_prop)
533
+ @size_bytes = args[:size_bytes] if args.key?(:size_bytes)
534
+ end
535
+ end
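+
+ # Illustrative sketch only: a `Digest` pairs a content hash with the blob
+ # size in bytes. Note the Ruby attribute is `hash_prop`, which maps to the
+ # JSON property `hash`. The hash below is a placeholder.
+ #
+ #   digest = Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest.new(
+ #     hash_prop: "b2c941073e...", size_bytes: 1294)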
+
+ # A `Directory` represents a directory node in a file tree, containing zero or
+ # more children FileNodes, DirectoryNodes and SymlinkNodes. Each `Node` contains
+ # its name in the directory, either the digest of its content (either a file
+ # blob or a `Directory` proto) or a symlink target, as well as possibly some
+ # metadata about the file or directory. In order to ensure that two equivalent
+ # directory trees hash to the same value, the following restrictions MUST be
+ # obeyed when constructing a `Directory`: * Every child in the directory must
+ # have a path of exactly one segment. Multiple levels of directory hierarchy may
+ # not be collapsed. * Each child in the directory must have a unique path
+ # segment (file name). Note that while the API itself is case-sensitive, the
+ # environment where the Action is executed may or may not be case-sensitive.
+ # That is, it is legal to call the API with a Directory that has both "Foo" and "
+ # foo" as children, but the Action may be rejected by the remote system upon
+ # execution. * The files, directories and symlinks in the directory must each be
+ # sorted in lexicographical order by path. The path strings must be sorted by
+ # code point, equivalently, by UTF-8 bytes. * The NodeProperties of files,
+ # directories, and symlinks must be sorted in lexicographical order by property
+ # name. A `Directory` that obeys the restrictions is said to be in canonical
+ # form. As an example, the following could be used for a file named `bar` and a
+ # directory named `foo` with an executable file named `baz` (hashes shortened
+ # for readability): ```json // (Directory proto) ` files: [ ` name: "bar",
+ # digest: ` hash: "4a73bc9d03...", size: 65534 `, node_properties: [ ` "name": "
+ # MTime", "value": "2017-01-15T01:30:15.01Z" ` ] ` ], directories: [ ` name: "
+ # foo", digest: ` hash: "4cf2eda940...", size: 43 ` ` ] ` // (Directory proto
+ # with hash "4cf2eda940..." and size 43) ` files: [ ` name: "baz", digest: `
+ # hash: "b2c941073e...", size: 1294, `, is_executable: true ` ] ` ```
+ class BuildBazelRemoteExecutionV2Directory
+ include Google::Apis::Core::Hashable
+
+ # The subdirectories in the directory.
+ # Corresponds to the JSON property `directories`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2DirectoryNode>]
+ attr_accessor :directories
+
+ # The files in the directory.
+ # Corresponds to the JSON property `files`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2FileNode>]
+ attr_accessor :files
+
+ # The node properties of the Directory.
+ # Corresponds to the JSON property `nodeProperties`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2NodeProperty>]
+ attr_accessor :node_properties
+
+ # The symlinks in the directory.
+ # Corresponds to the JSON property `symlinks`
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2SymlinkNode>]
+ attr_accessor :symlinks
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @directories = args[:directories] if args.key?(:directories)
+ @files = args[:files] if args.key?(:files)
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
+ @symlinks = args[:symlinks] if args.key?(:symlinks)
+ end
+ end
598
+
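A hypothetical sketch of putting a `Directory` into canonical form before it is digested: every child carries a single-segment name and each list is sorted lexicographically by name (code point order). The child digests are omitted here for brevity:

```ruby
require 'google/apis/remotebuildexecution_v1'

V2 = Google::Apis::RemotebuildexecutionV1

files = [
  V2::BuildBazelRemoteExecutionV2FileNode.new(name: 'baz', is_executable: true),
  V2::BuildBazelRemoteExecutionV2FileNode.new(name: 'bar')
]

dir = V2::BuildBazelRemoteExecutionV2Directory.new(
  files: files.sort_by(&:name), # 'bar' sorts before 'baz' by code point
  directories: [],
  symlinks: []
)
```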
599
+ # A `DirectoryNode` represents a child of a Directory which is itself a `
600
+ # Directory` and its associated metadata.
601
+ class BuildBazelRemoteExecutionV2DirectoryNode
602
+ include Google::Apis::Core::Hashable
603
+
604
+ # A content digest. A digest for a given blob consists of the size of the blob
605
+ # and its hash. The hash algorithm to use is defined by the server. The size is
606
+ # considered to be an integral part of the digest and cannot be separated. That
607
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
608
+ # the server MUST reject the request. The reason for including the size in the
609
+ # digest is as follows: in a great many cases, the server needs to know the size
610
+ # of the blob it is about to work with prior to starting an operation with it,
611
+ # such as flattening Merkle tree structures or streaming it to a worker.
612
+ # Technically, the server could implement a separate metadata store, but this
613
+ # results in a significantly more complicated implementation as opposed to
614
+ # having the client specify the size up-front (or storing the size along with
615
+ # the digest in every message where digests are embedded). This does mean that
616
+ # the API leaks some implementation details of (what we consider to be) a
617
+ # reasonable server implementation, but we consider this to be a worthwhile
618
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
619
+ # refers to the message in binary encoded form. To ensure consistent hashing,
620
+ # clients and servers MUST ensure that they serialize messages according to the
621
+ # following rules, even if there are alternate valid encodings for the same
622
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
623
+ # There are no duplicate fields. * Fields are serialized according to the
624
+ # default semantics for their type. Most protocol buffer implementations will
625
+ # always follow these rules when serializing, but care should be taken to avoid
626
+ # shortcuts. For instance, concatenating two messages to merge them may produce
627
+ # duplicate fields.
628
+ # Corresponds to the JSON property `digest`
629
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
630
+ attr_accessor :digest
631
+
632
+ # The name of the directory.
633
+ # Corresponds to the JSON property `name`
634
+ # @return [String]
635
+ attr_accessor :name
636
+
637
+ def initialize(**args)
638
+ update!(**args)
639
+ end
640
+
641
+ # Update properties of this object
642
+ def update!(**args)
643
+ @digest = args[:digest] if args.key?(:digest)
644
+ @name = args[:name] if args.key?(:name)
645
+ end
646
+ end
647
+
648
+ # Metadata about an ongoing execution, which will be contained in the metadata
649
+ # field of the Operation.
650
+ class BuildBazelRemoteExecutionV2ExecuteOperationMetadata
651
+ include Google::Apis::Core::Hashable
652
+
653
+ # A content digest. A digest for a given blob consists of the size of the blob
654
+ # and its hash. The hash algorithm to use is defined by the server. The size is
655
+ # considered to be an integral part of the digest and cannot be separated. That
656
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
657
+ # the server MUST reject the request. The reason for including the size in the
658
+ # digest is as follows: in a great many cases, the server needs to know the size
659
+ # of the blob it is about to work with prior to starting an operation with it,
660
+ # such as flattening Merkle tree structures or streaming it to a worker.
661
+ # Technically, the server could implement a separate metadata store, but this
662
+ # results in a significantly more complicated implementation as opposed to
663
+ # having the client specify the size up-front (or storing the size along with
664
+ # the digest in every message where digests are embedded). This does mean that
665
+ # the API leaks some implementation details of (what we consider to be) a
666
+ # reasonable server implementation, but we consider this to be a worthwhile
667
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
668
+ # refers to the message in binary encoded form. To ensure consistent hashing,
669
+ # clients and servers MUST ensure that they serialize messages according to the
670
+ # following rules, even if there are alternate valid encodings for the same
671
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
672
+ # There are no duplicate fields. * Fields are serialized according to the
673
+ # default semantics for their type. Most protocol buffer implementations will
674
+ # always follow these rules when serializing, but care should be taken to avoid
675
+ # shortcuts. For instance, concatenating two messages to merge them may produce
676
+ # duplicate fields.
677
+ # Corresponds to the JSON property `actionDigest`
678
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
679
+ attr_accessor :action_digest
680
+
681
+ # The current stage of execution.
682
+ # Corresponds to the JSON property `stage`
683
+ # @return [String]
684
+ attr_accessor :stage
685
+
686
+ # If set, the client can use this name with ByteStream.Read to stream the
687
+ # standard error.
688
+ # Corresponds to the JSON property `stderrStreamName`
689
+ # @return [String]
690
+ attr_accessor :stderr_stream_name
691
+
692
+ # If set, the client can use this name with ByteStream.Read to stream the
693
+ # standard output.
694
+ # Corresponds to the JSON property `stdoutStreamName`
695
+ # @return [String]
696
+ attr_accessor :stdout_stream_name
697
+
698
+ def initialize(**args)
699
+ update!(**args)
700
+ end
701
+
702
+ # Update properties of this object
703
+ def update!(**args)
704
+ @action_digest = args[:action_digest] if args.key?(:action_digest)
705
+ @stage = args[:stage] if args.key?(:stage)
706
+ @stderr_stream_name = args[:stderr_stream_name] if args.key?(:stderr_stream_name)
707
+ @stdout_stream_name = args[:stdout_stream_name] if args.key?(:stdout_stream_name)
708
+ end
709
+ end
710
+
711
+ # The response message for Execution.Execute, which will be contained in the
712
+ # response field of the Operation.
713
+ class BuildBazelRemoteExecutionV2ExecuteResponse
714
+ include Google::Apis::Core::Hashable
715
+
716
+ # True if the result was served from cache, false if it was executed.
717
+ # Corresponds to the JSON property `cachedResult`
718
+ # @return [Boolean]
719
+ attr_accessor :cached_result
720
+ alias_method :cached_result?, :cached_result
721
+
722
+ # Freeform informational message with details on the execution of the action
723
+ # that may be displayed to the user upon failure or when requested explicitly.
724
+ # Corresponds to the JSON property `message`
725
+ # @return [String]
726
+ attr_accessor :message
727
+
728
+ # An ActionResult represents the result of an Action being run.
729
+ # Corresponds to the JSON property `result`
730
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2ActionResult]
731
+ attr_accessor :result
732
+
733
+ # An optional list of additional log outputs the server wishes to provide. A
734
+ # server can use this to return execution-specific logs however it wishes. This
735
+ # is intended primarily to make it easier for users to debug issues that may be
736
+ # outside of the actual job execution, such as by identifying the worker
737
+ # executing the action or by providing logs from the worker's setup phase. The
738
+ # keys SHOULD be human readable so that a client can display them to a user.
739
+ # Corresponds to the JSON property `serverLogs`
740
+ # @return [Hash<String,Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2LogFile>]
741
+ attr_accessor :server_logs
742
+
743
+ # The `Status` type defines a logical error model that is suitable for different
744
+ # programming environments, including REST APIs and RPC APIs. It is used by [
745
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
746
+ # data: error code, error message, and error details. You can find out more
747
+ # about this error model and how to work with it in the [API Design Guide](https:
748
+ # //cloud.google.com/apis/design/errors).
749
+ # Corresponds to the JSON property `status`
750
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleRpcStatus]
751
+ attr_accessor :status
752
+
753
+ def initialize(**args)
754
+ update!(**args)
755
+ end
756
+
757
+ # Update properties of this object
758
+ def update!(**args)
759
+ @cached_result = args[:cached_result] if args.key?(:cached_result)
760
+ @message = args[:message] if args.key?(:message)
761
+ @result = args[:result] if args.key?(:result)
762
+ @server_logs = args[:server_logs] if args.key?(:server_logs)
763
+ @status = args[:status] if args.key?(:status)
764
+ end
765
+ end
766
+
767
+ # ExecutedActionMetadata contains details about a completed execution.
768
+ class BuildBazelRemoteExecutionV2ExecutedActionMetadata
769
+ include Google::Apis::Core::Hashable
770
+
771
+ # When the worker completed executing the action command.
772
+ # Corresponds to the JSON property `executionCompletedTimestamp`
773
+ # @return [String]
774
+ attr_accessor :execution_completed_timestamp
775
+
776
+ # When the worker started executing the action command.
777
+ # Corresponds to the JSON property `executionStartTimestamp`
778
+ # @return [String]
779
+ attr_accessor :execution_start_timestamp
780
+
781
+ # When the worker finished fetching action inputs.
782
+ # Corresponds to the JSON property `inputFetchCompletedTimestamp`
783
+ # @return [String]
784
+ attr_accessor :input_fetch_completed_timestamp
785
+
786
+ # When the worker started fetching action inputs.
787
+ # Corresponds to the JSON property `inputFetchStartTimestamp`
788
+ # @return [String]
789
+ attr_accessor :input_fetch_start_timestamp
790
+
791
+ # When the worker finished uploading action outputs.
792
+ # Corresponds to the JSON property `outputUploadCompletedTimestamp`
793
+ # @return [String]
794
+ attr_accessor :output_upload_completed_timestamp
795
+
796
+ # When the worker started uploading action outputs.
797
+ # Corresponds to the JSON property `outputUploadStartTimestamp`
798
+ # @return [String]
799
+ attr_accessor :output_upload_start_timestamp
800
+
801
+ # When the action was added to the queue.
802
+ # Corresponds to the JSON property `queuedTimestamp`
803
+ # @return [String]
804
+ attr_accessor :queued_timestamp
805
+
806
+ # The name of the worker which ran the execution.
807
+ # Corresponds to the JSON property `worker`
808
+ # @return [String]
809
+ attr_accessor :worker
810
+
811
+ # When the worker completed the action, including all stages.
812
+ # Corresponds to the JSON property `workerCompletedTimestamp`
813
+ # @return [String]
814
+ attr_accessor :worker_completed_timestamp
815
+
816
+ # When the worker received the action.
817
+ # Corresponds to the JSON property `workerStartTimestamp`
818
+ # @return [String]
819
+ attr_accessor :worker_start_timestamp
820
+
821
+ def initialize(**args)
822
+ update!(**args)
823
+ end
824
+
825
+ # Update properties of this object
826
+ def update!(**args)
827
+ @execution_completed_timestamp = args[:execution_completed_timestamp] if args.key?(:execution_completed_timestamp)
828
+ @execution_start_timestamp = args[:execution_start_timestamp] if args.key?(:execution_start_timestamp)
829
+ @input_fetch_completed_timestamp = args[:input_fetch_completed_timestamp] if args.key?(:input_fetch_completed_timestamp)
830
+ @input_fetch_start_timestamp = args[:input_fetch_start_timestamp] if args.key?(:input_fetch_start_timestamp)
831
+ @output_upload_completed_timestamp = args[:output_upload_completed_timestamp] if args.key?(:output_upload_completed_timestamp)
832
+ @output_upload_start_timestamp = args[:output_upload_start_timestamp] if args.key?(:output_upload_start_timestamp)
833
+ @queued_timestamp = args[:queued_timestamp] if args.key?(:queued_timestamp)
834
+ @worker = args[:worker] if args.key?(:worker)
835
+ @worker_completed_timestamp = args[:worker_completed_timestamp] if args.key?(:worker_completed_timestamp)
836
+ @worker_start_timestamp = args[:worker_start_timestamp] if args.key?(:worker_start_timestamp)
837
+ end
838
+ end
839
+
840
+ # A `FileNode` represents a single file and associated metadata.
841
+ class BuildBazelRemoteExecutionV2FileNode
842
+ include Google::Apis::Core::Hashable
843
+
844
+ # A content digest. A digest for a given blob consists of the size of the blob
845
+ # and its hash. The hash algorithm to use is defined by the server. The size is
846
+ # considered to be an integral part of the digest and cannot be separated. That
847
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
848
+ # the server MUST reject the request. The reason for including the size in the
849
+ # digest is as follows: in a great many cases, the server needs to know the size
850
+ # of the blob it is about to work with prior to starting an operation with it,
851
+ # such as flattening Merkle tree structures or streaming it to a worker.
852
+ # Technically, the server could implement a separate metadata store, but this
853
+ # results in a significantly more complicated implementation as opposed to
854
+ # having the client specify the size up-front (or storing the size along with
855
+ # the digest in every message where digests are embedded). This does mean that
856
+ # the API leaks some implementation details of (what we consider to be) a
857
+ # reasonable server implementation, but we consider this to be a worthwhile
858
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
859
+ # refers to the message in binary encoded form. To ensure consistent hashing,
860
+ # clients and servers MUST ensure that they serialize messages according to the
861
+ # following rules, even if there are alternate valid encodings for the same
862
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
863
+ # There are no duplicate fields. * Fields are serialized according to the
864
+ # default semantics for their type. Most protocol buffer implementations will
865
+ # always follow these rules when serializing, but care should be taken to avoid
866
+ # shortcuts. For instance, concatenating two messages to merge them may produce
867
+ # duplicate fields.
868
+ # Corresponds to the JSON property `digest`
869
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
870
+ attr_accessor :digest
871
+
872
+ # True if file is executable, false otherwise.
873
+ # Corresponds to the JSON property `isExecutable`
874
+ # @return [Boolean]
875
+ attr_accessor :is_executable
876
+ alias_method :is_executable?, :is_executable
877
+
878
+ # The name of the file.
879
+ # Corresponds to the JSON property `name`
880
+ # @return [String]
881
+ attr_accessor :name
882
+
883
+ # The node properties of the FileNode.
884
+ # Corresponds to the JSON property `nodeProperties`
885
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2NodeProperty>]
886
+ attr_accessor :node_properties
887
+
888
+ def initialize(**args)
889
+ update!(**args)
890
+ end
891
+
892
+ # Update properties of this object
893
+ def update!(**args)
894
+ @digest = args[:digest] if args.key?(:digest)
895
+ @is_executable = args[:is_executable] if args.key?(:is_executable)
896
+ @name = args[:name] if args.key?(:name)
897
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
898
+ end
899
+ end
900
+
901
+ # A `LogFile` is a log stored in the CAS.
902
+ class BuildBazelRemoteExecutionV2LogFile
903
+ include Google::Apis::Core::Hashable
904
+
905
+ # A content digest. A digest for a given blob consists of the size of the blob
906
+ # and its hash. The hash algorithm to use is defined by the server. The size is
907
+ # considered to be an integral part of the digest and cannot be separated. That
908
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
909
+ # the server MUST reject the request. The reason for including the size in the
910
+ # digest is as follows: in a great many cases, the server needs to know the size
911
+ # of the blob it is about to work with prior to starting an operation with it,
912
+ # such as flattening Merkle tree structures or streaming it to a worker.
913
+ # Technically, the server could implement a separate metadata store, but this
914
+ # results in a significantly more complicated implementation as opposed to
915
+ # having the client specify the size up-front (or storing the size along with
916
+ # the digest in every message where digests are embedded). This does mean that
917
+ # the API leaks some implementation details of (what we consider to be) a
918
+ # reasonable server implementation, but we consider this to be a worthwhile
919
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
920
+ # refers to the message in binary encoded form. To ensure consistent hashing,
921
+ # clients and servers MUST ensure that they serialize messages according to the
922
+ # following rules, even if there are alternate valid encodings for the same
923
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
924
+ # There are no duplicate fields. * Fields are serialized according to the
925
+ # default semantics for their type. Most protocol buffer implementations will
926
+ # always follow these rules when serializing, but care should be taken to avoid
927
+ # shortcuts. For instance, concatenating two messages to merge them may produce
928
+ # duplicate fields.
929
+ # Corresponds to the JSON property `digest`
930
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
931
+ attr_accessor :digest
932
+
933
+ # This is a hint as to the purpose of the log, and is set to true if the log is
934
+ # human-readable text that can be usefully displayed to a user, and false
935
+ # otherwise. For instance, if a command-line client wishes to print the server
936
+ # logs to the terminal for a failed action, this allows it to avoid displaying a
937
+ # binary file.
938
+ # Corresponds to the JSON property `humanReadable`
939
+ # @return [Boolean]
940
+ attr_accessor :human_readable
941
+ alias_method :human_readable?, :human_readable
942
+
943
+ def initialize(**args)
944
+ update!(**args)
945
+ end
946
+
947
+ # Update properties of this object
948
+ def update!(**args)
949
+ @digest = args[:digest] if args.key?(:digest)
950
+ @human_readable = args[:human_readable] if args.key?(:human_readable)
951
+ end
952
+ end
953
+
954
+ # A single property for FileNodes, DirectoryNodes, and SymlinkNodes. The server
955
+ # is responsible for specifying the property `name`s that it accepts. If
956
+ # permitted by the server, the same `name` may occur multiple times.
957
+ class BuildBazelRemoteExecutionV2NodeProperty
958
+ include Google::Apis::Core::Hashable
959
+
960
+ # The property name.
961
+ # Corresponds to the JSON property `name`
962
+ # @return [String]
963
+ attr_accessor :name
964
+
965
+ # The property value.
966
+ # Corresponds to the JSON property `value`
967
+ # @return [String]
968
+ attr_accessor :value
969
+
970
+ def initialize(**args)
971
+ update!(**args)
972
+ end
973
+
974
+ # Update properties of this object
975
+ def update!(**args)
976
+ @name = args[:name] if args.key?(:name)
977
+ @value = args[:value] if args.key?(:value)
978
+ end
979
+ end
980
+
981
+ # An `OutputDirectory` is the output in an `ActionResult` corresponding to a
982
+ # directory's full contents rather than a single file.
983
+ class BuildBazelRemoteExecutionV2OutputDirectory
984
+ include Google::Apis::Core::Hashable
985
+
986
+ # The full path of the directory relative to the working directory. The path
987
+ # separator is a forward slash `/`. Since this is a relative path, it MUST NOT
988
+ # begin with a leading forward slash. The empty string value is allowed, and it
989
+ # denotes the entire working directory.
990
+ # Corresponds to the JSON property `path`
991
+ # @return [String]
992
+ attr_accessor :path
993
+
994
+ # A content digest. A digest for a given blob consists of the size of the blob
995
+ # and its hash. The hash algorithm to use is defined by the server. The size is
996
+ # considered to be an integral part of the digest and cannot be separated. That
997
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
998
+ # the server MUST reject the request. The reason for including the size in the
999
+ # digest is as follows: in a great many cases, the server needs to know the size
1000
+ # of the blob it is about to work with prior to starting an operation with it,
1001
+ # such as flattening Merkle tree structures or streaming it to a worker.
1002
+ # Technically, the server could implement a separate metadata store, but this
1003
+ # results in a significantly more complicated implementation as opposed to
1004
+ # having the client specify the size up-front (or storing the size along with
1005
+ # the digest in every message where digests are embedded). This does mean that
1006
+ # the API leaks some implementation details of (what we consider to be) a
1007
+ # reasonable server implementation, but we consider this to be a worthwhile
1008
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
1009
+ # refers to the message in binary encoded form. To ensure consistent hashing,
1010
+ # clients and servers MUST ensure that they serialize messages according to the
1011
+ # following rules, even if there are alternate valid encodings for the same
1012
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
1013
+ # There are no duplicate fields. * Fields are serialized according to the
1014
+ # default semantics for their type. Most protocol buffer implementations will
1015
+ # always follow these rules when serializing, but care should be taken to avoid
1016
+ # shortcuts. For instance, concatenating two messages to merge them may produce
1017
+ # duplicate fields.
1018
+ # Corresponds to the JSON property `treeDigest`
1019
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
1020
+ attr_accessor :tree_digest
1021
+
1022
+ def initialize(**args)
1023
+ update!(**args)
1024
+ end
1025
+
1026
+ # Update properties of this object
1027
+ def update!(**args)
1028
+ @path = args[:path] if args.key?(:path)
1029
+ @tree_digest = args[:tree_digest] if args.key?(:tree_digest)
1030
+ end
1031
+ end
1032
+
1033
+ # An `OutputFile` is similar to a FileNode, but it is used as an output in an `
1034
+ # ActionResult`. It allows a full file path rather than only a name.
1035
+ class BuildBazelRemoteExecutionV2OutputFile
1036
+ include Google::Apis::Core::Hashable
1037
+
1038
+ # The contents of the file if inlining was requested. The server SHOULD NOT
1039
+ # inline file contents unless requested by the client in the
1040
+ # GetActionResultRequest message. The server MAY omit inlining, even if
1041
+ # requested, and MUST do so if inlining would cause the response to exceed
1042
+ # message size limits.
1043
+ # Corresponds to the JSON property `contents`
1044
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
1045
+ # @return [String]
1046
+ attr_accessor :contents
1047
+
1048
+ # A content digest. A digest for a given blob consists of the size of the blob
1049
+ # and its hash. The hash algorithm to use is defined by the server. The size is
1050
+ # considered to be an integral part of the digest and cannot be separated. That
1051
+ # is, even if the `hash` field is correctly specified but `size_bytes` is not,
1052
+ # the server MUST reject the request. The reason for including the size in the
1053
+ # digest is as follows: in a great many cases, the server needs to know the size
1054
+ # of the blob it is about to work with prior to starting an operation with it,
1055
+ # such as flattening Merkle tree structures or streaming it to a worker.
1056
+ # Technically, the server could implement a separate metadata store, but this
1057
+ # results in a significantly more complicated implementation as opposed to
1058
+ # having the client specify the size up-front (or storing the size along with
1059
+ # the digest in every message where digests are embedded). This does mean that
1060
+ # the API leaks some implementation details of (what we consider to be) a
1061
+ # reasonable server implementation, but we consider this to be a worthwhile
1062
+ # tradeoff. When a `Digest` is used to refer to a proto message, it always
1063
+ # refers to the message in binary encoded form. To ensure consistent hashing,
1064
+ # clients and servers MUST ensure that they serialize messages according to the
1065
+ # following rules, even if there are alternate valid encodings for the same
1066
+ # message: * Fields are serialized in tag order. * There are no unknown fields. *
1067
+ # There are no duplicate fields. * Fields are serialized according to the
1068
+ # default semantics for their type. Most protocol buffer implementations will
1069
+ # always follow these rules when serializing, but care should be taken to avoid
1070
+ # shortcuts. For instance, concatenating two messages to merge them may produce
1071
+ # duplicate fields.
1072
+ # Corresponds to the JSON property `digest`
1073
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Digest]
1074
+ attr_accessor :digest
1075
+
1076
+ # True if file is executable, false otherwise.
1077
+ # Corresponds to the JSON property `isExecutable`
1078
+ # @return [Boolean]
1079
+ attr_accessor :is_executable
1080
+ alias_method :is_executable?, :is_executable
1081
+
1082
+ # The supported node properties of the OutputFile, if requested by the Action.
1083
+ # Corresponds to the JSON property `nodeProperties`
1084
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2NodeProperty>]
1085
+ attr_accessor :node_properties
1086
+
1087
+ # The full path of the file relative to the working directory, including the
1088
+ # filename. The path separator is a forward slash `/`. Since this is a relative
1089
+ # path, it MUST NOT begin with a leading forward slash.
1090
+ # Corresponds to the JSON property `path`
1091
+ # @return [String]
1092
+ attr_accessor :path
1093
+
1094
+ def initialize(**args)
1095
+ update!(**args)
1096
+ end
1097
+
1098
+ # Update properties of this object
1099
+ def update!(**args)
1100
+ @contents = args[:contents] if args.key?(:contents)
1101
+ @digest = args[:digest] if args.key?(:digest)
1102
+ @is_executable = args[:is_executable] if args.key?(:is_executable)
1103
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
1104
+ @path = args[:path] if args.key?(:path)
1105
+ end
1106
+ end
1107
+
1108
+ # An `OutputSymlink` is similar to a Symlink, but it is used as an output in an `
1109
+ # ActionResult`. `OutputSymlink` is binary-compatible with `SymlinkNode`.
1110
+ class BuildBazelRemoteExecutionV2OutputSymlink
1111
+ include Google::Apis::Core::Hashable
1112
+
1113
+ # The supported node properties of the OutputSymlink, if requested by the Action.
1114
+ # Corresponds to the JSON property `nodeProperties`
1115
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2NodeProperty>]
1116
+ attr_accessor :node_properties
1117
+
1118
+ # The full path of the symlink relative to the working directory, including the
1119
+ # filename. The path separator is a forward slash `/`. Since this is a relative
1120
+ # path, it MUST NOT begin with a leading forward slash.
1121
+ # Corresponds to the JSON property `path`
1122
+ # @return [String]
1123
+ attr_accessor :path
1124
+
1125
+ # The target path of the symlink. The path separator is a forward slash `/`. The
1126
+ # target path can be relative to the parent directory of the symlink or it can
1127
+ # be an absolute path starting with `/`. Support for absolute paths can be
1128
+ # checked using the Capabilities API. The canonical form forbids the substrings `
1129
+ # /./` and `//` in the target path. `..` components are allowed anywhere in the
1130
+ # target path.
1131
+ # Corresponds to the JSON property `target`
1132
+ # @return [String]
1133
+ attr_accessor :target
1134
+
1135
+ def initialize(**args)
1136
+ update!(**args)
1137
+ end
1138
+
1139
+ # Update properties of this object
1140
+ def update!(**args)
1141
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
1142
+ @path = args[:path] if args.key?(:path)
1143
+ @target = args[:target] if args.key?(:target)
1144
+ end
1145
+ end
1146
+
1147
+ # A `Platform` is a set of requirements, such as hardware, operating system, or
1148
+ # compiler toolchain, for an Action's execution environment. A `Platform` is
1149
+ # represented as a series of key-value pairs representing the properties that
1150
+ # are required of the platform.
1151
+ class BuildBazelRemoteExecutionV2Platform
1152
+ include Google::Apis::Core::Hashable
1153
+
1154
+ # The properties that make up this platform. In order to ensure that equivalent `
1155
+ # Platform`s always hash to the same value, the properties MUST be
1156
+ # lexicographically sorted by name, and then by value. Sorting of strings is
1157
+ # done by code point, equivalently, by the UTF-8 bytes.
1158
+ # Corresponds to the JSON property `properties`
1159
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2PlatformProperty>]
1160
+ attr_accessor :properties
1161
+
1162
+ def initialize(**args)
1163
+ update!(**args)
1164
+ end
1165
+
1166
+ # Update properties of this object
1167
+ def update!(**args)
1168
+ @properties = args[:properties] if args.key?(:properties)
1169
+ end
1170
+ end
1171
+
1172
+ # A single property for the environment. The server is responsible for
1173
+ # specifying the property `name`s that it accepts. If an unknown `name` is
1174
+ # provided in the requirements for an Action, the server SHOULD reject the
1175
+ # execution request. If permitted by the server, the same `name` may occur
1176
+ # multiple times. The server is also responsible for specifying the
1177
+ # interpretation of property `value`s. For instance, a property describing how
1178
+ # much RAM must be available may be interpreted as allowing a worker with 16GB
1179
+ # to fulfill a request for 8GB, while a property describing the OS environment
1180
+ # on which the action must be performed may require an exact match with the
1181
+ # worker's OS. The server MAY use the `value` of one or more properties to
1182
+ # determine how it sets up the execution environment, such as by making specific
1183
+ # system files available to the worker.
1184
+ class BuildBazelRemoteExecutionV2PlatformProperty
1185
+ include Google::Apis::Core::Hashable
1186
+
1187
+ # The property name.
1188
+ # Corresponds to the JSON property `name`
1189
+ # @return [String]
1190
+ attr_accessor :name
1191
+
1192
+ # The property value.
1193
+ # Corresponds to the JSON property `value`
1194
+ # @return [String]
1195
+ attr_accessor :value
1196
+
1197
+ def initialize(**args)
1198
+ update!(**args)
1199
+ end
1200
+
1201
+ # Update properties of this object
1202
+ def update!(**args)
1203
+ @name = args[:name] if args.key?(:name)
1204
+ @value = args[:value] if args.key?(:value)
1205
+ end
1206
+ end
1207
+
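The canonical form of a `Platform` above requires its properties to be sorted by name and then by value. A hypothetical sketch (the property names and values are illustrative only, not prescribed by this client):

```ruby
require 'google/apis/remotebuildexecution_v1'

V2 = Google::Apis::RemotebuildexecutionV1

props = [
  V2::BuildBazelRemoteExecutionV2PlatformProperty.new(name: 'container-image', value: 'docker://gcr.io/example/image'),
  V2::BuildBazelRemoteExecutionV2PlatformProperty.new(name: 'OSFamily', value: 'Linux')
]

platform = V2::BuildBazelRemoteExecutionV2Platform.new(
  properties: props.sort_by { |p| [p.name, p.value] } # 'OSFamily' sorts before 'container-image' by code point
)
```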
1208
+ # An optional Metadata to attach to any RPC request to tell the server about an
1209
+ # external context of the request. The server may use this for logging or other
1210
+ # purposes. To use it, the client attaches the header to the call using the
1211
+ # canonical proto serialization: * name: `build.bazel.remote.execution.v2.
1212
+ # requestmetadata-bin` * contents: the base64 encoded binary `RequestMetadata`
1213
+ # message. Note: the gRPC library serializes binary headers encoded in base 64
1214
+ # by default (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#
1215
+ # requests). Therefore, if the gRPC library is used to pass/retrieve this
1216
+ # metadata, the user may ignore the base64 encoding and assume it is simply
1217
+ # serialized as a binary message.
1218
+ class BuildBazelRemoteExecutionV2RequestMetadata
1219
+ include Google::Apis::Core::Hashable
1220
+
1221
+ # An identifier that ties multiple requests to the same action. For example,
1222
+ # multiple requests to the CAS, Action Cache, and Execution API are used in
1223
+ # order to compile foo.cc.
1224
+ # Corresponds to the JSON property `actionId`
1225
+ # @return [String]
1226
+ attr_accessor :action_id
1227
+
1228
+ # An identifier to tie multiple tool invocations together. For example, runs of
1229
+ # foo_test, bar_test and baz_test on a post-submit of a given patch.
1230
+ # Corresponds to the JSON property `correlatedInvocationsId`
1231
+ # @return [String]
1232
+ attr_accessor :correlated_invocations_id
1233
+
1234
+ # Details for the tool used to call the API.
1235
+ # Corresponds to the JSON property `toolDetails`
1236
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2ToolDetails]
1237
+ attr_accessor :tool_details
1238
+
1239
+ # An identifier that ties multiple actions together to a final result. For
1240
+ # example, multiple actions are required to build and run foo_test.
1241
+ # Corresponds to the JSON property `toolInvocationId`
1242
+ # @return [String]
1243
+ attr_accessor :tool_invocation_id
1244
+
1245
+ def initialize(**args)
1246
+ update!(**args)
1247
+ end
1248
+
1249
+ # Update properties of this object
1250
+ def update!(**args)
1251
+ @action_id = args[:action_id] if args.key?(:action_id)
1252
+ @correlated_invocations_id = args[:correlated_invocations_id] if args.key?(:correlated_invocations_id)
1253
+ @tool_details = args[:tool_details] if args.key?(:tool_details)
1254
+ @tool_invocation_id = args[:tool_invocation_id] if args.key?(:tool_invocation_id)
1255
+ end
1256
+ end
1257
+
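A hypothetical sketch of the `RequestMetadata` a tool might construct to correlate the requests belonging to one build; how it is transmitted depends on the transport (over gRPC it travels as the binary header named in the comment above). The tool name and version mirror the examples given in `BuildBazelRemoteExecutionV2ToolDetails` later in this file:

```ruby
require 'securerandom'
require 'google/apis/remotebuildexecution_v1'

V2 = Google::Apis::RemotebuildexecutionV1

metadata = V2::BuildBazelRemoteExecutionV2RequestMetadata.new(
  tool_details: V2::BuildBazelRemoteExecutionV2ToolDetails.new(
    tool_name: 'bazel',
    tool_version: '5.0.3'
  ),
  action_id: SecureRandom.uuid,            # ties together the requests for one action
  tool_invocation_id: SecureRandom.uuid,   # ties the actions of one tool run together
  correlated_invocations_id: SecureRandom.uuid
)
```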
1258
+ # A `SymlinkNode` represents a symbolic link.
1259
+ class BuildBazelRemoteExecutionV2SymlinkNode
1260
+ include Google::Apis::Core::Hashable
1261
+
1262
+ # The name of the symlink.
1263
+ # Corresponds to the JSON property `name`
1264
+ # @return [String]
1265
+ attr_accessor :name
1266
+
1267
+ # The node properties of the SymlinkNode.
1268
+ # Corresponds to the JSON property `nodeProperties`
1269
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2NodeProperty>]
1270
+ attr_accessor :node_properties
1271
+
1272
+ # The target path of the symlink. The path separator is a forward slash `/`. The
1273
+ # target path can be relative to the parent directory of the symlink or it can
1274
+ # be an absolute path starting with `/`. Support for absolute paths can be
1275
+ # checked using the Capabilities API. The canonical form forbids the substrings `
1276
+ # /./` and `//` in the target path. `..` components are allowed anywhere in the
1277
+ # target path.
1278
+ # Corresponds to the JSON property `target`
1279
+ # @return [String]
1280
+ attr_accessor :target
1281
+
1282
+ def initialize(**args)
1283
+ update!(**args)
1284
+ end
1285
+
1286
+ # Update properties of this object
1287
+ def update!(**args)
1288
+ @name = args[:name] if args.key?(:name)
1289
+ @node_properties = args[:node_properties] if args.key?(:node_properties)
1290
+ @target = args[:target] if args.key?(:target)
1291
+ end
1292
+ end
1293
+
1294
+ # Details for the tool used to call the API.
1295
+ class BuildBazelRemoteExecutionV2ToolDetails
1296
+ include Google::Apis::Core::Hashable
1297
+
1298
+ # Name of the tool, e.g. bazel.
1299
+ # Corresponds to the JSON property `toolName`
1300
+ # @return [String]
1301
+ attr_accessor :tool_name
1302
+
1303
+ # Version of the tool used for the request, e.g. 5.0.3.
1304
+ # Corresponds to the JSON property `toolVersion`
1305
+ # @return [String]
1306
+ attr_accessor :tool_version
1307
+
1308
+ def initialize(**args)
1309
+ update!(**args)
1310
+ end
1311
+
1312
+ # Update properties of this object
1313
+ def update!(**args)
1314
+ @tool_name = args[:tool_name] if args.key?(:tool_name)
1315
+ @tool_version = args[:tool_version] if args.key?(:tool_version)
1316
+ end
1317
+ end
1318
+
1319
+ # A `Tree` contains all the Directory protos in a single directory Merkle tree,
1320
+ # compressed into one message.
1321
+ class BuildBazelRemoteExecutionV2Tree
1322
+ include Google::Apis::Core::Hashable
1323
+
1324
+ # All the child directories: the directories referred to by the root and,
1325
+ # recursively, all its children. In order to reconstruct the directory tree, the
1326
+ # client must take the digests of each of the child directories and then build
1327
+ # up a tree starting from the `root`.
1328
+ # Corresponds to the JSON property `children`
1329
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Directory>]
1330
+ attr_accessor :children
1331
+
1332
+ # A `Directory` represents a directory node in a file tree, containing zero or
1333
+ # more children FileNodes, DirectoryNodes and SymlinkNodes. Each `Node` contains
1334
+ # its name in the directory, either the digest of its content (either a file
1335
+ # blob or a `Directory` proto) or a symlink target, as well as possibly some
1336
+ # metadata about the file or directory. In order to ensure that two equivalent
1337
+ # directory trees hash to the same value, the following restrictions MUST be
1338
+ # obeyed when constructing a `Directory`: * Every child in the directory must
1339
+ # have a path of exactly one segment. Multiple levels of directory hierarchy may
1340
+ # not be collapsed. * Each child in the directory must have a unique path
1341
+ # segment (file name). Note that while the API itself is case-sensitive, the
1342
+ # environment where the Action is executed may or may not be case-sensitive.
1343
+ # That is, it is legal to call the API with a Directory that has both "Foo" and "
1344
+ # foo" as children, but the Action may be rejected by the remote system upon
1345
+ # execution. * The files, directories and symlinks in the directory must each be
1346
+ # sorted in lexicographical order by path. The path strings must be sorted by
1347
+ # code point, equivalently, by UTF-8 bytes. * The NodeProperties of files,
1348
+ # directories, and symlinks must be sorted in lexicographical order by property
1349
+ # name. A `Directory` that obeys the restrictions is said to be in canonical
1350
+ # form. As an example, the following could be used for a file named `bar` and a
1351
+ # directory named `foo` with an executable file named `baz` (hashes shortened
1352
+ # for readability): ```json // (Directory proto) ` files: [ ` name: "bar",
1353
+ # digest: ` hash: "4a73bc9d03...", size: 65534 `, node_properties: [ ` "name": "
1354
+ # MTime", "value": "2017-01-15T01:30:15.01Z" ` ] ` ], directories: [ ` name: "
1355
+ # foo", digest: ` hash: "4cf2eda940...", size: 43 ` ` ] ` // (Directory proto
1356
+ # with hash "4cf2eda940..." and size 43) ` files: [ ` name: "baz", digest: `
1357
+ # hash: "b2c941073e...", size: 1294, `, is_executable: true ` ] ` ```
1358
+ # Corresponds to the JSON property `root`
1359
+ # @return [Google::Apis::RemotebuildexecutionV1::BuildBazelRemoteExecutionV2Directory]
1360
+ attr_accessor :root
1361
+
1362
+ def initialize(**args)
1363
+ update!(**args)
1364
+ end
1365
+
1366
+ # Update properties of this object
1367
+ def update!(**args)
1368
+ @children = args[:children] if args.key?(:children)
1369
+ @root = args[:root] if args.key?(:root)
1370
+ end
1371
+ end
1372
+
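A hypothetical sketch of walking a `Tree`: the root's `DirectoryNode`s refer to children by digest, so the caller is assumed to hold an index from digest hash to the corresponding entry of `children` (for example, one built while the directories were uploaded):

```ruby
# Yields every file path contained in `directory`, recursing through the
# child Directory protos looked up in `index` (a Hash of digest hash => Directory).
def each_file_path(directory, index, prefix = '', &block)
  (directory.files || []).each do |file|
    block.call(prefix.empty? ? file.name : "#{prefix}/#{file.name}")
  end
  (directory.directories || []).each do |node|
    child = index.fetch(node.digest.hash_prop) # hypothetical digest-hash => Directory index
    child_prefix = prefix.empty? ? node.name : "#{prefix}/#{node.name}"
    each_file_path(child, index, child_prefix, &block)
  end
end

# Usage sketch, given a BuildBazelRemoteExecutionV2Tree `tree` and such an `index`:
# each_file_path(tree.root, index) { |path| puts path }
```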
1373
+ # Media resource.
1374
+ class GoogleBytestreamMedia
1375
+ include Google::Apis::Core::Hashable
1376
+
1377
+ # Name of the media resource.
1378
+ # Corresponds to the JSON property `resourceName`
1379
+ # @return [String]
1380
+ attr_accessor :resource_name
1381
+
1382
+ def initialize(**args)
1383
+ update!(**args)
1384
+ end
1385
+
1386
+ # Update properties of this object
1387
+ def update!(**args)
1388
+ @resource_name = args[:resource_name] if args.key?(:resource_name)
1389
+ end
1390
+ end
1391
+
1392
+ # CommandDuration contains the various duration metrics tracked when a bot
1393
+ # performs a command.
1394
+ class GoogleDevtoolsRemotebuildbotCommandDurations
1395
+ include Google::Apis::Core::Hashable
1396
+
1397
+ # The time spent preparing the command to be run in a Docker container (includes
1398
+ # pulling the Docker image, if necessary).
1399
+ # Corresponds to the JSON property `dockerPrep`
1400
+ # @return [String]
1401
+ attr_accessor :docker_prep
1402
+
1403
+ # The timestamp when Docker preparation begins.
1404
+ # Corresponds to the JSON property `dockerPrepStartTime`
1405
+ # @return [String]
1406
+ attr_accessor :docker_prep_start_time
1407
+
1408
+ # The time spent downloading the input files and constructing the working
1409
+ # directory.
1410
+ # Corresponds to the JSON property `download`
1411
+ # @return [String]
1412
+ attr_accessor :download
1413
+
1414
+ # The timestamp when downloading the input files begins.
1415
+ # Corresponds to the JSON property `downloadStartTime`
1416
+ # @return [String]
1417
+ attr_accessor :download_start_time
1418
+
1419
+ # The timestamp when execution begins.
1420
+ # Corresponds to the JSON property `execStartTime`
1421
+ # @return [String]
1422
+ attr_accessor :exec_start_time
1423
+
1424
+ # The time spent executing the command (i.e., doing useful work).
1425
+ # Corresponds to the JSON property `execution`
1426
+ # @return [String]
1427
+ attr_accessor :execution
1428
+
1429
+ # The timestamp when preparation is done and the bot starts downloading files.
1430
+ # Corresponds to the JSON property `isoPrepDone`
1431
+ # @return [String]
1432
+ attr_accessor :iso_prep_done
1433
+
1434
+ # The time spent completing the command, in total.
1435
+ # Corresponds to the JSON property `overall`
1436
+ # @return [String]
1437
+ attr_accessor :overall
1438
+
1439
+ # The time spent uploading the stdout logs.
1440
+ # Corresponds to the JSON property `stdout`
1441
+ # @return [String]
1442
+ attr_accessor :stdout
1443
+
1444
+ # The time spent uploading the output files.
1445
+ # Corresponds to the JSON property `upload`
1446
+ # @return [String]
1447
+ attr_accessor :upload
1448
+
1449
+ # The timestamp when uploading the output files begins.
1450
+ # Corresponds to the JSON property `uploadStartTime`
1451
+ # @return [String]
1452
+ attr_accessor :upload_start_time
1453
+
1454
+ def initialize(**args)
1455
+ update!(**args)
1456
+ end
1457
+
1458
+ # Update properties of this object
1459
+ def update!(**args)
1460
+ @docker_prep = args[:docker_prep] if args.key?(:docker_prep)
1461
+ @docker_prep_start_time = args[:docker_prep_start_time] if args.key?(:docker_prep_start_time)
1462
+ @download = args[:download] if args.key?(:download)
1463
+ @download_start_time = args[:download_start_time] if args.key?(:download_start_time)
1464
+ @exec_start_time = args[:exec_start_time] if args.key?(:exec_start_time)
1465
+ @execution = args[:execution] if args.key?(:execution)
1466
+ @iso_prep_done = args[:iso_prep_done] if args.key?(:iso_prep_done)
1467
+ @overall = args[:overall] if args.key?(:overall)
1468
+ @stdout = args[:stdout] if args.key?(:stdout)
1469
+ @upload = args[:upload] if args.key?(:upload)
1470
+ @upload_start_time = args[:upload_start_time] if args.key?(:upload_start_time)
1471
+ end
1472
+ end
1473
+
1474
+ # CommandEvents contains counters for the number of warnings and errors that
1475
+ # occurred during the execution of a command.
1476
+ class GoogleDevtoolsRemotebuildbotCommandEvents
1477
+ include Google::Apis::Core::Hashable
1478
+
1479
+ # Indicates whether we are using a cached Docker image (true) or had to pull the
1480
+ # Docker image (false) for this command.
1481
+ # Corresponds to the JSON property `dockerCacheHit`
1482
+ # @return [Boolean]
1483
+ attr_accessor :docker_cache_hit
1484
+ alias_method :docker_cache_hit?, :docker_cache_hit
1485
+
1486
+ # Docker Image name.
1487
+ # Corresponds to the JSON property `dockerImageName`
1488
+ # @return [String]
1489
+ attr_accessor :docker_image_name
1490
+
1491
+ # The input cache miss ratio.
1492
+ # Corresponds to the JSON property `inputCacheMiss`
1493
+ # @return [Float]
1494
+ attr_accessor :input_cache_miss
1495
+
1496
+ # The number of errors reported.
1497
+ # Corresponds to the JSON property `numErrors`
1498
+ # @return [Fixnum]
1499
+ attr_accessor :num_errors
1500
+
1501
+ # The number of warnings reported.
1502
+ # Corresponds to the JSON property `numWarnings`
1503
+ # @return [Fixnum]
1504
+ attr_accessor :num_warnings
1505
+
1506
+ def initialize(**args)
1507
+ update!(**args)
1508
+ end
1509
+
1510
+ # Update properties of this object
1511
+ def update!(**args)
1512
+ @docker_cache_hit = args[:docker_cache_hit] if args.key?(:docker_cache_hit)
1513
+ @docker_image_name = args[:docker_image_name] if args.key?(:docker_image_name)
1514
+ @input_cache_miss = args[:input_cache_miss] if args.key?(:input_cache_miss)
1515
+ @num_errors = args[:num_errors] if args.key?(:num_errors)
1516
+ @num_warnings = args[:num_warnings] if args.key?(:num_warnings)
1517
+ end
1518
+ end
1519
+
1520
+ # The internal status of the command result.
1521
+ class GoogleDevtoolsRemotebuildbotCommandStatus
1522
+ include Google::Apis::Core::Hashable
1523
+
1524
+ # The status code.
1525
+ # Corresponds to the JSON property `code`
1526
+ # @return [String]
1527
+ attr_accessor :code
1528
+
1529
+ # The error message.
1530
+ # Corresponds to the JSON property `message`
1531
+ # @return [String]
1532
+ attr_accessor :message
1533
+
1534
+ def initialize(**args)
1535
+ update!(**args)
1536
+ end
1537
+
1538
+ # Update properties of this object
1539
+ def update!(**args)
1540
+ @code = args[:code] if args.key?(:code)
1541
+ @message = args[:message] if args.key?(:message)
1542
+ end
1543
+ end
1544
+
1545
+ # ResourceUsage is the system resource usage of the host machine.
1546
+ class GoogleDevtoolsRemotebuildbotResourceUsage
1547
+ include Google::Apis::Core::Hashable
1548
+
1549
+ # The percentage of CPU currently in use on the host machine.
1550
+ # Corresponds to the JSON property `cpuUsedPercent`
1551
+ # @return [Float]
1552
+ attr_accessor :cpu_used_percent
1553
+
1554
+ # Disk usage on the host machine.
1555
+ # Corresponds to the JSON property `diskUsage`
1556
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildbotResourceUsageStat]
1557
+ attr_accessor :disk_usage
1558
+
1559
+ # Memory usage on the host machine.
1560
+ # Corresponds to the JSON property `memoryUsage`
1561
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildbotResourceUsageStat]
1562
+ attr_accessor :memory_usage
1563
+
1564
+ def initialize(**args)
1565
+ update!(**args)
1566
+ end
1567
+
1568
+ # Update properties of this object
1569
+ def update!(**args)
1570
+ @cpu_used_percent = args[:cpu_used_percent] if args.key?(:cpu_used_percent)
1571
+ @disk_usage = args[:disk_usage] if args.key?(:disk_usage)
1572
+ @memory_usage = args[:memory_usage] if args.key?(:memory_usage)
1573
+ end
1574
+ end
1575
+
1576
+ # A usage statistic for a single resource, giving the total amount and the amount used.
1577
+ class GoogleDevtoolsRemotebuildbotResourceUsageStat
1578
+ include Google::Apis::Core::Hashable
1579
+
1580
+ # The total amount of the resource available.
1581
+ # Corresponds to the JSON property `total`
1582
+ # @return [Fixnum]
1583
+ attr_accessor :total
1584
+
1585
+ # The amount of the resource currently in use.
1586
+ # Corresponds to the JSON property `used`
1587
+ # @return [Fixnum]
1588
+ attr_accessor :used
1589
+
1590
+ def initialize(**args)
1591
+ update!(**args)
1592
+ end
1593
+
1594
+ # Update properties of this object
1595
+ def update!(**args)
1596
+ @total = args[:total] if args.key?(:total)
1597
+ @used = args[:used] if args.key?(:used)
1598
+ end
1599
+ end
1600
+
1601
+ # AcceleratorConfig defines the accelerator cards to attach to the VM.
1602
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaAcceleratorConfig
1603
+ include Google::Apis::Core::Hashable
1604
+
1605
+ # The number of guest accelerator cards exposed to each VM.
1606
+ # Corresponds to the JSON property `acceleratorCount`
1607
+ # @return [Fixnum]
1608
+ attr_accessor :accelerator_count
1609
+
1610
+ # The type of accelerator to attach to each VM, e.g. "nvidia-tesla-k80" for
1611
+ # NVIDIA Tesla K80.
1612
+ # Corresponds to the JSON property `acceleratorType`
1613
+ # @return [String]
1614
+ attr_accessor :accelerator_type
1615
+
1616
+ def initialize(**args)
1617
+ update!(**args)
1618
+ end
1619
+
1620
+ # Update properties of this object
1621
+ def update!(**args)
1622
+ @accelerator_count = args[:accelerator_count] if args.key?(:accelerator_count)
1623
+ @accelerator_type = args[:accelerator_type] if args.key?(:accelerator_type)
1624
+ end
1625
+ end
1626
+
1627
+ # Autoscale defines the autoscaling policy of a worker pool.
1628
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaAutoscale
1629
+ include Google::Apis::Core::Hashable
1630
+
1631
+ # The maximal number of workers. Must be equal to or greater than min_size.
1632
+ # Corresponds to the JSON property `maxSize`
1633
+ # @return [Fixnum]
1634
+ attr_accessor :max_size
1635
+
1636
+ # The minimal number of workers. Must be greater than 0.
1637
+ # Corresponds to the JSON property `minSize`
1638
+ # @return [Fixnum]
1639
+ attr_accessor :min_size
1640
+
1641
+ def initialize(**args)
1642
+ update!(**args)
1643
+ end
1644
+
1645
+ # Update properties of this object
1646
+ def update!(**args)
1647
+ @max_size = args[:max_size] if args.key?(:max_size)
1648
+ @min_size = args[:min_size] if args.key?(:min_size)
1649
+ end
1650
+ end
1651
+
1652
+ # The request used for `CreateInstance`.
1653
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaCreateInstanceRequest
1654
+ include Google::Apis::Core::Hashable
1655
+
1656
+ # Instance conceptually encapsulates all Remote Build Execution resources for
1657
+ # remote builds. An instance consists of storage and compute resources (for
1658
+ # example, `ContentAddressableStorage`, `ActionCache`, `WorkerPools`) used for
1659
+ # running remote builds. All Remote Build Execution API calls are scoped to an
1660
+ # instance.
1661
+ # Corresponds to the JSON property `instance`
1662
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance]
1663
+ attr_accessor :instance
1664
+
1665
+ # ID of the created instance. A valid `instance_id` must: be 6-50 characters
1666
+ # long, contain only lowercase letters, digits, hyphens and underscores, start
1667
+ # with a lowercase letter, and end with a lowercase letter or a digit.
1668
+ # Corresponds to the JSON property `instanceId`
1669
+ # @return [String]
1670
+ attr_accessor :instance_id
1671
+
1672
+ # Resource name of the project containing the instance. Format: `projects/[
1673
+ # PROJECT_ID]`.
1674
+ # Corresponds to the JSON property `parent`
1675
+ # @return [String]
1676
+ attr_accessor :parent
1677
+
1678
+ def initialize(**args)
1679
+ update!(**args)
1680
+ end
1681
+
1682
+ # Update properties of this object
1683
+ def update!(**args)
1684
+ @instance = args[:instance] if args.key?(:instance)
1685
+ @instance_id = args[:instance_id] if args.key?(:instance_id)
1686
+ @parent = args[:parent] if args.key?(:parent)
1687
+ end
1688
+ end
1689
+
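A hypothetical client-side check of the `instance_id` rules described above (6-50 characters; only lowercase letters, digits, hyphens and underscores; starting with a lowercase letter and ending with a lowercase letter or digit); the service performs its own validation regardless:

```ruby
INSTANCE_ID_PATTERN = /\A[a-z][a-z0-9_-]{4,48}[a-z0-9]\z/

def valid_instance_id?(id)
  INSTANCE_ID_PATTERN.match?(id)
end

valid_instance_id?('default_instance') # => true
valid_instance_id?('RBE')              # => false (too short and not lowercase)
```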
1690
+ # The request used for `CreateWorkerPool`.
1691
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaCreateWorkerPoolRequest
1692
+ include Google::Apis::Core::Hashable
1693
+
1694
+ # Resource name of the instance in which to create the new worker pool. Format: `
1695
+ # projects/[PROJECT_ID]/instances/[INSTANCE_ID]`.
1696
+ # Corresponds to the JSON property `parent`
1697
+ # @return [String]
1698
+ attr_accessor :parent
1699
+
1700
+ # ID of the created worker pool. A valid pool ID must: be 6-50 characters long,
1701
+ # contain only lowercase letters, digits, hyphens and underscores, start with a
1702
+ # lowercase letter, and end with a lowercase letter or a digit.
1703
+ # Corresponds to the JSON property `poolId`
1704
+ # @return [String]
1705
+ attr_accessor :pool_id
1706
+
1707
+ # A worker pool resource in the Remote Build Execution API.
1708
+ # Corresponds to the JSON property `workerPool`
1709
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool]
1710
+ attr_accessor :worker_pool
1711
+
1712
+ def initialize(**args)
1713
+ update!(**args)
1714
+ end
1715
+
1716
+ # Update properties of this object
1717
+ def update!(**args)
1718
+ @parent = args[:parent] if args.key?(:parent)
1719
+ @pool_id = args[:pool_id] if args.key?(:pool_id)
1720
+ @worker_pool = args[:worker_pool] if args.key?(:worker_pool)
1721
+ end
1722
+ end
1723
+
1724
+ # The request used for `DeleteInstance`.
1725
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaDeleteInstanceRequest
1726
+ include Google::Apis::Core::Hashable
1727
+
1728
+ # Name of the instance to delete. Format: `projects/[PROJECT_ID]/instances/[
1729
+ # INSTANCE_ID]`.
1730
+ # Corresponds to the JSON property `name`
1731
+ # @return [String]
1732
+ attr_accessor :name
1733
+
1734
+ def initialize(**args)
1735
+ update!(**args)
1736
+ end
1737
+
1738
+ # Update properties of this object
1739
+ def update!(**args)
1740
+ @name = args[:name] if args.key?(:name)
1741
+ end
1742
+ end
1743
+
1744
+ # The request used for DeleteWorkerPool.
1745
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaDeleteWorkerPoolRequest
1746
+ include Google::Apis::Core::Hashable
1747
+
1748
+ # Name of the worker pool to delete. Format: `projects/[PROJECT_ID]/instances/[
1749
+ # INSTANCE_ID]/workerpools/[POOL_ID]`.
1750
+ # Corresponds to the JSON property `name`
1751
+ # @return [String]
1752
+ attr_accessor :name
1753
+
1754
+ def initialize(**args)
1755
+ update!(**args)
1756
+ end
1757
+
1758
+ # Update properties of this object
1759
+ def update!(**args)
1760
+ @name = args[:name] if args.key?(:name)
1761
+ end
1762
+ end
1763
+
1764
+ # FeaturePolicy defines features allowed to be used on RBE instances, as well as
1765
+ # instance-wide behavior changes that take effect without opt-in or opt-out at
1766
+ # usage time.
1767
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicy
1768
+ include Google::Apis::Core::Hashable
1769
+
1770
+ # Defines whether a feature can be used or what values are accepted.
1771
+ # Corresponds to the JSON property `containerImageSources`
1772
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1773
+ attr_accessor :container_image_sources
1774
+
1775
+ # Defines whether a feature can be used or what values are accepted.
1776
+ # Corresponds to the JSON property `dockerAddCapabilities`
1777
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1778
+ attr_accessor :docker_add_capabilities
1779
+
1780
+ # Defines whether a feature can be used or what values are accepted.
1781
+ # Corresponds to the JSON property `dockerChrootPath`
1782
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1783
+ attr_accessor :docker_chroot_path
1784
+
1785
+ # Defines whether a feature can be used or what values are accepted.
1786
+ # Corresponds to the JSON property `dockerNetwork`
1787
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1788
+ attr_accessor :docker_network
1789
+
1790
+ # Defines whether a feature can be used or what values are accepted.
1791
+ # Corresponds to the JSON property `dockerPrivileged`
1792
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1793
+ attr_accessor :docker_privileged
1794
+
1795
+ # Defines whether a feature can be used or what values are accepted.
1796
+ # Corresponds to the JSON property `dockerRunAsRoot`
1797
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1798
+ attr_accessor :docker_run_as_root
1799
+
1800
+ # Defines whether a feature can be used or what values are accepted.
1801
+ # Corresponds to the JSON property `dockerRuntime`
1802
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1803
+ attr_accessor :docker_runtime
1804
+
1805
+ # Defines whether a feature can be used or what values are accepted.
1806
+ # Corresponds to the JSON property `dockerSiblingContainers`
1807
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature]
1808
+ attr_accessor :docker_sibling_containers
1809
+
1810
+ # linux_isolation allows overriding the docker runtime used for containers
1811
+ # started on Linux.
1812
+ # Corresponds to the JSON property `linuxIsolation`
1813
+ # @return [String]
1814
+ attr_accessor :linux_isolation
1815
+
1816
+ def initialize(**args)
1817
+ update!(**args)
1818
+ end
1819
+
1820
+ # Update properties of this object
1821
+ def update!(**args)
1822
+ @container_image_sources = args[:container_image_sources] if args.key?(:container_image_sources)
1823
+ @docker_add_capabilities = args[:docker_add_capabilities] if args.key?(:docker_add_capabilities)
1824
+ @docker_chroot_path = args[:docker_chroot_path] if args.key?(:docker_chroot_path)
1825
+ @docker_network = args[:docker_network] if args.key?(:docker_network)
1826
+ @docker_privileged = args[:docker_privileged] if args.key?(:docker_privileged)
1827
+ @docker_run_as_root = args[:docker_run_as_root] if args.key?(:docker_run_as_root)
1828
+ @docker_runtime = args[:docker_runtime] if args.key?(:docker_runtime)
1829
+ @docker_sibling_containers = args[:docker_sibling_containers] if args.key?(:docker_sibling_containers)
1830
+ @linux_isolation = args[:linux_isolation] if args.key?(:linux_isolation)
1831
+ end
1832
+ end
1833
+
1834
+ # Defines whether a feature can be used or what values are accepted.
1835
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature
1836
+ include Google::Apis::Core::Hashable
1837
+
1838
+ # A list of acceptable values. Only effective when the policy is `RESTRICTED`.
1839
+ # Corresponds to the JSON property `allowedValues`
1840
+ # @return [Array<String>]
1841
+ attr_accessor :allowed_values
1842
+
1843
+ # The policy of the feature.
1844
+ # Corresponds to the JSON property `policy`
1845
+ # @return [String]
1846
+ attr_accessor :policy
1847
+
1848
+ def initialize(**args)
1849
+ update!(**args)
1850
+ end
1851
+
1852
+ # Update properties of this object
1853
+ def update!(**args)
1854
+ @allowed_values = args[:allowed_values] if args.key?(:allowed_values)
1855
+ @policy = args[:policy] if args.key?(:policy)
1856
+ end
1857
+ end
1858
+
1859
+ # The request used for `GetInstance`.
1860
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaGetInstanceRequest
1861
+ include Google::Apis::Core::Hashable
1862
+
1863
+ # Name of the instance to retrieve. Format: `projects/[PROJECT_ID]/instances/[
1864
+ # INSTANCE_ID]`.
1865
+ # Corresponds to the JSON property `name`
1866
+ # @return [String]
1867
+ attr_accessor :name
1868
+
1869
+ def initialize(**args)
1870
+ update!(**args)
1871
+ end
1872
+
1873
+ # Update properties of this object
1874
+ def update!(**args)
1875
+ @name = args[:name] if args.key?(:name)
1876
+ end
1877
+ end
1878
+
1879
+ # The request used for GetWorkerPool.
1880
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaGetWorkerPoolRequest
1881
+ include Google::Apis::Core::Hashable
1882
+
1883
+ # Name of the worker pool to retrieve. Format: `projects/[PROJECT_ID]/instances/[
1884
+ # INSTANCE_ID]/workerpools/[POOL_ID]`.
1885
+ # Corresponds to the JSON property `name`
1886
+ # @return [String]
1887
+ attr_accessor :name
1888
+
1889
+ def initialize(**args)
1890
+ update!(**args)
1891
+ end
1892
+
1893
+ # Update properties of this object
1894
+ def update!(**args)
1895
+ @name = args[:name] if args.key?(:name)
1896
+ end
1897
+ end
1898
+
1899
+ # Instance conceptually encapsulates all Remote Build Execution resources for
1900
+ # remote builds. An instance consists of storage and compute resources (for
1901
+ # example, `ContentAddressableStorage`, `ActionCache`, `WorkerPools`) used for
1902
+ # running remote builds. All Remote Build Execution API calls are scoped to an
1903
+ # instance.
1904
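+ #
+ # Illustrative sketch: an instance body as it might be supplied to an admin
+ # call. `name`, `state`, and `loggingEnabled` are output only, so only
+ # `location` is populated here, using the one region documented as supported
+ # below.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   instance = models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance.new(
+ #     location: "us-central1"
+ #   )
+ #   instance.location # => "us-central1"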
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance
1905
+ include Google::Apis::Core::Hashable
1906
+
1907
+ # FeaturePolicy defines features allowed to be used on RBE instances, as well as
1908
+ # instance-wide behavior changes that take effect without opt-in or opt-out at
1909
+ # usage time.
1910
+ # Corresponds to the JSON property `featurePolicy`
1911
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicy]
1912
+ attr_accessor :feature_policy
1913
+
1914
+ # The location is a GCP region. Currently only `us-central1` is supported.
1915
+ # Corresponds to the JSON property `location`
1916
+ # @return [String]
1917
+ attr_accessor :location
1918
+
1919
+ # Output only. Whether Stackdriver logging is enabled for the instance.
1920
+ # Corresponds to the JSON property `loggingEnabled`
1921
+ # @return [Boolean]
1922
+ attr_accessor :logging_enabled
1923
+ alias_method :logging_enabled?, :logging_enabled
1924
+
1925
+ # Output only. Instance resource name formatted as: `projects/[PROJECT_ID]/
1926
+ # instances/[INSTANCE_ID]`. Name should not be populated when creating an
1927
+ # instance since it is provided in the `instance_id` field.
1928
+ # Corresponds to the JSON property `name`
1929
+ # @return [String]
1930
+ attr_accessor :name
1931
+
1932
+ # Output only. State of the instance.
1933
+ # Corresponds to the JSON property `state`
1934
+ # @return [String]
1935
+ attr_accessor :state
1936
+
1937
+ def initialize(**args)
1938
+ update!(**args)
1939
+ end
1940
+
1941
+ # Update properties of this object
1942
+ def update!(**args)
1943
+ @feature_policy = args[:feature_policy] if args.key?(:feature_policy)
1944
+ @location = args[:location] if args.key?(:location)
1945
+ @logging_enabled = args[:logging_enabled] if args.key?(:logging_enabled)
1946
+ @name = args[:name] if args.key?(:name)
1947
+ @state = args[:state] if args.key?(:state)
1948
+ end
1949
+ end
1950
+
1951
+ #
1952
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListInstancesRequest
1953
+ include Google::Apis::Core::Hashable
1954
+
1955
+ # Resource name of the project. Format: `projects/[PROJECT_ID]`.
1956
+ # Corresponds to the JSON property `parent`
1957
+ # @return [String]
1958
+ attr_accessor :parent
1959
+
1960
+ def initialize(**args)
1961
+ update!(**args)
1962
+ end
1963
+
1964
+ # Update properties of this object
1965
+ def update!(**args)
1966
+ @parent = args[:parent] if args.key?(:parent)
1967
+ end
1968
+ end
1969
+
1970
+ #
1971
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListInstancesResponse
1972
+ include Google::Apis::Core::Hashable
1973
+
1974
+ # The list of instances in a given project.
1975
+ # Corresponds to the JSON property `instances`
1976
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance>]
1977
+ attr_accessor :instances
1978
+
1979
+ def initialize(**args)
1980
+ update!(**args)
1981
+ end
1982
+
1983
+ # Update properties of this object
1984
+ def update!(**args)
1985
+ @instances = args[:instances] if args.key?(:instances)
1986
+ end
1987
+ end
1988
+
1989
+ #
1990
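+ #
+ # Illustrative sketch: a `ListWorkerPools` request whose filter expression is
+ # taken from the examples documented on the `filter` field below (pools with
+ # more than 100 reserved workers). The parent path uses placeholder IDs.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   request = models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaListWorkerPoolsRequest.new(
+ #     parent: "projects/my-project/instances/my-instance",
+ #     filter: "(worker_count > 100) (worker_config.reserved = true)"
+ #   )
+ #   request.filter # => "(worker_count > 100) (worker_config.reserved = true)"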
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListWorkerPoolsRequest
1991
+ include Google::Apis::Core::Hashable
1992
+
1993
+ # Optional. A filter expression that filters resources listed in the response.
1994
+ # The expression must specify the field name, a comparison operator, and the
1995
+ # value that you want to use for filtering. The value must be a string, a number,
1996
+ # or a boolean. String values are case-insensitive. The comparison operator
1997
+ # must be either `:`, `=`, `!=`, `>`, `>=`, `<=` or `<`. The `:` operator can be
1998
+ # used with string fields to match substrings. For non-string fields it is
1999
+ # equivalent to the `=` operator. The `:*` comparison can be used to test
2000
+ # whether a key has been defined. You can also filter on nested fields. To
2001
+ # filter on multiple expressions, you can separate expressions using `AND` and `
2002
+ # OR` operators, using parentheses to specify precedence. If neither operator is
2003
+ # specified, `AND` is assumed. Examples: Include only pools with more than 100
2004
+ # reserved workers: `(worker_count > 100) (worker_config.reserved = true)`
2005
+ # Include only pools with a certain label or machines of the n1-standard family:
2006
+ # `worker_config.labels.key1 : * OR worker_config.machine_type: n1-standard`
2007
+ # Corresponds to the JSON property `filter`
2008
+ # @return [String]
2009
+ attr_accessor :filter
2010
+
2011
+ # Resource name of the instance. Format: `projects/[PROJECT_ID]/instances/[
2012
+ # INSTANCE_ID]`.
2013
+ # Corresponds to the JSON property `parent`
2014
+ # @return [String]
2015
+ attr_accessor :parent
2016
+
2017
+ def initialize(**args)
2018
+ update!(**args)
2019
+ end
2020
+
2021
+ # Update properties of this object
2022
+ def update!(**args)
2023
+ @filter = args[:filter] if args.key?(:filter)
2024
+ @parent = args[:parent] if args.key?(:parent)
2025
+ end
2026
+ end
2027
+
2028
+ #
2029
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaListWorkerPoolsResponse
2030
+ include Google::Apis::Core::Hashable
2031
+
2032
+ # The list of worker pools in a given instance.
2033
+ # Corresponds to the JSON property `workerPools`
2034
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool>]
2035
+ attr_accessor :worker_pools
2036
+
2037
+ def initialize(**args)
2038
+ update!(**args)
2039
+ end
2040
+
2041
+ # Update properties of this object
2042
+ def update!(**args)
2043
+ @worker_pools = args[:worker_pools] if args.key?(:worker_pools)
2044
+ end
2045
+ end
2046
+
2047
+ # The request used for `UpdateInstance`.
2048
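+ #
+ # Illustrative sketch: updating an instance's feature policy. The instance name
+ # uses placeholder IDs, and the `"feature_policy"` mask path is an assumption
+ # based on the `updateMask` description below (paths are relative to
+ # `instance`).
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   request = models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaUpdateInstanceRequest.new(
+ #     instance: models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance.new(
+ #       name: "projects/my-project/instances/my-instance",
+ #       feature_policy: models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicy.new(
+ #         container_image_sources: models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaFeaturePolicyFeature.new(
+ #           policy: "RESTRICTED", allowed_values: ["gcr.io/my-project"]
+ #         )
+ #       )
+ #     ),
+ #     update_mask: "feature_policy"
+ #   )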
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaUpdateInstanceRequest
2049
+ include Google::Apis::Core::Hashable
2050
+
2051
+ # Instance conceptually encapsulates all Remote Build Execution resources for
2052
+ # remote builds. An instance consists of storage and compute resources (for
2053
+ # example, `ContentAddressableStorage`, `ActionCache`, `WorkerPools`) used for
2054
+ # running remote builds. All Remote Build Execution API calls are scoped to an
2055
+ # instance.
2056
+ # Corresponds to the JSON property `instance`
2057
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaInstance]
2058
+ attr_accessor :instance
2059
+
2060
+ # Deprecated, use instance.logging_enabled instead. Whether to enable
2061
+ # Stackdriver logging for this instance.
2062
+ # Corresponds to the JSON property `loggingEnabled`
2063
+ # @return [Boolean]
2064
+ attr_accessor :logging_enabled
2065
+ alias_method :logging_enabled?, :logging_enabled
2066
+
2067
+ # Deprecated, use instance.Name instead. Name of the instance to update. Format:
2068
+ # `projects/[PROJECT_ID]/instances/[INSTANCE_ID]`.
2069
+ # Corresponds to the JSON property `name`
2070
+ # @return [String]
2071
+ attr_accessor :name
2072
+
2073
+ # The update mask applies to instance. For the `FieldMask` definition, see https:
2074
+ # //developers.google.com/protocol-buffers/docs/reference/google.protobuf#
2075
+ # fieldmask If an empty update_mask is provided, only the non-default valued
2076
+ # field in the worker pool field will be updated. Note that in order to update a
2077
+ # field to the default value (zero, false, empty string) an explicit update_mask
2078
+ # must be provided.
2079
+ # Corresponds to the JSON property `updateMask`
2080
+ # @return [String]
2081
+ attr_accessor :update_mask
2082
+
2083
+ def initialize(**args)
2084
+ update!(**args)
2085
+ end
2086
+
2087
+ # Update properties of this object
2088
+ def update!(**args)
2089
+ @instance = args[:instance] if args.key?(:instance)
2090
+ @logging_enabled = args[:logging_enabled] if args.key?(:logging_enabled)
2091
+ @name = args[:name] if args.key?(:name)
2092
+ @update_mask = args[:update_mask] if args.key?(:update_mask)
2093
+ end
2094
+ end
2095
+
2096
+ # The request used for UpdateWorkerPool.
2097
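+ #
+ # Illustrative sketch: resizing an existing pool by sending only `workerCount`
+ # together with an explicit update mask, as the `updateMask` description below
+ # requires for targeted updates. Resource IDs are placeholders and the
+ # `"worker_count"` mask path is an assumption.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   request = models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaUpdateWorkerPoolRequest.new(
+ #     worker_pool: models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool.new(
+ #       name: "projects/my-project/instances/my-instance/workerpools/linux-pool-1",
+ #       worker_count: 10
+ #     ),
+ #     update_mask: "worker_count"
+ #   )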
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaUpdateWorkerPoolRequest
2098
+ include Google::Apis::Core::Hashable
2099
+
2100
+ # The update mask applies to worker_pool. For the `FieldMask` definition, see
2101
+ # https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#
2102
+ # fieldmask If an empty update_mask is provided, only the non-default valued
2103
+ # field in the worker pool field will be updated. Note that in order to update a
2104
+ # field to the default value (zero, false, empty string) an explicit update_mask
2105
+ # must be provided.
2106
+ # Corresponds to the JSON property `updateMask`
2107
+ # @return [String]
2108
+ attr_accessor :update_mask
2109
+
2110
+ # A worker pool resource in the Remote Build Execution API.
2111
+ # Corresponds to the JSON property `workerPool`
2112
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool]
2113
+ attr_accessor :worker_pool
2114
+
2115
+ def initialize(**args)
2116
+ update!(**args)
2117
+ end
2118
+
2119
+ # Update properties of this object
2120
+ def update!(**args)
2121
+ @update_mask = args[:update_mask] if args.key?(:update_mask)
2122
+ @worker_pool = args[:worker_pool] if args.key?(:worker_pool)
2123
+ end
2124
+ end
2125
+
2126
+ # Defines the configuration to be used for creating workers in the worker pool.
2127
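+ #
+ # Illustrative sketch: a worker configuration populating the fields marked
+ # Required below. The machine type, disk size, and label values are placeholder
+ # choices that respect the documented constraints (no `f1-micro`/`g1-small`,
+ # `pd-standard` or `pd-ssd` disks, lowercase label keys).
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   config = models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerConfig.new(
+ #     machine_type: "n1-standard-2",
+ #     disk_type: "pd-ssd",
+ #     disk_size_gb: 200,
+ #     reserved: true,
+ #     labels: { "team" => "build-infra" }
+ #   )
+ #   config.reserved? # => true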
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerConfig
2128
+ include Google::Apis::Core::Hashable
2129
+
2130
+ # AcceleratorConfig defines the accelerator cards to attach to the VM.
2131
+ # Corresponds to the JSON property `accelerator`
2132
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaAcceleratorConfig]
2133
+ attr_accessor :accelerator
2134
+
2135
+ # Required. Size of the disk attached to the worker, in GB. See https://cloud.
2136
+ # google.com/compute/docs/disks/
2137
+ # Corresponds to the JSON property `diskSizeGb`
2138
+ # @return [Fixnum]
2139
+ attr_accessor :disk_size_gb
2140
+
2141
+ # Required. Disk Type to use for the worker. See [Storage options](https://cloud.
2142
+ # google.com/compute/docs/disks/#introduction). Currently only `pd-standard` and
2143
+ # `pd-ssd` are supported.
2144
+ # Corresponds to the JSON property `diskType`
2145
+ # @return [String]
2146
+ attr_accessor :disk_type
2147
+
2148
+ # Labels associated with the workers. Label keys and values can be no longer
2149
+ # than 63 characters, can only contain lowercase letters, numeric characters,
2150
+ # underscores and dashes. International letters are permitted. Label keys must
2151
+ # start with a letter. Label values are optional. There can not be more than 64
2152
+ # labels per resource.
2153
+ # Corresponds to the JSON property `labels`
2154
+ # @return [Hash<String,String>]
2155
+ attr_accessor :labels
2156
+
2157
+ # Required. Machine type of the worker, such as `n1-standard-2`. See https://
2158
+ # cloud.google.com/compute/docs/machine-types for a list of supported machine
2159
+ # types. Note that `f1-micro` and `g1-small` are not yet supported.
2160
+ # Corresponds to the JSON property `machineType`
2161
+ # @return [String]
2162
+ attr_accessor :machine_type
2163
+
2164
+ # The maximum number of actions a worker can execute concurrently.
2165
+ # Corresponds to the JSON property `maxConcurrentActions`
2166
+ # @return [Fixnum]
2167
+ attr_accessor :max_concurrent_actions
2168
+
2169
+ # Minimum CPU platform to use when creating the worker. See [CPU Platforms](
2170
+ # https://cloud.google.com/compute/docs/cpu-platforms).
2171
+ # Corresponds to the JSON property `minCpuPlatform`
2172
+ # @return [String]
2173
+ attr_accessor :min_cpu_platform
2174
+
2175
+ # Determines the type of network access granted to workers. Possible values: - "
2176
+ # public": Workers can connect to the public internet. - "private": Workers can
2177
+ # only connect to Google APIs and services. - "restricted-private": Workers can
2178
+ # only connect to Google APIs that are reachable through `restricted.googleapis.
2179
+ # com` (`199.36.153.4/30`).
2180
+ # Corresponds to the JSON property `networkAccess`
2181
+ # @return [String]
2182
+ attr_accessor :network_access
2183
+
2184
+ # Determines whether the worker is reserved (equivalent to a Compute Engine on-
2185
+ # demand VM and therefore won't be preempted). See [Preemptible VMs](https://
2186
+ # cloud.google.com/preemptible-vms/) for more details.
2187
+ # Corresponds to the JSON property `reserved`
2188
+ # @return [Boolean]
2189
+ attr_accessor :reserved
2190
+ alias_method :reserved?, :reserved
2191
+
2192
+ # The node type name to be used for sole-tenant nodes.
2193
+ # Corresponds to the JSON property `soleTenantNodeType`
2194
+ # @return [String]
2195
+ attr_accessor :sole_tenant_node_type
2196
+
2197
+ # The name of the image used by each VM.
2198
+ # Corresponds to the JSON property `vmImage`
2199
+ # @return [String]
2200
+ attr_accessor :vm_image
2201
+
2202
+ def initialize(**args)
2203
+ update!(**args)
2204
+ end
2205
+
2206
+ # Update properties of this object
2207
+ def update!(**args)
2208
+ @accelerator = args[:accelerator] if args.key?(:accelerator)
2209
+ @disk_size_gb = args[:disk_size_gb] if args.key?(:disk_size_gb)
2210
+ @disk_type = args[:disk_type] if args.key?(:disk_type)
2211
+ @labels = args[:labels] if args.key?(:labels)
2212
+ @machine_type = args[:machine_type] if args.key?(:machine_type)
2213
+ @max_concurrent_actions = args[:max_concurrent_actions] if args.key?(:max_concurrent_actions)
2214
+ @min_cpu_platform = args[:min_cpu_platform] if args.key?(:min_cpu_platform)
2215
+ @network_access = args[:network_access] if args.key?(:network_access)
2216
+ @reserved = args[:reserved] if args.key?(:reserved)
2217
+ @sole_tenant_node_type = args[:sole_tenant_node_type] if args.key?(:sole_tenant_node_type)
2218
+ @vm_image = args[:vm_image] if args.key?(:vm_image)
2219
+ end
2220
+ end
2221
+
2222
+ # A worker pool resource in the Remote Build Execution API.
2223
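+ #
+ # Illustrative sketch: a worker pool body combining a worker configuration with
+ # a desired worker count (0-15000 per the `workerCount` docs). `name` and
+ # `state` are provided by the server, so they are left unset here.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   pool = models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool.new(
+ #     worker_count: 4,
+ #     worker_config: models::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerConfig.new(
+ #       machine_type: "n1-standard-2", disk_type: "pd-standard", disk_size_gb: 100
+ #     )
+ #   )
+ #   pool.worker_count # => 4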
+ class GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerPool
2224
+ include Google::Apis::Core::Hashable
2225
+
2226
+ # Autoscale defines the autoscaling policy of a worker pool.
2227
+ # Corresponds to the JSON property `autoscale`
2228
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaAutoscale]
2229
+ attr_accessor :autoscale
2230
+
2231
+ # Channel specifies the release channel of the pool.
2232
+ # Corresponds to the JSON property `channel`
2233
+ # @return [String]
2234
+ attr_accessor :channel
2235
+
2236
+ # WorkerPool resource name formatted as: `projects/[PROJECT_ID]/instances/[
2237
+ # INSTANCE_ID]/workerpools/[POOL_ID]`. name should not be populated when
2238
+ # creating a worker pool since it is provided in the `poolId` field.
2239
+ # Corresponds to the JSON property `name`
2240
+ # @return [String]
2241
+ attr_accessor :name
2242
+
2243
+ # Output only. State of the worker pool.
2244
+ # Corresponds to the JSON property `state`
2245
+ # @return [String]
2246
+ attr_accessor :state
2247
+
2248
+ # Defines the configuration to be used for creating workers in the worker pool.
2249
+ # Corresponds to the JSON property `workerConfig`
2250
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemotebuildexecutionAdminV1alphaWorkerConfig]
2251
+ attr_accessor :worker_config
2252
+
2253
+ # The desired number of workers in the worker pool. Must be a value between 0
2254
+ # and 15000.
2255
+ # Corresponds to the JSON property `workerCount`
2256
+ # @return [Fixnum]
2257
+ attr_accessor :worker_count
2258
+
2259
+ def initialize(**args)
2260
+ update!(**args)
2261
+ end
2262
+
2263
+ # Update properties of this object
2264
+ def update!(**args)
2265
+ @autoscale = args[:autoscale] if args.key?(:autoscale)
2266
+ @channel = args[:channel] if args.key?(:channel)
2267
+ @name = args[:name] if args.key?(:name)
2268
+ @state = args[:state] if args.key?(:state)
2269
+ @worker_config = args[:worker_config] if args.key?(:worker_config)
2270
+ @worker_count = args[:worker_count] if args.key?(:worker_count)
2271
+ end
2272
+ end
2273
+
2274
+ # AdminTemp is a preliminary set of administration tasks. It's called "Temp"
2275
+ # because we do not yet know the best way to represent admin tasks; it's
2276
+ # possible that this will be entirely replaced in later versions of this API. If
2277
+ # this message proves to be sufficient, it will be renamed in the alpha or beta
2278
+ # release of this API. This message (suitably marshalled into a protobuf.Any)
2279
+ # can be used as the inline_assignment field in a lease; the lease assignment
2280
+ # field should simply be `"admin"` in these cases. This message is heavily based
2281
+ # on Swarming administration tasks from the LUCI project (http://github.com/luci/
2282
+ # luci-py/appengine/swarming).
2283
+ class GoogleDevtoolsRemoteworkersV1test2AdminTemp
2284
+ include Google::Apis::Core::Hashable
2285
+
2286
+ # The argument to the admin action; see `Command` for semantics.
2287
+ # Corresponds to the JSON property `arg`
2288
+ # @return [String]
2289
+ attr_accessor :arg
2290
+
2291
+ # The admin action; see `Command` for legal values.
2292
+ # Corresponds to the JSON property `command`
2293
+ # @return [String]
2294
+ attr_accessor :command
2295
+
2296
+ def initialize(**args)
2297
+ update!(**args)
2298
+ end
2299
+
2300
+ # Update properties of this object
2301
+ def update!(**args)
2302
+ @arg = args[:arg] if args.key?(:arg)
2303
+ @command = args[:command] if args.key?(:command)
2304
+ end
2305
+ end
2306
+
2307
+ # Describes a blob of binary content with its digest.
2308
+ class GoogleDevtoolsRemoteworkersV1test2Blob
2309
+ include Google::Apis::Core::Hashable
2310
+
2311
+ # The contents of the blob.
2312
+ # Corresponds to the JSON property `contents`
2313
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
2314
+ # @return [String]
2315
+ attr_accessor :contents
2316
+
2317
+ # The CommandTask and CommandResult messages assume the existence of a service
2318
+ # that can serve blobs of content, identified by a hash and size known as a "
2319
+ # digest." The method by which these blobs may be retrieved is not specified
2320
+ # here, but a model implementation is in the Remote Execution API's "
2321
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
2322
+ # will virtually always refer to the contents of a file or a directory. The
2323
+ # latter is represented by the byte-encoded Directory message.
2324
+ # Corresponds to the JSON property `digest`
2325
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2Digest]
2326
+ attr_accessor :digest
2327
+
2328
+ def initialize(**args)
2329
+ update!(**args)
2330
+ end
2331
+
2332
+ # Update properties of this object
2333
+ def update!(**args)
2334
+ @contents = args[:contents] if args.key?(:contents)
2335
+ @digest = args[:digest] if args.key?(:digest)
2336
+ end
2337
+ end
2338
+
2339
+ # DEPRECATED - use CommandResult instead. Describes the actual outputs from the
2340
+ # task.
2341
+ class GoogleDevtoolsRemoteworkersV1test2CommandOutputs
2342
+ include Google::Apis::Core::Hashable
2343
+
2344
+ # exit_code is only fully reliable if the status' code is OK. If the task
2345
+ # exceeded its deadline or was cancelled, the process may still produce an exit
2346
+ # code as it is cancelled, and this will be populated, but a successful (zero)
2347
+ # is unlikely to be correct unless the status code is OK.
2348
+ # Corresponds to the JSON property `exitCode`
2349
+ # @return [Fixnum]
2350
+ attr_accessor :exit_code
2351
+
2352
+ # The CommandTask and CommandResult messages assume the existence of a service
2353
+ # that can serve blobs of content, identified by a hash and size known as a "
2354
+ # digest." The method by which these blobs may be retrieved is not specified
2355
+ # here, but a model implementation is in the Remote Execution API's "
2356
+ # ContentAddressibleStorage" interface. In the context of the RWAPI, a Digest
2357
+ # will virtually always refer to the contents of a file or a directory. The
2358
+ # latter is represented by the byte-encoded Directory message.
2359
+ # Corresponds to the JSON property `outputs`
2360
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2Digest]
2361
+ attr_accessor :outputs
2362
+
2363
+ def initialize(**args)
2364
+ update!(**args)
2365
+ end
2366
+
2367
+ # Update properties of this object
2368
+ def update!(**args)
2369
+ @exit_code = args[:exit_code] if args.key?(:exit_code)
2370
+ @outputs = args[:outputs] if args.key?(:outputs)
2371
+ end
2372
+ end
2373
+
2374
+ # DEPRECATED - use CommandResult instead. Can be used as part of CompleteRequest.
2375
+ # metadata, or are part of a more sophisticated message.
2376
+ class GoogleDevtoolsRemoteworkersV1test2CommandOverhead
2377
+ include Google::Apis::Core::Hashable
2378
+
2379
+ # The elapsed time between calling Accept and Complete. The server will also
2380
+ # have its own idea of what this should be, but this excludes the overhead of
2381
+ # the RPCs and the bot response time.
2382
+ # Corresponds to the JSON property `duration`
2383
+ # @return [String]
2384
+ attr_accessor :duration
2385
+
2386
+ # The amount of time *not* spent executing the command (ie uploading/downloading
2387
+ # files).
2388
+ # Corresponds to the JSON property `overhead`
2389
+ # @return [String]
2390
+ attr_accessor :overhead
2391
+
2392
+ def initialize(**args)
2393
+ update!(**args)
2394
+ end
2395
+
2396
+ # Update properties of this object
2397
+ def update!(**args)
2398
+ @duration = args[:duration] if args.key?(:duration)
2399
+ @overhead = args[:overhead] if args.key?(:overhead)
2400
+ end
2401
+ end
2402
+
2403
+ # All information about the execution of a command, suitable for providing as
2404
+ # the Bots interface's `Lease.result` field.
2405
+ class GoogleDevtoolsRemoteworkersV1test2CommandResult
2406
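+ #
+ # Illustrative sketch: reading a result defensively. Per the `exitCode` notes
+ # below, the exit code is only trustworthy when `status.code` is OK (0), so the
+ # status is checked first. The values here are fabricated for illustration.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   result = models::GoogleDevtoolsRemoteworkersV1test2CommandResult.new(
+ #     status: models::GoogleRpcStatus.new(code: 0),
+ #     exit_code: 0,
+ #     duration: "12.5s"
+ #   )
+ #   succeeded = result.status.code == 0 && result.exit_code == 0
+ #   succeeded # => true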
+ include Google::Apis::Core::Hashable
2407
+
2408
+ # The elapsed time between calling Accept and Complete. The server will also
2409
+ # have its own idea of what this should be, but this excludes the overhead of
2410
+ # the RPCs and the bot response time.
2411
+ # Corresponds to the JSON property `duration`
2412
+ # @return [String]
2413
+ attr_accessor :duration
2414
+
2415
+ # The exit code of the process. An exit code of "0" should only be trusted if `
2416
+ # status` has a code of OK (otherwise it may simply be unset).
2417
+ # Corresponds to the JSON property `exitCode`
2418
+ # @return [Fixnum]
2419
+ attr_accessor :exit_code
2420
+
2421
+ # Implementation-dependent metadata about the task. Both servers and bots may
2422
+ # define messages which can be encoded here; bots are free to provide metadata
2423
+ # in multiple formats, and servers are free to choose one or more of the values
2424
+ # to process and ignore others. In particular, it is *not* considered an error
2425
+ # for the bot to provide the server with a field that it doesn't know about.
2426
+ # Corresponds to the JSON property `metadata`
2427
+ # @return [Array<Hash<String,Object>>]
2428
+ attr_accessor :metadata
2429
+
2430
+ # The CommandTask and CommandResult messages assume the existence of a service
2431
+ # that can serve blobs of content, identified by a hash and size known as a "
2432
+ # digest." The method by which these blobs may be retrieved is not specified
2433
+ # here, but a model implementation is in the Remote Execution API's "
2434
+ # ContentAddressibleStorage" interface. In the context of the RWAPI, a Digest
2435
+ # will virtually always refer to the contents of a file or a directory. The
2436
+ # latter is represented by the byte-encoded Directory message.
2437
+ # Corresponds to the JSON property `outputs`
2438
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2Digest]
2439
+ attr_accessor :outputs
2440
+
2441
+ # The amount of time *not* spent executing the command (ie uploading/downloading
2442
+ # files).
2443
+ # Corresponds to the JSON property `overhead`
2444
+ # @return [String]
2445
+ attr_accessor :overhead
2446
+
2447
+ # The `Status` type defines a logical error model that is suitable for different
2448
+ # programming environments, including REST APIs and RPC APIs. It is used by [
2449
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
2450
+ # data: error code, error message, and error details. You can find out more
2451
+ # about this error model and how to work with it in the [API Design Guide](https:
2452
+ # //cloud.google.com/apis/design/errors).
2453
+ # Corresponds to the JSON property `status`
2454
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleRpcStatus]
2455
+ attr_accessor :status
2456
+
2457
+ def initialize(**args)
2458
+ update!(**args)
2459
+ end
2460
+
2461
+ # Update properties of this object
2462
+ def update!(**args)
2463
+ @duration = args[:duration] if args.key?(:duration)
2464
+ @exit_code = args[:exit_code] if args.key?(:exit_code)
2465
+ @metadata = args[:metadata] if args.key?(:metadata)
2466
+ @outputs = args[:outputs] if args.key?(:outputs)
2467
+ @overhead = args[:overhead] if args.key?(:overhead)
2468
+ @status = args[:status] if args.key?(:status)
2469
+ end
2470
+ end
2471
+
2472
+ # Describes a shell-style task to execute, suitable for providing as the Bots
2473
+ # interface's `Lease.payload` field.
2474
+ class GoogleDevtoolsRemoteworkersV1test2CommandTask
2475
+ include Google::Apis::Core::Hashable
2476
+
2477
+ # Describes the expected outputs of the command.
2478
+ # Corresponds to the JSON property `expectedOutputs`
2479
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2CommandTaskOutputs]
2480
+ attr_accessor :expected_outputs
2481
+
2482
+ # Describes the inputs to a shell-style task.
2483
+ # Corresponds to the JSON property `inputs`
2484
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2CommandTaskInputs]
2485
+ attr_accessor :inputs
2486
+
2487
+ # Describes the timeouts associated with this task.
2488
+ # Corresponds to the JSON property `timeouts`
2489
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2CommandTaskTimeouts]
2490
+ attr_accessor :timeouts
2491
+
2492
+ def initialize(**args)
2493
+ update!(**args)
2494
+ end
2495
+
2496
+ # Update properties of this object
2497
+ def update!(**args)
2498
+ @expected_outputs = args[:expected_outputs] if args.key?(:expected_outputs)
2499
+ @inputs = args[:inputs] if args.key?(:inputs)
2500
+ @timeouts = args[:timeouts] if args.key?(:timeouts)
2501
+ end
2502
+ end
2503
+
2504
+ # Describes the inputs to a shell-style task.
2505
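+ #
+ # Illustrative sketch: the inputs for a simple *nix task. As the `arguments`
+ # docs below note, argv is passed to the operating system as-is (forward
+ # slashes here), while the working directory is relative to the bot's root.
+ # The command and environment values are placeholders.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   inputs = models::GoogleDevtoolsRemoteworkersV1test2CommandTaskInputs.new(
+ #     arguments: ["/bin/echo", "hello"],
+ #     working_directory: "src",
+ #     environment_variables: [
+ #       models::GoogleDevtoolsRemoteworkersV1test2CommandTaskInputsEnvironmentVariable.new(
+ #         name: "CC", value: "clang"
+ #       )
+ #     ]
+ #   )
+ #   inputs.arguments.first # => "/bin/echo"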
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskInputs
2506
+ include Google::Apis::Core::Hashable
2507
+
2508
+ # The command itself to run (e.g., argv). This field should be passed directly
2509
+ # to the underlying operating system, and so it must be sensible to that
2510
+ # operating system. For example, on Windows, the first argument might be "C:\
2511
+ # Windows\System32\ping.exe" - that is, using drive letters and backslashes. A
2512
+ # command for a *nix system, on the other hand, would use forward slashes. All
2513
+ # other fields in the RWAPI must consistently use forward slashes, since those
2514
+ # fields may be interpreted by both the service and the bot.
2515
+ # Corresponds to the JSON property `arguments`
2516
+ # @return [Array<String>]
2517
+ attr_accessor :arguments
2518
+
2519
+ # All environment variables required by the task.
2520
+ # Corresponds to the JSON property `environmentVariables`
2521
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2CommandTaskInputsEnvironmentVariable>]
2522
+ attr_accessor :environment_variables
2523
+
2524
+ # The input filesystem to be set up prior to the task beginning. The contents
2525
+ # should be a repeated set of FileMetadata messages though other formats are
2526
+ # allowed if better for the implementation (eg, a LUCI-style .isolated file).
2527
+ # This field is repeated since implementations might want to cache the metadata,
2528
+ # in which case it may be useful to break up portions of the filesystem that
2529
+ # change frequently (eg, specific input files) from those that don't (eg,
2530
+ # standard header files).
2531
+ # Corresponds to the JSON property `files`
2532
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2Digest>]
2533
+ attr_accessor :files
2534
+
2535
+ # Inline contents for blobs expected to be needed by the bot to execute the task.
2536
+ # For example, contents of entries in `files` or blobs that are indirectly
2537
+ # referenced by an entry there. The bot should check against this list before
2538
+ # downloading required task inputs to reduce the number of communications
2539
+ # between itself and the remote CAS server.
2540
+ # Corresponds to the JSON property `inlineBlobs`
2541
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2Blob>]
2542
+ attr_accessor :inline_blobs
2543
+
2544
+ # Directory from which a command is executed. It is a relative directory with
2545
+ # respect to the bot's working directory (i.e., "./"). If it is non-empty, then
2546
+ # it must exist under "./". Otherwise, "./" will be used.
2547
+ # Corresponds to the JSON property `workingDirectory`
2548
+ # @return [String]
2549
+ attr_accessor :working_directory
2550
+
2551
+ def initialize(**args)
2552
+ update!(**args)
2553
+ end
2554
+
2555
+ # Update properties of this object
2556
+ def update!(**args)
2557
+ @arguments = args[:arguments] if args.key?(:arguments)
2558
+ @environment_variables = args[:environment_variables] if args.key?(:environment_variables)
2559
+ @files = args[:files] if args.key?(:files)
2560
+ @inline_blobs = args[:inline_blobs] if args.key?(:inline_blobs)
2561
+ @working_directory = args[:working_directory] if args.key?(:working_directory)
2562
+ end
2563
+ end
2564
+
2565
+ # An environment variable required by this task.
2566
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskInputsEnvironmentVariable
2567
+ include Google::Apis::Core::Hashable
2568
+
2569
+ # The envvar name.
2570
+ # Corresponds to the JSON property `name`
2571
+ # @return [String]
2572
+ attr_accessor :name
2573
+
2574
+ # The envvar value.
2575
+ # Corresponds to the JSON property `value`
2576
+ # @return [String]
2577
+ attr_accessor :value
2578
+
2579
+ def initialize(**args)
2580
+ update!(**args)
2581
+ end
2582
+
2583
+ # Update properties of this object
2584
+ def update!(**args)
2585
+ @name = args[:name] if args.key?(:name)
2586
+ @value = args[:value] if args.key?(:value)
2587
+ end
2588
+ end
2589
+
2590
+ # Describes the expected outputs of the command.
2591
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskOutputs
2592
+ include Google::Apis::Core::Hashable
2593
+
2594
+ # A list of expected directories, relative to the execution root. All paths MUST
2595
+ # be delimited by forward slashes.
2596
+ # Corresponds to the JSON property `directories`
2597
+ # @return [Array<String>]
2598
+ attr_accessor :directories
2599
+
2600
+ # A list of expected files, relative to the execution root. All paths MUST be
2601
+ # delimited by forward slashes.
2602
+ # Corresponds to the JSON property `files`
2603
+ # @return [Array<String>]
2604
+ attr_accessor :files
2605
+
2606
+ # The destination to which any stderr should be sent. The method by which the
2607
+ # bot should send the stream contents to that destination is not defined in this
2608
+ # API. As examples, the destination could be a file referenced in the `files`
2609
+ # field in this message, or it could be a URI that must be written via the
2610
+ # ByteStream API.
2611
+ # Corresponds to the JSON property `stderrDestination`
2612
+ # @return [String]
2613
+ attr_accessor :stderr_destination
2614
+
2615
+ # The destination to which any stdout should be sent. The method by which the
2616
+ # bot should send the stream contents to that destination is not defined in this
2617
+ # API. As examples, the destination could be a file referenced in the `files`
2618
+ # field in this message, or it could be a URI that must be written via the
2619
+ # ByteStream API.
2620
+ # Corresponds to the JSON property `stdoutDestination`
2621
+ # @return [String]
2622
+ attr_accessor :stdout_destination
2623
+
2624
+ def initialize(**args)
2625
+ update!(**args)
2626
+ end
2627
+
2628
+ # Update properties of this object
2629
+ def update!(**args)
2630
+ @directories = args[:directories] if args.key?(:directories)
2631
+ @files = args[:files] if args.key?(:files)
2632
+ @stderr_destination = args[:stderr_destination] if args.key?(:stderr_destination)
2633
+ @stdout_destination = args[:stdout_destination] if args.key?(:stdout_destination)
2634
+ end
2635
+ end
2636
+
2637
+ # Describes the timeouts associated with this task.
2638
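+ #
+ # Illustrative sketch: a timeout triple. The values are placeholders, and the
+ # string form ("3600s") assumes the JSON encoding used for protobuf Duration
+ # fields, which is how these properties are typed below.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   timeouts = models::GoogleDevtoolsRemoteworkersV1test2CommandTaskTimeouts.new(
+ #     execution: "3600s",  # hard cap on run time
+ #     idle: "300s",        # no stdout/stderr for 5 minutes aborts the task
+ #     shutdown: "60s"      # grace period before a hard kill
+ #   )
+ #   timeouts.idle # => "300s"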
+ class GoogleDevtoolsRemoteworkersV1test2CommandTaskTimeouts
2639
+ include Google::Apis::Core::Hashable
2640
+
2641
+ # This specifies the maximum time that the task can run, excluding the time
2642
+ # required to download inputs or upload outputs. That is, the worker will
2643
+ # terminate the task if it runs longer than this.
2644
+ # Corresponds to the JSON property `execution`
2645
+ # @return [String]
2646
+ attr_accessor :execution
2647
+
2648
+ # This specifies the maximum amount of time the task can be idle - that is, go
2649
+ # without generating some output in either stdout or stderr. If the process is
2650
+ # silent for more than the specified time, the worker will terminate the task.
2651
+ # Corresponds to the JSON property `idle`
2652
+ # @return [String]
2653
+ attr_accessor :idle
2654
+
2655
+ # If the execution or IO timeouts are exceeded, the worker will try to
2656
+ # gracefully terminate the task and return any existing logs. However, tasks may
2657
+ # be hard-frozen in which case this process will fail. This timeout specifies
2658
+ # how long to wait for a terminated task to shut down gracefully (e.g. via
2659
+ # SIGTERM) before we bring down the hammer (e.g. SIGKILL on *nix,
2660
+ # CTRL_BREAK_EVENT on Windows).
2661
+ # Corresponds to the JSON property `shutdown`
2662
+ # @return [String]
2663
+ attr_accessor :shutdown
2664
+
2665
+ def initialize(**args)
2666
+ update!(**args)
2667
+ end
2668
+
2669
+ # Update properties of this object
2670
+ def update!(**args)
2671
+ @execution = args[:execution] if args.key?(:execution)
2672
+ @idle = args[:idle] if args.key?(:idle)
2673
+ @shutdown = args[:shutdown] if args.key?(:shutdown)
2674
+ end
2675
+ end
2676
+
2677
+ # The CommandTask and CommandResult messages assume the existence of a service
2678
+ # that can serve blobs of content, identified by a hash and size known as a "
2679
+ # digest." The method by which these blobs may be retrieved is not specified
2680
+ # here, but a model implementation is in the Remote Execution API's "
2681
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
2682
+ # will virtually always refer to the contents of a file or a directory. The
2683
+ # latter is represented by the byte-encoded Directory message.
2684
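+ #
+ # Illustrative sketch: deriving a digest for a small blob. SHA-256 is used here
+ # only as an example of the "implementation-defined hash algorithm" mentioned
+ # on the `hash` field; note that the Ruby attribute is `hash_prop` even though
+ # the JSON property is `hash`.
+ #
+ # @example
+ #   require "digest"
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   contents = "hello world\n"
+ #   digest = models::GoogleDevtoolsRemoteworkersV1test2Digest.new(
+ #     hash_prop: Digest::SHA256.hexdigest(contents),
+ #     size_bytes: contents.bytesize
+ #   )
+ #   digest.size_bytes # => 12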
+ class GoogleDevtoolsRemoteworkersV1test2Digest
2685
+ include Google::Apis::Core::Hashable
2686
+
2687
+ # A string-encoded hash (eg "1a2b3c", not the byte array [0x1a, 0x2b, 0x3c])
2688
+ # using an implementation-defined hash algorithm (eg SHA-256).
2689
+ # Corresponds to the JSON property `hash`
2690
+ # @return [String]
2691
+ attr_accessor :hash_prop
2692
+
2693
+ # The size of the contents. While this is not strictly required as part of an
2694
+ # identifier (after all, any given hash will have exactly one canonical size),
2695
+ # it's useful in almost all cases when one might want to send or retrieve blobs
2696
+ # of content and is included here for this reason.
2697
+ # Corresponds to the JSON property `sizeBytes`
2698
+ # @return [Fixnum]
2699
+ attr_accessor :size_bytes
2700
+
2701
+ def initialize(**args)
2702
+ update!(**args)
2703
+ end
2704
+
2705
+ # Update properties of this object
2706
+ def update!(**args)
2707
+ @hash_prop = args[:hash_prop] if args.key?(:hash_prop)
2708
+ @size_bytes = args[:size_bytes] if args.key?(:size_bytes)
2709
+ end
2710
+ end
2711
+
2712
+ # The contents of a directory. Similar to the equivalent message in the Remote
2713
+ # Execution API.
2714
+ class GoogleDevtoolsRemoteworkersV1test2Directory
2715
+ include Google::Apis::Core::Hashable
2716
+
2717
+ # Any subdirectories
2718
+ # Corresponds to the JSON property `directories`
2719
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2DirectoryMetadata>]
2720
+ attr_accessor :directories
2721
+
2722
+ # The files in this directory
2723
+ # Corresponds to the JSON property `files`
2724
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2FileMetadata>]
2725
+ attr_accessor :files
2726
+
2727
+ def initialize(**args)
2728
+ update!(**args)
2729
+ end
2730
+
2731
+ # Update properties of this object
2732
+ def update!(**args)
2733
+ @directories = args[:directories] if args.key?(:directories)
2734
+ @files = args[:files] if args.key?(:files)
2735
+ end
2736
+ end
2737
+
2738
+ # The metadata for a directory. Similar to the equivalent message in the Remote
2739
+ # Execution API.
2740
+ class GoogleDevtoolsRemoteworkersV1test2DirectoryMetadata
2741
+ include Google::Apis::Core::Hashable
2742
+
2743
+ # The CommandTask and CommandResult messages assume the existence of a service
2744
+ # that can serve blobs of content, identified by a hash and size known as a "
2745
+ # digest." The method by which these blobs may be retrieved is not specified
2746
+ # here, but a model implementation is in the Remote Execution API's "
2747
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
2748
+ # will virtually always refer to the contents of a file or a directory. The
2749
+ # latter is represented by the byte-encoded Directory message.
2750
+ # Corresponds to the JSON property `digest`
2751
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2Digest]
2752
+ attr_accessor :digest
2753
+
2754
+ # The path of the directory, as in FileMetadata.path.
2755
+ # Corresponds to the JSON property `path`
2756
+ # @return [String]
2757
+ attr_accessor :path
2758
+
2759
+ def initialize(**args)
2760
+ update!(**args)
2761
+ end
2762
+
2763
+ # Update properties of this object
2764
+ def update!(**args)
2765
+ @digest = args[:digest] if args.key?(:digest)
2766
+ @path = args[:path] if args.key?(:path)
2767
+ end
2768
+ end
2769
+
2770
+ # The metadata for a file. Similar to the equivalent message in the Remote
2771
+ # Execution API.
2772
+ class GoogleDevtoolsRemoteworkersV1test2FileMetadata
2773
+ include Google::Apis::Core::Hashable
2774
+
2775
+ # If the file is small enough, its contents may also or alternatively be listed
2776
+ # here.
2777
+ # Corresponds to the JSON property `contents`
2778
+ # NOTE: Values are automatically base64 encoded/decoded in the client library.
2779
+ # @return [String]
2780
+ attr_accessor :contents
2781
+
2782
+ # The CommandTask and CommandResult messages assume the existence of a service
2783
+ # that can serve blobs of content, identified by a hash and size known as a "
2784
+ # digest." The method by which these blobs may be retrieved is not specified
2785
+ # here, but a model implementation is in the Remote Execution API's "
2786
+ # ContentAddressableStorage" interface. In the context of the RWAPI, a Digest
2787
+ # will virtually always refer to the contents of a file or a directory. The
2788
+ # latter is represented by the byte-encoded Directory message.
2789
+ # Corresponds to the JSON property `digest`
2790
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleDevtoolsRemoteworkersV1test2Digest]
2791
+ attr_accessor :digest
2792
+
2793
+ # Properties of the file
2794
+ # Corresponds to the JSON property `isExecutable`
2795
+ # @return [Boolean]
2796
+ attr_accessor :is_executable
2797
+ alias_method :is_executable?, :is_executable
2798
+
2799
+ # The path of this file. If this message is part of the CommandOutputs.outputs
2800
+ # fields, the path is relative to the execution root and must correspond to an
2801
+ # entry in CommandTask.outputs.files. If this message is part of a Directory
2802
+ # message, then the path is relative to the root of that directory. All paths
2803
+ # MUST be delimited by forward slashes.
2804
+ # Corresponds to the JSON property `path`
2805
+ # @return [String]
2806
+ attr_accessor :path
2807
+
2808
+ def initialize(**args)
2809
+ update!(**args)
2810
+ end
2811
+
2812
+ # Update properties of this object
2813
+ def update!(**args)
2814
+ @contents = args[:contents] if args.key?(:contents)
2815
+ @digest = args[:digest] if args.key?(:digest)
2816
+ @is_executable = args[:is_executable] if args.key?(:is_executable)
2817
+ @path = args[:path] if args.key?(:path)
2818
+ end
2819
+ end
2820
+
2821
+ # The request message for Operations.CancelOperation.
2822
+ class GoogleLongrunningCancelOperationRequest
2823
+ include Google::Apis::Core::Hashable
2824
+
2825
+ def initialize(**args)
2826
+ update!(**args)
2827
+ end
2828
+
2829
+ # Update properties of this object
2830
+ def update!(**args)
2831
+ end
2832
+ end
2833
+
2834
+ # The response message for Operations.ListOperations.
2835
+ class GoogleLongrunningListOperationsResponse
2836
+ include Google::Apis::Core::Hashable
2837
+
2838
+ # The standard List next-page token.
2839
+ # Corresponds to the JSON property `nextPageToken`
2840
+ # @return [String]
2841
+ attr_accessor :next_page_token
2842
+
2843
+ # A list of operations that matches the specified filter in the request.
2844
+ # Corresponds to the JSON property `operations`
2845
+ # @return [Array<Google::Apis::RemotebuildexecutionV1::GoogleLongrunningOperation>]
2846
+ attr_accessor :operations
2847
+
2848
+ def initialize(**args)
2849
+ update!(**args)
2850
+ end
2851
+
2852
+ # Update properties of this object
2853
+ def update!(**args)
2854
+ @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
2855
+ @operations = args[:operations] if args.key?(:operations)
2856
+ end
2857
+ end
2858
+
2859
+ # This resource represents a long-running operation that is the result of a
2860
+ # network API call.
2861
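+ #
+ # Illustrative sketch: inspecting a finished operation. Per the field docs
+ # below, once `done` is true either `error` or `response` is populated; the
+ # name and response contents shown are placeholders.
+ #
+ # @example
+ #   require "google/apis/remotebuildexecution_v1"
+ #   models = Google::Apis::RemotebuildexecutionV1
+ #
+ #   op = models::GoogleLongrunningOperation.new(
+ #     name: "operations/example-op",
+ #     done: true,
+ #     response: { "name" => "projects/my-project/instances/my-instance" }
+ #   )
+ #   if op.done?
+ #     op.error ? op.error.message : op.response
+ #   end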
+ class GoogleLongrunningOperation
2862
+ include Google::Apis::Core::Hashable
2863
+
2864
+ # If the value is `false`, it means the operation is still in progress. If `true`
2865
+ # , the operation is completed, and either `error` or `response` is available.
2866
+ # Corresponds to the JSON property `done`
2867
+ # @return [Boolean]
2868
+ attr_accessor :done
2869
+ alias_method :done?, :done
2870
+
2871
+ # The `Status` type defines a logical error model that is suitable for different
2872
+ # programming environments, including REST APIs and RPC APIs. It is used by [
2873
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
2874
+ # data: error code, error message, and error details. You can find out more
2875
+ # about this error model and how to work with it in the [API Design Guide](https:
2876
+ # //cloud.google.com/apis/design/errors).
2877
+ # Corresponds to the JSON property `error`
2878
+ # @return [Google::Apis::RemotebuildexecutionV1::GoogleRpcStatus]
2879
+ attr_accessor :error
2880
+
2881
+ # Service-specific metadata associated with the operation. It typically contains
2882
+ # progress information and common metadata such as create time. Some services
2883
+ # might not provide such metadata. Any method that returns a long-running
2884
+ # operation should document the metadata type, if any.
2885
+ # Corresponds to the JSON property `metadata`
2886
+ # @return [Hash<String,Object>]
2887
+ attr_accessor :metadata
2888
+
2889
+ # The server-assigned name, which is only unique within the same service that
2890
+ # originally returns it. If you use the default HTTP mapping, the `name` should
2891
+ # be a resource name ending with `operations/`unique_id``.
2892
+ # Corresponds to the JSON property `name`
2893
+ # @return [String]
2894
+ attr_accessor :name
2895
+
2896
+ # The normal response of the operation in case of success. If the original
2897
+ # method returns no data on success, such as `Delete`, the response is `google.
2898
+ # protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`,
2899
+ # the response should be the resource. For other methods, the response should
2900
+ # have the type `XxxResponse`, where `Xxx` is the original method name. For
2901
+ # example, if the original method name is `TakeSnapshot()`, the inferred
2902
+ # response type is `TakeSnapshotResponse`.
2903
+ # Corresponds to the JSON property `response`
2904
+ # @return [Hash<String,Object>]
2905
+ attr_accessor :response
2906
+
2907
+ def initialize(**args)
2908
+ update!(**args)
2909
+ end
2910
+
2911
+ # Update properties of this object
2912
+ def update!(**args)
2913
+ @done = args[:done] if args.key?(:done)
2914
+ @error = args[:error] if args.key?(:error)
2915
+ @metadata = args[:metadata] if args.key?(:metadata)
2916
+ @name = args[:name] if args.key?(:name)
2917
+ @response = args[:response] if args.key?(:response)
2918
+ end
2919
+ end
2920
+
2921
+ # A generic empty message that you can re-use to avoid defining duplicated empty
2922
+ # messages in your APIs. A typical example is to use it as the request or the
2923
+ # response type of an API method. For instance: service Foo ` rpc Bar(google.
2924
+ # protobuf.Empty) returns (google.protobuf.Empty); ` The JSON representation for
2925
+ # `Empty` is an empty JSON object ````.
2926
+ class GoogleProtobufEmpty
2927
+ include Google::Apis::Core::Hashable
2928
+
2929
+ def initialize(**args)
2930
+ update!(**args)
2931
+ end
2932
+
2933
+ # Update properties of this object
2934
+ def update!(**args)
2935
+ end
2936
+ end
2937
+
2938
+ # The `Status` type defines a logical error model that is suitable for different
2939
+ # programming environments, including REST APIs and RPC APIs. It is used by [
2940
+ # gRPC](https://github.com/grpc). Each `Status` message contains three pieces of
2941
+ # data: error code, error message, and error details. You can find out more
2942
+ # about this error model and how to work with it in the [API Design Guide](https:
2943
+ # //cloud.google.com/apis/design/errors).
2944
+ class GoogleRpcStatus
2945
+ include Google::Apis::Core::Hashable
2946
+
2947
+ # The status code, which should be an enum value of google.rpc.Code.
2948
+ # Corresponds to the JSON property `code`
2949
+ # @return [Fixnum]
2950
+ attr_accessor :code
2951
+
2952
+ # A list of messages that carry the error details. There is a common set of
2953
+ # message types for APIs to use.
2954
+ # Corresponds to the JSON property `details`
2955
+ # @return [Array<Hash<String,Object>>]
2956
+ attr_accessor :details
2957
+
2958
+ # A developer-facing error message, which should be in English. Any user-facing
2959
+ # error message should be localized and sent in the google.rpc.Status.details
2960
+ # field, or localized by the client.
2961
+ # Corresponds to the JSON property `message`
2962
+ # @return [String]
2963
+ attr_accessor :message
2964
+
2965
+ def initialize(**args)
2966
+ update!(**args)
2967
+ end
2968
+
2969
+ # Update properties of this object
2970
+ def update!(**args)
2971
+ @code = args[:code] if args.key?(:code)
2972
+ @details = args[:details] if args.key?(:details)
2973
+ @message = args[:message] if args.key?(:message)
2974
+ end
2975
+ end
2976
+ end
2977
+ end
2978
+ end