google-cloud-dataproc-v1 0.7.1 → 0.10.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 89df0e2a0ac1dd2d7ac88db153a9944695825e95a77e6236c2c31104dfae639a
- data.tar.gz: 49d4306b2f39e73df7516c9e406d3a174b57c72c4db1c33e71d4160c0bb0d804
+ metadata.gz: ebf8588586e1596fe59cf789a1cbb4039a36ca1d6ed8daa2d79cec58f4ce57d5
+ data.tar.gz: 47785392184b79d8152e5fe51918021d08fb756b9effd5d3d05b6d7ca0e34616
  SHA512:
- metadata.gz: 4c074e0cc9460c96da62a9b1061e336d842f31d1f180042638955262307a2d432398a9a00798481b00383269540856dff53fa0cb6d701373272a605121a6eefa
- data.tar.gz: d39afd0382e2a898768a12973588f228729dd7dcd9dc60ce6a742b240c047e4d20604718e9196689995ab5de6d90b6df8012a2172acc0ccc804ebbd113c1b30a
+ metadata.gz: ac3a25ca0fd30bd5f04c0984fc333af9d50e7dffbbef75d1093f43d43fdc8369b5236fff5be973c9b20f9ab439fb982647446eabc6ce7281a9998bc3dd297d87
+ data.tar.gz: 18a9ef81778a4f33ada7b4bf1e11f9695b21c847cc14c24812b15e0adc6f9395010cb099bd4b93a3a534f978796c9addd3471c9ccc7201164ff2c6b7923fbda1
data/.yardopts CHANGED
@@ -1,5 +1,5 @@
  --no-private
- --title=Cloud Dataproc V1 API
+ --title="Cloud Dataproc V1 API"
  --exclude _pb\.rb$
  --markup markdown
  --markup-provider redcarpet
data/AUTHENTICATION.md CHANGED
@@ -120,15 +120,6 @@ To configure your system for this, simply:
  **NOTE:** This is _not_ recommended for running in production. The Cloud SDK
  *should* only be used during development.

- [gce-how-to]: https://cloud.google.com/compute/docs/authentication#using
- [dev-console]: https://console.cloud.google.com/project
-
- [enable-apis]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/enable-apis.png
-
- [create-new-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account.png
- [create-new-service-account-existing-keys]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account-existing-keys.png
- [reuse-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/reuse-service-account.png
-
  ## Creating a Service Account

  Google Cloud requires **Service Account Credentials** to
@@ -139,31 +130,22 @@ If you are not running this client within
  [Google Cloud Platform environments](#google-cloud-platform-environments), you
  need a Google Developers service account.

- 1. Visit the [Google Developers Console][dev-console].
+ 1. Visit the [Google Cloud Console](https://console.cloud.google.com/project).
  2. Create a new project or click on an existing project.
- 3. Activate the slide-out navigation tray and select **API Manager**. From
+ 3. Activate the menu in the upper left and select **APIs & Services**. From
  here, you will enable the APIs that your application requires.

- ![Enable the APIs that your application requires][enable-apis]
-
  *Note: You may need to enable billing in order to use these services.*

  4. Select **Credentials** from the side navigation.

- You should see a screen like one of the following.
-
- ![Create a new service account][create-new-service-account]
-
- ![Create a new service account With Existing Keys][create-new-service-account-existing-keys]
-
- Find the "Add credentials" drop down and select "Service account" to be
- guided through downloading a new JSON key file.
+ Find the "Create credentials" drop down near the top of the page, and select
+ "Service account" to be guided through downloading a new JSON key file.

  If you want to re-use an existing service account, you can easily generate a
- new key file. Just select the account you wish to re-use, and click "Generate
- new JSON key":
-
- ![Re-use an existing service account][reuse-service-account]
+ new key file. Just select the account you wish to re-use, click the pencil
+ tool on the right side to edit the service account, select the **Keys** tab,
+ and then select **Add Key**.

  The key file you download will be used by this library to authenticate API
  requests and should be stored in a secure location.
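As a companion to the updated instructions above, here is a minimal sketch of wiring a downloaded key file into one of this gem's clients; the file path is a placeholder, and exporting `GOOGLE_APPLICATION_CREDENTIALS` instead works the same way:

```ruby
require "google/cloud/dataproc/v1"

# Pass the service account key file directly to the client configuration.
# "/path/to/keyfile.json" is a placeholder for your downloaded key.
client = ::Google::Cloud::Dataproc::V1::ClusterController::Client.new do |config|
  config.credentials = "/path/to/keyfile.json"
end
```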
data/README.md CHANGED
@@ -37,7 +37,7 @@ request = ::Google::Cloud::Dataproc::V1::CreateAutoscalingPolicyRequest.new # (r
  response = client.create_autoscaling_policy request
  ```

- View the [Client Library Documentation](https://googleapis.dev/ruby/google-cloud-dataproc-v1/latest)
+ View the [Client Library Documentation](https://cloud.google.com/ruby/docs/reference/google-cloud-dataproc-v1/latest)
  for class and method documentation.

  See also the [Product Documentation](https://cloud.google.com/dataproc)
@@ -69,6 +69,11 @@ module GRPC
  end
  ```

+
+ ## Google Cloud Samples
+
+ To browse ready to use code samples check [Google Cloud Samples](https://cloud.google.com/docs/samples).
+
  ## Supported Ruby Versions

  This library is supported on Ruby 2.5+.
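The `module GRPC` context in the hunk above is the tail of the README's logging section; for reference, the conventional stanza in these READMEs looks roughly like the sketch below (the `MyLogger` module name is illustrative):

```ruby
require "logger"

# Expose a `logger` method that gRPC can pick up at module level.
module MyLogger
  LOGGER = Logger.new $stderr, level: Logger::WARN
  def logger
    LOGGER
  end
end

# gRPC consults GRPC.logger for its internal logging.
module GRPC
  extend MyLogger
end
```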
@@ -1,13 +1,14 @@
  # Generated by the protocol buffer compiler. DO NOT EDIT!
  # source: google/cloud/dataproc/v1/autoscaling_policies.proto

+ require 'google/protobuf'
+
  require 'google/api/annotations_pb'
  require 'google/api/client_pb'
  require 'google/api/field_behavior_pb'
  require 'google/api/resource_pb'
  require 'google/protobuf/duration_pb'
  require 'google/protobuf/empty_pb'
- require 'google/protobuf'

  Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/cloud/dataproc/v1/autoscaling_policies.proto", :syntax => :proto3) do
@@ -22,8 +23,10 @@ Google::Protobuf::DescriptorPool.generated_pool.build do
  end
  end
  add_message "google.cloud.dataproc.v1.BasicAutoscalingAlgorithm" do
- optional :yarn_config, :message, 1, "google.cloud.dataproc.v1.BasicYarnAutoscalingConfig"
  optional :cooldown_period, :message, 2, "google.protobuf.Duration"
+ oneof :config do
+ optional :yarn_config, :message, 1, "google.cloud.dataproc.v1.BasicYarnAutoscalingConfig"
+ end
  end
  add_message "google.cloud.dataproc.v1.BasicYarnAutoscalingConfig" do
  optional :graceful_decommission_timeout, :message, 5, "google.protobuf.Duration"
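The hunk above wraps `yarn_config` in a new `config` oneof. For generated Ruby messages, the oneof also adds an accessor named after the oneof that reports which member is set; a small sketch with illustrative values:

```ruby
require "google/cloud/dataproc/v1"

algorithm = ::Google::Cloud::Dataproc::V1::BasicAutoscalingAlgorithm.new(
  yarn_config: { scale_up_factor: 0.5, scale_down_factor: 0.5 },
  cooldown_period: { seconds: 120 }
)

# The oneof accessor returns the symbol of the member currently set.
algorithm.config # => :yarn_config
```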
@@ -135,6 +135,7 @@ module Google

  @operations_client = Operations.new do |config|
  config.credentials = credentials
+ config.quota_project = @quota_project_id
  config.endpoint = @config.endpoint
  end
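This one-line hunk, repeated for each service client below, propagates the caller's quota project to the internal long-running Operations client. A sketch of where that value comes from (the project ID is a placeholder):

```ruby
require "google/cloud/dataproc/v1"

client = ::Google::Cloud::Dataproc::V1::AutoscalingPolicyService::Client.new do |config|
  # Attribute quota and billing for API calls to this project.
  config.quota_project = "my-quota-project"
end
# As of this release the client's Operations stub shares that quota project
# instead of silently using the default.
```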
@@ -1,6 +1,8 @@
  # Generated by the protocol buffer compiler. DO NOT EDIT!
  # source: google/cloud/dataproc/v1/batches.proto

+ require 'google/protobuf'
+
  require 'google/api/annotations_pb'
  require 'google/api/client_pb'
  require 'google/api/field_behavior_pb'
@@ -9,7 +11,6 @@ require 'google/cloud/dataproc/v1/shared_pb'
  require 'google/longrunning/operations_pb'
  require 'google/protobuf/empty_pb'
  require 'google/protobuf/timestamp_pb'
- require 'google/protobuf'

  Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/cloud/dataproc/v1/batches.proto", :syntax => :proto3) do
@@ -166,6 +166,7 @@ module Google

  @operations_client = Operations.new do |config|
  config.credentials = credentials
+ config.quota_project = @quota_project_id
  config.endpoint = @config.endpoint
  end

@@ -24,25 +24,6 @@ module Google
  module ClusterController
  # Path helper methods for the ClusterController API.
  module Paths
- ##
- # Create a fully-qualified Cluster resource string.
- #
- # The resource will be in the following format:
- #
- # `projects/{project}/locations/{location}/clusters/{cluster}`
- #
- # @param project [String]
- # @param location [String]
- # @param cluster [String]
- #
- # @return [::String]
- def cluster_path project:, location:, cluster:
- raise ::ArgumentError, "project cannot contain /" if project.to_s.include? "/"
- raise ::ArgumentError, "location cannot contain /" if location.to_s.include? "/"
-
- "projects/#{project}/locations/#{location}/clusters/#{cluster}"
- end
-
  ##
  # Create a fully-qualified Service resource string.
  #
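With `cluster_path` removed from this module in 0.10.0, callers that built cluster resource names through the helper can interpolate the string themselves; a sketch with placeholder values:

```ruby
# Equivalent of the removed cluster_path helper; all values are placeholders.
project  = "my-project"
location = "us-central1"
cluster  = "my-cluster"
name = "projects/#{project}/locations/#{location}/clusters/#{cluster}"
```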
@@ -1,6 +1,8 @@
  # Generated by the protocol buffer compiler. DO NOT EDIT!
  # source: google/cloud/dataproc/v1/clusters.proto

+ require 'google/protobuf'
+
  require 'google/api/annotations_pb'
  require 'google/api/client_pb'
  require 'google/api/field_behavior_pb'
@@ -10,7 +12,6 @@ require 'google/longrunning/operations_pb'
  require 'google/protobuf/duration_pb'
  require 'google/protobuf/field_mask_pb'
  require 'google/protobuf/timestamp_pb'
- require 'google/protobuf'

  Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/cloud/dataproc/v1/clusters.proto", :syntax => :proto3) do
@@ -18,6 +19,7 @@ Google::Protobuf::DescriptorPool.generated_pool.build do
  optional :project_id, :string, 1
  optional :cluster_name, :string, 2
  optional :config, :message, 3, "google.cloud.dataproc.v1.ClusterConfig"
+ optional :virtual_cluster_config, :message, 10, "google.cloud.dataproc.v1.VirtualClusterConfig"
  map :labels, :string, :string, 8
  optional :status, :message, 4, "google.cloud.dataproc.v1.ClusterStatus"
  repeated :status_history, :message, 7, "google.cloud.dataproc.v1.ClusterStatus"
@@ -39,14 +41,17 @@ Google::Protobuf::DescriptorPool.generated_pool.build do
  optional :lifecycle_config, :message, 17, "google.cloud.dataproc.v1.LifecycleConfig"
  optional :endpoint_config, :message, 19, "google.cloud.dataproc.v1.EndpointConfig"
  optional :metastore_config, :message, 20, "google.cloud.dataproc.v1.MetastoreConfig"
- optional :gke_cluster_config, :message, 21, "google.cloud.dataproc.v1.GkeClusterConfig"
  end
- add_message "google.cloud.dataproc.v1.GkeClusterConfig" do
- optional :namespaced_gke_deployment_target, :message, 1, "google.cloud.dataproc.v1.GkeClusterConfig.NamespacedGkeDeploymentTarget"
+ add_message "google.cloud.dataproc.v1.VirtualClusterConfig" do
+ optional :staging_bucket, :string, 1
+ optional :auxiliary_services_config, :message, 7, "google.cloud.dataproc.v1.AuxiliaryServicesConfig"
+ oneof :infrastructure_config do
+ optional :kubernetes_cluster_config, :message, 6, "google.cloud.dataproc.v1.KubernetesClusterConfig"
+ end
  end
- add_message "google.cloud.dataproc.v1.GkeClusterConfig.NamespacedGkeDeploymentTarget" do
- optional :target_gke_cluster, :string, 1
- optional :cluster_namespace, :string, 2
+ add_message "google.cloud.dataproc.v1.AuxiliaryServicesConfig" do
+ optional :metastore_config, :message, 1, "google.cloud.dataproc.v1.MetastoreConfig"
+ optional :spark_history_server_config, :message, 2, "google.cloud.dataproc.v1.SparkHistoryServerConfig"
  end
  add_message "google.cloud.dataproc.v1.EndpointConfig" do
  map :http_ports, :string, :string, 1
@@ -119,6 +124,7 @@ Google::Protobuf::DescriptorPool.generated_pool.build do
  optional :boot_disk_type, :string, 3
  optional :boot_disk_size_gb, :int32, 1
  optional :num_local_ssds, :int32, 2
+ optional :local_ssd_interface, :string, 4
  end
  add_message "google.cloud.dataproc.v1.NodeInitializationAction" do
  optional :executable_file, :string, 1
@@ -272,8 +278,8 @@ module Google
  module V1
  Cluster = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.Cluster").msgclass
  ClusterConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.ClusterConfig").msgclass
- GkeClusterConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeClusterConfig").msgclass
- GkeClusterConfig::NamespacedGkeDeploymentTarget = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeClusterConfig.NamespacedGkeDeploymentTarget").msgclass
+ VirtualClusterConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.VirtualClusterConfig").msgclass
+ AuxiliaryServicesConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.AuxiliaryServicesConfig").msgclass
  EndpointConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.EndpointConfig").msgclass
  AutoscalingConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.AutoscalingConfig").msgclass
  EncryptionConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.EncryptionConfig").msgclass
@@ -168,6 +168,7 @@ module Google

  @operations_client = Operations.new do |config|
  config.credentials = credentials
+ config.quota_project = @quota_project_id
  config.endpoint = @config.endpoint
  end

@@ -1,6 +1,8 @@
  # Generated by the protocol buffer compiler. DO NOT EDIT!
  # source: google/cloud/dataproc/v1/jobs.proto

+ require 'google/protobuf'
+
  require 'google/api/annotations_pb'
  require 'google/api/client_pb'
  require 'google/api/field_behavior_pb'
@@ -8,7 +10,6 @@ require 'google/longrunning/operations_pb'
  require 'google/protobuf/empty_pb'
  require 'google/protobuf/field_mask_pb'
  require 'google/protobuf/timestamp_pb'
- require 'google/protobuf'

  Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/cloud/dataproc/v1/jobs.proto", :syntax => :proto3) do
@@ -1,9 +1,10 @@
  # Generated by the protocol buffer compiler. DO NOT EDIT!
  # source: google/cloud/dataproc/v1/operations.proto

+ require 'google/protobuf'
+
  require 'google/api/field_behavior_pb'
  require 'google/protobuf/timestamp_pb'
- require 'google/protobuf'

  Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/cloud/dataproc/v1/operations.proto", :syntax => :proto3) do
@@ -1,12 +1,15 @@
  # Generated by the protocol buffer compiler. DO NOT EDIT!
  # source: google/cloud/dataproc/v1/shared.proto

- require 'google/api/field_behavior_pb'
  require 'google/protobuf'

+ require 'google/api/field_behavior_pb'
+
  Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/cloud/dataproc/v1/shared.proto", :syntax => :proto3) do
  add_message "google.cloud.dataproc.v1.RuntimeConfig" do
+ optional :version, :string, 1
+ optional :container_image, :string, 2
  map :properties, :string, :string, 3
  end
  add_message "google.cloud.dataproc.v1.EnvironmentConfig" do
@@ -32,6 +35,54 @@ Google::Protobuf::DescriptorPool.generated_pool.build do
  add_message "google.cloud.dataproc.v1.RuntimeInfo" do
  map :endpoints, :string, :string, 1
  optional :output_uri, :string, 2
+ optional :diagnostic_output_uri, :string, 3
+ end
+ add_message "google.cloud.dataproc.v1.GkeClusterConfig" do
+ optional :gke_cluster_target, :string, 2
+ repeated :node_pool_target, :message, 3, "google.cloud.dataproc.v1.GkeNodePoolTarget"
+ end
+ add_message "google.cloud.dataproc.v1.KubernetesClusterConfig" do
+ optional :kubernetes_namespace, :string, 1
+ optional :kubernetes_software_config, :message, 3, "google.cloud.dataproc.v1.KubernetesSoftwareConfig"
+ oneof :config do
+ optional :gke_cluster_config, :message, 2, "google.cloud.dataproc.v1.GkeClusterConfig"
+ end
+ end
+ add_message "google.cloud.dataproc.v1.KubernetesSoftwareConfig" do
+ map :component_version, :string, :string, 1
+ map :properties, :string, :string, 2
+ end
+ add_message "google.cloud.dataproc.v1.GkeNodePoolTarget" do
+ optional :node_pool, :string, 1
+ repeated :roles, :enum, 2, "google.cloud.dataproc.v1.GkeNodePoolTarget.Role"
+ optional :node_pool_config, :message, 3, "google.cloud.dataproc.v1.GkeNodePoolConfig"
+ end
+ add_enum "google.cloud.dataproc.v1.GkeNodePoolTarget.Role" do
+ value :ROLE_UNSPECIFIED, 0
+ value :DEFAULT, 1
+ value :CONTROLLER, 2
+ value :SPARK_DRIVER, 3
+ value :SPARK_EXECUTOR, 4
+ end
+ add_message "google.cloud.dataproc.v1.GkeNodePoolConfig" do
+ optional :config, :message, 2, "google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodeConfig"
+ repeated :locations, :string, 13
+ optional :autoscaling, :message, 4, "google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodePoolAutoscalingConfig"
+ end
+ add_message "google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodeConfig" do
+ optional :machine_type, :string, 1
+ optional :preemptible, :bool, 10
+ optional :local_ssd_count, :int32, 7
+ repeated :accelerators, :message, 11, "google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodePoolAcceleratorConfig"
+ optional :min_cpu_platform, :string, 13
+ end
+ add_message "google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodePoolAcceleratorConfig" do
+ optional :accelerator_count, :int64, 1
+ optional :accelerator_type, :string, 2
+ end
+ add_message "google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodePoolAutoscalingConfig" do
+ optional :min_node_count, :int32, 2
+ optional :max_node_count, :int32, 3
  end
  add_enum "google.cloud.dataproc.v1.Component" do
  value :COMPONENT_UNSPECIFIED, 0
@@ -66,6 +117,15 @@ module Google
  SparkHistoryServerConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.SparkHistoryServerConfig").msgclass
  PeripheralsConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.PeripheralsConfig").msgclass
  RuntimeInfo = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.RuntimeInfo").msgclass
+ GkeClusterConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeClusterConfig").msgclass
+ KubernetesClusterConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.KubernetesClusterConfig").msgclass
+ KubernetesSoftwareConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.KubernetesSoftwareConfig").msgclass
+ GkeNodePoolTarget = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeNodePoolTarget").msgclass
+ GkeNodePoolTarget::Role = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeNodePoolTarget.Role").enummodule
+ GkeNodePoolConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeNodePoolConfig").msgclass
+ GkeNodePoolConfig::GkeNodeConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodeConfig").msgclass
+ GkeNodePoolConfig::GkeNodePoolAcceleratorConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodePoolAcceleratorConfig").msgclass
+ GkeNodePoolConfig::GkeNodePoolAutoscalingConfig = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodePoolAutoscalingConfig").msgclass
  Component = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.Component").enummodule
  FailureAction = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.cloud.dataproc.v1.FailureAction").enummodule
  end
@@ -21,7 +21,7 @@ module Google
  module Cloud
  module Dataproc
  module V1
- VERSION = "0.7.1"
+ VERSION = "0.10.0"
  end
  end
  end
@@ -171,6 +171,7 @@ module Google

  @operations_client = Operations.new do |config|
  config.credentials = credentials
+ config.quota_project = @quota_project_id
  config.endpoint = @config.endpoint
  end

@@ -214,7 +215,7 @@ module Google
  # Required. The resource name of the region or location, as described
  # in https://cloud.google.com/apis/design/resource_names.
  #
- # * For `projects.regions.workflowTemplates,create`, the resource name of the
+ # * For `projects.regions.workflowTemplates.create`, the resource name of the
  # region has the following format:
  # `projects/{project_id}/regions/{region}`
  #
@@ -24,25 +24,6 @@ module Google
  module WorkflowTemplateService
  # Path helper methods for the WorkflowTemplateService API.
  module Paths
- ##
- # Create a fully-qualified Cluster resource string.
- #
- # The resource will be in the following format:
- #
- # `projects/{project}/locations/{location}/clusters/{cluster}`
- #
- # @param project [String]
- # @param location [String]
- # @param cluster [String]
- #
- # @return [::String]
- def cluster_path project:, location:, cluster:
- raise ::ArgumentError, "project cannot contain /" if project.to_s.include? "/"
- raise ::ArgumentError, "location cannot contain /" if location.to_s.include? "/"
-
- "projects/#{project}/locations/#{location}/clusters/#{cluster}"
- end
-
  ##
  # Create a fully-qualified Location resource string.
  #
@@ -1,6 +1,8 @@
  # Generated by the protocol buffer compiler. DO NOT EDIT!
  # source: google/cloud/dataproc/v1/workflow_templates.proto

+ require 'google/protobuf'
+
  require 'google/api/annotations_pb'
  require 'google/api/client_pb'
  require 'google/api/field_behavior_pb'
@@ -11,7 +13,6 @@ require 'google/longrunning/operations_pb'
  require 'google/protobuf/duration_pb'
  require 'google/protobuf/empty_pb'
  require 'google/protobuf/timestamp_pb'
- require 'google/protobuf'

  Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/cloud/dataproc/v1/workflow_templates.proto", :syntax => :proto3) do
@@ -29,6 +29,8 @@ module Google
  ##
  # To load this package, including all its services, and instantiate a client:
  #
+ # @example
+ #
  # require "google/cloud/dataproc/v1"
  # client = ::Google::Cloud::Dataproc::V1::AutoscalingPolicyService::Client.new
  #
@@ -33,11 +33,7 @@ module Google
  # // For Kubernetes resources, the format is {api group}/{kind}.
  # option (google.api.resource) = {
  # type: "pubsub.googleapis.com/Topic"
- # name_descriptor: {
- # pattern: "projects/{project}/topics/{topic}"
- # parent_type: "cloudresourcemanager.googleapis.com/Project"
- # parent_name_extractor: "projects/{project}"
- # }
+ # pattern: "projects/{project}/topics/{topic}"
  # };
  # }
  #
@@ -45,10 +41,7 @@ module Google
  #
  # resources:
  # - type: "pubsub.googleapis.com/Topic"
- # name_descriptor:
- # - pattern: "projects/{project}/topics/{topic}"
- # parent_type: "cloudresourcemanager.googleapis.com/Project"
- # parent_name_extractor: "projects/{project}"
+ # pattern: "projects/{project}/topics/{topic}"
  #
  # Sometimes, resources have multiple patterns, typically because they can
  # live under multiple parents.
@@ -58,26 +51,10 @@ module Google
  # message LogEntry {
  # option (google.api.resource) = {
  # type: "logging.googleapis.com/LogEntry"
- # name_descriptor: {
- # pattern: "projects/{project}/logs/{log}"
- # parent_type: "cloudresourcemanager.googleapis.com/Project"
- # parent_name_extractor: "projects/{project}"
- # }
- # name_descriptor: {
- # pattern: "folders/{folder}/logs/{log}"
- # parent_type: "cloudresourcemanager.googleapis.com/Folder"
- # parent_name_extractor: "folders/{folder}"
- # }
- # name_descriptor: {
- # pattern: "organizations/{organization}/logs/{log}"
- # parent_type: "cloudresourcemanager.googleapis.com/Organization"
- # parent_name_extractor: "organizations/{organization}"
- # }
- # name_descriptor: {
- # pattern: "billingAccounts/{billing_account}/logs/{log}"
- # parent_type: "billing.googleapis.com/BillingAccount"
- # parent_name_extractor: "billingAccounts/{billing_account}"
- # }
+ # pattern: "projects/{project}/logs/{log}"
+ # pattern: "folders/{folder}/logs/{log}"
+ # pattern: "organizations/{organization}/logs/{log}"
+ # pattern: "billingAccounts/{billing_account}/logs/{log}"
  # };
  # }
  #
@@ -85,48 +62,10 @@ module Google
  #
  # resources:
  # - type: 'logging.googleapis.com/LogEntry'
- # name_descriptor:
- # - pattern: "projects/{project}/logs/{log}"
- # parent_type: "cloudresourcemanager.googleapis.com/Project"
- # parent_name_extractor: "projects/{project}"
- # - pattern: "folders/{folder}/logs/{log}"
- # parent_type: "cloudresourcemanager.googleapis.com/Folder"
- # parent_name_extractor: "folders/{folder}"
- # - pattern: "organizations/{organization}/logs/{log}"
- # parent_type: "cloudresourcemanager.googleapis.com/Organization"
- # parent_name_extractor: "organizations/{organization}"
- # - pattern: "billingAccounts/{billing_account}/logs/{log}"
- # parent_type: "billing.googleapis.com/BillingAccount"
- # parent_name_extractor: "billingAccounts/{billing_account}"
- #
- # For flexible resources, the resource name doesn't contain parent names, but
- # the resource itself has parents for policy evaluation.
- #
- # Example:
- #
- # message Shelf {
- # option (google.api.resource) = {
- # type: "library.googleapis.com/Shelf"
- # name_descriptor: {
- # pattern: "shelves/{shelf}"
- # parent_type: "cloudresourcemanager.googleapis.com/Project"
- # }
- # name_descriptor: {
- # pattern: "shelves/{shelf}"
- # parent_type: "cloudresourcemanager.googleapis.com/Folder"
- # }
- # };
- # }
- #
- # The ResourceDescriptor Yaml config will look like:
- #
- # resources:
- # - type: 'library.googleapis.com/Shelf'
- # name_descriptor:
- # - pattern: "shelves/{shelf}"
- # parent_type: "cloudresourcemanager.googleapis.com/Project"
- # - pattern: "shelves/{shelf}"
- # parent_type: "cloudresourcemanager.googleapis.com/Folder"
+ # pattern: "projects/{project}/logs/{log}"
+ # pattern: "folders/{folder}/logs/{log}"
+ # pattern: "organizations/{organization}/logs/{log}"
+ # pattern: "billingAccounts/{billing_account}/logs/{log}"
  # @!attribute [rw] type
  # @return [::String]
  # The resource type. It must be in the format of
@@ -32,8 +32,18 @@ module Google
  # unique. Names of deleted clusters can be reused.
  # @!attribute [rw] config
  # @return [::Google::Cloud::Dataproc::V1::ClusterConfig]
- # Required. The cluster config. Note that Dataproc may set
- # default values, and values may change when clusters are updated.
+ # Optional. The cluster config for a cluster of Compute Engine Instances.
+ # Note that Dataproc may set default values, and values may change
+ # when clusters are updated.
+ # @!attribute [rw] virtual_cluster_config
+ # @return [::Google::Cloud::Dataproc::V1::VirtualClusterConfig]
+ # Optional. The virtual cluster config, used when creating a Dataproc cluster that
+ # does not directly control the underlying compute resources, for example,
+ # when creating a [Dataproc-on-GKE
+ # cluster](https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke#create-a-dataproc-on-gke-cluster).
+ # Note that Dataproc may set default values, and values may change when
+ # clusters are updated. Exactly one of config or virtualClusterConfig must be
+ # specified.
  # @!attribute [rw] labels
  # @return [::Google::Protobuf::Map{::String => ::String}]
  # Optional. The labels to associate with this cluster.
@@ -155,37 +165,48 @@ module Google
  # @!attribute [rw] metastore_config
  # @return [::Google::Cloud::Dataproc::V1::MetastoreConfig]
  # Optional. Metastore configuration.
- # @!attribute [rw] gke_cluster_config
- # @return [::Google::Cloud::Dataproc::V1::GkeClusterConfig]
- # Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to
- # Kubernetes. Setting this is considered mutually exclusive with Compute
- # Engine-based options such as `gce_cluster_config`, `master_config`,
- # `worker_config`, `secondary_worker_config`, and `autoscaling_config`.
  class ClusterConfig
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
  end

- # The GKE config for this cluster.
- # @!attribute [rw] namespaced_gke_deployment_target
- # @return [::Google::Cloud::Dataproc::V1::GkeClusterConfig::NamespacedGkeDeploymentTarget]
- # Optional. A target for the deployment.
- class GkeClusterConfig
+ # Dataproc cluster config for a cluster that does not directly control the
+ # underlying compute resources, such as a [Dataproc-on-GKE
+ # cluster](https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke#create-a-dataproc-on-gke-cluster).
+ # @!attribute [rw] staging_bucket
+ # @return [::String]
+ # Optional. A Storage bucket used to stage job
+ # dependencies, config files, and job driver console output.
+ # If you do not specify a staging bucket, Cloud
+ # Dataproc will determine a Cloud Storage location (US,
+ # ASIA, or EU) for your cluster's staging bucket according to the
+ # Compute Engine zone where your cluster is deployed, and then create
+ # and manage this project-level, per-location bucket (see
+ # [Dataproc staging and temp
+ # buckets](https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
+ # **This field requires a Cloud Storage bucket name, not a `gs://...` URI to
+ # a Cloud Storage bucket.**
+ # @!attribute [rw] kubernetes_cluster_config
+ # @return [::Google::Cloud::Dataproc::V1::KubernetesClusterConfig]
+ # Required. The configuration for running the Dataproc cluster on Kubernetes.
+ # @!attribute [rw] auxiliary_services_config
+ # @return [::Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig]
+ # Optional. Configuration of auxiliary services used by this cluster.
+ class VirtualClusterConfig
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
+ end

- # A full, namespace-isolated deployment target for an existing GKE cluster.
- # @!attribute [rw] target_gke_cluster
- # @return [::String]
- # Optional. The target GKE cluster to deploy to.
- # Format: 'projects/\\{project}/locations/\\{location}/clusters/\\{cluster_id}'
- # @!attribute [rw] cluster_namespace
- # @return [::String]
- # Optional. A namespace within the GKE cluster to deploy into.
- class NamespacedGkeDeploymentTarget
- include ::Google::Protobuf::MessageExts
- extend ::Google::Protobuf::MessageExts::ClassMethods
- end
+ # Auxiliary services configuration for a Cluster.
+ # @!attribute [rw] metastore_config
+ # @return [::Google::Cloud::Dataproc::V1::MetastoreConfig]
+ # Optional. The Hive Metastore configuration for this workload.
+ # @!attribute [rw] spark_history_server_config
+ # @return [::Google::Cloud::Dataproc::V1::SparkHistoryServerConfig]
+ # Optional. The Spark History Server configuration for the workload.
+ class AuxiliaryServicesConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Endpoint config for this cluster
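Combining this documentation hunk with the `clusters_pb.rb` changes above, a hedged sketch of creating a GKE-backed cluster with the new `virtual_cluster_config`; every resource name here is a placeholder:

```ruby
require "google/cloud/dataproc/v1"

cluster = ::Google::Cloud::Dataproc::V1::Cluster.new(
  project_id:   "my-project",
  cluster_name: "my-gke-backed-cluster",
  # Exactly one of config or virtual_cluster_config may be set.
  virtual_cluster_config: {
    staging_bucket: "my-staging-bucket", # bucket name, not a gs:// URI
    kubernetes_cluster_config: {
      kubernetes_namespace: "dataproc",
      gke_cluster_config: {
        gke_cluster_target: "projects/my-project/locations/us-central1/clusters/my-gke-cluster"
      }
    }
  }
)
```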
@@ -588,6 +609,13 @@ module Google
  # If one or more SSDs are attached, this runtime bulk
  # data is spread across them, and the boot disk contains only basic
  # config and installed binaries.
+ # @!attribute [rw] local_ssd_interface
+ # @return [::String]
+ # Optional. Interface type of local SSDs (default is "scsi").
+ # Valid values: "scsi" (Small Computer System Interface),
+ # "nvme" (Non-Volatile Memory Express).
+ # See [local SSD
+ # performance](https://cloud.google.com/compute/docs/disks/local-ssd#performance).
  class DiskConfig
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
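A short sketch of the new `local_ssd_interface` field in context; disk sizes and counts are illustrative:

```ruby
disk = ::Google::Cloud::Dataproc::V1::DiskConfig.new(
  boot_disk_type:      "pd-ssd",
  boot_disk_size_gb:   500,
  num_local_ssds:      2,
  local_ssd_interface: "nvme" # new in this release; "scsi" is the default
)
```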
@@ -640,6 +668,10 @@ module Google
  CREATING = 1

  # The cluster is currently running and healthy. It is ready for use.
+ #
+ # **Note:** The cluster state changes from "creating" to "running" status
+ # after the master node(s), first two primary worker nodes (and the last
+ # primary worker node if primary workers > 2) are running.
  RUNNING = 2

  # The cluster encountered an error. It is not ready for use.
@@ -785,11 +785,23 @@ module Google
  # 4 times within 10 minute window.
  #
  # Maximum value is 10.
+ #
+ # **Note:** Currently, this restartable job option is
+ # not supported in Dataproc
+ # [workflow
+ # template](https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template)
+ # jobs.
  # @!attribute [rw] max_failures_total
  # @return [::Integer]
  # Optional. Maximum number of times in total a driver may be restarted as a result of
  # driver exiting with non-zero code before job is reported failed.
  # Maximum value is 240.
+ #
+ # **Note:** Currently, this restartable job option is
+ # not supported in Dataproc
+ # [workflow
+ # template](https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template)
+ # jobs.
  class JobScheduling
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
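Given the caps documented above, a minimal sketch of a `JobScheduling` message; the counts are illustrative and, per the new notes, these options do not apply to workflow-template jobs:

```ruby
scheduling = ::Google::Cloud::Dataproc::V1::JobScheduling.new(
  max_failures_per_hour: 4, # maximum allowed value is 10
  max_failures_total:    20 # maximum allowed value is 240
)
```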
@@ -22,6 +22,13 @@ module Google
  module Dataproc
  module V1
  # Runtime configuration for a workload.
+ # @!attribute [rw] version
+ # @return [::String]
+ # Optional. Version of the batch runtime.
+ # @!attribute [rw] container_image
+ # @return [::String]
+ # Optional. Optional custom container image for the job runtime environment. If
+ # not specified, a default container image will be used.
  # @!attribute [rw] properties
  # @return [::Google::Protobuf::Map{::String => ::String}]
  # Optional. A mapping of property names to values, which are used to configure workload
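A hedged sketch of the expanded `RuntimeConfig` for a serverless batch workload; the version string, image, and property are placeholders:

```ruby
runtime = ::Google::Cloud::Dataproc::V1::RuntimeConfig.new(
  version:         "1.0",                        # batch runtime version
  container_image: "gcr.io/my-project/my-image", # optional custom image
  properties:      { "spark.executor.cores" => "4" }
)
```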
@@ -111,6 +118,9 @@ module Google
  # @!attribute [r] output_uri
  # @return [::String]
  # Output only. A URI pointing to the location of the stdout and stderr of the workload.
+ # @!attribute [r] diagnostic_output_uri
+ # @return [::String]
+ # Output only. A URI pointing to the location of the diagnostics tarball.
  class RuntimeInfo
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
@@ -125,6 +135,215 @@ module Google
  end
  end

+ # The cluster's GKE config.
+ # @!attribute [rw] gke_cluster_target
+ # @return [::String]
+ # Optional. A target GKE cluster to deploy to. It must be in the same project and
+ # region as the Dataproc cluster (the GKE cluster can be zonal or regional).
+ # Format: 'projects/\\{project}/locations/\\{location}/clusters/\\{cluster_id}'
+ # @!attribute [rw] node_pool_target
+ # @return [::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget>]
+ # Optional. GKE NodePools where workloads will be scheduled. At least one node pool
+ # must be assigned the 'default' role. Each role can be given to only a
+ # single NodePoolTarget. All NodePools must have the same location settings.
+ # If a nodePoolTarget is not specified, Dataproc constructs a default
+ # nodePoolTarget.
+ class GkeClusterConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # The configuration for running the Dataproc cluster on Kubernetes.
+ # @!attribute [rw] kubernetes_namespace
+ # @return [::String]
+ # Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace
+ # does not exist, it is created. If it exists, Dataproc
+ # verifies that another Dataproc VirtualCluster is not installed
+ # into it. If not specified, the name of the Dataproc Cluster is used.
+ # @!attribute [rw] gke_cluster_config
+ # @return [::Google::Cloud::Dataproc::V1::GkeClusterConfig]
+ # Required. The configuration for running the Dataproc cluster on GKE.
+ # @!attribute [rw] kubernetes_software_config
+ # @return [::Google::Cloud::Dataproc::V1::KubernetesSoftwareConfig]
+ # Optional. The software configuration for this Dataproc cluster running on Kubernetes.
+ class KubernetesClusterConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # The software configuration for this Dataproc cluster running on Kubernetes.
+ # @!attribute [rw] component_version
+ # @return [::Google::Protobuf::Map{::String => ::String}]
+ # The components that should be installed in this Dataproc cluster. The key
+ # must be a string from the KubernetesComponent enumeration. The value is
+ # the version of the software to be installed.
+ # At least one entry must be specified.
+ # @!attribute [rw] properties
+ # @return [::Google::Protobuf::Map{::String => ::String}]
+ # The properties to set on daemon config files.
+ #
+ # Property keys are specified in `prefix:property` format, for example
+ # `spark:spark.kubernetes.container.image`. The following are supported
+ # prefixes and their mappings:
+ #
+ # * spark: `spark-defaults.conf`
+ #
+ # For more information, see [Cluster
+ # properties](https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
+ class KubernetesSoftwareConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+
+ # @!attribute [rw] key
+ # @return [::String]
+ # @!attribute [rw] value
+ # @return [::String]
+ class ComponentVersionEntry
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # @!attribute [rw] key
+ # @return [::String]
+ # @!attribute [rw] value
+ # @return [::String]
+ class PropertiesEntry
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+ end
+
+ # GKE NodePools that Dataproc workloads run on.
+ # @!attribute [rw] node_pool
+ # @return [::String]
+ # Required. The target GKE NodePool.
+ # Format:
+ # 'projects/\\{project}/locations/\\{location}/clusters/\\{cluster}/nodePools/\\{node_pool}'
+ # @!attribute [rw] roles
+ # @return [::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role>]
+ # Required. The types of role for a GKE NodePool
+ # @!attribute [rw] node_pool_config
+ # @return [::Google::Cloud::Dataproc::V1::GkeNodePoolConfig]
+ # Optional. The configuration for the GKE NodePool.
+ #
+ # If specified, Dataproc attempts to create a NodePool with the
+ # specified shape. If one with the same name already exists, it is
+ # verified against all specified fields. If a field differs, the
+ # virtual cluster creation will fail.
+ #
+ # If omitted, any NodePool with the specified name is used. If a
+ # NodePool with the specified name does not exist, Dataproc create a NodePool
+ # with default values.
+ class GkeNodePoolTarget
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+
+ # `Role` specifies whose tasks will run on the NodePool. The roles can be
+ # specific to workloads. Exactly one GkeNodePoolTarget within the
+ # VirtualCluster must have 'default' role, which is used to run all workloads
+ # that are not associated with a NodePool.
+ module Role
+ # Role is unspecified.
+ ROLE_UNSPECIFIED = 0
+
+ # Any roles that are not directly assigned to a NodePool run on the
+ # `default` role's NodePool.
+ DEFAULT = 1
+
+ # Run controllers and webhooks.
+ CONTROLLER = 2
+
+ # Run spark driver.
+ SPARK_DRIVER = 3
+
+ # Run spark executors.
+ SPARK_EXECUTOR = 4
+ end
+ end
+
+ # The configuration of a GKE NodePool used by a [Dataproc-on-GKE
+ # cluster](https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke#create-a-dataproc-on-gke-cluster).
+ # @!attribute [rw] config
+ # @return [::Google::Cloud::Dataproc::V1::GkeNodePoolConfig::GkeNodeConfig]
+ # Optional. The node pool configuration.
+ # @!attribute [rw] locations
+ # @return [::Array<::String>]
+ # Optional. The list of Compute Engine
+ # [zones](https://cloud.google.com/compute/docs/zones#available) where
+ # NodePool's nodes will be located.
+ #
+ # **Note:** Currently, only one zone may be specified.
+ #
+ # If a location is not specified during NodePool creation, Dataproc will
+ # choose a location.
+ # @!attribute [rw] autoscaling
+ # @return [::Google::Cloud::Dataproc::V1::GkeNodePoolConfig::GkeNodePoolAutoscalingConfig]
+ # Optional. The autoscaler configuration for this NodePool. The autoscaler is enabled
+ # only when a valid configuration is present.
+ class GkeNodePoolConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+
+ # Parameters that describe cluster nodes.
+ # @!attribute [rw] machine_type
+ # @return [::String]
+ # Optional. The name of a Compute Engine [machine
+ # type](https://cloud.google.com/compute/docs/machine-types).
+ # @!attribute [rw] preemptible
+ # @return [::Boolean]
+ # Optional. Whether the nodes are created as [preemptible VM
+ # instances](https://cloud.google.com/compute/docs/instances/preemptible).
+ # @!attribute [rw] local_ssd_count
+ # @return [::Integer]
+ # Optional. The number of local SSD disks to attach to the node, which is limited by
+ # the maximum number of disks allowable per zone (see [Adding Local
+ # SSDs](https://cloud.google.com/compute/docs/disks/local-ssd)).
+ # @!attribute [rw] accelerators
+ # @return [::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolConfig::GkeNodePoolAcceleratorConfig>]
+ # Optional. A list of [hardware
+ # accelerators](https://cloud.google.com/compute/docs/gpus) to attach to
+ # each node.
+ # @!attribute [rw] min_cpu_platform
+ # @return [::String]
+ # Optional. [Minimum CPU
+ # platform](https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform)
+ # to be used by this instance. The instance may be scheduled on the
+ # specified or a newer CPU platform. Specify the friendly names of CPU
+ # platforms, such as "Intel Haswell"` or Intel Sandy Bridge".
+ class GkeNodeConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # A GkeNodeConfigAcceleratorConfig represents a Hardware Accelerator request
+ # for a NodePool.
+ # @!attribute [rw] accelerator_count
+ # @return [::Integer]
+ # The number of accelerator cards exposed to an instance.
+ # @!attribute [rw] accelerator_type
+ # @return [::String]
+ # The accelerator type resource namename (see GPUs on Compute Engine).
+ class GkeNodePoolAcceleratorConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # GkeNodePoolAutoscaling contains information the cluster autoscaler needs to
+ # adjust the size of the node pool to the current cluster usage.
+ # @!attribute [rw] min_node_count
+ # @return [::Integer]
+ # The minimum number of nodes in the NodePool. Must be >= 0 and <=
+ # max_node_count.
+ # @!attribute [rw] max_node_count
+ # @return [::Integer]
+ # The maximum number of nodes in the NodePool. Must be >= min_node_count.
+ # **Note:** Quota must be sufficient to scale up the cluster.
+ class GkeNodePoolAutoscalingConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+ end
+
  # Cluster components that can be activated.
  module Component
  # Unspecified component. Specifying this will cause Cluster creation to fail.
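Tying the new GKE types together, a hedged sketch of a node pool target with autoscaling; all resource names are placeholders:

```ruby
target = ::Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/dataproc-default",
  roles: [:DEFAULT], # exactly one target must carry the DEFAULT role
  node_pool_config: {
    config:      { machine_type: "n1-standard-4", preemptible: false },
    locations:   ["us-central1-a"], # currently only one zone may be listed
    autoscaling: { min_node_count: 1, max_node_count: 5 }
  }
)
```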
@@ -522,7 +522,7 @@ module Google
522
522
  # Required. The resource name of the region or location, as described
523
523
  # in https://cloud.google.com/apis/design/resource_names.
524
524
  #
525
- # * For `projects.regions.workflowTemplates,create`, the resource name of the
525
+ # * For `projects.regions.workflowTemplates.create`, the resource name of the
526
526
  # region has the following format:
527
527
  # `projects/{project_id}/regions/{region}`
528
528
  #
@@ -44,7 +44,7 @@ module Google
  # foo = any.unpack(Foo.class);
  # }
  #
- # Example 3: Pack and unpack a message in Python.
+ # Example 3: Pack and unpack a message in Python.
  #
  # foo = Foo(...)
  # any = Any()
@@ -54,7 +54,7 @@ module Google
  # any.Unpack(foo)
  # ...
  #
- # Example 4: Pack and unpack a message in Go
+ # Example 4: Pack and unpack a message in Go
  #
  # foo := &pb.Foo{...}
  # any, err := anypb.New(foo)
@@ -75,7 +75,7 @@ module Google
  #
  #
  # JSON
- # ====
+ #
  # The JSON representation of an `Any` value uses the regular
  # representation of the deserialized, embedded message, with an
  # additional field `@type` which contains the type URL. Example:
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: google-cloud-dataproc-v1
  version: !ruby/object:Gem::Version
- version: 0.7.1
+ version: 0.10.0
  platform: ruby
  authors:
  - Google LLC
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-11-08 00:00:00.000000000 Z
+ date: 2022-05-13 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: gapic-common
@@ -243,7 +243,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.2.17
+ rubygems_version: 3.3.5
  signing_key:
  specification_version: 4
  summary: API Client library for the Cloud Dataproc V1 API