google-cloud-dataproc-v1 0.12.0 → 0.14.0

Files changed (28)
  1. checksums.yaml +4 -4
  2. data/AUTHENTICATION.md +1 -1
  3. data/README.md +1 -1
  4. data/lib/google/cloud/dataproc/v1/cluster_controller/client.rb +14 -12
  5. data/lib/google/cloud/dataproc/v1/cluster_controller/paths.rb +21 -0
  6. data/lib/google/cloud/dataproc/v1/clusters_pb.rb +38 -0
  7. data/lib/google/cloud/dataproc/v1/clusters_services_pb.rb +2 -1
  8. data/lib/google/cloud/dataproc/v1/jobs_pb.rb +6 -0
  9. data/lib/google/cloud/dataproc/v1/node_group_controller/client.rb +662 -0
  10. data/lib/google/cloud/dataproc/v1/node_group_controller/credentials.rb +51 -0
  11. data/lib/google/cloud/dataproc/v1/node_group_controller/operations.rb +770 -0
  12. data/lib/google/cloud/dataproc/v1/node_group_controller/paths.rb +73 -0
  13. data/lib/google/cloud/dataproc/v1/node_group_controller.rb +51 -0
  14. data/lib/google/cloud/dataproc/v1/node_groups_pb.rb +44 -0
  15. data/lib/google/cloud/dataproc/v1/node_groups_services_pb.rb +55 -0
  16. data/lib/google/cloud/dataproc/v1/operations_pb.rb +19 -0
  17. data/lib/google/cloud/dataproc/v1/version.rb +1 -1
  18. data/lib/google/cloud/dataproc/v1/workflow_template_service/paths.rb +21 -0
  19. data/lib/google/cloud/dataproc/v1.rb +1 -0
  20. data/proto_docs/google/api/client.rb +318 -0
  21. data/proto_docs/google/api/launch_stage.rb +71 -0
  22. data/proto_docs/google/cloud/dataproc/v1/clusters.rb +205 -46
  23. data/proto_docs/google/cloud/dataproc/v1/jobs.rb +33 -18
  24. data/proto_docs/google/cloud/dataproc/v1/node_groups.rb +115 -0
  25. data/proto_docs/google/cloud/dataproc/v1/operations.rb +57 -0
  26. data/proto_docs/google/protobuf/empty.rb +0 -2
  27. data/proto_docs/google/rpc/status.rb +4 -2
  28. metadata +14 -4
data/proto_docs/google/api/launch_stage.rb
@@ -0,0 +1,71 @@
+ # frozen_string_literal: true
+
+ # Copyright 2022 Google LLC
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     https://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Auto-generated by gapic-generator-ruby. DO NOT EDIT!
+
+
+ module Google
+ module Api
+ # The launch stage as defined by [Google Cloud Platform
+ # Launch Stages](https://cloud.google.com/terms/launch-stages).
+ module LaunchStage
+ # Do not use this default value.
+ LAUNCH_STAGE_UNSPECIFIED = 0
+
+ # The feature is not yet implemented. Users can not use it.
+ UNIMPLEMENTED = 6
+
+ # Prelaunch features are hidden from users and are only visible internally.
+ PRELAUNCH = 7
+
+ # Early Access features are limited to a closed group of testers. To use
+ # these features, you must sign up in advance and sign a Trusted Tester
+ # agreement (which includes confidentiality provisions). These features may
+ # be unstable, changed in backward-incompatible ways, and are not
+ # guaranteed to be released.
+ EARLY_ACCESS = 1
+
+ # Alpha is a limited availability test for releases before they are cleared
+ # for widespread use. By Alpha, all significant design issues are resolved
+ # and we are in the process of verifying functionality. Alpha customers
+ # need to apply for access, agree to applicable terms, and have their
+ # projects allowlisted. Alpha releases don't have to be feature complete,
+ # no SLAs are provided, and there are no technical support obligations, but
+ # they will be far enough along that customers can actually use them in
+ # test environments or for limited-use tests -- just like they would in
+ # normal production cases.
+ ALPHA = 2
+
+ # Beta is the point at which we are ready to open a release for any
+ # customer to use. There are no SLA or technical support obligations in a
+ # Beta release. Products will be complete from a feature perspective, but
+ # may have some open outstanding issues. Beta releases are suitable for
+ # limited production use cases.
+ BETA = 3
+
+ # GA features are open to all developers and are considered stable and
+ # fully qualified for production use.
+ GA = 4
+
+ # Deprecated features are scheduled to be shut down and removed. For more
+ # information, see the "Deprecation Policy" section of our [Terms of
+ # Service](https://cloud.google.com/terms/)
+ # and the [Google Cloud Platform Subject to the Deprecation
+ # Policy](https://cloud.google.com/terms/deprecation) documentation.
+ DEPRECATED = 5
+ end
+ end
+ end
data/proto_docs/google/cloud/dataproc/v1/clusters.rb
@@ -28,8 +28,10 @@ module Google
  # Required. The Google Cloud Platform project ID that the cluster belongs to.
  # @!attribute [rw] cluster_name
  # @return [::String]
- # Required. The cluster name. Cluster names within a project must be
- # unique. Names of deleted clusters can be reused.
+ # Required. The cluster name, which must be unique within a project.
+ # The name must start with a lowercase letter, and can contain
+ # up to 51 lowercase letters, numbers, and hyphens. It cannot end
+ # with a hyphen. The name of a deleted cluster can be reused.
  # @!attribute [rw] config
  # @return [::Google::Cloud::Dataproc::V1::ClusterConfig]
  # Optional. The cluster config for a cluster of Compute Engine Instances.
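The tightened naming rule in the hunk above can be sketched as a single regular expression. This is a hypothetical helper, not part of the gem, and it assumes one plausible reading of the rule (51 characters total):

```ruby
# Hypothetical validator for the documented rule: a cluster name starts
# with a lowercase letter, contains only lowercase letters, digits, and
# hyphens, does not end with a hyphen, and (assumed here) is at most
# 51 characters long in total.
CLUSTER_NAME_RE = /\A[a-z]([a-z0-9-]{0,49}[a-z0-9])?\z/

def valid_cluster_name?(name)
  !name.match(CLUSTER_NAME_RE).nil?
end
```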
@@ -37,13 +39,15 @@ module Google
  # when clusters are updated.
  # @!attribute [rw] virtual_cluster_config
  # @return [::Google::Cloud::Dataproc::V1::VirtualClusterConfig]
- # Optional. The virtual cluster config, used when creating a Dataproc cluster that
- # does not directly control the underlying compute resources, for example,
- # when creating a [Dataproc-on-GKE
- # cluster](https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke#create-a-dataproc-on-gke-cluster).
- # Note that Dataproc may set default values, and values may change when
- # clusters are updated. Exactly one of config or virtualClusterConfig must be
- # specified.
+ # Optional. The virtual cluster config is used when creating a Dataproc
+ # cluster that does not directly control the underlying compute resources,
+ # for example, when creating a [Dataproc-on-GKE
+ # cluster](https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke).
+ # Dataproc may set default values, and values may change when
+ # clusters are updated. Exactly one of
+ # {::Google::Cloud::Dataproc::V1::Cluster#config config} or
+ # {::Google::Cloud::Dataproc::V1::Cluster#virtual_cluster_config virtual_cluster_config}
+ # must be specified.
  # @!attribute [rw] labels
  # @return [::Google::Protobuf::Map{::String => ::String}]
  # Optional. The labels to associate with this cluster.
@@ -99,15 +103,13 @@ module Google
  # a Cloud Storage bucket.**
  # @!attribute [rw] temp_bucket
  # @return [::String]
- # Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data,
- # such as Spark and MapReduce history files.
- # If you do not specify a temp bucket,
- # Dataproc will determine a Cloud Storage location (US,
- # ASIA, or EU) for your cluster's temp bucket according to the
- # Compute Engine zone where your cluster is deployed, and then create
- # and manage this project-level, per-location bucket. The default bucket has
- # a TTL of 90 days, but you can use any TTL (or none) if you specify a
- # bucket (see
+ # Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs
+ # data, such as Spark and MapReduce history files. If you do not specify a
+ # temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or
+ # EU) for your cluster's temp bucket according to the Compute Engine zone
+ # where your cluster is deployed, and then create and manage this
+ # project-level, per-location bucket. The default bucket has a TTL of 90
+ # days, but you can use any TTL (or none) if you specify a bucket (see
  # [Dataproc staging and temp
  # buckets](https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
  # **This field requires a Cloud Storage bucket name, not a `gs://...` URI to
@@ -165,17 +167,23 @@ module Google
  # @!attribute [rw] metastore_config
  # @return [::Google::Cloud::Dataproc::V1::MetastoreConfig]
  # Optional. Metastore configuration.
+ # @!attribute [rw] dataproc_metric_config
+ # @return [::Google::Cloud::Dataproc::V1::DataprocMetricConfig]
+ # Optional. The config for Dataproc metrics.
+ # @!attribute [rw] auxiliary_node_groups
+ # @return [::Array<::Google::Cloud::Dataproc::V1::AuxiliaryNodeGroup>]
+ # Optional. The node group settings.
  class ClusterConfig
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
  end

- # Dataproc cluster config for a cluster that does not directly control the
+ # The Dataproc cluster config for a cluster that does not directly control the
  # underlying compute resources, such as a [Dataproc-on-GKE
- # cluster](https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke#create-a-dataproc-on-gke-cluster).
+ # cluster](https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke).
  # @!attribute [rw] staging_bucket
  # @return [::String]
- # Optional. A Storage bucket used to stage job
+ # Optional. A Cloud Storage bucket used to stage job
  # dependencies, config files, and job driver console output.
  # If you do not specify a staging bucket, Cloud
  # Dataproc will determine a Cloud Storage location (US,
@@ -188,7 +196,8 @@ module Google
  # a Cloud Storage bucket.**
  # @!attribute [rw] kubernetes_cluster_config
  # @return [::Google::Cloud::Dataproc::V1::KubernetesClusterConfig]
- # Required. The configuration for running the Dataproc cluster on Kubernetes.
+ # Required. The configuration for running the Dataproc cluster on
+ # Kubernetes.
  # @!attribute [rw] auxiliary_services_config
  # @return [::Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig]
  # Optional. Configuration of auxiliary services used by this cluster.
@@ -355,7 +364,8 @@ module Google
  # Optional. Node Group Affinity for sole-tenant clusters.
  # @!attribute [rw] shielded_instance_config
  # @return [::Google::Cloud::Dataproc::V1::ShieldedInstanceConfig]
- # Optional. Shielded Instance Config for clusters using [Compute Engine Shielded
+ # Optional. Shielded Instance Config for clusters using [Compute Engine
+ # Shielded
  # VMs](https://cloud.google.com/security/shielded-cloud/shielded-vm).
  # @!attribute [rw] confidential_instance_config
  # @return [::Google::Cloud::Dataproc::V1::ConfidentialInstanceConfig]
@@ -381,7 +391,8 @@ module Google
  # fields](https://cloud.google.com/compute/docs/reference/rest/v1/instances).
  module PrivateIpv6GoogleAccess
  # If unspecified, Compute Engine default behavior will apply, which
- # is the same as {::Google::Cloud::Dataproc::V1::GceClusterConfig::PrivateIpv6GoogleAccess::INHERIT_FROM_SUBNETWORK INHERIT_FROM_SUBNETWORK}.
+ # is the same as
+ # {::Google::Cloud::Dataproc::V1::GceClusterConfig::PrivateIpv6GoogleAccess::INHERIT_FROM_SUBNETWORK INHERIT_FROM_SUBNETWORK}.
  PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED = 0

  # Private access to and from Google Services configuration
@@ -400,6 +411,8 @@ module Google
  end

  # Node Group Affinity for clusters using sole-tenant node groups.
+ # **The Dataproc `NodeGroupAffinity` resource is not related to the
+ # Dataproc {::Google::Cloud::Dataproc::V1::NodeGroup NodeGroup} resource.**
  # @!attribute [rw] node_group_uri
  # @return [::String]
  # Required. The URI of a
@@ -437,7 +450,8 @@ module Google
  # VMs](https://cloud.google.com/compute/confidential-vm/docs)
  # @!attribute [rw] enable_confidential_compute
  # @return [::Boolean]
- # Optional. Defines whether the instance should have confidential compute enabled.
+ # Optional. Defines whether the instance should have confidential compute
+ # enabled.
  class ConfidentialInstanceConfig
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
@@ -526,10 +540,7 @@ module Google
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

- # Controls the use of
- # [preemptible instances]
- # (https://cloud.google.com/compute/docs/instances/preemptible)
- # within the group.
+ # Controls the use of preemptible instances within the group.
  module Preemptibility
  # Preemptibility is unspecified, the system will choose the
  # appropriate setting for each instance group.
@@ -541,9 +552,12 @@ module Google
  # value for Master and Worker instance groups.
  NON_PREEMPTIBLE = 1

- # Instances are preemptible.
+ # Instances are [preemptible]
+ # (https://cloud.google.com/compute/docs/instances/preemptible).
  #
- # This option is allowed only for secondary worker groups.
+ # This option is allowed only for [secondary worker]
+ # (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms)
+ # groups.
  PREEMPTIBLE = 2
  end
  end
@@ -603,7 +617,7 @@ module Google
  # Optional. Size in GB of the boot disk (default is 500GB).
  # @!attribute [rw] num_local_ssds
  # @return [::Integer]
- # Optional. Number of attached SSDs, from 0 to 4 (default is 0).
+ # Optional. Number of attached SSDs, from 0 to 8 (default is 0).
  # If SSDs are not attached, the boot disk is used to store runtime logs and
  # [HDFS](https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data.
  # If one or more SSDs are attached, this runtime bulk
@@ -621,6 +635,68 @@ module Google
  extend ::Google::Protobuf::MessageExts::ClassMethods
  end

+ # Node group identification and configuration information.
+ # @!attribute [rw] node_group
+ # @return [::Google::Cloud::Dataproc::V1::NodeGroup]
+ # Required. Node group configuration.
+ # @!attribute [rw] node_group_id
+ # @return [::String]
+ # Optional. A node group ID. Generated if not specified.
+ #
+ # The ID must contain only letters (a-z, A-Z), numbers (0-9),
+ # underscores (_), and hyphens (-). Cannot begin or end with underscore
+ # or hyphen. Must consist of from 3 to 33 characters.
+ class AuxiliaryNodeGroup
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # Dataproc Node Group.
+ # **The Dataproc `NodeGroup` resource is not related to the
+ # Dataproc {::Google::Cloud::Dataproc::V1::NodeGroupAffinity NodeGroupAffinity}
+ # resource.**
+ # @!attribute [rw] name
+ # @return [::String]
+ # The Node group [resource name](https://aip.dev/122).
+ # @!attribute [rw] roles
+ # @return [::Array<::Google::Cloud::Dataproc::V1::NodeGroup::Role>]
+ # Required. Node group roles.
+ # @!attribute [rw] node_group_config
+ # @return [::Google::Cloud::Dataproc::V1::InstanceGroupConfig]
+ # Optional. The node group instance group configuration.
+ # @!attribute [rw] labels
+ # @return [::Google::Protobuf::Map{::String => ::String}]
+ # Optional. Node group labels.
+ #
+ # * Label **keys** must consist of from 1 to 63 characters and conform to
+ # [RFC 1035](https://www.ietf.org/rfc/rfc1035.txt).
+ # * Label **values** can be empty. If specified, they must consist of from
+ # 1 to 63 characters and conform to [RFC 1035]
+ # (https://www.ietf.org/rfc/rfc1035.txt).
+ # * The node group must have no more than 32 labels.
+ class NodeGroup
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+
+ # @!attribute [rw] key
+ # @return [::String]
+ # @!attribute [rw] value
+ # @return [::String]
+ class LabelsEntry
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # Node group roles.
+ module Role
+ # Required unspecified role.
+ ROLE_UNSPECIFIED = 0
+
+ # Job drivers run on the node group.
+ DRIVER = 1
+ end
+ end
+
  # Specifies an executable to run on a fully configured node and a
  # timeout period for executable completion.
  # @!attribute [rw] executable_file
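The `node_group_id` constraints documented in the hunk above (3 to 33 characters of letters, numbers, underscores, and hyphens; no leading or trailing underscore or hyphen) can likewise be sketched as a check. A hypothetical helper, not part of the generated client:

```ruby
# Hypothetical validator for the documented node_group_id constraints:
# 3-33 characters of [A-Za-z0-9_-], not beginning or ending with "_" or "-".
NODE_GROUP_ID_RE = /\A[A-Za-z0-9][A-Za-z0-9_-]{1,31}[A-Za-z0-9]\z/

def valid_node_group_id?(id)
  !id.match(NODE_GROUP_ID_RE).nil?
end
```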
@@ -733,8 +809,8 @@ module Google
  # Specifies Kerberos related configuration.
  # @!attribute [rw] enable_kerberos
  # @return [::Boolean]
- # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set
- # this field to true to enable Kerberos on a cluster.
+ # Optional. Flag to indicate whether to Kerberize the cluster (default:
+ # false). Set this field to true to enable Kerberos on a cluster.
  # @!attribute [rw] root_principal_password_uri
  # @return [::String]
  # Optional. The Cloud Storage URI of a KMS encrypted file containing the root
@@ -879,7 +955,8 @@ module Google
  # [Duration](https://developers.google.com/protocol-buffers/docs/proto3#json)).
  # @!attribute [rw] auto_delete_time
  # @return [::Google::Protobuf::Timestamp]
- # Optional. The time when cluster will be auto-deleted (see JSON representation of
+ # Optional. The time when cluster will be auto-deleted (see JSON
+ # representation of
  # [Timestamp](https://developers.google.com/protocol-buffers/docs/proto3#json)).
  # @!attribute [rw] auto_delete_ttl
  # @return [::Google::Protobuf::Duration]
@@ -911,6 +988,87 @@ module Google
  extend ::Google::Protobuf::MessageExts::ClassMethods
  end

+ # Dataproc metric config.
+ # @!attribute [rw] metrics
+ # @return [::Array<::Google::Cloud::Dataproc::V1::DataprocMetricConfig::Metric>]
+ # Required. Metrics sources to enable.
+ class DataprocMetricConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+
+ # A Dataproc OSS metric.
+ # @!attribute [rw] metric_source
+ # @return [::Google::Cloud::Dataproc::V1::DataprocMetricConfig::MetricSource]
+ # Required. Default metrics are collected unless `metricOverrides` are
+ # specified for the metric source (see [Available OSS metrics]
+ # (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics)
+ # for more information).
+ # @!attribute [rw] metric_overrides
+ # @return [::Array<::String>]
+ # Optional. Specify one or more [available OSS metrics]
+ # (https://cloud.google.com/dataproc/docs/guides/monitoring#available_oss_metrics)
+ # to collect for the metric course (for the `SPARK` metric source, any
+ # [Spark metric]
+ # (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be
+ # specified).
+ #
+ # Provide metrics in the following format:
+ # <code><var>METRIC_SOURCE</var>:<var>INSTANCE</var>:<var>GROUP</var>:<var>METRIC</var></code>
+ # Use camelcase as appropriate.
+ #
+ # Examples:
+ #
+ # ```
+ # yarn:ResourceManager:QueueMetrics:AppsCompleted
+ # spark:driver:DAGScheduler:job.allJobs
+ # sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed
+ # hiveserver2:JVM:Memory:NonHeapMemoryUsage.used
+ # ```
+ #
+ # Notes:
+ #
+ # * Only the specified overridden metrics will be collected for the
+ # metric source. For example, if one or more `spark:executive` metrics
+ # are listed as metric overrides, other `SPARK` metrics will not be
+ # collected. The collection of the default metrics for other OSS metric
+ # sources is unaffected. For example, if both `SPARK` andd `YARN` metric
+ # sources are enabled, and overrides are provided for Spark metrics only,
+ # all default YARN metrics will be collected.
+ class Metric
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
+ # A source for the collection of Dataproc OSS metrics (see [available OSS
+ # metrics]
+ # (https://cloud.google.com//dataproc/docs/guides/monitoring#available_oss_metrics)).
+ module MetricSource
+ # Required unspecified metric source.
+ METRIC_SOURCE_UNSPECIFIED = 0
+
+ # Default monitoring agent metrics. If this source is enabled,
+ # Dataproc enables the monitoring agent in Compute Engine,
+ # and collects default monitoring agent metrics, which are published
+ # with an `agent.googleapis.com` prefix.
+ MONITORING_AGENT_DEFAULTS = 1
+
+ # HDFS metric source.
+ HDFS = 2
+
+ # Spark metric source.
+ SPARK = 3
+
+ # YARN metric source.
+ YARN = 4
+
+ # Spark History Server metric source.
+ SPARK_HISTORY_SERVER = 5
+
+ # Hiveserver2 metric source.
+ HIVESERVER2 = 6
+ end
+ end
+
  # Contains cluster daemon metrics, such as HDFS and YARN stats.
  #
  # **Beta Feature**: This report is available for testing purposes only. It may
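The `metric_overrides` entries documented in the hunk above are plain colon-joined strings in the `METRIC_SOURCE:INSTANCE:GROUP:METRIC` format. A small illustrative helper (hypothetical, not part of the gem) makes the shape explicit:

```ruby
# Hypothetical helper that assembles an override string in the documented
# METRIC_SOURCE:INSTANCE:GROUP:METRIC format.
def metric_override(source, instance, group, metric)
  [source, instance, group, metric].join(":")
end

metric_override("spark", "driver", "DAGScheduler", "job.allJobs")
# => "spark:driver:DAGScheduler:job.allJobs"
```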
@@ -957,11 +1115,12 @@ module Google
  # Required. The cluster to create.
  # @!attribute [rw] request_id
  # @return [::String]
- # Optional. A unique ID used to identify the request. If the server receives two
+ # Optional. A unique ID used to identify the request. If the server receives
+ # two
  # [CreateClusterRequest](https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s
  # with the same id, then the second request will be ignored and the
- # first {::Google::Longrunning::Operation google.longrunning.Operation} created and stored in the backend
- # is returned.
+ # first {::Google::Longrunning::Operation google.longrunning.Operation} created
+ # and stored in the backend is returned.
  #
  # It is recommended to always set this value to a
  # [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
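Since the field docs recommend a UUID for `request_id`, a minimal sketch of generating one with the Ruby standard library follows; the client call in the trailing comment is illustrative only:

```ruby
require "securerandom"

# Generate a UUID to use as the idempotency token. Per the docs above,
# retrying a CreateClusterRequest with the same request_id returns the
# original long-running operation instead of creating a second cluster.
request_id = SecureRandom.uuid

# Illustrative usage (not runnable without a configured client):
#   client.create_cluster project_id: project, region: region,
#                         cluster: cluster, request_id: request_id
```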
@@ -1060,8 +1219,8 @@ module Google
  # receives two
  # [UpdateClusterRequest](https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.UpdateClusterRequest)s
  # with the same id, then the second request will be ignored and the
- # first {::Google::Longrunning::Operation google.longrunning.Operation} created and stored in the
- # backend is returned.
+ # first {::Google::Longrunning::Operation google.longrunning.Operation} created
+ # and stored in the backend is returned.
  #
  # It is recommended to always set this value to a
  # [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
@@ -1094,8 +1253,8 @@ module Google
  # receives two
  # [StopClusterRequest](https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.StopClusterRequest)s
  # with the same id, then the second request will be ignored and the
- # first {::Google::Longrunning::Operation google.longrunning.Operation} created and stored in the
- # backend is returned.
+ # first {::Google::Longrunning::Operation google.longrunning.Operation} created
+ # and stored in the backend is returned.
  #
  # Recommendation: Set this value to a
  # [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
@@ -1128,8 +1287,8 @@ module Google
  # receives two
  # [StartClusterRequest](https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.StartClusterRequest)s
  # with the same id, then the second request will be ignored and the
- # first {::Google::Longrunning::Operation google.longrunning.Operation} created and stored in the
- # backend is returned.
+ # first {::Google::Longrunning::Operation google.longrunning.Operation} created
+ # and stored in the backend is returned.
  #
  # Recommendation: Set this value to a
  # [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
@@ -1162,8 +1321,8 @@ module Google
  # receives two
  # [DeleteClusterRequest](https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.DeleteClusterRequest)s
  # with the same id, then the second request will be ignored and the
- # first {::Google::Longrunning::Operation google.longrunning.Operation} created and stored in the
- # backend is returned.
+ # first {::Google::Longrunning::Operation google.longrunning.Operation} created
+ # and stored in the backend is returned.
  #
  # It is recommended to always set this value to a
  # [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
data/proto_docs/google/cloud/dataproc/v1/jobs.rb
@@ -133,7 +133,7 @@ module Google
  end
  end

- # A Dataproc job for running [Apache Spark](http://spark.apache.org/)
+ # A Dataproc job for running [Apache Spark](https://spark.apache.org/)
  # applications on YARN.
  # @!attribute [rw] main_jar_file_uri
  # @return [::String]
@@ -310,7 +310,7 @@ module Google
  end

  # A Dataproc job for running [Apache Spark
- # SQL](http://spark.apache.org/sql/) queries.
+ # SQL](https://spark.apache.org/sql/) queries.
  # @!attribute [rw] query_file_uri
  # @return [::String]
  # The HCFS URI of the script that contains SQL queries.
@@ -507,7 +507,8 @@ module Google
  # the job is submitted.
  # @!attribute [rw] cluster_labels
  # @return [::Google::Protobuf::Map{::String => ::String}]
- # Optional. Cluster labels to identify a cluster where the job will be submitted.
+ # Optional. Cluster labels to identify a cluster where the job will be
+ # submitted.
  class JobPlacement
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
@@ -608,8 +609,8 @@ module Google
  # Encapsulates the full scoping used to reference a job.
  # @!attribute [rw] project_id
  # @return [::String]
- # Optional. The ID of the Google Cloud Platform project that the job belongs to. If
- # specified, must match the request project ID.
+ # Optional. The ID of the Google Cloud Platform project that the job belongs
+ # to. If specified, must match the request project ID.
  # @!attribute [rw] job_id
  # @return [::String]
  # Optional. The job ID, which must be unique within the project.
@@ -756,10 +757,13 @@ module Google
  # may be reused over time.
  # @!attribute [r] done
  # @return [::Boolean]
- # Output only. Indicates whether the job is completed. If the value is `false`,
- # the job is still in progress. If `true`, the job is completed, and
+ # Output only. Indicates whether the job is completed. If the value is
+ # `false`, the job is still in progress. If `true`, the job is completed, and
  # `status.state` field will indicate if it was successful, failed,
  # or cancelled.
+ # @!attribute [rw] driver_scheduling_config
+ # @return [::Google::Cloud::Dataproc::V1::DriverSchedulingConfig]
+ # Optional. Driver scheduling configuration.
  class Job
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
@@ -774,6 +778,18 @@ module Google
  end
  end

+ # Driver scheduling configuration.
+ # @!attribute [rw] memory_mb
+ # @return [::Integer]
+ # Required. The amount of memory in MB the driver is requesting.
+ # @!attribute [rw] vcores
+ # @return [::Integer]
+ # Required. The number of vCPUs the driver is requesting.
+ class DriverSchedulingConfig
+ include ::Google::Protobuf::MessageExts
+ extend ::Google::Protobuf::MessageExts::ClassMethods
+ end
+
  # Job scheduling options.
  # @!attribute [rw] max_failures_per_hour
  # @return [::Integer]
@@ -781,27 +797,26 @@ module Google
  # a result of driver exiting with non-zero code before job is
  # reported failed.
  #
- # A job may be reported as thrashing if driver exits with non-zero code
- # 4 times within 10 minute window.
+ # A job may be reported as thrashing if the driver exits with a non-zero code
+ # four times within a 10-minute window.
  #
  # Maximum value is 10.
  #
- # **Note:** Currently, this restartable job option is
- # not supported in Dataproc
- # [workflow
- # template](https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template)
- # jobs.
+ # **Note:** This restartable job option is not supported in Dataproc
+ # [workflow templates]
+ # (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
  # @!attribute [rw] max_failures_total
  # @return [::Integer]
- # Optional. Maximum number of times in total a driver may be restarted as a result of
- # driver exiting with non-zero code before job is reported failed.
+ # Optional. Maximum total number of times a driver may be restarted as a
+ # result of the driver exiting with a non-zero code. After the maximum number
+ # is reached, the job will be reported as failed.
+ #
  # Maximum value is 240.
  #
  # **Note:** Currently, this restartable job option is
  # not supported in Dataproc
  # [workflow
- # template](https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template)
- # jobs.
+ # templates](https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
  class JobScheduling
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
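The reworded thrashing rule in the hunk above ("four non-zero driver exits within a 10-minute window") can be pictured as a sliding-window check. This is an illustrative sketch of the heuristic as documented, not Dataproc's actual implementation:

```ruby
# Illustrative sliding-window check: given driver-failure timestamps
# (seconds), is there any group of four failures within 10 minutes?
WINDOW_SECONDS = 10 * 60
MAX_FAILURES_IN_WINDOW = 4

def thrashing?(failure_times)
  sorted = failure_times.sort
  sorted.each_index.any? do |i|
    j = i + MAX_FAILURES_IN_WINDOW - 1
    # A window of four failures fits if the 4th failure after index i
    # occurs within WINDOW_SECONDS of the failure at index i.
    j < sorted.length && sorted[j] - sorted[i] <= WINDOW_SECONDS
  end
end
```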