google-cloud-dataproc 0.2.1 → 0.2.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 033af0180af4311a711513cd9d01d6e08634107a1c62d467711e9c01b3acd5ef
-  data.tar.gz: d5a21d7194df968ab642756742464ebb9e7e588b3c7b96ac5aa85d8d3c9c3262
+  metadata.gz: 30130cde92df898035ea51f693835396be4395f533d997d91c1400328d855f5d
+  data.tar.gz: fff528186d9cf8cea514ca5811ad6f1640b8f6133e4a970c65d3735291ca97bf
 SHA512:
-  metadata.gz: b1e106c689f869979e81176e27ee1faebf23d9789bc43adcf5f27b3f0c750bba52df7b9643fc83ebd97847d0dc52a7704e9f46c6dede2ed4bd3829877f2a167e
-  data.tar.gz: e6885ed3278f0850e4a4dead97cdaaa48cb2659e38c8adf853232b57e81db2036479440defc9c18125c3e0375994a7b4734eb6268eb6e4ab47a499530d2cf07c
+  metadata.gz: 615065859e329b5caaafb9fbafb4acbfe91a942012c353d65819fe4afb1bdc860833bcff3821945d5179681364041ddc79d3e5b51d53cead303d28f989bbbd40
+  data.tar.gz: 184ecda3c4c13f292d3d460eaf42fd76fef514b5c4cff38aadae6d8c08b3139a89f0880a7dc4a6a1d7ca3cbdff5e9452a507991bdeb2ffc0aa9b905db7d594e0
data/README.md CHANGED
@@ -1,4 +1,4 @@
-# Ruby Client for Google Cloud Dataproc API ([Beta](https://github.com/GoogleCloudPlatform/google-cloud-ruby#versioning))
+# Ruby Client for Google Cloud Dataproc API ([Beta](https://github.com/googleapis/google-cloud-ruby#versioning))
 
 [Google Cloud Dataproc API][Product Documentation]:
 Manages Hadoop-based clusters and jobs on Google Cloud Platform.
@@ -12,7 +12,7 @@ steps:
 1. [Select or create a Cloud Platform project.](https://console.cloud.google.com/project)
 2. [Enable billing for your project.](https://cloud.google.com/billing/docs/how-to/modify-project#enable_billing_for_a_project)
 3. [Enable the Google Cloud Dataproc API.](https://console.cloud.google.com/apis/library/dataproc.googleapis.com)
-4. [Setup Authentication.](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud/master/guides/authentication)
+4. [Setup Authentication.](https://googleapis.github.io/google-cloud-ruby/#/docs/google-cloud/master/guides/authentication)
 
 ### Installation
 ```
@@ -47,17 +47,17 @@ end
 to see other available methods on the client.
 - Read the [Google Cloud Dataproc API Product documentation][Product Documentation]
 to learn more about the product and see How-to Guides.
-- View this [repository's main README](https://github.com/GoogleCloudPlatform/google-cloud-ruby/blob/master/README.md)
+- View this [repository's main README](https://github.com/googleapis/google-cloud-ruby/blob/master/README.md)
 to see the full list of Cloud APIs that we cover.
 
-[Client Library Documentation]: https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-dataproc/latest/google/cloud/dataproc/v1
+[Client Library Documentation]: https://googleapis.github.io/google-cloud-ruby/#/docs/google-cloud-dataproc/latest/google/cloud/dataproc/v1
 [Product Documentation]: https://cloud.google.com/dataproc
 
 ## Enabling Logging
 
 To enable logging for this library, set the logger for the underlying [gRPC](https://github.com/grpc/grpc/tree/master/src/ruby) library.
 The logger that you set may be a Ruby stdlib [`Logger`](https://ruby-doc.org/stdlib-2.5.0/libdoc/logger/rdoc/Logger.html) as shown below,
-or a [`Google::Cloud::Logging::Logger`](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-logging/latest/google/cloud/logging/logger)
+or a [`Google::Cloud::Logging::Logger`](https://googleapis.github.io/google-cloud-ruby/#/docs/google-cloud-logging/latest/google/cloud/logging/logger)
 that will write logs to [Stackdriver Logging](https://cloud.google.com/logging/). See [grpc/logconfig.rb](https://github.com/grpc/grpc/blob/master/src/ruby/lib/grpc/logconfig.rb)
 and the gRPC [spec_helper.rb](https://github.com/grpc/grpc/blob/master/src/ruby/spec/spec_helper.rb) for additional information.
 
@@ -21,7 +21,7 @@ module Google
 # rubocop:disable LineLength
 
 ##
-# # Ruby Client for Google Cloud Dataproc API ([Beta](https://github.com/GoogleCloudPlatform/google-cloud-ruby#versioning))
+# # Ruby Client for Google Cloud Dataproc API ([Beta](https://github.com/googleapis/google-cloud-ruby#versioning))
 #
 # [Google Cloud Dataproc API][Product Documentation]:
 # Manages Hadoop-based clusters and jobs on Google Cloud Platform.
@@ -34,7 +34,7 @@ module Google
 # 1. [Select or create a Cloud Platform project.](https://console.cloud.google.com/project)
 # 2. [Enable billing for your project.](https://cloud.google.com/billing/docs/how-to/modify-project#enable_billing_for_a_project)
 # 3. [Enable the Google Cloud Dataproc API.](https://console.cloud.google.com/apis/library/dataproc.googleapis.com)
-# 4. [Setup Authentication.](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud/master/guides/authentication)
+# 4. [Setup Authentication.](https://googleapis.github.io/google-cloud-ruby/#/docs/google-cloud/master/guides/authentication)
 #
 # ### Installation
 # ```
@@ -67,7 +67,7 @@ module Google
 # ### Next Steps
 # - Read the [Google Cloud Dataproc API Product documentation][Product Documentation]
 # to learn more about the product and see How-to Guides.
-# - View this [repository's main README](https://github.com/GoogleCloudPlatform/google-cloud-ruby/blob/master/README.md)
+# - View this [repository's main README](https://github.com/googleapis/google-cloud-ruby/blob/master/README.md)
 # to see the full list of Cloud APIs that we cover.
 #
 # [Product Documentation]: https://cloud.google.com/dataproc
@@ -76,7 +76,7 @@ module Google
 #
 # To enable logging for this library, set the logger for the underlying [gRPC](https://github.com/grpc/grpc/tree/master/src/ruby) library.
 # The logger that you set may be a Ruby stdlib [`Logger`](https://ruby-doc.org/stdlib-2.5.0/libdoc/logger/rdoc/Logger.html) as shown below,
-# or a [`Google::Cloud::Logging::Logger`](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-logging/latest/google/cloud/logging/logger)
+# or a [`Google::Cloud::Logging::Logger`](https://googleapis.github.io/google-cloud-ruby/#/docs/google-cloud-logging/latest/google/cloud/logging/logger)
 # that will write logs to [Stackdriver Logging](https://cloud.google.com/logging/). See [grpc/logconfig.rb](https://github.com/grpc/grpc/blob/master/src/ruby/lib/grpc/logconfig.rb)
 # and the gRPC [spec_helper.rb](https://github.com/grpc/grpc/blob/master/src/ruby/spec/spec_helper.rb) for additional information.
 #
@@ -24,7 +24,7 @@ module Google
 # rubocop:disable LineLength
 
 ##
-# # Ruby Client for Google Cloud Dataproc API ([Beta](https://github.com/GoogleCloudPlatform/google-cloud-ruby#versioning))
+# # Ruby Client for Google Cloud Dataproc API ([Beta](https://github.com/googleapis/google-cloud-ruby#versioning))
 #
 # [Google Cloud Dataproc API][Product Documentation]:
 # Manages Hadoop-based clusters and jobs on Google Cloud Platform.
@@ -37,7 +37,7 @@ module Google
 # 1. [Select or create a Cloud Platform project.](https://console.cloud.google.com/project)
 # 2. [Enable billing for your project.](https://cloud.google.com/billing/docs/how-to/modify-project#enable_billing_for_a_project)
 # 3. [Enable the Google Cloud Dataproc API.](https://console.cloud.google.com/apis/library/dataproc.googleapis.com)
-# 4. [Setup Authentication.](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud/master/guides/authentication)
+# 4. [Setup Authentication.](https://googleapis.github.io/google-cloud-ruby/#/docs/google-cloud/master/guides/authentication)
 #
 # ### Installation
 # ```
@@ -70,7 +70,7 @@ module Google
 # ### Next Steps
 # - Read the [Google Cloud Dataproc API Product documentation][Product Documentation]
 # to learn more about the product and see How-to Guides.
-# - View this [repository's main README](https://github.com/GoogleCloudPlatform/google-cloud-ruby/blob/master/README.md)
+# - View this [repository's main README](https://github.com/googleapis/google-cloud-ruby/blob/master/README.md)
 # to see the full list of Cloud APIs that we cover.
 #
 # [Product Documentation]: https://cloud.google.com/dataproc
@@ -79,7 +79,7 @@ module Google
 #
 # To enable logging for this library, set the logger for the underlying [gRPC](https://github.com/grpc/grpc/tree/master/src/ruby) library.
 # The logger that you set may be a Ruby stdlib [`Logger`](https://ruby-doc.org/stdlib-2.5.0/libdoc/logger/rdoc/Logger.html) as shown below,
-# or a [`Google::Cloud::Logging::Logger`](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-logging/latest/google/cloud/logging/logger)
+# or a [`Google::Cloud::Logging::Logger`](https://googleapis.github.io/google-cloud-ruby/#/docs/google-cloud-logging/latest/google/cloud/logging/logger)
 # that will write logs to [Stackdriver Logging](https://cloud.google.com/logging/). See [grpc/logconfig.rb](https://github.com/grpc/grpc/blob/master/src/ruby/lib/grpc/logconfig.rb)
 # and the gRPC [spec_helper.rb](https://github.com/grpc/grpc/blob/master/src/ruby/spec/spec_helper.rb) for additional information.
 #
@@ -242,13 +242,13 @@ module Google
 #
 # cluster_controller_client = Google::Cloud::Dataproc::ClusterController.new(version: :v1)
 #
-# # TODO: Initialize +project_id+:
+# # TODO: Initialize `project_id`:
 # project_id = ''
 #
-# # TODO: Initialize +region+:
+# # TODO: Initialize `region`:
 # region = ''
 #
-# # TODO: Initialize +cluster+:
+# # TODO: Initialize `cluster`:
 # cluster = {}
 #
 # # Register a callback during the method call.
@@ -314,11 +314,11 @@ module Google
 # A hash of the same form as `Google::Cloud::Dataproc::V1::Cluster`
 # can also be provided.
 # @param update_mask [Google::Protobuf::FieldMask | Hash]
-# Required. Specifies the path, relative to +Cluster+, of
+# Required. Specifies the path, relative to `Cluster`, of
 # the field to update. For example, to change the number of workers
-# in a cluster to 5, the +update_mask+ parameter would be
-# specified as +config.worker_config.num_instances+,
-# and the +PATCH+ request body would specify the new value, as follows:
+# in a cluster to 5, the `update_mask` parameter would be
+# specified as `config.worker_config.num_instances`,
+# and the `PATCH` request body would specify the new value, as follows:
 #
 # {
 # "config":{
@@ -328,8 +328,8 @@ module Google
 # }
 # }
 # Similarly, to change the number of preemptible workers in a cluster to 5,
-# the +update_mask+ parameter would be
-# +config.secondary_worker_config.num_instances+, and the +PATCH+ request
+# the `update_mask` parameter would be
+# `config.secondary_worker_config.num_instances`, and the `PATCH` request
 # body would be set as follows:
 #
 # {
@@ -373,19 +373,19 @@ module Google
 #
 # cluster_controller_client = Google::Cloud::Dataproc::ClusterController.new(version: :v1)
 #
-# # TODO: Initialize +project_id+:
+# # TODO: Initialize `project_id`:
 # project_id = ''
 #
-# # TODO: Initialize +region+:
+# # TODO: Initialize `region`:
 # region = ''
 #
-# # TODO: Initialize +cluster_name+:
+# # TODO: Initialize `cluster_name`:
 # cluster_name = ''
 #
-# # TODO: Initialize +cluster+:
+# # TODO: Initialize `cluster`:
 # cluster = {}
 #
-# # TODO: Initialize +update_mask+:
+# # TODO: Initialize `update_mask`:
 # update_mask = {}
 #
 # # Register a callback during the method call.
@@ -460,13 +460,13 @@ module Google
 #
 # cluster_controller_client = Google::Cloud::Dataproc::ClusterController.new(version: :v1)
 #
-# # TODO: Initialize +project_id+:
+# # TODO: Initialize `project_id`:
 # project_id = ''
 #
-# # TODO: Initialize +region+:
+# # TODO: Initialize `region`:
 # region = ''
 #
-# # TODO: Initialize +cluster_name+:
+# # TODO: Initialize `cluster_name`:
 # cluster_name = ''
 #
 # # Register a callback during the method call.
@@ -540,13 +540,13 @@ module Google
 #
 # cluster_controller_client = Google::Cloud::Dataproc::ClusterController.new(version: :v1)
 #
-# # TODO: Initialize +project_id+:
+# # TODO: Initialize `project_id`:
 # project_id = ''
 #
-# # TODO: Initialize +region+:
+# # TODO: Initialize `region`:
 # region = ''
 #
-# # TODO: Initialize +cluster_name+:
+# # TODO: Initialize `cluster_name`:
 # cluster_name = ''
 # response = cluster_controller_client.get_cluster(project_id, region, cluster_name)
 
@@ -578,15 +578,15 @@ module Google
 #
 # field = value [AND [field = value]] ...
 #
-# where **field** is one of +status.state+, +clusterName+, or +labels.[KEY]+,
-# and +[KEY]+ is a label key. **value** can be +*+ to match all values.
-# +status.state+ can be one of the following: +ACTIVE+, +INACTIVE+,
-# +CREATING+, +RUNNING+, +ERROR+, +DELETING+, or +UPDATING+. +ACTIVE+
-# contains the +CREATING+, +UPDATING+, and +RUNNING+ states. +INACTIVE+
-# contains the +DELETING+ and +ERROR+ states.
-# +clusterName+ is the name of the cluster provided at creation time.
-# Only the logical +AND+ operator is supported; space-separated items are
-# treated as having an implicit +AND+ operator.
+# where **field** is one of `status.state`, `clusterName`, or `labels.[KEY]`,
+# and `[KEY]` is a label key. **value** can be `*` to match all values.
+# `status.state` can be one of the following: `ACTIVE`, `INACTIVE`,
+# `CREATING`, `RUNNING`, `ERROR`, `DELETING`, or `UPDATING`. `ACTIVE`
+# contains the `CREATING`, `UPDATING`, and `RUNNING` states. `INACTIVE`
+# contains the `DELETING` and `ERROR` states.
+# `clusterName` is the name of the cluster provided at creation time.
+# Only the logical `AND` operator is supported; space-separated items are
+# treated as having an implicit `AND` operator.
 #
 # Example filter:
 #
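The filter grammar documented above can be sketched as simple string composition; clause field names come from the docs, while the values are illustrative:

```ruby
# Build a clusters filter from field/value clauses. Clauses are joined
# with AND; per the docs, space-separated items imply AND anyway, but
# the explicit operator is easier to read.
clauses = {
  "status.state" => "ACTIVE",
  "clusterName"  => "mycluster",
  "labels.env"   => "staging",
}
filter = clauses.map { |field, value| "#{field} = #{value}" }.join(" AND ")
```

The resulting string would then be passed as the `filter` argument of a list request.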
@@ -615,10 +615,10 @@ module Google
 #
 # cluster_controller_client = Google::Cloud::Dataproc::ClusterController.new(version: :v1)
 #
-# # TODO: Initialize +project_id+:
+# # TODO: Initialize `project_id`:
 # project_id = ''
 #
-# # TODO: Initialize +region+:
+# # TODO: Initialize `region`:
 # region = ''
 #
 # # Iterate over all results.
@@ -653,7 +653,7 @@ module Google
 
 # Gets cluster diagnostic information.
 # After the operation completes, the Operation.response field
-# contains +DiagnoseClusterOutputLocation+.
+# contains `DiagnoseClusterOutputLocation`.
 #
 # @param project_id [String]
 # Required. The ID of the Google Cloud Platform project that the cluster
@@ -672,13 +672,13 @@ module Google
 #
 # cluster_controller_client = Google::Cloud::Dataproc::ClusterController.new(version: :v1)
 #
-# # TODO: Initialize +project_id+:
+# # TODO: Initialize `project_id`:
 # project_id = ''
 #
-# # TODO: Initialize +region+:
+# # TODO: Initialize `region`:
 # region = ''
 #
-# # TODO: Initialize +cluster_name+:
+# # TODO: Initialize `cluster_name`:
 # cluster_name = ''
 #
 # # Register a callback during the method call.
@@ -88,8 +88,8 @@ module Google
 # @return [Array<Google::Cloud::Dataproc::V1::NodeInitializationAction>]
 # Optional. Commands to execute on each node after config is
 # completed. By default, executables are run on master and all worker nodes.
-# You can test a node's +role+ metadata to run an executable on
-# a master or worker node, as shown below using +curl+ (you can also use +wget+):
+# You can test a node's `role` metadata to run an executable on
+# a master or worker node, as shown below using `curl` (you can also use `wget`):
 #
 # ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
 # if [[ "${ROLE}" == 'Master' ]]; then
@@ -111,22 +111,22 @@ module Google
 #
 # A full URL, partial URI, or short name are valid. Examples:
 #
-# * +https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]+
-# * +projects/[project_id]/zones/[zone]+
-# * +us-central1-f+
+# * `https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]`
+# * `projects/[project_id]/zones/[zone]`
+# * `us-central1-f`
 # @!attribute [rw] network_uri
 # @return [String]
 # Optional. The Google Compute Engine network to be used for machine
 # communications. Cannot be specified with subnetwork_uri. If neither
-# +network_uri+ nor +subnetwork_uri+ is specified, the "default" network of
+# `network_uri` nor `subnetwork_uri` is specified, the "default" network of
 # the project is used, if it exists. Cannot be a "Custom Subnet Network" (see
 # [Using Subnetworks](https://cloud.google.com/compute/docs/subnetworks) for more information).
 #
 # A full URL, partial URI, or short name are valid. Examples:
 #
-# * +https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default+
-# * +projects/[project_id]/regions/global/default+
-# * +default+
+# * `https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default`
+# * `projects/[project_id]/regions/global/default`
+# * `default`
 # @!attribute [rw] subnetwork_uri
 # @return [String]
 # Optional. The Google Compute Engine subnetwork to be used for machine
@@ -134,15 +134,15 @@ module Google
 #
 # A full URL, partial URI, or short name are valid. Examples:
 #
-# * +https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/sub0+
-# * +projects/[project_id]/regions/us-east1/sub0+
-# * +sub0+
+# * `https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/sub0`
+# * `projects/[project_id]/regions/us-east1/sub0`
+# * `sub0`
 # @!attribute [rw] internal_ip_only
 # @return [true, false]
 # Optional. If true, all instances in the cluster will only have internal IP
 # addresses. By default, clusters are not restricted to internal IP addresses,
 # and will have ephemeral external IP addresses assigned to each instance.
-# This +internal_ip_only+ restriction can only be enabled for subnetwork
+# This `internal_ip_only` restriction can only be enabled for subnetwork
 # enabled networks, and all off-cluster dependencies must be configured to be
 # accessible without external IP addresses.
 # @!attribute [rw] service_account
@@ -156,7 +156,7 @@ module Google
 #
 # (see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts
 # for more information).
-# Example: +[account_id]@[project_id].iam.gserviceaccount.com+
+# Example: `[account_id]@[project_id].iam.gserviceaccount.com`
 # @!attribute [rw] service_account_scopes
 # @return [Array<String>]
 # Optional. The URIs of service account scopes to be included in Google
@@ -192,21 +192,21 @@ module Google
 # @!attribute [rw] instance_names
 # @return [Array<String>]
 # Optional. The list of instance names. Cloud Dataproc derives the names from
-# +cluster_name+, +num_instances+, and the instance group if not set by user
+# `cluster_name`, `num_instances`, and the instance group if not set by user
 # (recommended practice is to let Cloud Dataproc derive the name).
 # @!attribute [rw] image_uri
 # @return [String]
 # Output-only. The Google Compute Engine image resource used for cluster
-# instances. Inferred from +SoftwareConfig.image_version+.
+# instances. Inferred from `SoftwareConfig.image_version`.
 # @!attribute [rw] machine_type_uri
 # @return [String]
 # Optional. The Google Compute Engine machine type used for cluster instances.
 #
 # A full URL, partial URI, or short name are valid. Examples:
 #
-# * +https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2+
-# * +projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2+
-# * +n1-standard-2+
+# * `https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2`
+# * `projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2`
+# * `n1-standard-2`
 # @!attribute [rw] disk_config
 # @return [Google::Cloud::Dataproc::V1::DiskConfig]
 # Optional. Disk option config settings.
@@ -246,9 +246,9 @@ module Google
 # /compute/docs/reference/beta/acceleratorTypes)
 #
 # Examples
-# * +https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80+
-# * +projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80+
-# * +nvidia-tesla-k80+
+# * `https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80`
+# * `projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80`
+# * `nvidia-tesla-k80`
 # @!attribute [rw] accelerator_count
 # @return [Integer]
 # The number of the accelerator cards of this type exposed to this instance.
@@ -339,25 +339,25 @@ module Google
 # @!attribute [rw] image_version
 # @return [String]
 # Optional. The version of software inside the cluster. It must match the
-# regular expression +[0-9]+\.[0-9]++. If unspecified, it defaults to the
+# regular expression `[0-9]+\.[0-9]+`. If unspecified, it defaults to the
 # latest version (see [Cloud Dataproc Versioning](https://cloud.google.com/dataproc/versioning)).
 # @!attribute [rw] properties
 # @return [Hash{String => String}]
 # Optional. The properties to set on daemon config files.
 #
-# Property keys are specified in +prefix:property+ format, such as
-# +core:fs.defaultFS+. The following are supported prefixes
+# Property keys are specified in `prefix:property` format, such as
+# `core:fs.defaultFS`. The following are supported prefixes
 # and their mappings:
 #
-# * capacity-scheduler: +capacity-scheduler.xml+
-# * core: +core-site.xml+
-# * distcp: +distcp-default.xml+
-# * hdfs: +hdfs-site.xml+
-# * hive: +hive-site.xml+
-# * mapred: +mapred-site.xml+
-# * pig: +pig.properties+
-# * spark: +spark-defaults.conf+
-# * yarn: +yarn-site.xml+
+# * capacity-scheduler: `capacity-scheduler.xml`
+# * core: `core-site.xml`
+# * distcp: `distcp-default.xml`
+# * hdfs: `hdfs-site.xml`
+# * hive: `hive-site.xml`
+# * mapred: `mapred-site.xml`
+# * pig: `pig.properties`
+# * spark: `spark-defaults.conf`
+# * yarn: `yarn-site.xml`
 #
 # For more information, see
 # [Cluster properties](https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
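The `prefix:property` convention above can be sketched as a lookup: split the key on the first colon, then resolve the prefix to the daemon config file it targets (the mapping table is copied from the list in the hunk; the splitting helper is illustrative):

```ruby
# Mapping of property-key prefixes to the daemon config files they
# target, per the supported-prefixes list above.
PREFIX_TO_FILE = {
  "capacity-scheduler" => "capacity-scheduler.xml",
  "core"               => "core-site.xml",
  "distcp"             => "distcp-default.xml",
  "hdfs"               => "hdfs-site.xml",
  "hive"               => "hive-site.xml",
  "mapred"             => "mapred-site.xml",
  "pig"                => "pig.properties",
  "spark"              => "spark-defaults.conf",
  "yarn"               => "yarn-site.xml",
}.freeze

# Split a "prefix:property" key on the first colon only, since the
# property name itself may contain dots or colons.
key = "core:fs.defaultFS"
prefix, property = key.split(":", 2)
target_file = PREFIX_TO_FILE.fetch(prefix)
```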
@@ -404,11 +404,11 @@ module Google
 # Required. The changes to the cluster.
 # @!attribute [rw] update_mask
 # @return [Google::Protobuf::FieldMask]
-# Required. Specifies the path, relative to +Cluster+, of
+# Required. Specifies the path, relative to `Cluster`, of
 # the field to update. For example, to change the number of workers
-# in a cluster to 5, the +update_mask+ parameter would be
-# specified as +config.worker_config.num_instances+,
-# and the +PATCH+ request body would specify the new value, as follows:
+# in a cluster to 5, the `update_mask` parameter would be
+# specified as `config.worker_config.num_instances`,
+# and the `PATCH` request body would specify the new value, as follows:
 #
 # {
 # "config":{
@@ -418,8 +418,8 @@ module Google
 # }
 # }
 # Similarly, to change the number of preemptible workers in a cluster to 5,
-# the +update_mask+ parameter would be
-# +config.secondary_worker_config.num_instances+, and the +PATCH+ request
+# the `update_mask` parameter would be
+# `config.secondary_worker_config.num_instances`, and the `PATCH` request
 # body would be set as follows:
 #
 # {
@@ -494,15 +494,15 @@ module Google
 #
 # field = value [AND [field = value]] ...
 #
-# where **field** is one of +status.state+, +clusterName+, or +labels.[KEY]+,
-# and +[KEY]+ is a label key. **value** can be +*+ to match all values.
-# +status.state+ can be one of the following: +ACTIVE+, +INACTIVE+,
-# +CREATING+, +RUNNING+, +ERROR+, +DELETING+, or +UPDATING+. +ACTIVE+
-# contains the +CREATING+, +UPDATING+, and +RUNNING+ states. +INACTIVE+
-# contains the +DELETING+ and +ERROR+ states.
-# +clusterName+ is the name of the cluster provided at creation time.
-# Only the logical +AND+ operator is supported; space-separated items are
-# treated as having an implicit +AND+ operator.
+# where **field** is one of `status.state`, `clusterName`, or `labels.[KEY]`,
+# and `[KEY]` is a label key. **value** can be `*` to match all values.
+# `status.state` can be one of the following: `ACTIVE`, `INACTIVE`,
+# `CREATING`, `RUNNING`, `ERROR`, `DELETING`, or `UPDATING`. `ACTIVE`
+# contains the `CREATING`, `UPDATING`, and `RUNNING` states. `INACTIVE`
+# contains the `DELETING` and `ERROR` states.
+# `clusterName` is the name of the cluster provided at creation time.
+# Only the logical `AND` operator is supported; space-separated items are
+# treated as having an implicit `AND` operator.
 #
 # Example filter:
 #
@@ -524,7 +524,7 @@ module Google
 # @return [String]
 # Output-only. This token is included in the response if there are more
 # results to fetch. To fetch additional results, provide this value as the
-# +page_token+ in a subsequent +ListClustersRequest+.
+# `page_token` in a subsequent `ListClustersRequest`.
 class ListClustersResponse; end
 
 # A request to collect cluster diagnostic information.
@@ -71,11 +71,11 @@ module Google
 # @!attribute [rw] main_class
 # @return [String]
 # The name of the driver's main class. The jar file containing the class
-# must be in the default CLASSPATH or specified in +jar_file_uris+.
+# must be in the default CLASSPATH or specified in `jar_file_uris`.
 # @!attribute [rw] args
 # @return [Array<String>]
 # Optional. The arguments to pass to the driver. Do not
-# include arguments, such as +-libjars+ or +-Dfoo=bar+, that can be set as job
+# include arguments, such as `-libjars` or `-Dfoo=bar`, that can be set as job
 # properties, since a collision may occur that causes an incorrect job
 # submission.
 # @!attribute [rw] jar_file_uris
@@ -111,11 +111,11 @@ module Google
 # @!attribute [rw] main_class
 # @return [String]
 # The name of the driver's main class. The jar file that contains the class
-# must be in the default CLASSPATH or specified in +jar_file_uris+.
+# must be in the default CLASSPATH or specified in `jar_file_uris`.
 # @!attribute [rw] args
 # @return [Array<String>]
 # Optional. The arguments to pass to the driver. Do not include arguments,
-# such as +--conf+, that can be set as job properties, since a collision may
+# such as `--conf`, that can be set as job properties, since a collision may
 # occur that causes an incorrect job submission.
 # @!attribute [rw] jar_file_uris
 # @return [Array<String>]
@@ -151,7 +151,7 @@ module Google
 # @!attribute [rw] args
 # @return [Array<String>]
 # Optional. The arguments to pass to the driver. Do not include arguments,
-# such as +--conf+, that can be set as job properties, since a collision may
+# such as `--conf`, that can be set as job properties, since a collision may
 # occur that causes an incorrect job submission.
 # @!attribute [rw] python_file_uris
 # @return [Array<String>]
@@ -210,12 +210,12 @@ module Google
 # @!attribute [rw] continue_on_failure
 # @return [true, false]
 # Optional. Whether to continue executing queries if a query fails.
-# The default value is +false+. Setting to +true+ can be useful when executing
+# The default value is `false`. Setting to `true` can be useful when executing
 # independent parallel queries.
 # @!attribute [rw] script_variables
 # @return [Hash{String => String}]
 # Optional. Mapping of query variable names to values (equivalent to the
-# Hive command: +SET name="value";+).
+# Hive command: `SET name="value";`).
 # @!attribute [rw] properties
 # @return [Hash{String => String}]
 # Optional. A mapping of property names and values, used to configure Hive.
@@ -240,7 +240,7 @@ module Google
 # @!attribute [rw] script_variables
 # @return [Hash{String => String}]
 # Optional. Mapping of query variable names to values (equivalent to the
-# Spark SQL command: SET +name="value";+).
+# Spark SQL command: SET `name="value";`).
 # @!attribute [rw] properties
 # @return [Hash{String => String}]
 # Optional. A mapping of property names to values, used to configure
@@ -265,12 +265,12 @@ module Google
 # @!attribute [rw] continue_on_failure
 # @return [true, false]
 # Optional. Whether to continue executing queries if a query fails.
-# The default value is +false+. Setting to +true+ can be useful when executing
+# The default value is `false`. Setting to `true` can be useful when executing
 # independent parallel queries.
 # @!attribute [rw] script_variables
 # @return [Hash{String => String}]
 # Optional. Mapping of query variable names to values (equivalent to the Pig
-# command: +name=[value]+).
+# command: `name=[value]`).
 # @!attribute [rw] properties
 # @return [Hash{String => String}]
 # Optional. A mapping of property names to values, used to configure Pig.
@@ -492,7 +492,7 @@ module Google
  # @return [String]
  # Output-only. If present, the location of miscellaneous control files
  # which may be used as part of job setup and handling. If not present,
- # control files may be placed in the same location as +driver_output_uri+.
+ # control files may be placed in the same location as `driver_output_uri`.
  # @!attribute [rw] labels
  # @return [Hash{String => String}]
  # Optional. The labels to associate with this job.
@@ -572,7 +572,7 @@ module Google
  # Optional. Specifies enumerated categories of jobs to list.
  # (default = match ALL jobs).
  #
- # If +filter+ is provided, +jobStateMatcher+ will be ignored.
+ # If `filter` is provided, `jobStateMatcher` will be ignored.
  # @!attribute [rw] filter
  # @return [String]
  # Optional. A filter constraining the jobs to list. Filters are
@@ -580,11 +580,11 @@ module Google
  #
  # [field = value] AND [field [= value]] ...
  #
- # where **field** is +status.state+ or +labels.[KEY]+, and +[KEY]+ is a label
- # key. **value** can be +*+ to match all values.
- # +status.state+ can be either +ACTIVE+ or +NON_ACTIVE+.
- # Only the logical +AND+ operator is supported; space-separated items are
- # treated as having an implicit +AND+ operator.
+ # where **field** is `status.state` or `labels.[KEY]`, and `[KEY]` is a label
+ # key. **value** can be `*` to match all values.
+ # `status.state` can be either `ACTIVE` or `NON_ACTIVE`.
+ # Only the logical `AND` operator is supported; space-separated items are
+ # treated as having an implicit `AND` operator.
  #
  # Example filter:
  #
@@ -623,7 +623,7 @@ module Google
  # Required. Specifies the path, relative to <code>Job</code>, of
  # the field to update. For example, to update the labels of a Job the
  # <code>update_mask</code> parameter would be specified as
- # <code>labels</code>, and the +PATCH+ request body would specify the new
+ # <code>labels</code>, and the `PATCH` request body would specify the new
  # value. <strong>Note:</strong> Currently, <code>labels</code> is the only
  # field that can be updated.
  class UpdateJobRequest; end
@@ -636,7 +636,7 @@ module Google
  # @return [String]
  # Optional. This token is included in the response if there are more results
  # to fetch. To fetch additional results, provide this value as the
- # +page_token+ in a subsequent <code>ListJobsRequest</code>.
+ # `page_token` in a subsequent <code>ListJobsRequest</code>.
  class ListJobsResponse; end
 
  # A request to cancel a job.
@@ -21,7 +21,7 @@ module Google
  # @return [String]
  # The server-assigned name, which is only unique within the same service that
  # originally returns it. If you use the default HTTP mapping, the
- # +name+ should have the format of +operations/some/unique/name+.
+ # `name` should have the format of `operations/some/unique/name`.
  # @!attribute [rw] metadata
  # @return [Google::Protobuf::Any]
  # Service-specific metadata associated with the operation. It typically
@@ -30,8 +30,8 @@ module Google
  # long-running operation should document the metadata type, if any.
  # @!attribute [rw] done
  # @return [true, false]
- # If the value is +false+, it means the operation is still in progress.
- # If true, the operation is completed, and either +error+ or +response+ is
+ # If the value is `false`, it means the operation is still in progress.
+ # If true, the operation is completed, and either `error` or `response` is
  # available.
  # @!attribute [rw] error
  # @return [Google::Rpc::Status]
@@ -39,13 +39,13 @@ module Google
  # @!attribute [rw] response
  # @return [Google::Protobuf::Any]
  # The normal response of the operation in case of success. If the original
- # method returns no data on success, such as +Delete+, the response is
- # +google.protobuf.Empty+. If the original method is standard
- # +Get+/+Create+/+Update+, the response should be the resource. For other
- # methods, the response should have the type +XxxResponse+, where +Xxx+
+ # method returns no data on success, such as `Delete`, the response is
+ # `google.protobuf.Empty`. If the original method is standard
+ # `Get`/`Create`/`Update`, the response should be the resource. For other
+ # methods, the response should have the type `XxxResponse`, where `Xxx`
  # is the original method name. For example, if the original method name
- # is +TakeSnapshot()+, the inferred response type is
- # +TakeSnapshotResponse+.
+ # is `TakeSnapshot()`, the inferred response type is
+ # `TakeSnapshotResponse`.
  class Operation; end
 
  # The request message for {Google::Longrunning::Operations::GetOperation Operations::GetOperation}.
@@ -15,7 +15,7 @@
 
  module Google
  module Protobuf
- # +Any+ contains an arbitrary serialized protocol buffer message along with a
+ # `Any` contains an arbitrary serialized protocol buffer message along with a
  # URL that describes the type of the serialized message.
  #
  # Protobuf library provides support to pack/unpack Any values in the form
@@ -69,9 +69,9 @@ module Google
  #
  # = JSON
  #
- # The JSON representation of an +Any+ value uses the regular
+ # The JSON representation of an `Any` value uses the regular
  # representation of the deserialized, embedded message, with an
- # additional field +@type+ which contains the type URL. Example:
+ # additional field `@type` which contains the type URL. Example:
  #
  # package google.profile;
  # message Person {
@@ -87,7 +87,7 @@ module Google
  #
  # If the embedded message type is well-known and has a custom JSON
  # representation, that representation will be embedded adding a field
- # +value+ which holds the custom JSON in addition to the +@type+
+ # `value` which holds the custom JSON in addition to the `@type`
  # field. Example (for message {Google::Protobuf::Duration}):
  #
  # {
@@ -99,15 +99,15 @@ module Google
  # A URL/resource name that uniquely identifies the type of the serialized
  # protocol buffer message. The last segment of the URL's path must represent
  # the fully qualified name of the type (as in
- # +path/google.protobuf.Duration+). The name should be in a canonical form
+ # `path/google.protobuf.Duration`). The name should be in a canonical form
  # (e.g., leading "." is not accepted).
  #
  # In practice, teams usually precompile into the binary all types that they
  # expect it to use in the context of Any. However, for URLs which use the
- # scheme +http+, +https+, or no scheme, one can optionally set up a type
+ # scheme `http`, `https`, or no scheme, one can optionally set up a type
  # server that maps type URLs to message definitions as follows:
  #
- # * If no scheme is provided, +https+ is assumed.
+ # * If no scheme is provided, `https` is assumed.
  # * An HTTP GET on the URL must yield a {Google::Protobuf::Type}
  # value in binary format, or produce an error.
  # * Applications are allowed to cache lookup results based on the
@@ -120,7 +120,7 @@ module Google
  # protobuf release, and it is not used for type URLs beginning with
  # type.googleapis.com.
  #
- # Schemes other than +http+, +https+ (or the empty scheme) might be
+ # Schemes other than `http`, `https` (or the empty scheme) might be
  # used with implementation specific semantics.
  # @!attribute [rw] value
  # @return [String]
@@ -82,9 +82,9 @@ module Google
  # @return [Integer]
  # Signed fractions of a second at nanosecond resolution of the span
  # of time. Durations less than one second are represented with a 0
- # +seconds+ field and a positive or negative +nanos+ field. For durations
- # of one second or more, a non-zero value for the +nanos+ field must be
- # of the same sign as the +seconds+ field. Must be from -999,999,999
+ # `seconds` field and a positive or negative `nanos` field. For durations
+ # of one second or more, a non-zero value for the `nanos` field must be
+ # of the same sign as the `seconds` field. Must be from -999,999,999
  # to +999,999,999 inclusive.
  class Duration; end
  end
@@ -23,7 +23,7 @@ module Google
  # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
  # }
  #
- # The JSON representation for +Empty+ is empty JSON object +{}+.
+ # The JSON representation for `Empty` is empty JSON object `{}`.
  class Empty; end
  end
  end
@@ -15,14 +15,14 @@
 
  module Google
  module Protobuf
- # +FieldMask+ represents a set of symbolic field paths, for example:
+ # `FieldMask` represents a set of symbolic field paths, for example:
  #
  # paths: "f.a"
  # paths: "f.b.d"
  #
- # Here +f+ represents a field in some root message, +a+ and +b+
- # fields in the message found in +f+, and +d+ a field found in the
- # message in +f.b+.
+ # Here `f` represents a field in some root message, `a` and `b`
+ # fields in the message found in `f`, and `d` a field found in the
+ # message in `f.b`.
  #
  # Field masks are used to specify a subset of fields that should be
  # returned by a get operation or modified by an update operation.
@@ -85,7 +85,7 @@ module Google
  #
  # If a repeated field is specified for an update operation, the existing
  # repeated values in the target resource will be overwritten by the new values.
- # Note that a repeated field is only allowed in the last position of a +paths+
+ # Note that a repeated field is only allowed in the last position of a `paths`
  # string.
  #
  # If a sub-message is specified in the last position of the field mask for an
@@ -177,7 +177,7 @@ module Google
  # string address = 2;
  # }
  #
- # In proto a field mask for +Profile+ may look as such:
+ # In proto a field mask for `Profile` may look as such:
  #
  # mask {
  # paths: "user.display_name"
@@ -221,7 +221,7 @@ module Google
  #
  # The implementation of any API method which has a FieldMask type field in the
  # request should verify the included field paths, and return an
- # +INVALID_ARGUMENT+ error if any path is duplicated or unmappable.
+ # `INVALID_ARGUMENT` error if any path is duplicated or unmappable.
  # @!attribute [rw] paths
  # @return [Array<String>]
  # The set of field mask paths.
@@ -29,13 +29,13 @@ module Google
  #
  # = Examples
  #
- # Example 1: Compute Timestamp from POSIX +time()+.
+ # Example 1: Compute Timestamp from POSIX `time()`.
  #
  # Timestamp timestamp;
  # timestamp.set_seconds(time(NULL));
  # timestamp.set_nanos(0);
  #
- # Example 2: Compute Timestamp from POSIX +gettimeofday()+.
+ # Example 2: Compute Timestamp from POSIX `gettimeofday()`.
  #
  # struct timeval tv;
  # gettimeofday(&tv, NULL);
@@ -44,7 +44,7 @@ module Google
  # timestamp.set_seconds(tv.tv_sec);
  # timestamp.set_nanos(tv.tv_usec * 1000);
  #
- # Example 3: Compute Timestamp from Win32 +GetSystemTimeAsFileTime()+.
+ # Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
  #
  # FILETIME ft;
  # GetSystemTimeAsFileTime(&ft);
@@ -56,7 +56,7 @@ module Google
  # timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
  # timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
  #
- # Example 4: Compute Timestamp from Java +System.currentTimeMillis()+.
+ # Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
  #
  # long millis = System.currentTimeMillis();
  #
@@ -87,10 +87,10 @@ module Google
  #
  # In JavaScript, one can convert a Date object to this format using the
  # standard [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString]
- # method. In Python, a standard +datetime.datetime+ object can be converted
- # to this format using [+strftime+](https://docs.python.org/2/library/time.html#time.strftime)
+ # method. In Python, a standard `datetime.datetime` object can be converted
+ # to this format using [`strftime`](https://docs.python.org/2/library/time.html#time.strftime)
  # with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one
- # can use the Joda Time's [+ISODateTimeFormat.dateTime()+](
+ # can use the Joda Time's [`ISODateTimeFormat.dateTime()`](
  # http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime--
  # ) to obtain a formatter capable of generating timestamps in this format.
  # @!attribute [rw] seconds
@@ -15,7 +15,7 @@
 
  module Google
  module Rpc
- # The +Status+ type defines a logical error model that is suitable for different
+ # The `Status` type defines a logical error model that is suitable for different
  # programming environments, including REST APIs and RPC APIs. It is used by
  # [gRPC](https://github.com/grpc). The error model is designed to be:
  #
@@ -24,7 +24,7 @@ module Google
  #
  # = Overview
  #
- # The +Status+ message contains three pieces of data: error code, error message,
+ # The `Status` message contains three pieces of data: error code, error message,
  # and error details. The error code should be an enum value of
  # {Google::Rpc::Code}, but it may accept additional error codes if needed. The
  # error message should be a developer-facing English message that helps
@@ -32,40 +32,40 @@ module Google
  # error message is needed, put the localized message in the error details or
  # localize it in the client. The optional error details may contain arbitrary
  # information about the error. There is a predefined set of error detail types
- # in the package +google.rpc+ that can be used for common error conditions.
+ # in the package `google.rpc` that can be used for common error conditions.
  #
  # = Language mapping
  #
- # The +Status+ message is the logical representation of the error model, but it
- # is not necessarily the actual wire format. When the +Status+ message is
+ # The `Status` message is the logical representation of the error model, but it
+ # is not necessarily the actual wire format. When the `Status` message is
  # exposed in different client libraries and different wire protocols, it can be
  # mapped differently. For example, it will likely be mapped to some exceptions
  # in Java, but more likely mapped to some error codes in C.
  #
  # = Other uses
  #
- # The error model and the +Status+ message can be used in a variety of
+ # The error model and the `Status` message can be used in a variety of
  # environments, either with or without APIs, to provide a
  # consistent developer experience across different environments.
  #
  # Example uses of this error model include:
  #
  # * Partial errors. If a service needs to return partial errors to the client,
- # it may embed the +Status+ in the normal response to indicate the partial
+ # it may embed the `Status` in the normal response to indicate the partial
  # errors.
  #
  # * Workflow errors. A typical workflow has multiple steps. Each step may
- # have a +Status+ message for error reporting.
+ # have a `Status` message for error reporting.
  #
  # * Batch operations. If a client uses batch request and batch response, the
- # +Status+ message should be used directly inside batch response, one for
+ # `Status` message should be used directly inside batch response, one for
  # each error sub-response.
  #
  # * Asynchronous operations. If an API call embeds asynchronous operation
  # results in its response, the status of those operations should be
- # represented directly using the +Status+ message.
+ # represented directly using the `Status` message.
  #
- # * Logging. If some API errors are stored in logs, the message +Status+ could
+ # * Logging. If some API errors are stored in logs, the message `Status` could
  # be used directly after any stripping needed for security/privacy reasons.
  # @!attribute [rw] code
  # @return [Integer]
@@ -228,13 +228,13 @@ module Google
  #
  # job_controller_client = Google::Cloud::Dataproc::JobController.new(version: :v1)
  #
- # # TODO: Initialize +project_id+:
+ # # TODO: Initialize `project_id`:
  # project_id = ''
  #
- # # TODO: Initialize +region+:
+ # # TODO: Initialize `region`:
  # region = ''
  #
- # # TODO: Initialize +job+:
+ # # TODO: Initialize `job`:
  # job = {}
  # response = job_controller_client.submit_job(project_id, region, job)
 
@@ -275,13 +275,13 @@ module Google
  #
  # job_controller_client = Google::Cloud::Dataproc::JobController.new(version: :v1)
  #
- # # TODO: Initialize +project_id+:
+ # # TODO: Initialize `project_id`:
  # project_id = ''
  #
- # # TODO: Initialize +region+:
+ # # TODO: Initialize `region`:
  # region = ''
  #
- # # TODO: Initialize +job_id+:
+ # # TODO: Initialize `job_id`:
  # job_id = ''
  # response = job_controller_client.get_job(project_id, region, job_id)
 
@@ -320,18 +320,18 @@ module Google
  # Optional. Specifies enumerated categories of jobs to list.
  # (default = match ALL jobs).
  #
- # If +filter+ is provided, +jobStateMatcher+ will be ignored.
+ # If `filter` is provided, `jobStateMatcher` will be ignored.
  # @param filter [String]
  # Optional. A filter constraining the jobs to list. Filters are
  # case-sensitive and have the following syntax:
  #
  # [field = value] AND [field [= value]] ...
  #
- # where **field** is +status.state+ or +labels.[KEY]+, and +[KEY]+ is a label
- # key. **value** can be +*+ to match all values.
- # +status.state+ can be either +ACTIVE+ or +NON_ACTIVE+.
- # Only the logical +AND+ operator is supported; space-separated items are
- # treated as having an implicit +AND+ operator.
+ # where **field** is `status.state` or `labels.[KEY]`, and `[KEY]` is a label
+ # key. **value** can be `*` to match all values.
+ # `status.state` can be either `ACTIVE` or `NON_ACTIVE`.
+ # Only the logical `AND` operator is supported; space-separated items are
+ # treated as having an implicit `AND` operator.
  #
  # Example filter:
  #
@@ -353,10 +353,10 @@ module Google
  #
  # job_controller_client = Google::Cloud::Dataproc::JobController.new(version: :v1)
  #
- # # TODO: Initialize +project_id+:
+ # # TODO: Initialize `project_id`:
  # project_id = ''
  #
- # # TODO: Initialize +region+:
+ # # TODO: Initialize `region`:
  # region = ''
  #
  # # Iterate over all results.
@@ -410,7 +410,7 @@ module Google
  # Required. Specifies the path, relative to <code>Job</code>, of
  # the field to update. For example, to update the labels of a Job the
  # <code>update_mask</code> parameter would be specified as
- # <code>labels</code>, and the +PATCH+ request body would specify the new
+ # <code>labels</code>, and the `PATCH` request body would specify the new
  # value. <strong>Note:</strong> Currently, <code>labels</code> is the only
  # field that can be updated.
  # A hash of the same form as `Google::Protobuf::FieldMask`
@@ -428,19 +428,19 @@ module Google
  #
  # job_controller_client = Google::Cloud::Dataproc::JobController.new(version: :v1)
  #
- # # TODO: Initialize +project_id+:
+ # # TODO: Initialize `project_id`:
  # project_id = ''
  #
- # # TODO: Initialize +region+:
+ # # TODO: Initialize `region`:
  # region = ''
  #
- # # TODO: Initialize +job_id+:
+ # # TODO: Initialize `job_id`:
  # job_id = ''
  #
- # # TODO: Initialize +job+:
+ # # TODO: Initialize `job`:
  # job = {}
  #
- # # TODO: Initialize +update_mask+:
+ # # TODO: Initialize `update_mask`:
  # update_mask = {}
  # response = job_controller_client.update_job(project_id, region, job_id, job, update_mask)
 
@@ -488,13 +488,13 @@ module Google
  #
  # job_controller_client = Google::Cloud::Dataproc::JobController.new(version: :v1)
  #
- # # TODO: Initialize +project_id+:
+ # # TODO: Initialize `project_id`:
  # project_id = ''
  #
- # # TODO: Initialize +region+:
+ # # TODO: Initialize `region`:
  # region = ''
  #
- # # TODO: Initialize +job_id+:
+ # # TODO: Initialize `job_id`:
  # job_id = ''
  # response = job_controller_client.cancel_job(project_id, region, job_id)
 
@@ -514,7 +514,7 @@ module Google
  end
 
  # Deletes the job from the project. If the job is active, the delete fails,
- # and the response returns +FAILED_PRECONDITION+.
+ # and the response returns `FAILED_PRECONDITION`.
  #
  # @param project_id [String]
  # Required. The ID of the Google Cloud Platform project that the job
@@ -535,13 +535,13 @@ module Google
  #
  # job_controller_client = Google::Cloud::Dataproc::JobController.new(version: :v1)
  #
- # # TODO: Initialize +project_id+:
+ # # TODO: Initialize `project_id`:
  # project_id = ''
  #
- # # TODO: Initialize +region+:
+ # # TODO: Initialize `region`:
  # region = ''
  #
- # # TODO: Initialize +job_id+:
+ # # TODO: Initialize `job_id`:
  # job_id = ''
  # job_controller_client.delete_job(project_id, region, job_id)
 
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: google-cloud-dataproc
 version: !ruby/object:Gem::Version
- version: 0.2.1
+ version: 0.2.2
 platform: ruby
 authors:
 - Google LLC
 autorequire:
 bindir: bin
 cert_chain: []
- date: 2018-09-10 00:00:00.000000000 Z
+ date: 2018-09-21 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
 name: google-gax
@@ -120,13 +120,12 @@ files:
 - lib/google/cloud/dataproc/v1/doc/google/protobuf/field_mask.rb
 - lib/google/cloud/dataproc/v1/doc/google/protobuf/timestamp.rb
 - lib/google/cloud/dataproc/v1/doc/google/rpc/status.rb
- - lib/google/cloud/dataproc/v1/doc/overview.rb
 - lib/google/cloud/dataproc/v1/job_controller_client.rb
 - lib/google/cloud/dataproc/v1/job_controller_client_config.json
 - lib/google/cloud/dataproc/v1/jobs_pb.rb
 - lib/google/cloud/dataproc/v1/jobs_services_pb.rb
 - lib/google/cloud/dataproc/v1/operations_pb.rb
- homepage: https://github.com/GoogleCloudPlatform/google-cloud-ruby/tree/master/google-cloud-dataproc
+ homepage: https://github.com/googleapis/google-cloud-ruby/tree/master/google-cloud-dataproc
 licenses:
 - Apache-2.0
 metadata: {}
@@ -1,103 +0,0 @@
1
- # Copyright 2018 Google LLC
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # https://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
-
16
- module Google
17
- module Cloud
18
- # rubocop:disable LineLength
19
-
20
- ##
21
- # # Ruby Client for Google Cloud Dataproc API ([Beta](https://github.com/GoogleCloudPlatform/google-cloud-ruby#versioning))
22
- #
23
- # [Google Cloud Dataproc API][Product Documentation]:
24
- # Manages Hadoop-based clusters and jobs on Google Cloud Platform.
25
- # - [Product Documentation][]
26
- #
27
- # ## Quick Start
28
- # In order to use this library, you first need to go through the following
29
- # steps:
30
- #
31
- # 1. [Select or create a Cloud Platform project.](https://console.cloud.google.com/project)
32
- # 2. [Enable billing for your project.](https://cloud.google.com/billing/docs/how-to/modify-project#enable_billing_for_a_project)
33
- # 3. [Enable the Google Cloud Dataproc API.](https://console.cloud.google.com/apis/library/dataproc.googleapis.com)
34
- # 4. [Setup Authentication.](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud/master/guides/authentication)
35
- #
36
- # ### Installation
37
- # ```
38
- # $ gem install google-cloud-dataproc
39
- # ```
40
- #
41
- # ### Preview
42
- # #### ClusterControllerClient
43
- # ```rb
44
- # require "google/cloud/dataproc"
45
- #
46
- # cluster_controller_client = Google::Cloud::Dataproc::ClusterController.new
47
- # project_id_2 = project_id
48
- # region = "global"
49
- #
50
- # # Iterate over all results.
51
- # cluster_controller_client.list_clusters(project_id_2, region).each do |element|
52
- # # Process element.
53
- # end
54
- #
55
- # # Or iterate over results one page at a time.
56
- # cluster_controller_client.list_clusters(project_id_2, region).each_page do |page|
57
- # # Process each page at a time.
58
- # page.each do |element|
59
- # # Process element.
60
- # end
61
- # end
62
- # ```
63
- #
64
- # ### Next Steps
65
- # - Read the [Google Cloud Dataproc API Product documentation][Product Documentation]
66
- # to learn more about the product and see How-to Guides.
67
- # - View this [repository's main README](https://github.com/GoogleCloudPlatform/google-cloud-ruby/blob/master/README.md)
68
- # to see the full list of Cloud APIs that we cover.
69
- #
70
- # [Product Documentation]: https://cloud.google.com/dataproc
71
- #
72
- # ## Enabling Logging
73
- #
74
- # To enable logging for this library, set the logger for the underlying [gRPC](https://github.com/grpc/grpc/tree/master/src/ruby) library.
75
- # The logger that you set may be a Ruby stdlib [`Logger`](https://ruby-doc.org/stdlib-2.5.0/libdoc/logger/rdoc/Logger.html) as shown below,
76
- # or a [`Google::Cloud::Logging::Logger`](https://googlecloudplatform.github.io/google-cloud-ruby/#/docs/google-cloud-logging/latest/google/cloud/logging/logger)
77
- # that will write logs to [Stackdriver Logging](https://cloud.google.com/logging/). See [grpc/logconfig.rb](https://github.com/grpc/grpc/blob/master/src/ruby/lib/grpc/logconfig.rb)
78
- # and the gRPC [spec_helper.rb](https://github.com/grpc/grpc/blob/master/src/ruby/spec/spec_helper.rb) for additional information.
79
- #
80
- # Configuring a Ruby stdlib logger:
81
- #
82
- # ```ruby
83
- # require "logger"
84
- #
85
- # module MyLogger
86
- # LOGGER = Logger.new $stderr, level: Logger::WARN
87
- # def logger
88
- # LOGGER
89
- # end
90
- # end
91
- #
92
- # # Define a gRPC module-level logger method before grpc/logconfig.rb loads.
93
- # module GRPC
94
- # extend MyLogger
95
- # end
96
- # ```
97
- #
98
- module Dataproc
99
- module V1
100
- end
101
- end
102
- end
103
- end
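
Nearly every documentation hunk in this release is the same mechanical change: RDoc-style inline code markup (`+word+`) replaced by Markdown backticks. A conversion of that kind can be sketched with a small, hypothetical Ruby helper (not part of the gem or of the tooling that actually produced these files):

```ruby
# Hypothetical helper: convert RDoc inline-code spans (+word+) to the
# Markdown backtick form (`word`) used in the docs after this release.
# The regex requires a non-space, non-plus character at each boundary of
# the span, so a lone sign prefix such as "+999,999,999" (seen in the
# Duration docs, and left unchanged in the diff above) is not touched.
def rdoc_to_markdown(line)
  line.gsub(/\+([^\s+](?:[^+]*[^\s+])?)\+/) { "`#{Regexp.last_match(1)}`" }
end

puts rdoc_to_markdown('# The default value is +false+.')
```

This is only an illustration of the transformation visible in the hunks; the real conversion was done upstream in the documentation generator, not by a script shipped with the gem.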