google-api-client 0.24.2 → 0.24.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (185)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +68 -0
  3. data/README.md +9 -0
  4. data/generated/google/apis/adexchangebuyer2_v2beta1.rb +5 -4
  5. data/generated/google/apis/adexchangebuyer2_v2beta1/classes.rb +90 -87
  6. data/generated/google/apis/adexchangebuyer2_v2beta1/service.rb +17 -15
  7. data/generated/google/apis/admin_directory_v1.rb +1 -1
  8. data/generated/google/apis/admin_directory_v1/classes.rb +155 -0
  9. data/generated/google/apis/admin_directory_v1/representations.rb +82 -0
  10. data/generated/google/apis/alertcenter_v1beta1.rb +31 -0
  11. data/generated/google/apis/alertcenter_v1beta1/classes.rb +835 -0
  12. data/generated/google/apis/alertcenter_v1beta1/representations.rb +394 -0
  13. data/generated/google/apis/alertcenter_v1beta1/service.rb +302 -0
  14. data/generated/google/apis/androiddeviceprovisioning_v1.rb +1 -1
  15. data/generated/google/apis/androiddeviceprovisioning_v1/classes.rb +37 -0
  16. data/generated/google/apis/androiddeviceprovisioning_v1/representations.rb +6 -0
  17. data/generated/google/apis/androiddeviceprovisioning_v1/service.rb +8 -1
  18. data/generated/google/apis/androidenterprise_v1.rb +1 -1
  19. data/generated/google/apis/androidenterprise_v1/classes.rb +8 -4
  20. data/generated/google/apis/androidenterprise_v1/representations.rb +1 -0
  21. data/generated/google/apis/androidpublisher_v2.rb +1 -1
  22. data/generated/google/apis/androidpublisher_v2/service.rb +5 -1
  23. data/generated/google/apis/androidpublisher_v3.rb +1 -1
  24. data/generated/google/apis/androidpublisher_v3/service.rb +5 -1
  25. data/generated/google/apis/appengine_v1.rb +1 -1
  26. data/generated/google/apis/appengine_v1/classes.rb +8 -1
  27. data/generated/google/apis/appengine_v1/representations.rb +1 -0
  28. data/generated/google/apis/appengine_v1beta.rb +1 -1
  29. data/generated/google/apis/appengine_v1beta/classes.rb +1 -1
  30. data/generated/google/apis/bigquerydatatransfer_v1.rb +1 -1
  31. data/generated/google/apis/bigquerydatatransfer_v1/classes.rb +6 -5
  32. data/generated/google/apis/bigquerydatatransfer_v1/service.rb +12 -10
  33. data/generated/google/apis/calendar_v3.rb +1 -1
  34. data/generated/google/apis/calendar_v3/service.rb +52 -18
  35. data/generated/google/apis/cloudasset_v1beta1.rb +34 -0
  36. data/generated/google/apis/cloudasset_v1beta1/classes.rb +798 -0
  37. data/generated/google/apis/cloudasset_v1beta1/representations.rb +263 -0
  38. data/generated/google/apis/cloudasset_v1beta1/service.rb +313 -0
  39. data/generated/google/apis/cloudbuild_v1.rb +1 -1
  40. data/generated/google/apis/cloudbuild_v1/classes.rb +42 -5
  41. data/generated/google/apis/cloudbuild_v1/representations.rb +6 -0
  42. data/generated/google/apis/cloudiot_v1.rb +1 -1
  43. data/generated/google/apis/cloudiot_v1/classes.rb +59 -0
  44. data/generated/google/apis/cloudiot_v1/representations.rb +28 -0
  45. data/generated/google/apis/cloudiot_v1/service.rb +94 -0
  46. data/generated/google/apis/composer_v1.rb +1 -1
  47. data/generated/google/apis/composer_v1/classes.rb +1 -0
  48. data/generated/google/apis/composer_v1beta1.rb +1 -1
  49. data/generated/google/apis/composer_v1beta1/classes.rb +34 -5
  50. data/generated/google/apis/composer_v1beta1/representations.rb +1 -0
  51. data/generated/google/apis/compute_alpha.rb +1 -1
  52. data/generated/google/apis/compute_alpha/classes.rb +227 -48
  53. data/generated/google/apis/compute_alpha/representations.rb +84 -1
  54. data/generated/google/apis/compute_alpha/service.rb +50 -10
  55. data/generated/google/apis/compute_beta.rb +1 -1
  56. data/generated/google/apis/compute_beta/classes.rb +593 -77
  57. data/generated/google/apis/compute_beta/representations.rb +224 -18
  58. data/generated/google/apis/compute_beta/service.rb +174 -3
  59. data/generated/google/apis/compute_v1.rb +1 -1
  60. data/generated/google/apis/compute_v1/classes.rb +41 -18
  61. data/generated/google/apis/compute_v1/representations.rb +3 -0
  62. data/generated/google/apis/content_v2.rb +1 -1
  63. data/generated/google/apis/content_v2/classes.rb +372 -119
  64. data/generated/google/apis/content_v2/representations.rb +157 -39
  65. data/generated/google/apis/content_v2/service.rb +101 -11
  66. data/generated/google/apis/content_v2sandbox.rb +1 -1
  67. data/generated/google/apis/content_v2sandbox/classes.rb +372 -119
  68. data/generated/google/apis/content_v2sandbox/representations.rb +157 -39
  69. data/generated/google/apis/content_v2sandbox/service.rb +90 -0
  70. data/generated/google/apis/customsearch_v1.rb +1 -1
  71. data/generated/google/apis/dataflow_v1b3.rb +1 -1
  72. data/generated/google/apis/dataflow_v1b3/classes.rb +7 -0
  73. data/generated/google/apis/dataflow_v1b3/representations.rb +1 -0
  74. data/generated/google/apis/dataproc_v1.rb +1 -1
  75. data/generated/google/apis/dataproc_v1/classes.rb +12 -0
  76. data/generated/google/apis/dataproc_v1/representations.rb +2 -0
  77. data/generated/google/apis/dataproc_v1beta2.rb +1 -1
  78. data/generated/google/apis/dataproc_v1beta2/classes.rb +21 -6
  79. data/generated/google/apis/dataproc_v1beta2/representations.rb +2 -0
  80. data/generated/google/apis/datastore_v1.rb +1 -1
  81. data/generated/google/apis/datastore_v1/classes.rb +2 -2
  82. data/generated/google/apis/datastore_v1beta3.rb +1 -1
  83. data/generated/google/apis/datastore_v1beta3/classes.rb +2 -2
  84. data/generated/google/apis/dlp_v2.rb +1 -1
  85. data/generated/google/apis/dlp_v2/classes.rb +110 -5
  86. data/generated/google/apis/dlp_v2/representations.rb +17 -0
  87. data/generated/google/apis/dlp_v2/service.rb +41 -3
  88. data/generated/google/apis/file_v1beta1.rb +1 -1
  89. data/generated/google/apis/file_v1beta1/classes.rb +0 -234
  90. data/generated/google/apis/file_v1beta1/representations.rb +0 -79
  91. data/generated/google/apis/firebasedynamiclinks_v1.rb +1 -1
  92. data/generated/google/apis/firebasedynamiclinks_v1/classes.rb +19 -1
  93. data/generated/google/apis/firebasedynamiclinks_v1/representations.rb +3 -0
  94. data/generated/google/apis/firebasedynamiclinks_v1/service.rb +4 -1
  95. data/generated/google/apis/firebasehosting_v1beta1.rb +43 -0
  96. data/generated/google/apis/firebasehosting_v1beta1/classes.rb +767 -0
  97. data/generated/google/apis/firebasehosting_v1beta1/representations.rb +337 -0
  98. data/generated/google/apis/firebasehosting_v1beta1/service.rb +502 -0
  99. data/generated/google/apis/firebaserules_v1.rb +1 -1
  100. data/generated/google/apis/firebaserules_v1/classes.rb +8 -0
  101. data/generated/google/apis/firebaserules_v1/representations.rb +1 -0
  102. data/generated/google/apis/firebaserules_v1/service.rb +1 -1
  103. data/generated/google/apis/firestore_v1beta2.rb +1 -1
  104. data/generated/google/apis/firestore_v1beta2/service.rb +80 -80
  105. data/generated/google/apis/games_v1.rb +1 -1
  106. data/generated/google/apis/games_v1/service.rb +4 -1
  107. data/generated/google/apis/iam_v1.rb +1 -1
  108. data/generated/google/apis/iam_v1/classes.rb +3 -1
  109. data/generated/google/apis/iamcredentials_v1.rb +1 -1
  110. data/generated/google/apis/iamcredentials_v1/service.rb +0 -10
  111. data/generated/google/apis/iap_v1beta1.rb +1 -1
  112. data/generated/google/apis/iap_v1beta1/service.rb +339 -0
  113. data/generated/google/apis/jobs_v2.rb +1 -1
  114. data/generated/google/apis/jobs_v2/classes.rb +45 -37
  115. data/generated/google/apis/jobs_v3.rb +1 -1
  116. data/generated/google/apis/jobs_v3/classes.rb +21 -18
  117. data/generated/google/apis/jobs_v3p1beta1.rb +1 -1
  118. data/generated/google/apis/jobs_v3p1beta1/classes.rb +45 -20
  119. data/generated/google/apis/jobs_v3p1beta1/representations.rb +2 -0
  120. data/generated/google/apis/language_v1.rb +1 -1
  121. data/generated/google/apis/language_v1beta1.rb +1 -1
  122. data/generated/google/apis/language_v1beta2.rb +1 -1
  123. data/generated/google/apis/logging_v2.rb +1 -1
  124. data/generated/google/apis/logging_v2/classes.rb +12 -0
  125. data/generated/google/apis/logging_v2/representations.rb +1 -0
  126. data/generated/google/apis/logging_v2beta1.rb +1 -1
  127. data/generated/google/apis/logging_v2beta1/classes.rb +12 -0
  128. data/generated/google/apis/logging_v2beta1/representations.rb +1 -0
  129. data/generated/google/apis/ml_v1.rb +1 -1
  130. data/generated/google/apis/ml_v1/classes.rb +2 -2
  131. data/generated/google/apis/monitoring_v3.rb +1 -1
  132. data/generated/google/apis/monitoring_v3/classes.rb +19 -17
  133. data/generated/google/apis/monitoring_v3/representations.rb +1 -2
  134. data/generated/google/apis/partners_v2.rb +1 -1
  135. data/generated/google/apis/partners_v2/classes.rb +18 -15
  136. data/generated/google/apis/proximitybeacon_v1beta1.rb +1 -1
  137. data/generated/google/apis/proximitybeacon_v1beta1/classes.rb +18 -15
  138. data/generated/google/apis/redis_v1.rb +1 -1
  139. data/generated/google/apis/redis_v1/classes.rb +1 -1
  140. data/generated/google/apis/serviceconsumermanagement_v1.rb +1 -1
  141. data/generated/google/apis/serviceconsumermanagement_v1/classes.rb +1 -1
  142. data/generated/google/apis/servicemanagement_v1.rb +1 -1
  143. data/generated/google/apis/servicemanagement_v1/classes.rb +2 -150
  144. data/generated/google/apis/servicemanagement_v1/representations.rb +0 -42
  145. data/generated/google/apis/servicenetworking_v1beta.rb +38 -0
  146. data/generated/google/apis/servicenetworking_v1beta/classes.rb +3440 -0
  147. data/generated/google/apis/servicenetworking_v1beta/representations.rb +992 -0
  148. data/generated/google/apis/servicenetworking_v1beta/service.rb +227 -0
  149. data/generated/google/apis/serviceusage_v1.rb +1 -1
  150. data/generated/google/apis/serviceusage_v1/classes.rb +1 -1
  151. data/generated/google/apis/serviceusage_v1beta1.rb +1 -1
  152. data/generated/google/apis/serviceusage_v1beta1/classes.rb +1 -1
  153. data/generated/google/apis/serviceuser_v1.rb +1 -1
  154. data/generated/google/apis/serviceuser_v1/classes.rb +2 -150
  155. data/generated/google/apis/serviceuser_v1/representations.rb +0 -42
  156. data/generated/google/apis/spanner_v1.rb +1 -1
  157. data/generated/google/apis/spanner_v1/classes.rb +308 -30
  158. data/generated/google/apis/spanner_v1/representations.rb +17 -0
  159. data/generated/google/apis/streetviewpublish_v1.rb +1 -1
  160. data/generated/google/apis/streetviewpublish_v1/classes.rb +12 -0
  161. data/generated/google/apis/streetviewpublish_v1/representations.rb +1 -0
  162. data/generated/google/apis/testing_v1.rb +1 -1
  163. data/generated/google/apis/testing_v1/classes.rb +47 -0
  164. data/generated/google/apis/testing_v1/representations.rb +18 -0
  165. data/generated/google/apis/videointelligence_v1.rb +1 -1
  166. data/generated/google/apis/videointelligence_v1/classes.rb +676 -0
  167. data/generated/google/apis/videointelligence_v1/representations.rb +306 -0
  168. data/generated/google/apis/videointelligence_v1beta2.rb +1 -1
  169. data/generated/google/apis/videointelligence_v1beta2/classes.rb +676 -0
  170. data/generated/google/apis/videointelligence_v1beta2/representations.rb +306 -0
  171. data/generated/google/apis/{videointelligence_v1beta1.rb → videointelligence_v1p1beta1.rb} +6 -6
  172. data/generated/google/apis/{videointelligence_v1beta1 → videointelligence_v1p1beta1}/classes.rb +885 -489
  173. data/generated/google/apis/{videointelligence_v1beta1 → videointelligence_v1p1beta1}/representations.rb +357 -194
  174. data/generated/google/apis/{videointelligence_v1beta1 → videointelligence_v1p1beta1}/service.rb +12 -12
  175. data/generated/google/apis/vision_v1.rb +1 -1
  176. data/generated/google/apis/vision_v1/classes.rb +1 -1
  177. data/generated/google/apis/vision_v1p1beta1.rb +1 -1
  178. data/generated/google/apis/vision_v1p1beta1/classes.rb +1 -1
  179. data/generated/google/apis/vision_v1p2beta1.rb +1 -1
  180. data/generated/google/apis/vision_v1p2beta1/classes.rb +1 -1
  181. data/generated/google/apis/youtube_partner_v1.rb +2 -2
  182. data/generated/google/apis/youtube_partner_v1/classes.rb +2 -1
  183. data/generated/google/apis/youtube_partner_v1/service.rb +1 -1
  184. data/lib/google/apis/version.rb +1 -1
  185. metadata +22 -6
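To pick up the changes listed above, a project's Gemfile can be bumped to the new release (a minimal sketch; adjust the version constraint to your project's policy):

```ruby
# Gemfile
gem 'google-api-client', '~> 0.24.3'
```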
@@ -214,18 +214,6 @@ module Google
         include Google::Apis::Core::JsonObjectSupport
       end
 
-      class MediaDownload
-        class Representation < Google::Apis::Core::JsonRepresentation; end
-
-        include Google::Apis::Core::JsonObjectSupport
-      end
-
-      class MediaUpload
-        class Representation < Google::Apis::Core::JsonRepresentation; end
-
-        include Google::Apis::Core::JsonObjectSupport
-      end
-
       class MethodProp
         class Representation < Google::Apis::Core::JsonRepresentation; end
 
@@ -657,10 +645,6 @@ module Google
 
           property :delete, as: 'delete'
           property :get, as: 'get'
-          property :media_download, as: 'mediaDownload', class: Google::Apis::ServiceuserV1::MediaDownload, decorator: Google::Apis::ServiceuserV1::MediaDownload::Representation
-
-          property :media_upload, as: 'mediaUpload', class: Google::Apis::ServiceuserV1::MediaUpload, decorator: Google::Apis::ServiceuserV1::MediaUpload::Representation
-
           property :patch, as: 'patch'
           property :post, as: 'post'
           property :put, as: 'put'
@@ -716,32 +700,6 @@ module Google
         end
       end
 
-      class MediaDownload
-        # @private
-        class Representation < Google::Apis::Core::JsonRepresentation
-          property :complete_notification, as: 'completeNotification'
-          property :download_service, as: 'downloadService'
-          property :dropzone, as: 'dropzone'
-          property :enabled, as: 'enabled'
-          property :max_direct_download_size, :numeric_string => true, as: 'maxDirectDownloadSize'
-          property :use_direct_download, as: 'useDirectDownload'
-        end
-      end
-
-      class MediaUpload
-        # @private
-        class Representation < Google::Apis::Core::JsonRepresentation
-          property :complete_notification, as: 'completeNotification'
-          property :dropzone, as: 'dropzone'
-          property :enabled, as: 'enabled'
-          property :max_size, :numeric_string => true, as: 'maxSize'
-          collection :mime_types, as: 'mimeTypes'
-          property :progress_notification, as: 'progressNotification'
-          property :start_notification, as: 'startNotification'
-          property :upload_service, as: 'uploadService'
-        end
-      end
-
       class MethodProp
         # @private
         class Representation < Google::Apis::Core::JsonRepresentation
@@ -26,7 +26,7 @@ module Google
     # @see https://cloud.google.com/spanner/
     module SpannerV1
       VERSION = 'V1'
-      REVISION = '20180906'
+      REVISION = '20180920'
 
       # View and manage your data across Google Cloud Platform services
       AUTH_CLOUD_PLATFORM = 'https://www.googleapis.com/auth/cloud-platform'
@@ -32,7 +32,7 @@ module Google
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        # # Transaction Modes
-       # Cloud Spanner supports two transaction modes:
+       # Cloud Spanner supports three transaction modes:
        # 1. Locking read-write. This type of transaction is the only way
        # to write data into Cloud Spanner. These transactions rely on
        # pessimistic locking and, if necessary, two-phase commit.
@@ -43,6 +43,12 @@ module Google
        # writes. Snapshot read-only transactions can be configured to
        # read at timestamps in the past. Snapshot read-only
        # transactions do not need to be committed.
+       # 3. Partitioned DML. This type of transaction is used to execute
+       # a single Partitioned DML statement. Partitioned DML partitions
+       # the key space and runs the DML statement over each partition
+       # in parallel using separate, internal transactions that commit
+       # independently. Partitioned DML transactions do not need to be
+       # committed.
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
@@ -64,11 +70,8 @@ module Google
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
-       # Reads performed within a transaction acquire locks on the data
-       # being read. Writes can only be done at commit time, after all reads
-       # have been completed.
        # Conceptually, a read-write transaction consists of zero or more
-       # reads or SQL queries followed by
+       # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
@@ -194,7 +197,50 @@ module Google
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamp become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-       # ##
+       # ## Partitioned DML Transactions
+       # Partitioned DML transactions are used to execute DML statements with a
+       # different execution strategy that provides different, and often better,
+       # scalability properties for large, table-wide operations than DML in a
+       # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+       # should prefer using ReadWrite transactions.
+       # Partitioned DML partitions the keyspace and runs the DML statement on each
+       # partition in separate, internal transactions. These transactions commit
+       # automatically when complete, and run independently from one another.
+       # To reduce lock contention, this execution strategy only acquires read locks
+       # on rows that match the WHERE clause of the statement. Additionally, the
+       # smaller per-partition transactions hold locks for less time.
+       # That said, Partitioned DML is not a drop-in replacement for standard DML used
+       # in ReadWrite transactions.
+       # - The DML statement must be fully-partitionable. Specifically, the statement
+       # must be expressible as the union of many statements which each access only
+       # a single row of the table.
+       # - The statement is not applied atomically to all rows of the table. Rather,
+       # the statement is applied atomically to partitions of the table, in
+       # independent transactions. Secondary index rows are updated atomically
+       # with the base table rows.
+       # - Partitioned DML does not guarantee exactly-once execution semantics
+       # against a partition. The statement will be applied at least once to each
+       # partition. It is strongly recommended that the DML statement should be
+       # idempotent to avoid unexpected results. For instance, it is potentially
+       # dangerous to run a statement such as
+       # `UPDATE table SET column = column + 1` as it could be run multiple times
+       # against some rows.
+       # - The partitions are committed automatically - there is no support for
+       # Commit or Rollback. If the call returns an error, or if the client issuing
+       # the ExecuteSql call dies, it is possible that some rows had the statement
+       # executed on them successfully. It is also possible that statement was
+       # never executed against other rows.
+       # - Partitioned DML transactions may only contain the execution of a single
+       # DML statement via ExecuteSql or ExecuteStreamingSql.
+       # - If any error is encountered during the execution of the partitioned DML
+       # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+       # value that cannot be stored due to schema constraints), then the
+       # operation is stopped at that point and an error is returned. It is
+       # possible that at this point, some partitions have been committed (or even
+       # committed multiple times), and other partitions have not been run at all.
+       # Given the above, Partitioned DML is good fit for large, database-wide,
+       # operations that are idempotent, such as deleting old rows from a very large
+       # table.
        # Corresponds to the JSON property `options`
        # @return [Google::Apis::SpannerV1::TransactionOptions]
        attr_accessor :options
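The Partitioned DML mode documented above is selected through `TransactionOptions`. A minimal pure-Ruby sketch of the JSON body a client would send to `sessions.beginTransaction` (with this gem, the same shape is built via `Google::Apis::SpannerV1::BeginTransactionRequest` and the new `PartitionedDml` option class; no network calls are made here):

```ruby
require 'json'

# The new PartitionedDml message type carries no fields of its own, so
# selecting the mode is just an empty object under "partitionedDml".
begin_transaction_body = {
  'options' => { 'partitionedDml' => {} }
}

puts JSON.generate(begin_transaction_body)
# → {"options":{"partitionedDml":{}}}
```

By contrast, a locking read-write transaction would put an empty object under `readWrite`, and a snapshot read-only transaction under `readOnly`.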
@@ -316,7 +362,7 @@ module Google
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        # # Transaction Modes
-       # Cloud Spanner supports two transaction modes:
+       # Cloud Spanner supports three transaction modes:
        # 1. Locking read-write. This type of transaction is the only way
        # to write data into Cloud Spanner. These transactions rely on
        # pessimistic locking and, if necessary, two-phase commit.
@@ -327,6 +373,12 @@ module Google
        # writes. Snapshot read-only transactions can be configured to
        # read at timestamps in the past. Snapshot read-only
        # transactions do not need to be committed.
+       # 3. Partitioned DML. This type of transaction is used to execute
+       # a single Partitioned DML statement. Partitioned DML partitions
+       # the key space and runs the DML statement over each partition
+       # in parallel using separate, internal transactions that commit
+       # independently. Partitioned DML transactions do not need to be
+       # committed.
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
@@ -348,11 +400,8 @@ module Google
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
-       # Reads performed within a transaction acquire locks on the data
-       # being read. Writes can only be done at commit time, after all reads
-       # have been completed.
        # Conceptually, a read-write transaction consists of zero or more
-       # reads or SQL queries followed by
+       # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
@@ -478,7 +527,50 @@ module Google
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamp become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-       # ##
+       # ## Partitioned DML Transactions
+       # Partitioned DML transactions are used to execute DML statements with a
+       # different execution strategy that provides different, and often better,
+       # scalability properties for large, table-wide operations than DML in a
+       # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+       # should prefer using ReadWrite transactions.
+       # Partitioned DML partitions the keyspace and runs the DML statement on each
+       # partition in separate, internal transactions. These transactions commit
+       # automatically when complete, and run independently from one another.
+       # To reduce lock contention, this execution strategy only acquires read locks
+       # on rows that match the WHERE clause of the statement. Additionally, the
+       # smaller per-partition transactions hold locks for less time.
+       # That said, Partitioned DML is not a drop-in replacement for standard DML used
+       # in ReadWrite transactions.
+       # - The DML statement must be fully-partitionable. Specifically, the statement
+       # must be expressible as the union of many statements which each access only
+       # a single row of the table.
+       # - The statement is not applied atomically to all rows of the table. Rather,
+       # the statement is applied atomically to partitions of the table, in
+       # independent transactions. Secondary index rows are updated atomically
+       # with the base table rows.
+       # - Partitioned DML does not guarantee exactly-once execution semantics
+       # against a partition. The statement will be applied at least once to each
+       # partition. It is strongly recommended that the DML statement should be
+       # idempotent to avoid unexpected results. For instance, it is potentially
+       # dangerous to run a statement such as
+       # `UPDATE table SET column = column + 1` as it could be run multiple times
+       # against some rows.
+       # - The partitions are committed automatically - there is no support for
+       # Commit or Rollback. If the call returns an error, or if the client issuing
+       # the ExecuteSql call dies, it is possible that some rows had the statement
+       # executed on them successfully. It is also possible that statement was
+       # never executed against other rows.
+       # - Partitioned DML transactions may only contain the execution of a single
+       # DML statement via ExecuteSql or ExecuteStreamingSql.
+       # - If any error is encountered during the execution of the partitioned DML
+       # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+       # value that cannot be stored due to schema constraints), then the
+       # operation is stopped at that point and an error is returned. It is
+       # possible that at this point, some partitions have been committed (or even
+       # committed multiple times), and other partitions have not been run at all.
+       # Given the above, Partitioned DML is good fit for large, database-wide,
+       # operations that are idempotent, such as deleting old rows from a very large
+       # table.
        # Corresponds to the JSON property `singleUseTransaction`
        # @return [Google::Apis::SpannerV1::TransactionOptions]
        attr_accessor :single_use_transaction
@@ -796,6 +888,18 @@ module Google
        # @return [String]
        attr_accessor :resume_token
 
+       # A per-transaction sequence number used to identify this request. This
+       # makes each request idempotent such that if the request is received multiple
+       # times, at most one will succeed.
+       # The sequence number must be monotonically increasing within the
+       # transaction. If a request arrives for the first time with an out-of-order
+       # sequence number, the transaction may be aborted. Replays of previously
+       # handled requests will yield the same response as the first execution.
+       # Required for DML statements. Ignored for queries.
+       # Corresponds to the JSON property `seqno`
+       # @return [Fixnum]
+       attr_accessor :seqno
+
        # Required. The SQL string.
        # Corresponds to the JSON property `sql`
        # @return [String]
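The new `seqno` field makes DML over ExecuteSql idempotent: replays of an already-handled sequence number return the first execution's response. A pure-Ruby sketch of maintaining the required monotonic sequence per transaction (the table name and transaction id are hypothetical placeholders; with this gem the hash fields map to `Google::Apis::SpannerV1::ExecuteSqlRequest` attributes):

```ruby
# Builds ExecuteSql-style request bodies, stamping each DML statement in
# a transaction with the next monotonically increasing seqno so that a
# retried request is applied at most once.
class DmlSequencer
  def initialize
    @seqno = 0
  end

  def request_for(sql, transaction_id)
    @seqno += 1
    {
      'sql'         => sql,
      'seqno'       => @seqno,
      'transaction' => { 'id' => transaction_id }
    }
  end
end

seq = DmlSequencer.new
req = seq.request_for("UPDATE albums SET title = 'x' WHERE id = 1", 'txn-123')
# req['seqno'] is 1; the next request in this transaction will carry 2.
```

Per the documentation above, `seqno` is required for DML statements and ignored for queries, so a sequencer like this only needs to cover the write path.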
@@ -820,6 +924,7 @@ module Google
          @partition_token = args[:partition_token] if args.key?(:partition_token)
          @query_mode = args[:query_mode] if args.key?(:query_mode)
          @resume_token = args[:resume_token] if args.key?(:resume_token)
+         @seqno = args[:seqno] if args.key?(:seqno)
          @sql = args[:sql] if args.key?(:sql)
          @transaction = args[:transaction] if args.key?(:transaction)
        end
@@ -1679,6 +1784,9 @@ module Google
        # union operator conceptually divides one or more tables into multiple
        # splits, remotely evaluates a subquery independently on each split, and
        # then unions all results.
+       # This must not contain DML commands, such as INSERT, UPDATE, or
+       # DELETE. Use ExecuteStreamingSql with a
+       # PartitionedDml transaction for large, partition-friendly DML operations.
        # Corresponds to the JSON property `sql`
        # @return [String]
        attr_accessor :sql
@@ -1792,6 +1900,19 @@ module Google
        end
      end
 
+     # Message type to initiate a Partitioned DML transaction.
+     class PartitionedDml
+       include Google::Apis::Core::Hashable
+
+       def initialize(**args)
+         update!(**args)
+       end
+
+       # Update properties of this object
+       def update!(**args)
+       end
+     end
+
      # Node information for nodes appearing in a QueryPlan.plan_nodes.
      class PlanNode
        include Google::Apis::Core::Hashable
@@ -2227,6 +2348,17 @@ module Google
        # @return [Hash<String,Object>]
        attr_accessor :query_stats
 
+       # Standard DML returns an exact count of rows that were modified.
+       # Corresponds to the JSON property `rowCountExact`
+       # @return [Fixnum]
+       attr_accessor :row_count_exact
+
+       # Partitioned DML does not offer exactly-once semantics, so it
+       # returns a lower bound of the rows modified.
+       # Corresponds to the JSON property `rowCountLowerBound`
+       # @return [Fixnum]
+       attr_accessor :row_count_lower_bound
+
        def initialize(**args)
          update!(**args)
        end
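A result's stats now report one of two row counts depending on the transaction mode. A pure-Ruby sketch of consuming the JSON payload (on the wire, `Fixnum` fields marked `:numeric_string` arrive as strings, which `Integer()` handles):

```ruby
# Standard DML populates rowCountExact; Partitioned DML, which is only
# at-least-once per partition, populates rowCountLowerBound instead.
def modified_row_count(stats)
  if stats.key?('rowCountExact')
    [:exact, Integer(stats['rowCountExact'])]
  elsif stats.key?('rowCountLowerBound')
    [:lower_bound, Integer(stats['rowCountLowerBound'])]
  else
    [:none, 0] # e.g. a query rather than a DML statement
  end
end

p modified_row_count('rowCountExact' => '42')        # → [:exact, 42]
p modified_row_count('rowCountLowerBound' => '100')  # → [:lower_bound, 100]
```

Treating the lower bound as exact would be a bug for Partitioned DML: some partitions may have applied the statement more than once, or not at all.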
@@ -2235,6 +2367,8 @@ module Google
        def update!(**args)
          @query_plan = args[:query_plan] if args.key?(:query_plan)
          @query_stats = args[:query_stats] if args.key?(:query_stats)
+         @row_count_exact = args[:row_count_exact] if args.key?(:row_count_exact)
+         @row_count_lower_bound = args[:row_count_lower_bound] if args.key?(:row_count_lower_bound)
        end
      end
 
@@ -2567,7 +2701,7 @@ module Google
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        # # Transaction Modes
-       # Cloud Spanner supports two transaction modes:
+       # Cloud Spanner supports three transaction modes:
        # 1. Locking read-write. This type of transaction is the only way
        # to write data into Cloud Spanner. These transactions rely on
        # pessimistic locking and, if necessary, two-phase commit.
@@ -2578,6 +2712,12 @@ module Google
        # writes. Snapshot read-only transactions can be configured to
        # read at timestamps in the past. Snapshot read-only
        # transactions do not need to be committed.
+       # 3. Partitioned DML. This type of transaction is used to execute
+       # a single Partitioned DML statement. Partitioned DML partitions
+       # the key space and runs the DML statement over each partition
+       # in parallel using separate, internal transactions that commit
+       # independently. Partitioned DML transactions do not need to be
+       # committed.
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
@@ -2599,11 +2739,8 @@ module Google
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
-       # Reads performed within a transaction acquire locks on the data
-       # being read. Writes can only be done at commit time, after all reads
-       # have been completed.
        # Conceptually, a read-write transaction consists of zero or more
-       # reads or SQL queries followed by
+       # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
@@ -2729,10 +2866,58 @@ module Google
  # restriction also applies to in-progress reads and/or SQL queries whose
  # timestamp become too old while executing. Reads and SQL queries with
  # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # ##
+ # ## Partitioned DML Transactions
+ # Partitioned DML transactions are used to execute DML statements with a
+ # different execution strategy that provides different, and often better,
+ # scalability properties for large, table-wide operations than DML in a
+ # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+ # should prefer using ReadWrite transactions.
+ # Partitioned DML partitions the keyspace and runs the DML statement on each
+ # partition in separate, internal transactions. These transactions commit
+ # automatically when complete, and run independently from one another.
+ # To reduce lock contention, this execution strategy only acquires read locks
+ # on rows that match the WHERE clause of the statement. Additionally, the
+ # smaller per-partition transactions hold locks for less time.
+ # That said, Partitioned DML is not a drop-in replacement for standard DML used
+ # in ReadWrite transactions.
+ # - The DML statement must be fully-partitionable. Specifically, the statement
+ # must be expressible as the union of many statements which each access only
+ # a single row of the table.
+ # - The statement is not applied atomically to all rows of the table. Rather,
+ # the statement is applied atomically to partitions of the table, in
+ # independent transactions. Secondary index rows are updated atomically
+ # with the base table rows.
+ # - Partitioned DML does not guarantee exactly-once execution semantics
+ # against a partition. The statement will be applied at least once to each
+ # partition. It is strongly recommended that the DML statement be
+ # idempotent to avoid unexpected results. For instance, it is potentially
+ # dangerous to run a statement such as
+ # `UPDATE table SET column = column + 1` as it could be run multiple times
+ # against some rows.
+ # - The partitions are committed automatically - there is no support for
+ # Commit or Rollback. If the call returns an error, or if the client issuing
+ # the ExecuteSql call dies, it is possible that some rows had the statement
+ # executed on them successfully. It is also possible that the statement was
+ # never executed against other rows.
+ # - Partitioned DML transactions may only contain the execution of a single
+ # DML statement via ExecuteSql or ExecuteStreamingSql.
+ # - If any error is encountered during the execution of the partitioned DML
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+ # value that cannot be stored due to schema constraints), then the
+ # operation is stopped at that point and an error is returned. It is
+ # possible that at this point, some partitions have been committed (or even
+ # committed multiple times), and other partitions have not been run at all.
+ # Given the above, Partitioned DML is a good fit for large, database-wide
+ # operations that are idempotent, such as deleting old rows from a very large
+ # table.
  class TransactionOptions
  include Google::Apis::Core::Hashable
 
+ # Message type to initiate a Partitioned DML transaction.
+ # Corresponds to the JSON property `partitionedDml`
+ # @return [Google::Apis::SpannerV1::PartitionedDml]
+ attr_accessor :partitioned_dml
+
  # Message type to initiate a read-only transaction.
  # Corresponds to the JSON property `readOnly`
  # @return [Google::Apis::SpannerV1::ReadOnly]
@@ -2750,6 +2935,7 @@ module Google
 
  # Update properties of this object
  def update!(**args)
+ @partitioned_dml = args[:partitioned_dml] if args.key?(:partitioned_dml)
  @read_only = args[:read_only] if args.key?(:read_only)
  @read_write = args[:read_write] if args.key?(:read_write)
  end
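The `update!` change above follows the Hashable pattern used throughout the generated classes: keyword arguments are merged in, and an attribute is only overwritten when its key is actually present. A minimal, self-contained sketch of that pattern (a stand-in class, not the generated `Google::Apis::SpannerV1::TransactionOptions`):

```ruby
# Sketch of the google-api-client Hashable update! pattern. The class name is
# hypothetical; the three attributes mirror the generated TransactionOptions.
class TransactionOptionsSketch
  attr_accessor :partitioned_dml, :read_only, :read_write

  def initialize(**args)
    update!(**args)
  end

  # Update properties of this object: only keys present in args overwrite
  # the corresponding attribute, so partial updates leave other modes intact.
  def update!(**args)
    @partitioned_dml = args[:partitioned_dml] if args.key?(:partitioned_dml)
    @read_only = args[:read_only] if args.key?(:read_only)
    @read_write = args[:read_write] if args.key?(:read_write)
  end
end

opts = TransactionOptionsSketch.new(partitioned_dml: {})
puts opts.partitioned_dml.inspect
puts opts.read_only.inspect
```

The `args.key?` guard is what distinguishes "key absent" from "key present with a nil value", which matters when merging partial updates.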
@@ -2768,7 +2954,7 @@ module Google
  # re-used for the next transaction. It is not necessary to create a
  # new session for each transaction.
  # # Transaction Modes
- # Cloud Spanner supports two transaction modes:
+ # Cloud Spanner supports three transaction modes:
  # 1. Locking read-write. This type of transaction is the only way
  # to write data into Cloud Spanner. These transactions rely on
  # pessimistic locking and, if necessary, two-phase commit.
@@ -2779,6 +2965,12 @@ module Google
  # writes. Snapshot read-only transactions can be configured to
  # read at timestamps in the past. Snapshot read-only
  # transactions do not need to be committed.
+ # 3. Partitioned DML. This type of transaction is used to execute
+ # a single Partitioned DML statement. Partitioned DML partitions
+ # the key space and runs the DML statement over each partition
+ # in parallel using separate, internal transactions that commit
+ # independently. Partitioned DML transactions do not need to be
+ # committed.
  # For transactions that only read, snapshot read-only transactions
  # provide simpler semantics and are almost always faster. In
  # particular, read-only transactions do not take locks, so they do
@@ -2800,11 +2992,8 @@ module Google
  # Rollback. Long periods of
  # inactivity at the client may cause Cloud Spanner to release a
  # transaction's locks and abort it.
- # Reads performed within a transaction acquire locks on the data
- # being read. Writes can only be done at commit time, after all reads
- # have been completed.
  # Conceptually, a read-write transaction consists of zero or more
- # reads or SQL queries followed by
+ # reads or SQL statements followed by
  # Commit. At any time before
  # Commit, the client can send a
  # Rollback request to abort the
@@ -2930,7 +3119,50 @@ module Google
  # restriction also applies to in-progress reads and/or SQL queries whose
  # timestamp become too old while executing. Reads and SQL queries with
  # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # ##
+ # ## Partitioned DML Transactions
+ # Partitioned DML transactions are used to execute DML statements with a
+ # different execution strategy that provides different, and often better,
+ # scalability properties for large, table-wide operations than DML in a
+ # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+ # should prefer using ReadWrite transactions.
+ # Partitioned DML partitions the keyspace and runs the DML statement on each
+ # partition in separate, internal transactions. These transactions commit
+ # automatically when complete, and run independently from one another.
+ # To reduce lock contention, this execution strategy only acquires read locks
+ # on rows that match the WHERE clause of the statement. Additionally, the
+ # smaller per-partition transactions hold locks for less time.
+ # That said, Partitioned DML is not a drop-in replacement for standard DML used
+ # in ReadWrite transactions.
+ # - The DML statement must be fully-partitionable. Specifically, the statement
+ # must be expressible as the union of many statements which each access only
+ # a single row of the table.
+ # - The statement is not applied atomically to all rows of the table. Rather,
+ # the statement is applied atomically to partitions of the table, in
+ # independent transactions. Secondary index rows are updated atomically
+ # with the base table rows.
+ # - Partitioned DML does not guarantee exactly-once execution semantics
+ # against a partition. The statement will be applied at least once to each
+ # partition. It is strongly recommended that the DML statement be
+ # idempotent to avoid unexpected results. For instance, it is potentially
+ # dangerous to run a statement such as
+ # `UPDATE table SET column = column + 1` as it could be run multiple times
+ # against some rows.
+ # - The partitions are committed automatically - there is no support for
+ # Commit or Rollback. If the call returns an error, or if the client issuing
+ # the ExecuteSql call dies, it is possible that some rows had the statement
+ # executed on them successfully. It is also possible that the statement was
+ # never executed against other rows.
+ # - Partitioned DML transactions may only contain the execution of a single
+ # DML statement via ExecuteSql or ExecuteStreamingSql.
+ # - If any error is encountered during the execution of the partitioned DML
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+ # value that cannot be stored due to schema constraints), then the
+ # operation is stopped at that point and an error is returned. It is
+ # possible that at this point, some partitions have been committed (or even
+ # committed multiple times), and other partitions have not been run at all.
+ # Given the above, Partitioned DML is a good fit for large, database-wide
+ # operations that are idempotent, such as deleting old rows from a very large
+ # table.
  # Corresponds to the JSON property `begin`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :begin
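Per the mode list above, a transaction is exactly one of the three modes, so a serialized `TransactionOptions` carries exactly one of the `partitionedDml`, `readOnly`, or `readWrite` JSON properties. A small illustrative sketch of building that payload shape (the helper name is hypothetical; empty hashes stand in for mode-specific settings):

```ruby
require 'json'

# Hypothetical helper: produce the TransactionOptions JSON shape for one of
# the three documented transaction modes. Exactly one property is set.
def transaction_options_json(mode)
  case mode
  when :partitioned_dml then { 'partitionedDml' => {} }
  when :read_only       then { 'readOnly' => {} }
  when :read_write      then { 'readWrite' => {} }
  else raise ArgumentError, "unknown transaction mode: #{mode.inspect}"
  end
end

puts JSON.generate(transaction_options_json(:partitioned_dml))
```

The property names follow the `Corresponds to the JSON property` annotations in the generated classes; whether a given RPC accepts this payload directly is outside the scope of this sketch.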
@@ -2947,7 +3179,7 @@ module Google
  # re-used for the next transaction. It is not necessary to create a
  # new session for each transaction.
  # # Transaction Modes
- # Cloud Spanner supports two transaction modes:
+ # Cloud Spanner supports three transaction modes:
  # 1. Locking read-write. This type of transaction is the only way
  # to write data into Cloud Spanner. These transactions rely on
  # pessimistic locking and, if necessary, two-phase commit.
@@ -2958,6 +3190,12 @@ module Google
  # writes. Snapshot read-only transactions can be configured to
  # read at timestamps in the past. Snapshot read-only
  # transactions do not need to be committed.
+ # 3. Partitioned DML. This type of transaction is used to execute
+ # a single Partitioned DML statement. Partitioned DML partitions
+ # the key space and runs the DML statement over each partition
+ # in parallel using separate, internal transactions that commit
+ # independently. Partitioned DML transactions do not need to be
+ # committed.
  # For transactions that only read, snapshot read-only transactions
  # provide simpler semantics and are almost always faster. In
  # particular, read-only transactions do not take locks, so they do
@@ -2979,11 +3217,8 @@ module Google
  # Rollback. Long periods of
  # inactivity at the client may cause Cloud Spanner to release a
  # transaction's locks and abort it.
- # Reads performed within a transaction acquire locks on the data
- # being read. Writes can only be done at commit time, after all reads
- # have been completed.
  # Conceptually, a read-write transaction consists of zero or more
- # reads or SQL queries followed by
+ # reads or SQL statements followed by
  # Commit. At any time before
  # Commit, the client can send a
  # Rollback request to abort the
@@ -3109,7 +3344,50 @@ module Google
  # restriction also applies to in-progress reads and/or SQL queries whose
  # timestamp become too old while executing. Reads and SQL queries with
  # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # ##
+ # ## Partitioned DML Transactions
+ # Partitioned DML transactions are used to execute DML statements with a
+ # different execution strategy that provides different, and often better,
+ # scalability properties for large, table-wide operations than DML in a
+ # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+ # should prefer using ReadWrite transactions.
+ # Partitioned DML partitions the keyspace and runs the DML statement on each
+ # partition in separate, internal transactions. These transactions commit
+ # automatically when complete, and run independently from one another.
+ # To reduce lock contention, this execution strategy only acquires read locks
+ # on rows that match the WHERE clause of the statement. Additionally, the
+ # smaller per-partition transactions hold locks for less time.
+ # That said, Partitioned DML is not a drop-in replacement for standard DML used
+ # in ReadWrite transactions.
+ # - The DML statement must be fully-partitionable. Specifically, the statement
+ # must be expressible as the union of many statements which each access only
+ # a single row of the table.
+ # - The statement is not applied atomically to all rows of the table. Rather,
+ # the statement is applied atomically to partitions of the table, in
+ # independent transactions. Secondary index rows are updated atomically
+ # with the base table rows.
+ # - Partitioned DML does not guarantee exactly-once execution semantics
+ # against a partition. The statement will be applied at least once to each
+ # partition. It is strongly recommended that the DML statement be
+ # idempotent to avoid unexpected results. For instance, it is potentially
+ # dangerous to run a statement such as
+ # `UPDATE table SET column = column + 1` as it could be run multiple times
+ # against some rows.
+ # - The partitions are committed automatically - there is no support for
+ # Commit or Rollback. If the call returns an error, or if the client issuing
+ # the ExecuteSql call dies, it is possible that some rows had the statement
+ # executed on them successfully. It is also possible that the statement was
+ # never executed against other rows.
+ # - Partitioned DML transactions may only contain the execution of a single
+ # DML statement via ExecuteSql or ExecuteStreamingSql.
+ # - If any error is encountered during the execution of the partitioned DML
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+ # value that cannot be stored due to schema constraints), then the
+ # operation is stopped at that point and an error is returned. It is
+ # possible that at this point, some partitions have been committed (or even
+ # committed multiple times), and other partitions have not been run at all.
+ # Given the above, Partitioned DML is a good fit for large, database-wide
+ # operations that are idempotent, such as deleting old rows from a very large
+ # table.
  # Corresponds to the JSON property `singleUse`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use
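The at-least-once caveat in the new documentation is easy to see with a toy simulation: if a partition is retried, an idempotent statement (`SET column = 1`) converges to the same result, while a non-idempotent one (`SET column = column + 1`) drifts. Plain Ruby, no Spanner involved; the retry is simulated by applying the statement twice to one row.

```ruby
# Two toy "partitions", each holding one row with a single column.
def fresh_rows
  [{ col: 0 }, { col: 0 }]
end

# Apply a statement to every row, then re-apply it to the first row to
# simulate the at-least-once delivery Partitioned DML documents.
def apply_at_least_once(rows, &stmt)
  rows.each { |r| stmt.call(r) }
  stmt.call(rows[0]) # simulated duplicate execution against partition 0
  rows
end

idempotent = apply_at_least_once(fresh_rows) { |r| r[:col] = 1 }
drifting   = apply_at_least_once(fresh_rows) { |r| r[:col] += 1 }

puts idempotent.inspect # every row ends at 1 despite the retry
puts drifting.inspect   # the retried row has been incremented twice
```

This is why the docs recommend idempotent statements: the retried partition is indistinguishable from a clean run only when re-execution is a no-op.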