google-apis-spanner_v1 0.28.0 → 0.29.0
@@ -228,165 +228,45 @@ module Google
       class BeginTransactionRequest
         include Google::Apis::Core::Hashable

-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        # Cloud Spanner makes no guarantees about how long the transaction's locks were
-        # held for. It is an error to use Cloud Spanner locks for any sort of mutual
-        # exclusion other than between Cloud Spanner transactions themselves. Retrying
-        # aborted transactions: When a transaction aborts, the application can choose to
-        # retry the whole transaction again. To maximize the chances of successfully
-        # committing the retry, the client should execute the retry in the same session
-        # as the original attempt. The original session's lock priority increases with
-        # each consecutive abort, meaning that each attempt has a slightly better chance
-        # of success than the previous. Under some circumstances (for example, many
-        # transactions attempting to modify the same row(s)), a transaction can abort
-        # many times in a short period before successfully committing. Thus, it is not a
-        # good idea to cap the number of retries a transaction can attempt; instead, it
-        # is better to limit the total amount of time spent retrying. Idle transactions:
-        # A transaction is considered idle if it has no outstanding reads or SQL queries
-        # and has not started a read or SQL query within the last 10 seconds. Idle
-        # transactions can be aborted by Cloud Spanner so that they don't hold on to
-        # locks indefinitely. If an idle transaction is aborted, the commit will fail
-        # with error `ABORTED`. If this behavior is undesirable, periodically executing
-        # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
-        # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
-        # only transactions provides a simpler method than locking read-write
-        # transactions for doing several consistent reads. However, this type of
-        # transaction does not support writes. Snapshot transactions do not take locks.
-        # Instead, they work by choosing a Cloud Spanner timestamp, then executing all
-        # reads at that timestamp. Since they do not acquire locks, they do not block
-        # concurrent read-write transactions. Unlike locking read-write transactions,
-        # snapshot read-only transactions never abort. They can fail if the chosen read
-        # timestamp is garbage collected; however, the default garbage collection policy
-        # is generous enough that most applications do not need to worry about this in
-        # practice. Snapshot read-only transactions do not need to call Commit or
-        # Rollback (and in fact are not permitted to do so). To execute a snapshot
-        # transaction, the client specifies a timestamp bound, which tells Cloud Spanner
-        # how to choose a read timestamp. The types of timestamp bound are: - Strong (
-        # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
-        # database to be read is geographically distributed, stale read-only
-        # transactions can execute more quickly than strong or read-write transactions,
-        # because they are able to execute far from the leader replica. Each type of
-        # timestamp bound is discussed in detail below. Strong: Strong reads are
-        # guaranteed to see the effects of all transactions that have committed before
-        # the start of the read. Furthermore, all rows yielded by a single read are
-        # consistent with each other -- if any part of the read observes a transaction,
-        # all parts of the read see the transaction. Strong reads are not repeatable:
-        # two consecutive strong read-only transactions might return inconsistent
-        # results if there are concurrent writes. If consistency across reads is
-        # required, the reads should be executed within a transaction or at an exact
-        # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
-        # timestamp bounds execute reads at a user-specified timestamp. Reads at a
-        # timestamp are guaranteed to see a consistent prefix of the global transaction
-        # history: they observe modifications done by all transactions with a commit
-        # timestamp less than or equal to the read timestamp, and observe none of the
-        # modifications done by transactions with a larger commit timestamp. They will
-        # block until all conflicting transactions that may be assigned commit
-        # timestamps <= the read timestamp have finished. The timestamp can either be
-        # expressed as an absolute Cloud Spanner commit timestamp or a staleness
-        # relative to the current time. These modes do not require a "negotiation phase"
-        # to pick a timestamp. As a result, they execute slightly faster than the
-        # equivalent boundedly stale concurrency modes. On the other hand, boundedly
-        # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
-        # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
-        # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
-        # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
-        # the newest timestamp within the staleness bound that allows execution of the
-        # reads at the closest available replica without blocking. All rows yielded are
-        # consistent with each other -- if any part of the read observes a transaction,
-        # all parts of the read see the transaction. Boundedly stale reads are not
-        # repeatable: two stale reads, even if they use the same staleness bound, can
-        # execute at different timestamps and thus return inconsistent results.
-        # Boundedly stale reads execute in two phases: the first phase negotiates a
-        # timestamp among all replicas needed to serve the read. In the second phase,
-        # reads are executed at the negotiated timestamp. As a result of the two phase
-        # execution, bounded staleness reads are usually a little slower than comparable
-        # exact staleness reads. However, they are typically able to return fresher
-        # results, and are more likely to execute at the closest replica. Because the
-        # timestamp negotiation requires up-front knowledge of which rows will be read,
-        # it can only be used with single-use read-only transactions. See
-        # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
-        # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
-        # continuously garbage collects deleted and overwritten data in the background
-        # to reclaim storage space. This process is known as "version GC". By default,
-        # version GC reclaims versions after they are one hour old. Because of this,
-        # Cloud Spanner cannot perform reads at read timestamps more than one hour in
-        # the past. This restriction also applies to in-progress reads and/or SQL
-        # queries whose timestamp become too old while executing. Reads and SQL queries
-        # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
-        # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
-        # period as long as one week, which allows Cloud Spanner to perform reads up to
-        # one week in the past. Partitioned DML transactions: Partitioned DML
-        # transactions are used to execute DML statements with a different execution
-        # strategy that provides different, and often better, scalability properties for
-        # large, table-wide operations than DML in a ReadWrite transaction. Smaller
-        # scoped statements, such as an OLTP workload, should prefer using ReadWrite
-        # transactions. Partitioned DML partitions the keyspace and runs the DML
-        # statement on each partition in separate, internal transactions. These
-        # transactions commit automatically when complete, and run independently from
-        # one another. To reduce lock contention, this execution strategy only acquires
-        # read locks on rows that match the WHERE clause of the statement. Additionally,
-        # the smaller per-partition transactions hold locks for less time. That said,
-        # Partitioned DML is not a drop-in replacement for standard DML used in
-        # ReadWrite transactions. - The DML statement must be fully-partitionable.
-        # Specifically, the statement must be expressible as the union of many
-        # statements which each access only a single row of the table. - The statement
-        # is not applied atomically to all rows of the table. Rather, the statement is
-        # applied atomically to partitions of the table, in independent transactions.
-        # Secondary index rows are updated atomically with the base table rows. -
-        # Partitioned DML does not guarantee exactly-once execution semantics against a
-        # partition. The statement will be applied at least once to each partition. It
-        # is strongly recommended that the DML statement should be idempotent to avoid
-        # unexpected results. For instance, it is potentially dangerous to run a
-        # statement such as `UPDATE table SET column = column + 1` as it could be run
-        # multiple times against some rows. - The partitions are committed automatically
-        # - there is no support for Commit or Rollback. If the call returns an error, or
-        # if the client issuing the ExecuteSql call dies, it is possible that some rows
-        # had the statement executed on them successfully. It is also possible that
-        # statement was never executed against other rows. - Partitioned DML
-        # transactions may only contain the execution of a single DML statement via
-        # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
-        # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
-        # violation, division by zero, or a value that cannot be stored due to schema
-        # constraints), then the operation is stopped at that point and an error is
-        # returned. It is possible that at this point, some partitions have been
+        # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
+        # true, a special value of 2^63 - 2 will be returned in the Transaction message
+        # that describes the transaction, instead of a valid read timestamp. This
+        # special value should be discarded and not used for any subsequent queries.
+        # Please see https://cloud.google.com/spanner/docs/change-streams for more
+        # details on how to query the change stream TVFs. Partitioned DML transactions:
+        # Partitioned DML transactions are used to execute DML statements with a
+        # different execution strategy that provides different, and often better,
+        # scalability properties for large, table-wide operations than DML in a
+        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+        # should prefer using ReadWrite transactions. Partitioned DML partitions the
+        # keyspace and runs the DML statement on each partition in separate, internal
+        # transactions. These transactions commit automatically when complete, and run
+        # independently from one another. To reduce lock contention, this execution
+        # strategy only acquires read locks on rows that match the WHERE clause of the
+        # statement. Additionally, the smaller per-partition transactions hold locks for
+        # less time. That said, Partitioned DML is not a drop-in replacement for
+        # standard DML used in ReadWrite transactions. - The DML statement must be fully-
+        # partitionable. Specifically, the statement must be expressible as the union of
+        # many statements which each access only a single row of the table. - The
+        # statement is not applied atomically to all rows of the table. Rather, the
+        # statement is applied atomically to partitions of the table, in independent
+        # transactions. Secondary index rows are updated atomically with the base table
+        # rows. - Partitioned DML does not guarantee exactly-once execution semantics
+        # against a partition. The statement will be applied at least once to each
+        # partition. It is strongly recommended that the DML statement should be
+        # idempotent to avoid unexpected results. For instance, it is potentially
+        # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
+        # it could be run multiple times against some rows. - The partitions are
+        # committed automatically - there is no support for Commit or Rollback. If the
+        # call returns an error, or if the client issuing the ExecuteSql call dies, it
+        # is possible that some rows had the statement executed on them successfully. It
+        # is also possible that statement was never executed against other rows. -
+        # Partitioned DML transactions may only contain the execution of a single DML
+        # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
+        # during the execution of the partitioned DML operation (for instance, a UNIQUE
+        # INDEX violation, division by zero, or a value that cannot be stored due to
+        # schema constraints), then the operation is stopped at that point and an error
+        # is returned. It is possible that at this point, some partitions have been
         # committed (or even committed multiple times), and other partitions have not
         # been run at all. Given the above, Partitioned DML is good fit for large,
         # database-wide, operations that are idempotent, such as deleting old rows from
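The new comment above documents a sentinel read timestamp returned for change stream queries. As a quick illustration, here is a minimal Ruby sketch of the request body this hunk describes. Only BeginTransactionRequest appears in the hunk itself; the TransactionOptions and ReadOnly model classes and their attributes are assumed to be unchanged from 0.28.0.

  require "google/apis/spanner_v1"

  # Ask for a strong read-only transaction and for the chosen read timestamp.
  # Per the comment above, when this transaction is used to query change stream
  # TVFs the returned timestamp can be the 2^63 - 2 sentinel and should be
  # discarded rather than reused in later queries.
  options = Google::Apis::SpannerV1::TransactionOptions.new(
    read_only: Google::Apis::SpannerV1::ReadOnly.new(
      strong: true,
      return_read_timestamp: true
    )
  )
  request = Google::Apis::SpannerV1::BeginTransactionRequest.new(options: options)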
@@ -543,165 +423,45 @@ module Google
         attr_accessor :return_commit_stats
         alias_method :return_commit_stats?, :return_commit_stats

-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        #
-        # Cloud Spanner makes no guarantees about how long the transaction's locks were
-        # held for. It is an error to use Cloud Spanner locks for any sort of mutual
-        # exclusion other than between Cloud Spanner transactions themselves. Retrying
-        # aborted transactions: When a transaction aborts, the application can choose to
-        # retry the whole transaction again. To maximize the chances of successfully
-        # committing the retry, the client should execute the retry in the same session
-        # as the original attempt. The original session's lock priority increases with
-        # each consecutive abort, meaning that each attempt has a slightly better chance
-        # of success than the previous. Under some circumstances (for example, many
-        # transactions attempting to modify the same row(s)), a transaction can abort
-        # many times in a short period before successfully committing. Thus, it is not a
-        # good idea to cap the number of retries a transaction can attempt; instead, it
-        # is better to limit the total amount of time spent retrying. Idle transactions:
-        # A transaction is considered idle if it has no outstanding reads or SQL queries
-        # and has not started a read or SQL query within the last 10 seconds. Idle
-        # transactions can be aborted by Cloud Spanner so that they don't hold on to
-        # locks indefinitely. If an idle transaction is aborted, the commit will fail
-        # with error `ABORTED`. If this behavior is undesirable, periodically executing
-        # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
-        # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
-        # only transactions provides a simpler method than locking read-write
-        # transactions for doing several consistent reads. However, this type of
-        # transaction does not support writes. Snapshot transactions do not take locks.
-        # Instead, they work by choosing a Cloud Spanner timestamp, then executing all
-        # reads at that timestamp. Since they do not acquire locks, they do not block
-        # concurrent read-write transactions. Unlike locking read-write transactions,
-        # snapshot read-only transactions never abort. They can fail if the chosen read
-        # timestamp is garbage collected; however, the default garbage collection policy
-        # is generous enough that most applications do not need to worry about this in
-        # practice. Snapshot read-only transactions do not need to call Commit or
-        # Rollback (and in fact are not permitted to do so). To execute a snapshot
-        # transaction, the client specifies a timestamp bound, which tells Cloud Spanner
-        # how to choose a read timestamp. The types of timestamp bound are: - Strong (
-        # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
-        # database to be read is geographically distributed, stale read-only
-        # transactions can execute more quickly than strong or read-write transactions,
-        # because they are able to execute far from the leader replica. Each type of
-        # timestamp bound is discussed in detail below. Strong: Strong reads are
-        # guaranteed to see the effects of all transactions that have committed before
-        # the start of the read. Furthermore, all rows yielded by a single read are
-        # consistent with each other -- if any part of the read observes a transaction,
-        # all parts of the read see the transaction. Strong reads are not repeatable:
-        # two consecutive strong read-only transactions might return inconsistent
-        # results if there are concurrent writes. If consistency across reads is
-        # required, the reads should be executed within a transaction or at an exact
-        # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
-        # timestamp bounds execute reads at a user-specified timestamp. Reads at a
-        # timestamp are guaranteed to see a consistent prefix of the global transaction
-        # history: they observe modifications done by all transactions with a commit
-        # timestamp less than or equal to the read timestamp, and observe none of the
-        # modifications done by transactions with a larger commit timestamp. They will
-        # block until all conflicting transactions that may be assigned commit
-        # timestamps <= the read timestamp have finished. The timestamp can either be
-        # expressed as an absolute Cloud Spanner commit timestamp or a staleness
-        # relative to the current time. These modes do not require a "negotiation phase"
-        # to pick a timestamp. As a result, they execute slightly faster than the
-        # equivalent boundedly stale concurrency modes. On the other hand, boundedly
-        # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
-        # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
-        # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
-        # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
-        # the newest timestamp within the staleness bound that allows execution of the
-        # reads at the closest available replica without blocking. All rows yielded are
-        # consistent with each other -- if any part of the read observes a transaction,
-        # all parts of the read see the transaction. Boundedly stale reads are not
-        # repeatable: two stale reads, even if they use the same staleness bound, can
-        # execute at different timestamps and thus return inconsistent results.
-        # Boundedly stale reads execute in two phases: the first phase negotiates a
-        # timestamp among all replicas needed to serve the read. In the second phase,
-        # reads are executed at the negotiated timestamp. As a result of the two phase
-        # execution, bounded staleness reads are usually a little slower than comparable
-        # exact staleness reads. However, they are typically able to return fresher
-        # results, and are more likely to execute at the closest replica. Because the
-        # timestamp negotiation requires up-front knowledge of which rows will be read,
-        # it can only be used with single-use read-only transactions. See
-        # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
-        # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
-        # continuously garbage collects deleted and overwritten data in the background
-        # to reclaim storage space. This process is known as "version GC". By default,
-        # version GC reclaims versions after they are one hour old. Because of this,
-        # Cloud Spanner cannot perform reads at read timestamps more than one hour in
-        # the past. This restriction also applies to in-progress reads and/or SQL
-        # queries whose timestamp become too old while executing. Reads and SQL queries
-        # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
-        # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
-        # period as long as one week, which allows Cloud Spanner to perform reads up to
-        # one week in the past. Partitioned DML transactions: Partitioned DML
-        # transactions are used to execute DML statements with a different execution
-        # strategy that provides different, and often better, scalability properties for
-        # large, table-wide operations than DML in a ReadWrite transaction. Smaller
-        # scoped statements, such as an OLTP workload, should prefer using ReadWrite
-        # transactions. Partitioned DML partitions the keyspace and runs the DML
-        # statement on each partition in separate, internal transactions. These
-        # transactions commit automatically when complete, and run independently from
-        # one another. To reduce lock contention, this execution strategy only acquires
-        # read locks on rows that match the WHERE clause of the statement. Additionally,
-        # the smaller per-partition transactions hold locks for less time. That said,
-        # Partitioned DML is not a drop-in replacement for standard DML used in
-        # ReadWrite transactions. - The DML statement must be fully-partitionable.
-        # Specifically, the statement must be expressible as the union of many
-        # statements which each access only a single row of the table. - The statement
-        # is not applied atomically to all rows of the table. Rather, the statement is
-        # applied atomically to partitions of the table, in independent transactions.
-        # Secondary index rows are updated atomically with the base table rows. -
-        # Partitioned DML does not guarantee exactly-once execution semantics against a
-        # partition. The statement will be applied at least once to each partition. It
-        # is strongly recommended that the DML statement should be idempotent to avoid
-        # unexpected results. For instance, it is potentially dangerous to run a
-        # statement such as `UPDATE table SET column = column + 1` as it could be run
-        # multiple times against some rows. - The partitions are committed automatically
-        # - there is no support for Commit or Rollback. If the call returns an error, or
-        # if the client issuing the ExecuteSql call dies, it is possible that some rows
-        # had the statement executed on them successfully. It is also possible that
-        # statement was never executed against other rows. - Partitioned DML
-        # transactions may only contain the execution of a single DML statement via
-        # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
-        # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
-        # violation, division by zero, or a value that cannot be stored due to schema
-        # constraints), then the operation is stopped at that point and an error is
-        # returned. It is possible that at this point, some partitions have been
+        # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
+        # true, a special value of 2^63 - 2 will be returned in the Transaction message
+        # that describes the transaction, instead of a valid read timestamp. This
+        # special value should be discarded and not used for any subsequent queries.
+        # Please see https://cloud.google.com/spanner/docs/change-streams for more
+        # details on how to query the change stream TVFs. Partitioned DML transactions:
+        # Partitioned DML transactions are used to execute DML statements with a
+        # different execution strategy that provides different, and often better,
+        # scalability properties for large, table-wide operations than DML in a
+        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+        # should prefer using ReadWrite transactions. Partitioned DML partitions the
+        # keyspace and runs the DML statement on each partition in separate, internal
+        # transactions. These transactions commit automatically when complete, and run
+        # independently from one another. To reduce lock contention, this execution
+        # strategy only acquires read locks on rows that match the WHERE clause of the
+        # statement. Additionally, the smaller per-partition transactions hold locks for
+        # less time. That said, Partitioned DML is not a drop-in replacement for
+        # standard DML used in ReadWrite transactions. - The DML statement must be fully-
+        # partitionable. Specifically, the statement must be expressible as the union of
+        # many statements which each access only a single row of the table. - The
+        # statement is not applied atomically to all rows of the table. Rather, the
+        # statement is applied atomically to partitions of the table, in independent
+        # transactions. Secondary index rows are updated atomically with the base table
+        # rows. - Partitioned DML does not guarantee exactly-once execution semantics
+        # against a partition. The statement will be applied at least once to each
+        # partition. It is strongly recommended that the DML statement should be
+        # idempotent to avoid unexpected results. For instance, it is potentially
+        # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
+        # it could be run multiple times against some rows. - The partitions are
+        # committed automatically - there is no support for Commit or Rollback. If the
+        # call returns an error, or if the client issuing the ExecuteSql call dies, it
+        # is possible that some rows had the statement executed on them successfully. It
+        # is also possible that statement was never executed against other rows. -
+        # Partitioned DML transactions may only contain the execution of a single DML
+        # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
+        # during the execution of the partitioned DML operation (for instance, a UNIQUE
+        # INDEX violation, division by zero, or a value that cannot be stored due to
+        # schema constraints), then the operation is stopped at that point and an error
+        # is returned. It is possible that at this point, some partitions have been
         # committed (or even committed multiple times), and other partitions have not
         # been run at all. Given the above, Partitioned DML is good fit for large,
         # database-wide, operations that are idempotent, such as deleting old rows from
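The same comment text is repeated here next to return_commit_stats, and again stresses that Partitioned DML statements execute at least once per partition and should therefore be idempotent. A hedged sketch of a statement that stays safe under such retries, using the ExecuteSqlRequest model that already exists in the gem; the table and column names are invented for illustration:

  # Rerunning this DELETE against a partition cannot change the outcome, unlike
  # the `UPDATE table SET column = column + 1` example called out above.
  request = Google::Apis::SpannerV1::ExecuteSqlRequest.new(
    sql: "DELETE FROM Events WHERE EventDate < DATE '2022-01-01'"
  )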
@@ -1221,6 +981,28 @@ module Google
         end
       end

+      # A Cloud Spanner database role.
+      class DatabaseRole
+        include Google::Apis::Core::Hashable
+
+        # Required. The name of the database role. Values are of the form `projects//
+        # instances//databases//databaseRoles/ `role``, where `` is as specified in the `
+        # CREATE ROLE` DDL statement. This name can be passed to Get/Set IAMPolicy
+        # methods to identify the database role.
+        # Corresponds to the JSON property `name`
+        # @return [String]
+        attr_accessor :name
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @name = args[:name] if args.key?(:name)
+        end
+      end
+
       # Arguments to delete operations.
       class Delete
         include Google::Apis::Core::Hashable
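For reference, a small sketch of the DatabaseRole model added above. It only uses the attribute defined in this hunk; the project, instance, database, and role segments of the resource name are placeholders:

  role = Google::Apis::SpannerV1::DatabaseRole.new(
    name: "projects/my-project/instances/my-instance/databases/my-db/databaseRoles/my_role"
  )
  # The name identifies a role created by a `CREATE ROLE` DDL statement and, as
  # the comment above notes, can be passed to the Get/Set IAMPolicy methods.
  puts role.name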
@@ -2235,6 +2017,32 @@ module Google
         end
       end

+      # The response for ListDatabaseRoles.
+      class ListDatabaseRolesResponse
+        include Google::Apis::Core::Hashable
+
+        # Database roles that matched the request.
+        # Corresponds to the JSON property `databaseRoles`
+        # @return [Array<Google::Apis::SpannerV1::DatabaseRole>]
+        attr_accessor :database_roles
+
+        # `next_page_token` can be sent in a subsequent ListDatabaseRoles call to fetch
+        # more of the matching roles.
+        # Corresponds to the JSON property `nextPageToken`
+        # @return [String]
+        attr_accessor :next_page_token
+
+        def initialize(**args)
+          update!(**args)
+        end
+
+        # Update properties of this object
+        def update!(**args)
+          @database_roles = args[:database_roles] if args.key?(:database_roles)
+          @next_page_token = args[:next_page_token] if args.key?(:next_page_token)
+        end
+      end
+
       # The response for ListDatabases.
       class ListDatabasesResponse
         include Google::Apis::Core::Hashable
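ListDatabaseRolesResponse follows the usual list/page-token shape. A hedged sketch of draining every page; `fetch_page` is a stand-in for whichever generated service method returns this response type, since that method is not part of this hunk:

  # `fetch_page` is assumed to return a
  # Google::Apis::SpannerV1::ListDatabaseRolesResponse for the given token.
  roles = []
  page_token = nil
  loop do
    response = fetch_page(page_token)
    roles.concat(response.database_roles.to_a)
    page_token = response.next_page_token
    break if page_token.nil? || page_token.empty?
  end
  roles.each { |role| puts role.name }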
@@ -3921,6 +3729,11 @@ module Google
         # @return [String]
         attr_accessor :create_time

+        # The database role which created this session.
+        # Corresponds to the JSON property `creatorRole`
+        # @return [String]
+        attr_accessor :creator_role
+
         # The labels for the session. * Label keys must be between 1 and 63 characters
         # long and must conform to the following regular expression: `[a-z]([-a-z0-9]*[a-
         # z0-9])?`. * Label values must be between 0 and 63 characters long and must
@@ -3944,6 +3757,7 @@ module Google
         def update!(**args)
           @approximate_last_use_time = args[:approximate_last_use_time] if args.key?(:approximate_last_use_time)
           @create_time = args[:create_time] if args.key?(:create_time)
+          @creator_role = args[:creator_role] if args.key?(:creator_role)
           @labels = args[:labels] if args.key?(:labels)
           @name = args[:name] if args.key?(:name)
         end
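Together with the attr_accessor added in the previous hunk, this wires the new creator_role field into Session. A sketch of how it might be attached to a session request, assuming the CreateSessionRequest class is unchanged from 0.28.0; the role name is a placeholder, and whether the backend treats the field as input is defined by the Spanner API rather than by this diff:

  request = Google::Apis::SpannerV1::CreateSessionRequest.new(
    session: Google::Apis::SpannerV1::Session.new(creator_role: "analytics_reader")
  )
  request.session.creator_role # => "analytics_reader"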
@@ -4196,165 +4010,45 @@ module Google
         end
       end

-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      #
-      # Cloud Spanner makes no guarantees about how long the transaction's locks were
-      # held for. It is an error to use Cloud Spanner locks for any sort of mutual
-      # exclusion other than between Cloud Spanner transactions themselves. Retrying
-      # aborted transactions: When a transaction aborts, the application can choose to
-      # retry the whole transaction again. To maximize the chances of successfully
-      # committing the retry, the client should execute the retry in the same session
-      # as the original attempt. The original session's lock priority increases with
-      # each consecutive abort, meaning that each attempt has a slightly better chance
-      # of success than the previous. Under some circumstances (for example, many
-      # transactions attempting to modify the same row(s)), a transaction can abort
-      # many times in a short period before successfully committing. Thus, it is not a
-      # good idea to cap the number of retries a transaction can attempt; instead, it
-      # is better to limit the total amount of time spent retrying. Idle transactions:
-      # A transaction is considered idle if it has no outstanding reads or SQL queries
-      # and has not started a read or SQL query within the last 10 seconds. Idle
-      # transactions can be aborted by Cloud Spanner so that they don't hold on to
-      # locks indefinitely. If an idle transaction is aborted, the commit will fail
-      # with error `ABORTED`. If this behavior is undesirable, periodically executing
-      # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
-      # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
-      # only transactions provides a simpler method than locking read-write
-      # transactions for doing several consistent reads. However, this type of
-      # transaction does not support writes. Snapshot transactions do not take locks.
-      # Instead, they work by choosing a Cloud Spanner timestamp, then executing all
-      # reads at that timestamp. Since they do not acquire locks, they do not block
-      # concurrent read-write transactions. Unlike locking read-write transactions,
-      # snapshot read-only transactions never abort. They can fail if the chosen read
-      # timestamp is garbage collected; however, the default garbage collection policy
-      # is generous enough that most applications do not need to worry about this in
-      # practice. Snapshot read-only transactions do not need to call Commit or
-      # Rollback (and in fact are not permitted to do so). To execute a snapshot
-      # transaction, the client specifies a timestamp bound, which tells Cloud Spanner
-      # how to choose a read timestamp. The types of timestamp bound are: - Strong (
-      # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
-      # database to be read is geographically distributed, stale read-only
-      # transactions can execute more quickly than strong or read-write transactions,
-      # because they are able to execute far from the leader replica. Each type of
-      # timestamp bound is discussed in detail below. Strong: Strong reads are
-      # guaranteed to see the effects of all transactions that have committed before
-      # the start of the read. Furthermore, all rows yielded by a single read are
-      # consistent with each other -- if any part of the read observes a transaction,
-      # all parts of the read see the transaction. Strong reads are not repeatable:
-      # two consecutive strong read-only transactions might return inconsistent
-      # results if there are concurrent writes. If consistency across reads is
-      # required, the reads should be executed within a transaction or at an exact
-      # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
-      # timestamp bounds execute reads at a user-specified timestamp. Reads at a
-      # timestamp are guaranteed to see a consistent prefix of the global transaction
-      # history: they observe modifications done by all transactions with a commit
-      # timestamp less than or equal to the read timestamp, and observe none of the
-      # modifications done by transactions with a larger commit timestamp. They will
-      # block until all conflicting transactions that may be assigned commit
-      # timestamps <= the read timestamp have finished. The timestamp can either be
-      # expressed as an absolute Cloud Spanner commit timestamp or a staleness
-      # relative to the current time. These modes do not require a "negotiation phase"
-      # to pick a timestamp. As a result, they execute slightly faster than the
-      # equivalent boundedly stale concurrency modes. On the other hand, boundedly
-      # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
-      # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
-      # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
-      # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
-      # the newest timestamp within the staleness bound that allows execution of the
-      # reads at the closest available replica without blocking. All rows yielded are
-      # consistent with each other -- if any part of the read observes a transaction,
-      # all parts of the read see the transaction. Boundedly stale reads are not
-      # repeatable: two stale reads, even if they use the same staleness bound, can
-      # execute at different timestamps and thus return inconsistent results.
-      # Boundedly stale reads execute in two phases: the first phase negotiates a
-      # timestamp among all replicas needed to serve the read. In the second phase,
-      # reads are executed at the negotiated timestamp. As a result of the two phase
-      # execution, bounded staleness reads are usually a little slower than comparable
-      # exact staleness reads. However, they are typically able to return fresher
-      # results, and are more likely to execute at the closest replica. Because the
-      # timestamp negotiation requires up-front knowledge of which rows will be read,
-      # it can only be used with single-use read-only transactions. See
-      # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
-      # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
-      # continuously garbage collects deleted and overwritten data in the background
-      # to reclaim storage space. This process is known as "version GC". By default,
-      # version GC reclaims versions after they are one hour old. Because of this,
-      # Cloud Spanner cannot perform reads at read timestamps more than one hour in
-      # the past. This restriction also applies to in-progress reads and/or SQL
-      # queries whose timestamp become too old while executing. Reads and SQL queries
-      # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
-      # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
-      # period as long as one week, which allows Cloud Spanner to perform reads up to
-      # one week in the past. Partitioned DML transactions: Partitioned DML
-      # transactions are used to execute DML statements with a different execution
-      # strategy that provides different, and often better, scalability properties for
-      # large, table-wide operations than DML in a ReadWrite transaction. Smaller
-      # scoped statements, such as an OLTP workload, should prefer using ReadWrite
-      # transactions. Partitioned DML partitions the keyspace and runs the DML
-      # statement on each partition in separate, internal transactions. These
-      # transactions commit automatically when complete, and run independently from
-      # one another. To reduce lock contention, this execution strategy only acquires
-      # read locks on rows that match the WHERE clause of the statement. Additionally,
-      # the smaller per-partition transactions hold locks for less time. That said,
-      # Partitioned DML is not a drop-in replacement for standard DML used in
-      # ReadWrite transactions. - The DML statement must be fully-partitionable.
-      # Specifically, the statement must be expressible as the union of many
-      # statements which each access only a single row of the table. - The statement
-      # is not applied atomically to all rows of the table. Rather, the statement is
-      # applied atomically to partitions of the table, in independent transactions.
-      # Secondary index rows are updated atomically with the base table rows. -
-      # Partitioned DML does not guarantee exactly-once execution semantics against a
-      # partition. The statement will be applied at least once to each partition. It
-      # is strongly recommended that the DML statement should be idempotent to avoid
-      # unexpected results. For instance, it is potentially dangerous to run a
-      # statement such as `UPDATE table SET column = column + 1` as it could be run
-      # multiple times against some rows. - The partitions are committed automatically
-      # - there is no support for Commit or Rollback. If the call returns an error, or
-      # if the client issuing the ExecuteSql call dies, it is possible that some rows
-      # had the statement executed on them successfully. It is also possible that
-      # statement was never executed against other rows. - Partitioned DML
-      # transactions may only contain the execution of a single DML statement via
-      # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
-      # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
-      # violation, division by zero, or a value that cannot be stored due to schema
-      # constraints), then the operation is stopped at that point and an error is
-      # returned. It is possible that at this point, some partitions have been
+      # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
+      # true, a special value of 2^63 - 2 will be returned in the Transaction message
+      # that describes the transaction, instead of a valid read timestamp. This
+      # special value should be discarded and not used for any subsequent queries.
+      # Please see https://cloud.google.com/spanner/docs/change-streams for more
+      # details on how to query the change stream TVFs. Partitioned DML transactions:
+      # Partitioned DML transactions are used to execute DML statements with a
+      # different execution strategy that provides different, and often better,
+      # scalability properties for large, table-wide operations than DML in a
+      # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+      # should prefer using ReadWrite transactions. Partitioned DML partitions the
+      # keyspace and runs the DML statement on each partition in separate, internal
+      # transactions. These transactions commit automatically when complete, and run
+      # independently from one another. To reduce lock contention, this execution
+      # strategy only acquires read locks on rows that match the WHERE clause of the
+      # statement. Additionally, the smaller per-partition transactions hold locks for
+      # less time. That said, Partitioned DML is not a drop-in replacement for
+      # standard DML used in ReadWrite transactions. - The DML statement must be fully-
+      # partitionable. Specifically, the statement must be expressible as the union of
+      # many statements which each access only a single row of the table. - The
+      # statement is not applied atomically to all rows of the table. Rather, the
+      # statement is applied atomically to partitions of the table, in independent
+      # transactions. Secondary index rows are updated atomically with the base table
+      # rows. - Partitioned DML does not guarantee exactly-once execution semantics
+      # against a partition. The statement will be applied at least once to each
+      # partition. It is strongly recommended that the DML statement should be
+      # idempotent to avoid unexpected results. For instance, it is potentially
+      # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
+      # it could be run multiple times against some rows. - The partitions are
+      # committed automatically - there is no support for Commit or Rollback. If the
+      # call returns an error, or if the client issuing the ExecuteSql call dies, it
+      # is possible that some rows had the statement executed on them successfully. It
+      # is also possible that statement was never executed against other rows. -
+      # Partitioned DML transactions may only contain the execution of a single DML
+      # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
+      # during the execution of the partitioned DML operation (for instance, a UNIQUE
+      # INDEX violation, division by zero, or a value that cannot be stored due to
+      # schema constraints), then the operation is stopped at that point and an error
+      # is returned. It is possible that at this point, some partitions have been
       # committed (or even committed multiple times), and other partitions have not
       # been run at all. Given the above, Partitioned DML is good fit for large,
       # database-wide, operations that are idempotent, such as deleting old rows from
@@ -4395,165 +4089,45 @@ module Google
|
|
4395
4089
|
class TransactionSelector
|
4396
4090
|
include Google::Apis::Core::Hashable
|
4397
4091
|
|
4398
|
-
#
|
4399
|
-
#
|
4400
|
-
#
|
4401
|
-
#
|
4402
|
-
#
|
4403
|
-
#
|
4404
|
-
#
|
4405
|
-
#
|
4406
|
-
#
|
4407
|
-
#
|
4408
|
-
#
|
4409
|
-
#
|
4410
|
-
#
|
4411
|
-
#
|
4412
|
-
#
|
4413
|
-
#
|
4414
|
-
#
|
4415
|
-
#
|
4416
|
-
#
|
4417
|
-
#
|
4418
|
-
#
|
4419
|
-
#
|
4420
|
-
#
|
4421
|
-
#
|
4422
|
-
#
|
4423
|
-
#
|
4424
|
-
#
|
4425
|
-
#
|
4426
|
-
#
|
4427
|
-
#
|
4428
|
-
#
|
4429
|
-
#
|
4430
|
-
#
|
4431
|
-
#
|
4432
|
-
#
|
4433
|
-
#
|
4434
|
-
#
|
4435
|
-
#
|
4436
|
-
#
|
4437
|
-
# Cloud Spanner makes no guarantees about how long the transaction's locks were
|
4438
|
-
# held for. It is an error to use Cloud Spanner locks for any sort of mutual
|
4439
|
-
# exclusion other than between Cloud Spanner transactions themselves. Retrying
|
4440
|
-
# aborted transactions: When a transaction aborts, the application can choose to
|
4441
|
-
# retry the whole transaction again. To maximize the chances of successfully
|
4442
|
-
# committing the retry, the client should execute the retry in the same session
|
4443
|
-
# as the original attempt. The original session's lock priority increases with
|
4444
|
-
# each consecutive abort, meaning that each attempt has a slightly better chance
|
4445
|
-
# of success than the previous. Under some circumstances (for example, many
|
4446
|
-
# transactions attempting to modify the same row(s)), a transaction can abort
|
4447
|
-
# many times in a short period before successfully committing. Thus, it is not a
|
4448
|
-
# good idea to cap the number of retries a transaction can attempt; instead, it
|
4449
|
-
# is better to limit the total amount of time spent retrying. Idle transactions:
|
4450
|
-
# A transaction is considered idle if it has no outstanding reads or SQL queries
|
4451
|
-
# and has not started a read or SQL query within the last 10 seconds. Idle
|
4452
|
-
# transactions can be aborted by Cloud Spanner so that they don't hold on to
|
4453
|
-
# locks indefinitely. If an idle transaction is aborted, the commit will fail
|
4454
|
-
# with error `ABORTED`. If this behavior is undesirable, periodically executing
|
4455
|
-
# a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
|
4456
|
-
# transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
|
4457
|
-
# only transactions provides a simpler method than locking read-write
|
4458
|
-
# transactions for doing several consistent reads. However, this type of
|
4459
|
-
# transaction does not support writes. Snapshot transactions do not take locks.
|
4460
|
-
# Instead, they work by choosing a Cloud Spanner timestamp, then executing all
|
4461
|
-
# reads at that timestamp. Since they do not acquire locks, they do not block
|
4462
|
-
# concurrent read-write transactions. Unlike locking read-write transactions,
|
4463
|
-
# snapshot read-only transactions never abort. They can fail if the chosen read
|
4464
|
-
# timestamp is garbage collected; however, the default garbage collection policy
|
4465
|
-
# is generous enough that most applications do not need to worry about this in
|
4466
|
-
# practice. Snapshot read-only transactions do not need to call Commit or
|
4467
|
-
# Rollback (and in fact are not permitted to do so). To execute a snapshot
|
4468
|
-
# transaction, the client specifies a timestamp bound, which tells Cloud Spanner
|
4469
|
-
# how to choose a read timestamp. The types of timestamp bound are: - Strong (
|
4470
|
-
# the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
|
4471
|
-
# database to be read is geographically distributed, stale read-only
|
4472
|
-
# transactions can execute more quickly than strong or read-write transactions,
|
4473
|
-
# because they are able to execute far from the leader replica. Each type of
|
4474
|
-
# timestamp bound is discussed in detail below. Strong: Strong reads are
|
4475
|
-
# guaranteed to see the effects of all transactions that have committed before
|
4476
|
-
# the start of the read. Furthermore, all rows yielded by a single read are
|
4477
|
-
# consistent with each other -- if any part of the read observes a transaction,
|
4478
|
-
# all parts of the read see the transaction. Strong reads are not repeatable:
|
4479
|
-
# two consecutive strong read-only transactions might return inconsistent
|
4480
|
-
# results if there are concurrent writes. If consistency across reads is
|
4481
|
-
# required, the reads should be executed within a transaction or at an exact
|
4482
|
-
# read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
|
4483
|
-
# timestamp bounds execute reads at a user-specified timestamp. Reads at a
|
4484
|
-
# timestamp are guaranteed to see a consistent prefix of the global transaction
|
4485
|
-
# history: they observe modifications done by all transactions with a commit
|
4486
|
-
# timestamp less than or equal to the read timestamp, and observe none of the
|
4487
|
-
# modifications done by transactions with a larger commit timestamp. They will
|
4488
|
-
# block until all conflicting transactions that may be assigned commit
|
4489
|
-
# timestamps <= the read timestamp have finished. The timestamp can either be
|
4490
|
-
# expressed as an absolute Cloud Spanner commit timestamp or a staleness
|
4491
|
-
# relative to the current time. These modes do not require a "negotiation phase"
|
4492
|
-
# to pick a timestamp. As a result, they execute slightly faster than the
|
4493
|
-
# equivalent boundedly stale concurrency modes. On the other hand, boundedly
|
4494
|
-
# stale reads usually return fresher results. See TransactionOptions.ReadOnly.
|
4495
|
-
# read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
|
4496
|
-
# staleness: Bounded staleness modes allow Cloud Spanner to pick the read
|
4497
|
-
# timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
|
4498
|
-
# the newest timestamp within the staleness bound that allows execution of the
|
4499
|
-
# reads at the closest available replica without blocking. All rows yielded are
|
4500
|
-
# consistent with each other -- if any part of the read observes a transaction,
|
4501
|
-
# all parts of the read see the transaction. Boundedly stale reads are not
|
4502
|
-
# repeatable: two stale reads, even if they use the same staleness bound, can
|
4503
|
-
# execute at different timestamps and thus return inconsistent results.
|
4504
|
-
# Boundedly stale reads execute in two phases: the first phase negotiates a
|
4505
|
-
# timestamp among all replicas needed to serve the read. In the second phase,
|
4506
|
-
# reads are executed at the negotiated timestamp. As a result of the two phase
|
4507
|
-
# execution, bounded staleness reads are usually a little slower than comparable
|
4508
|
-
# exact staleness reads. However, they are typically able to return fresher
|
4509
|
-
# results, and are more likely to execute at the closest replica. Because the
|
4510
|
-
# timestamp negotiation requires up-front knowledge of which rows will be read,
|
4511
|
-
# it can only be used with single-use read-only transactions. See
|
4512
|
-
# TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
|
4513
|
-
# min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
|
4514
|
-
# continuously garbage collects deleted and overwritten data in the background
|
4515
|
-
# to reclaim storage space. This process is known as "version GC". By default,
|
4516
|
-
# version GC reclaims versions after they are one hour old. Because of this,
|
4517
|
-
# Cloud Spanner cannot perform reads at read timestamps more than one hour in
|
4518
|
-
# the past. This restriction also applies to in-progress reads and/or SQL
|
4519
|
-
# queries whose timestamp become too old while executing. Reads and SQL queries
|
4520
|
-
# with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
|
4521
|
-
# can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
|
4522
|
-
# period as long as one week, which allows Cloud Spanner to perform reads up to
|
4523
|
-
# one week in the past. Partitioned DML transactions: Partitioned DML
|
4524
|
-
# transactions are used to execute DML statements with a different execution
|
4525
|
-
# strategy that provides different, and often better, scalability properties for
|
4526
|
-
# large, table-wide operations than DML in a ReadWrite transaction. Smaller
|
4527
|
-
# scoped statements, such as an OLTP workload, should prefer using ReadWrite
|
4528
|
-
# transactions. Partitioned DML partitions the keyspace and runs the DML
|
4529
|
-
# statement on each partition in separate, internal transactions. These
|
4530
|
-
# transactions commit automatically when complete, and run independently from
|
4531
|
-
# one another. To reduce lock contention, this execution strategy only acquires
|
4532
|
-
# read locks on rows that match the WHERE clause of the statement. Additionally,
|
4533
|
-
# the smaller per-partition transactions hold locks for less time. That said,
|
4534
|
-
# Partitioned DML is not a drop-in replacement for standard DML used in
|
4535
|
-
# ReadWrite transactions. - The DML statement must be fully-partitionable.
|
4536
|
-
# Specifically, the statement must be expressible as the union of many
|
4537
|
-
# statements which each access only a single row of the table. - The statement
|
4538
|
-
# is not applied atomically to all rows of the table. Rather, the statement is
|
4539
|
-
# applied atomically to partitions of the table, in independent transactions.
|
4540
|
-
# Secondary index rows are updated atomically with the base table rows. -
|
4541
|
-
# Partitioned DML does not guarantee exactly-once execution semantics against a
|
4542
|
-
# partition. The statement will be applied at least once to each partition. It
|
4543
|
-
# is strongly recommended that the DML statement should be idempotent to avoid
|
4544
|
-
# unexpected results. For instance, it is potentially dangerous to run a
|
4545
|
-
# statement such as `UPDATE table SET column = column + 1` as it could be run
|
4546
|
-
# multiple times against some rows. - The partitions are committed automatically
|
4547
|
-
# - there is no support for Commit or Rollback. If the call returns an error, or
|
4548
|
-
# if the client issuing the ExecuteSql call dies, it is possible that some rows
|
4549
|
-
# had the statement executed on them successfully. It is also possible that
|
4550
|
-
# the statement was never executed against other rows. - Partitioned DML
|
4551
|
-
# transactions may only contain the execution of a single DML statement via
|
4552
|
-
# ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
|
4553
|
-
# execution of the partitioned DML operation (for instance, a UNIQUE INDEX
|
4554
|
-
# violation, division by zero, or a value that cannot be stored due to schema
|
4555
|
-
# constraints), then the operation is stopped at that point and an error is
|
4556
|
-
# returned. It is possible that at this point, some partitions have been
|
4092
|
+
# In addition, if TransactionOptions.read_only.return_read_timestamp is set to
|
4093
|
+
# true, a special value of 2^63 - 2 will be returned in the Transaction message
|
4094
|
+
# that describes the transaction, instead of a valid read timestamp. This
|
4095
|
+
# special value should be discarded and not used for any subsequent queries.
|
4096
|
+
# Please see https://cloud.google.com/spanner/docs/change-streams for more
|
4097
|
+
# details on how to query the change stream TVFs. Partitioned DML transactions:
|
4098
|
+
# Partitioned DML transactions are used to execute DML statements with a
|
4099
|
+
# different execution strategy that provides different, and often better,
|
4100
|
+
# scalability properties for large, table-wide operations than DML in a
|
4101
|
+
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
|
4102
|
+
# should prefer using ReadWrite transactions. Partitioned DML partitions the
|
4103
|
+
# keyspace and runs the DML statement on each partition in separate, internal
|
4104
|
+
# transactions. These transactions commit automatically when complete, and run
|
4105
|
+
# independently from one another. To reduce lock contention, this execution
|
4106
|
+
# strategy only acquires read locks on rows that match the WHERE clause of the
|
4107
|
+
# statement. Additionally, the smaller per-partition transactions hold locks for
|
4108
|
+
# less time. That said, Partitioned DML is not a drop-in replacement for
|
4109
|
+
# standard DML used in ReadWrite transactions. - The DML statement must be fully-
|
4110
|
+
# partitionable. Specifically, the statement must be expressible as the union of
|
4111
|
+
# many statements which each access only a single row of the table. - The
|
4112
|
+
# statement is not applied atomically to all rows of the table. Rather, the
|
4113
|
+
# statement is applied atomically to partitions of the table, in independent
|
4114
|
+
# transactions. Secondary index rows are updated atomically with the base table
|
4115
|
+
# rows. - Partitioned DML does not guarantee exactly-once execution semantics
|
4116
|
+
# against a partition. The statement will be applied at least once to each
|
4117
|
+
# partition. It is strongly recommended that the DML statement should be
|
4118
|
+
# idempotent to avoid unexpected results. For instance, it is potentially
|
4119
|
+
# dangerous to run a statement such as `UPDATE table SET column = column + 1` as
|
4120
|
+
# it could be run multiple times against some rows. - The partitions are
|
4121
|
+
# committed automatically - there is no support for Commit or Rollback. If the
|
4122
|
+
# call returns an error, or if the client issuing the ExecuteSql call dies, it
|
4123
|
+
# is possible that some rows had the statement executed on them successfully. It
|
4124
|
+
# is also possible that the statement was never executed against other rows. -
|
4125
|
+
# Partitioned DML transactions may only contain the execution of a single DML
|
4126
|
+
# statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
|
4127
|
+
# during the execution of the partitioned DML operation (for instance, a UNIQUE
|
4128
|
+
# INDEX violation, division by zero, or a value that cannot be stored due to
|
4129
|
+
# schema constraints), then the operation is stopped at that point and an error
|
4130
|
+
# is returned. It is possible that at this point, some partitions have been
|
4557
4131
|
# committed (or even committed multiple times), and other partitions have not
|
4558
4132
|
# been run at all. Given the above, Partitioned DML is a good fit for large,
|
4559
4133
|
# database-wide, operations that are idempotent, such as deleting old rows from
|
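The Partitioned DML notes in this hunk map onto the gem's generated classes. Below is a minimal sketch of running an idempotent, table-wide DELETE as a Partitioned DML transaction. The session path and table are placeholders, and the service method names (`begin_session_transaction`, `execute_session_sql`) are assumed to follow the generator's usual naming for this API; verify them against the generated service.rb before relying on them.

require "google/apis/spanner_v1"

spanner = Google::Apis::SpannerV1::SpannerService.new
# Authorization setup (e.g. via googleauth) is omitted here.
# Placeholder session path -- substitute a real one.
session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

# Begin a Partitioned DML transaction: only partitioned_dml is set in the options.
txn = spanner.begin_session_transaction(
  session,
  Google::Apis::SpannerV1::BeginTransactionRequest.new(
    options: Google::Apis::SpannerV1::TransactionOptions.new(
      partitioned_dml: Google::Apis::SpannerV1::PartitionedDml.new
    )
  )
)

# Run the single DML statement against that transaction. The statement should be
# idempotent, since it may be applied more than once to some partitions.
spanner.execute_session_sql(
  session,
  Google::Apis::SpannerV1::ExecuteSqlRequest.new(
    sql: "DELETE FROM Events WHERE EventDate < '2020-01-01'",
    transaction: Google::Apis::SpannerV1::TransactionSelector.new(id: txn.id)
  )
)
# No Commit or Rollback call: each partition commits automatically.

Because the partitions commit independently, a failure partway through can leave some partitions updated and others untouched, which is why the comment insists on idempotent statements.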
@@ -4568,165 +4142,45 @@ module Google
|
|
4568
4142
|
# @return [String]
|
4569
4143
|
attr_accessor :id
|
4570
4144
|
|
4571
|
-
#
|
4572
|
-
#
|
4573
|
-
#
|
4574
|
-
#
|
4575
|
-
#
|
4576
|
-
#
|
4577
|
-
#
|
4578
|
-
#
|
4579
|
-
#
|
4580
|
-
#
|
4581
|
-
#
|
4582
|
-
#
|
4583
|
-
#
|
4584
|
-
#
|
4585
|
-
#
|
4586
|
-
#
|
4587
|
-
#
|
4588
|
-
#
|
4589
|
-
#
|
4590
|
-
#
|
4591
|
-
#
|
4592
|
-
#
|
4593
|
-
#
|
4594
|
-
#
|
4595
|
-
#
|
4596
|
-
#
|
4597
|
-
#
|
4598
|
-
#
|
4599
|
-
#
|
4600
|
-
#
|
4601
|
-
#
|
4602
|
-
#
|
4603
|
-
#
|
4604
|
-
#
|
4605
|
-
#
|
4606
|
-
#
|
4607
|
-
#
|
4608
|
-
#
|
4609
|
-
#
|
4610
|
-
# Cloud Spanner makes no guarantees about how long the transaction's locks were
|
4611
|
-
# held for. It is an error to use Cloud Spanner locks for any sort of mutual
|
4612
|
-
# exclusion other than between Cloud Spanner transactions themselves. Retrying
|
4613
|
-
# aborted transactions: When a transaction aborts, the application can choose to
|
4614
|
-
# retry the whole transaction again. To maximize the chances of successfully
|
4615
|
-
# committing the retry, the client should execute the retry in the same session
|
4616
|
-
# as the original attempt. The original session's lock priority increases with
|
4617
|
-
# each consecutive abort, meaning that each attempt has a slightly better chance
|
4618
|
-
# of success than the previous. Under some circumstances (for example, many
|
4619
|
-
# transactions attempting to modify the same row(s)), a transaction can abort
|
4620
|
-
# many times in a short period before successfully committing. Thus, it is not a
|
4621
|
-
# good idea to cap the number of retries a transaction can attempt; instead, it
|
4622
|
-
# is better to limit the total amount of time spent retrying. Idle transactions:
|
4623
|
-
# A transaction is considered idle if it has no outstanding reads or SQL queries
|
4624
|
-
# and has not started a read or SQL query within the last 10 seconds. Idle
|
4625
|
-
# transactions can be aborted by Cloud Spanner so that they don't hold on to
|
4626
|
-
# locks indefinitely. If an idle transaction is aborted, the commit will fail
|
4627
|
-
# with error `ABORTED`. If this behavior is undesirable, periodically executing
|
4628
|
-
# a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
|
4629
|
-
# transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
|
4630
|
-
# only transactions provide a simpler method than locking read-write
|
4631
|
-
# transactions for doing several consistent reads. However, this type of
|
4632
|
-
# transaction does not support writes. Snapshot transactions do not take locks.
|
4633
|
-
# Instead, they work by choosing a Cloud Spanner timestamp, then executing all
|
4634
|
-
# reads at that timestamp. Since they do not acquire locks, they do not block
|
4635
|
-
# concurrent read-write transactions. Unlike locking read-write transactions,
|
4636
|
-
# snapshot read-only transactions never abort. They can fail if the chosen read
|
4637
|
-
# timestamp is garbage collected; however, the default garbage collection policy
|
4638
|
-
# is generous enough that most applications do not need to worry about this in
|
4639
|
-
# practice. Snapshot read-only transactions do not need to call Commit or
|
4640
|
-
# Rollback (and in fact are not permitted to do so). To execute a snapshot
|
4641
|
-
# transaction, the client specifies a timestamp bound, which tells Cloud Spanner
|
4642
|
-
# how to choose a read timestamp. The types of timestamp bound are: - Strong (
|
4643
|
-
# the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
|
4644
|
-
# database to be read is geographically distributed, stale read-only
|
4645
|
-
# transactions can execute more quickly than strong or read-write transactions,
|
4646
|
-
# because they are able to execute far from the leader replica. Each type of
|
4647
|
-
# timestamp bound is discussed in detail below. Strong: Strong reads are
|
4648
|
-
# guaranteed to see the effects of all transactions that have committed before
|
4649
|
-
# the start of the read. Furthermore, all rows yielded by a single read are
|
4650
|
-
# consistent with each other -- if any part of the read observes a transaction,
|
4651
|
-
# all parts of the read see the transaction. Strong reads are not repeatable:
|
4652
|
-
# two consecutive strong read-only transactions might return inconsistent
|
4653
|
-
# results if there are concurrent writes. If consistency across reads is
|
4654
|
-
# required, the reads should be executed within a transaction or at an exact
|
4655
|
-
# read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
|
4656
|
-
# timestamp bounds execute reads at a user-specified timestamp. Reads at a
|
4657
|
-
# timestamp are guaranteed to see a consistent prefix of the global transaction
|
4658
|
-
# history: they observe modifications done by all transactions with a commit
|
4659
|
-
# timestamp less than or equal to the read timestamp, and observe none of the
|
4660
|
-
# modifications done by transactions with a larger commit timestamp. They will
|
4661
|
-
# block until all conflicting transactions that may be assigned commit
|
4662
|
-
# timestamps <= the read timestamp have finished. The timestamp can either be
|
4663
|
-
# expressed as an absolute Cloud Spanner commit timestamp or a staleness
|
4664
|
-
# relative to the current time. These modes do not require a "negotiation phase"
|
4665
|
-
# to pick a timestamp. As a result, they execute slightly faster than the
|
4666
|
-
# equivalent boundedly stale concurrency modes. On the other hand, boundedly
|
4667
|
-
# stale reads usually return fresher results. See TransactionOptions.ReadOnly.
|
4668
|
-
# read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
|
4669
|
-
# staleness: Bounded staleness modes allow Cloud Spanner to pick the read
|
4670
|
-
# timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
|
4671
|
-
# the newest timestamp within the staleness bound that allows execution of the
|
4672
|
-
# reads at the closest available replica without blocking. All rows yielded are
|
4673
|
-
# consistent with each other -- if any part of the read observes a transaction,
|
4674
|
-
# all parts of the read see the transaction. Boundedly stale reads are not
|
4675
|
-
# repeatable: two stale reads, even if they use the same staleness bound, can
|
4676
|
-
# execute at different timestamps and thus return inconsistent results.
|
4677
|
-
# Boundedly stale reads execute in two phases: the first phase negotiates a
|
4678
|
-
# timestamp among all replicas needed to serve the read. In the second phase,
|
4679
|
-
# reads are executed at the negotiated timestamp. As a result of the two-phase
|
4680
|
-
# execution, bounded staleness reads are usually a little slower than comparable
|
4681
|
-
# exact staleness reads. However, they are typically able to return fresher
|
4682
|
-
# results, and are more likely to execute at the closest replica. Because the
|
4683
|
-
# timestamp negotiation requires up-front knowledge of which rows will be read,
|
4684
|
-
# it can only be used with single-use read-only transactions. See
|
4685
|
-
# TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
|
4686
|
-
# min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
|
4687
|
-
# continuously garbage collects deleted and overwritten data in the background
|
4688
|
-
# to reclaim storage space. This process is known as "version GC". By default,
|
4689
|
-
# version GC reclaims versions after they are one hour old. Because of this,
|
4690
|
-
# Cloud Spanner cannot perform reads at read timestamps more than one hour in
|
4691
|
-
# the past. This restriction also applies to in-progress reads and/or SQL
|
4692
|
-
# queries whose timestamps become too old while executing. Reads and SQL queries
|
4693
|
-
# with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
|
4694
|
-
# can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
|
4695
|
-
# period as long as one week, which allows Cloud Spanner to perform reads up to
|
4696
|
-
# one week in the past. Partitioned DML transactions: Partitioned DML
|
4697
|
-
# transactions are used to execute DML statements with a different execution
|
4698
|
-
# strategy that provides different, and often better, scalability properties for
|
4699
|
-
# large, table-wide operations than DML in a ReadWrite transaction. Smaller
|
4700
|
-
# scoped statements, such as an OLTP workload, should prefer using ReadWrite
|
4701
|
-
# transactions. Partitioned DML partitions the keyspace and runs the DML
|
4702
|
-
# statement on each partition in separate, internal transactions. These
|
4703
|
-
# transactions commit automatically when complete, and run independently from
|
4704
|
-
# one another. To reduce lock contention, this execution strategy only acquires
|
4705
|
-
# read locks on rows that match the WHERE clause of the statement. Additionally,
|
4706
|
-
# the smaller per-partition transactions hold locks for less time. That said,
|
4707
|
-
# Partitioned DML is not a drop-in replacement for standard DML used in
|
4708
|
-
# ReadWrite transactions. - The DML statement must be fully-partitionable.
|
4709
|
-
# Specifically, the statement must be expressible as the union of many
|
4710
|
-
# statements which each access only a single row of the table. - The statement
|
4711
|
-
# is not applied atomically to all rows of the table. Rather, the statement is
|
4712
|
-
# applied atomically to partitions of the table, in independent transactions.
|
4713
|
-
# Secondary index rows are updated atomically with the base table rows. -
|
4714
|
-
# Partitioned DML does not guarantee exactly-once execution semantics against a
|
4715
|
-
# partition. The statement will be applied at least once to each partition. It
|
4716
|
-
# is strongly recommended that the DML statement should be idempotent to avoid
|
4717
|
-
# unexpected results. For instance, it is potentially dangerous to run a
|
4718
|
-
# statement such as `UPDATE table SET column = column + 1` as it could be run
|
4719
|
-
# multiple times against some rows. - The partitions are committed automatically
|
4720
|
-
# - there is no support for Commit or Rollback. If the call returns an error, or
|
4721
|
-
# if the client issuing the ExecuteSql call dies, it is possible that some rows
|
4722
|
-
# had the statement executed on them successfully. It is also possible that
|
4723
|
-
# the statement was never executed against other rows. - Partitioned DML
|
4724
|
-
# transactions may only contain the execution of a single DML statement via
|
4725
|
-
# ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
|
4726
|
-
# execution of the partitioned DML operation (for instance, a UNIQUE INDEX
|
4727
|
-
# violation, division by zero, or a value that cannot be stored due to schema
|
4728
|
-
# constraints), then the operation is stopped at that point and an error is
|
4729
|
-
# returned. It is possible that at this point, some partitions have been
|
4145
|
+
# In addition, if TransactionOptions.read_only.return_read_timestamp is set to
|
4146
|
+
# true, a special value of 2^63 - 2 will be returned in the Transaction message
|
4147
|
+
# that describes the transaction, instead of a valid read timestamp. This
|
4148
|
+
# special value should be discarded and not used for any subsequent queries.
|
4149
|
+
# Please see https://cloud.google.com/spanner/docs/change-streams for more
|
4150
|
+
# details on how to query the change stream TVFs. Partitioned DML transactions:
|
4151
|
+
# Partitioned DML transactions are used to execute DML statements with a
|
4152
|
+
# different execution strategy that provides different, and often better,
|
4153
|
+
# scalability properties for large, table-wide operations than DML in a
|
4154
|
+
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
|
4155
|
+
# should prefer using ReadWrite transactions. Partitioned DML partitions the
|
4156
|
+
# keyspace and runs the DML statement on each partition in separate, internal
|
4157
|
+
# transactions. These transactions commit automatically when complete, and run
|
4158
|
+
# independently from one another. To reduce lock contention, this execution
|
4159
|
+
# strategy only acquires read locks on rows that match the WHERE clause of the
|
4160
|
+
# statement. Additionally, the smaller per-partition transactions hold locks for
|
4161
|
+
# less time. That said, Partitioned DML is not a drop-in replacement for
|
4162
|
+
# standard DML used in ReadWrite transactions. - The DML statement must be fully-
|
4163
|
+
# partitionable. Specifically, the statement must be expressible as the union of
|
4164
|
+
# many statements which each access only a single row of the table. - The
|
4165
|
+
# statement is not applied atomically to all rows of the table. Rather, the
|
4166
|
+
# statement is applied atomically to partitions of the table, in independent
|
4167
|
+
# transactions. Secondary index rows are updated atomically with the base table
|
4168
|
+
# rows. - Partitioned DML does not guarantee exactly-once execution semantics
|
4169
|
+
# against a partition. The statement will be applied at least once to each
|
4170
|
+
# partition. It is strongly recommended that the DML statement should be
|
4171
|
+
# idempotent to avoid unexpected results. For instance, it is potentially
|
4172
|
+
# dangerous to run a statement such as `UPDATE table SET column = column + 1` as
|
4173
|
+
# it could be run multiple times against some rows. - The partitions are
|
4174
|
+
# committed automatically - there is no support for Commit or Rollback. If the
|
4175
|
+
# call returns an error, or if the client issuing the ExecuteSql call dies, it
|
4176
|
+
# is possible that some rows had the statement executed on them successfully. It
|
4177
|
+
# is also possible that the statement was never executed against other rows. -
|
4178
|
+
# Partitioned DML transactions may only contain the execution of a single DML
|
4179
|
+
# statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
|
4180
|
+
# during the execution of the partitioned DML operation (for instance, a UNIQUE
|
4181
|
+
# INDEX violation, division by zero, or a value that cannot be stored due to
|
4182
|
+
# schema constraints), then the operation is stopped at that point and an error
|
4183
|
+
# is returned. It is possible that at this point, some partitions have been
|
4730
4184
|
# committed (or even committed multiple times), and other partitions have not
|
4731
4185
|
# been run at all. Given the above, Partitioned DML is a good fit for large,
|
4732
4186
|
# database-wide, operations that are idempotent, such as deleting old rows from
|
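Much of the comment removed in this second hunk described snapshot read-only timestamp bounds. As a rough sketch of how those options surface through the gem's classes, the following begins an exact-staleness read-only transaction and asks for the chosen read timestamp; per the replacement text, a change stream TVF query may instead report the special 2^63 - 2 value, which should be discarded. The session path is a placeholder, and the method names (`begin_session_transaction`, `execute_session_sql`) are assumptions based on the generator's conventions.

require "google/apis/spanner_v1"

spanner = Google::Apis::SpannerV1::SpannerService.new
# Authorization setup omitted; placeholder session path below.
session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

# Snapshot read-only transaction reading at a timestamp 15 seconds in the past.
txn = spanner.begin_session_transaction(
  session,
  Google::Apis::SpannerV1::BeginTransactionRequest.new(
    options: Google::Apis::SpannerV1::TransactionOptions.new(
      read_only: Google::Apis::SpannerV1::ReadOnly.new(
        exact_staleness: "15s",        # Duration string; see ReadOnly#exact_staleness
        return_read_timestamp: true    # report the timestamp Cloud Spanner chose
      )
    )
  )
)
puts txn.read_timestamp  # discard if it is the special 2^63 - 2 change-stream value

# Queries reference the transaction by id; snapshot read-only transactions
# neither need nor permit Commit or Rollback.
spanner.execute_session_sql(
  session,
  Google::Apis::SpannerV1::ExecuteSqlRequest.new(
    sql: "SELECT SingerId, FirstName FROM Singers",
    transaction: Google::Apis::SpannerV1::TransactionSelector.new(id: txn.id)
  )
)

Bounded staleness (max_staleness or min_read_timestamp) would not go through BeginTransaction at all: as the comment notes, it is only valid for single-use read-only transactions, i.e. supplied inline via TransactionSelector#single_use on the read or query request.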