google-apis-spanner_v1 0.25.0 → 0.26.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 51541562e98ca4e9c993afc467f0a98ff8adaa1eda1805c36bafd9997211b2bc
- data.tar.gz: ddae8e9f8a2915b8462b04e92914d52f13e1707953dd55bb887473a9e2ba773a
+ metadata.gz: eec80b1ffbe8702f803572d10fbe6cd14e056cf35fa6c7a4cd12ba3ba1df8b11
+ data.tar.gz: ea92fce3176c283dbd99e663a9e7ca913e9a2d1a74f2cdc9ccee830f49bf4aa6
  SHA512:
- metadata.gz: dc983c308f4d10749bbf5e7fb820069f8644ba3afb151c60f8f2ae15522aa813c3de799f45b359ec142103b5aee7c7bfe45f0f9dba9e0d4180b274f9953f816e
- data.tar.gz: 8db003dacedd2fd9421d39537dfd0504241013e30a08eec8951b3e8f0a99ea8916086b55829e532ceca0386a8f99f6f70d2cfeb71bae1ac3b56c49cc859bea3b
+ metadata.gz: 03ba93f0e90b0928df96d00329fd83e79689d6eb096d0bb07b43ea3c0fea079c04be2800beebdfccc1b01a927d7bac048994563135ff86f9e2a6b51dc8ee9833
+ data.tar.gz: 402417d83a693a56b0db7ea651feb91254ec2e3de67d46136a077f1f76e452d9efae75bd6fd4b1d5eb1103df7f5c2567137892c2003b1110cd433552e6a7b398
data/CHANGELOG.md CHANGED
@@ -1,5 +1,9 @@
  # Release history for google-apis-spanner_v1

+ ### v0.26.0 (2022-03-30)
+
+ * Regenerated from discovery document revision 20220326
+
  ### v0.25.0 (2022-03-19)

  * Regenerated from discovery document revision 20220310
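The only functional change in this release is the regeneration from discovery document revision 20220326, so upgrading is a version bump. A minimal sketch of pulling in the new release follows; the gem name and version come from this diff, while the require path, service class, and auth scope follow the usual google-apis-* conventions and are assumptions rather than something this diff shows.

# Gemfile -- pin to the regenerated release
gem "google-apis-spanner_v1", "~> 0.26.0"

# application code
require "googleauth"
require "google/apis/spanner_v1"

spanner = Google::Apis::SpannerV1::SpannerService.new
spanner.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/spanner.data"]
)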
@@ -233,7 +233,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -249,9 +249,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -270,7 +270,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -279,14 +279,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -302,7 +302,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -312,7 +312,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -326,7 +326,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -343,51 +343,54 @@ module Google
  # timestamp negotiation requires up-front knowledge of which rows will be read,
  # it can only be used with single-use read-only transactions. See
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
  # continuously garbage collects deleted and overwritten data in the background
  # to reclaim storage space. This process is known as "version GC". By default,
  # version GC reclaims versions after they are one hour old. Because of this,
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
  # the past. This restriction also applies to in-progress reads and/or SQL
  # queries whose timestamp become too old while executing. Reads and SQL queries
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
- # DML statements with a different execution strategy that provides different,
- # and often better, scalability properties for large, table-wide operations than
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
- # workload, should prefer using ReadWrite transactions. Partitioned DML
- # partitions the keyspace and runs the DML statement on each partition in
- # separate, internal transactions. These transactions commit automatically when
- # complete, and run independently from one another. To reduce lock contention,
- # this execution strategy only acquires read locks on rows that match the WHERE
- # clause of the statement. Additionally, the smaller per-partition transactions
- # hold locks for less time. That said, Partitioned DML is not a drop-in
- # replacement for standard DML used in ReadWrite transactions. - The DML
- # statement must be fully-partitionable. Specifically, the statement must be
- # expressible as the union of many statements which each access only a single
- # row of the table. - The statement is not applied atomically to all rows of the
- # table. Rather, the statement is applied atomically to partitions of the table,
- # in independent transactions. Secondary index rows are updated atomically with
- # the base table rows. - Partitioned DML does not guarantee exactly-once
- # execution semantics against a partition. The statement will be applied at
- # least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  # Corresponds to the JSON property `options`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :options
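The timestamp bounds described above map onto the generated model classes. Below is a hedged sketch of building this `options` value for each read-only bound: `Google::Apis::SpannerV1::TransactionOptions` is taken from the `@return` tag, while the nested `ReadOnly` class name, its keyword initializers, and the string encodings for timestamps and durations are assumptions based on the `TransactionOptions.ReadOnly.*` field names in the comment.

require "google/apis/spanner_v1"

# Strong (the default): sees every transaction committed before the read starts.
strong = Google::Apis::SpannerV1::TransactionOptions.new(
  read_only: Google::Apis::SpannerV1::ReadOnly.new(strong: true)
)

# Exact staleness: read at a fixed timestamp in the recent past.
exact = Google::Apis::SpannerV1::TransactionOptions.new(
  read_only: Google::Apis::SpannerV1::ReadOnly.new(read_timestamp: "2022-03-30T12:00:00Z")
)

# Bounded staleness: let Cloud Spanner pick the freshest timestamp within 15 seconds.
bounded = Google::Apis::SpannerV1::TransactionOptions.new(
  read_only: Google::Apis::SpannerV1::ReadOnly.new(max_staleness: "15s")
)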
@@ -545,7 +548,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -561,9 +564,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -582,7 +585,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -591,14 +594,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -614,7 +617,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -624,7 +627,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -638,7 +641,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -655,51 +658,54 @@ module Google
  # timestamp negotiation requires up-front knowledge of which rows will be read,
  # it can only be used with single-use read-only transactions. See
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
  # continuously garbage collects deleted and overwritten data in the background
  # to reclaim storage space. This process is known as "version GC". By default,
  # version GC reclaims versions after they are one hour old. Because of this,
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
  # the past. This restriction also applies to in-progress reads and/or SQL
  # queries whose timestamp become too old while executing. Reads and SQL queries
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
- # DML statements with a different execution strategy that provides different,
- # and often better, scalability properties for large, table-wide operations than
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
- # workload, should prefer using ReadWrite transactions. Partitioned DML
- # partitions the keyspace and runs the DML statement on each partition in
- # separate, internal transactions. These transactions commit automatically when
- # complete, and run independently from one another. To reduce lock contention,
- # this execution strategy only acquires read locks on rows that match the WHERE
- # clause of the statement. Additionally, the smaller per-partition transactions
- # hold locks for less time. That said, Partitioned DML is not a drop-in
- # replacement for standard DML used in ReadWrite transactions. - The DML
- # statement must be fully-partitionable. Specifically, the statement must be
- # expressible as the union of many statements which each access only a single
- # row of the table. - The statement is not applied atomically to all rows of the
- # table. Rather, the statement is applied atomically to partitions of the table,
- # in independent transactions. Secondary index rows are updated atomically with
- # the base table rows. - Partitioned DML does not guarantee exactly-once
- # execution semantics against a partition. The statement will be applied at
- # least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  # Corresponds to the JSON property `singleUseTransaction`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use_transaction
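Given the rules above (one DML statement per partitioned DML transaction, executed via ExecuteSql, and ideally idempotent), a hedged sketch of the flow looks like the following. The session path is a placeholder, and the `begin_session_transaction` / `execute_session_sql` method names and request classes are assumptions based on the generated client's usual naming, not something this diff establishes.

require "google/apis/spanner_v1"

spanner = Google::Apis::SpannerV1::SpannerService.new
session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session" # placeholder

# Begin a partitioned DML transaction.
pdml = Google::Apis::SpannerV1::TransactionOptions.new(
  partitioned_dml: Google::Apis::SpannerV1::PartitionedDml.new
)
txn = spanner.begin_session_transaction(
  session, Google::Apis::SpannerV1::BeginTransactionRequest.new(options: pdml)
)

# Run a single, idempotent statement. A DELETE, unlike `UPDATE t SET c = c + 1`,
# gives the same result even if a partition is applied more than once.
spanner.execute_session_sql(
  session,
  Google::Apis::SpannerV1::ExecuteSqlRequest.new(
    sql: "DELETE FROM Events WHERE CreatedAt < TIMESTAMP '2021-01-01'",
    transaction: Google::Apis::SpannerV1::TransactionSelector.new(id: txn.id)
  )
)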
@@ -1326,8 +1332,7 @@ module Google
  # A generic empty message that you can re-use to avoid defining duplicated empty
  # messages in your APIs. A typical example is to use it as the request or the
  # response type of an API method. For instance: service Foo ` rpc Bar(google.
- # protobuf.Empty) returns (google.protobuf.Empty); ` The JSON representation for
- # `Empty` is empty JSON object ````.
+ # protobuf.Empty) returns (google.protobuf.Empty); `
  class Empty
  include Google::Apis::Core::Hashable

@@ -4196,7 +4201,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -4212,9 +4217,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -4233,7 +4238,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -4242,14 +4247,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -4265,7 +4270,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -4275,7 +4280,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -4289,7 +4294,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -4306,51 +4311,54 @@ module Google
  # timestamp negotiation requires up-front knowledge of which rows will be read,
  # it can only be used with single-use read-only transactions. See
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
  # continuously garbage collects deleted and overwritten data in the background
  # to reclaim storage space. This process is known as "version GC". By default,
  # version GC reclaims versions after they are one hour old. Because of this,
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
  # the past. This restriction also applies to in-progress reads and/or SQL
  # queries whose timestamp become too old while executing. Reads and SQL queries
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
- # DML statements with a different execution strategy that provides different,
- # and often better, scalability properties for large, table-wide operations than
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
- # workload, should prefer using ReadWrite transactions. Partitioned DML
- # partitions the keyspace and runs the DML statement on each partition in
- # separate, internal transactions. These transactions commit automatically when
- # complete, and run independently from one another. To reduce lock contention,
- # this execution strategy only acquires read locks on rows that match the WHERE
- # clause of the statement. Additionally, the smaller per-partition transactions
- # hold locks for less time. That said, Partitioned DML is not a drop-in
- # replacement for standard DML used in ReadWrite transactions. - The DML
- # statement must be fully-partitionable. Specifically, the statement must be
- # expressible as the union of many statements which each access only a single
- # row of the table. - The statement is not applied atomically to all rows of the
- # table. Rather, the statement is applied atomically to partitions of the table,
- # in independent transactions. Secondary index rows are updated atomically with
- # the base table rows. - Partitioned DML does not guarantee exactly-once
- # execution semantics against a partition. The statement will be applied at
- # least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  class TransactionOptions
  include Google::Apis::Core::Hashable

@@ -4392,7 +4400,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -4408,9 +4416,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -4429,7 +4437,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -4438,14 +4446,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -4461,7 +4469,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -4471,7 +4479,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -4485,7 +4493,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -4502,51 +4510,54 @@ module Google
4502
4510
  # timestamp negotiation requires up-front knowledge of which rows will be read,
4503
4511
  # it can only be used with single-use read-only transactions. See
4504
4512
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
4505
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
4513
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
4506
4514
  # continuously garbage collects deleted and overwritten data in the background
4507
4515
  # to reclaim storage space. This process is known as "version GC". By default,
4508
4516
  # version GC reclaims versions after they are one hour old. Because of this,
4509
4517
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
4510
4518
  # the past. This restriction also applies to in-progress reads and/or SQL
4511
4519
  # queries whose timestamp become too old while executing. Reads and SQL queries
4512
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4513
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
4514
- # DML statements with a different execution strategy that provides different,
4515
- # and often better, scalability properties for large, table-wide operations than
4516
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
4517
- # workload, should prefer using ReadWrite transactions. Partitioned DML
4518
- # partitions the keyspace and runs the DML statement on each partition in
4519
- # separate, internal transactions. These transactions commit automatically when
4520
- # complete, and run independently from one another. To reduce lock contention,
4521
- # this execution strategy only acquires read locks on rows that match the WHERE
4522
- # clause of the statement. Additionally, the smaller per-partition transactions
4523
- # hold locks for less time. That said, Partitioned DML is not a drop-in
4524
- # replacement for standard DML used in ReadWrite transactions. - The DML
4525
- # statement must be fully-partitionable. Specifically, the statement must be
4526
- # expressible as the union of many statements which each access only a single
4527
- # row of the table. - The statement is not applied atomically to all rows of the
4528
- # table. Rather, the statement is applied atomically to partitions of the table,
4529
- # in independent transactions. Secondary index rows are updated atomically with
4530
- # the base table rows. - Partitioned DML does not guarantee exactly-once
4531
- # execution semantics against a partition. The statement will be applied at
4532
- # least once to each partition. It is strongly recommended that the DML
4533
- # statement should be idempotent to avoid unexpected results. For instance, it
4534
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
4535
- # column + 1` as it could be run multiple times against some rows. - The
4536
- # partitions are committed automatically - there is no support for Commit or
4537
- # Rollback. If the call returns an error, or if the client issuing the
4538
- # ExecuteSql call dies, it is possible that some rows had the statement executed
4539
- # on them successfully. It is also possible that statement was never executed
4540
- # against other rows. - Partitioned DML transactions may only contain the
4541
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
4542
- # If any error is encountered during the execution of the partitioned DML
4543
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4544
- # value that cannot be stored due to schema constraints), then the operation is
4545
- # stopped at that point and an error is returned. It is possible that at this
4546
- # point, some partitions have been committed (or even committed multiple times),
4547
- # and other partitions have not been run at all. Given the above, Partitioned
4548
- # DML is good fit for large, database-wide, operations that are idempotent, such
4549
- # as deleting old rows from a very large table.
4520
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
4521
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
4522
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
4523
+ # one week in the past. Partitioned DML transactions: Partitioned DML
4524
+ # transactions are used to execute DML statements with a different execution
4525
+ # strategy that provides different, and often better, scalability properties for
4526
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
4527
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
4528
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
4529
+ # statement on each partition in separate, internal transactions. These
4530
+ # transactions commit automatically when complete, and run independently from
4531
+ # one another. To reduce lock contention, this execution strategy only acquires
4532
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
4533
+ # the smaller per-partition transactions hold locks for less time. That said,
4534
+ # Partitioned DML is not a drop-in replacement for standard DML used in
4535
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
4536
+ # Specifically, the statement must be expressible as the union of many
4537
+ # statements which each access only a single row of the table. - The statement
4538
+ # is not applied atomically to all rows of the table. Rather, the statement is
4539
+ # applied atomically to partitions of the table, in independent transactions.
4540
+ # Secondary index rows are updated atomically with the base table rows. -
4541
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
4542
+ # partition. The statement will be applied at least once to each partition. It
4543
+ # is strongly recommended that the DML statement should be idempotent to avoid
4544
+ # unexpected results. For instance, it is potentially dangerous to run a
4545
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
4546
+ # multiple times against some rows. - The partitions are committed automatically
4547
+ # - there is no support for Commit or Rollback. If the call returns an error, or
4548
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
4549
+ # had the statement executed on them successfully. It is also possible that
4550
+ # statement was never executed against other rows. - Partitioned DML
4551
+ # transactions may only contain the execution of a single DML statement via
4552
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
4553
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
4554
+ # violation, division by zero, or a value that cannot be stored due to schema
4555
+ # constraints), then the operation is stopped at that point and an error is
4556
+ # returned. It is possible that at this point, some partitions have been
4557
+ # committed (or even committed multiple times), and other partitions have not
4558
+ # been run at all. Given the above, Partitioned DML is a good fit for large,
4559
+ # database-wide operations that are idempotent, such as deleting old rows from
4560
+ # a very large table.
4550
4561
  # Corresponds to the JSON property `begin`
4551
4562
  # @return [Google::Apis::SpannerV1::TransactionOptions]
4552
4563
  attr_accessor :begin
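
The `begin` selector above can start any of the transaction types described in that comment. As a rough illustration of the Partitioned DML flow it documents, the sketch below drives the generated client directly. The project/instance/database/session path, the table and SQL, and the generated names used here (`SpannerService#begin_session_transaction`, `#execute_session_sql`, and the `PartitionedDml`, `BeginTransactionRequest`, `TransactionOptions`, `TransactionSelector`, `ExecuteSqlRequest` model classes) are assumptions about this gem's generated surface — verify them against `service.rb` and `classes.rb` before relying on this.

```ruby
require "google/apis/spanner_v1"
require "googleauth"

# Assumed generated names throughout; check service.rb/classes.rb in this gem.
spanner = Google::Apis::SpannerV1::SpannerService.new
spanner.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/spanner.data"]
)

# Placeholder session path; sessions are normally created via the API first.
session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

# Partitioned DML is its own transaction mode: begin it explicitly, then run
# exactly one idempotent DML statement against it, as the comment above notes.
txn = spanner.begin_session_transaction(
  session,
  Google::Apis::SpannerV1::BeginTransactionRequest.new(
    options: Google::Apis::SpannerV1::TransactionOptions.new(
      partitioned_dml: Google::Apis::SpannerV1::PartitionedDml.new
    )
  )
)

spanner.execute_session_sql(
  session,
  Google::Apis::SpannerV1::ExecuteSqlRequest.new(
    transaction: Google::Apis::SpannerV1::TransactionSelector.new(id: txn.id),
    # Idempotent, database-wide cleanup is the intended use case.
    sql: "DELETE FROM Events WHERE Expired = TRUE"
  )
)
```

There is no Commit or Rollback step: each internal per-partition transaction commits automatically, which is why the statement should tolerate being applied more than once.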
@@ -4562,7 +4573,7 @@ module Google
4562
4573
  # count towards the one transaction limit). After the active transaction is
4563
4574
  # completed, the session can immediately be re-used for the next transaction. It
4564
4575
  # is not necessary to create a new session for each transaction. Transaction
4565
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
4576
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
4566
4577
  # This type of transaction is the only way to write data into Cloud Spanner.
4567
4578
  # These transactions rely on pessimistic locking and, if necessary, two-phase
4568
4579
  # commit. Locking read-write transactions may abort, requiring the application
@@ -4578,9 +4589,9 @@ module Google
4578
4589
  # simpler semantics and are almost always faster. In particular, read-only
4579
4590
  # transactions do not take locks, so they do not conflict with read-write
4580
4591
  # transactions. As a consequence of not taking locks, they also do not abort, so
4581
- # retry loops are not needed. Transactions may only read/write data in a single
4582
- # database. They may, however, read/write data in different tables within that
4583
- # database. Locking Read-Write Transactions: Locking transactions may be used to
4592
+ # retry loops are not needed. Transactions may only read or write data in a single
4593
+ # database. They may, however, read or write data in different tables within that
4594
+ # database. Locking read-write transactions: Locking transactions may be used to
4584
4595
  # atomically read-modify-write data anywhere in a database. This type of
4585
4596
  # transaction is externally consistent. Clients should attempt to minimize the
4586
4597
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -4599,7 +4610,7 @@ module Google
4599
4610
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
4600
4611
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
4601
4612
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
4602
- # Aborted Transactions: When a transaction aborts, the application can choose to
4613
+ # aborted transactions: When a transaction aborts, the application can choose to
4603
4614
  # retry the whole transaction again. To maximize the chances of successfully
4604
4615
  # committing the retry, the client should execute the retry in the same session
4605
4616
  # as the original attempt. The original session's lock priority increases with
@@ -4608,14 +4619,14 @@ module Google
4608
4619
  # transactions attempting to modify the same row(s)), a transaction can abort
4609
4620
  # many times in a short period before successfully committing. Thus, it is not a
4610
4621
  # good idea to cap the number of retries a transaction can attempt; instead, it
4611
- # is better to limit the total amount of time spent retrying. Idle Transactions:
4622
+ # is better to limit the total amount of time spent retrying. Idle transactions:
4612
4623
  # A transaction is considered idle if it has no outstanding reads or SQL queries
4613
4624
  # and has not started a read or SQL query within the last 10 seconds. Idle
4614
4625
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
4615
4626
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
4616
4627
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
4617
4628
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
4618
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
4629
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
4619
4630
  # only transactions provides a simpler method than locking read-write
4620
4631
  # transactions for doing several consistent reads. However, this type of
4621
4632
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -4631,7 +4642,7 @@ module Google
4631
4642
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
4632
4643
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
4633
4644
  # database to be read is geographically distributed, stale read-only
4634
- # transactions can execute more quickly than strong or read-write transaction,
4645
+ # transactions can execute more quickly than strong or read-write transactions,
4635
4646
  # because they are able to execute far from the leader replica. Each type of
4636
4647
  # timestamp bound is discussed in detail below. Strong: Strong reads are
4637
4648
  # guaranteed to see the effects of all transactions that have committed before
@@ -4641,7 +4652,7 @@ module Google
4641
4652
  # two consecutive strong read-only transactions might return inconsistent
4642
4653
  # results if there are concurrent writes. If consistency across reads is
4643
4654
  # required, the reads should be executed within a transaction or at an exact
4644
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
4655
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
4645
4656
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
4646
4657
  # timestamp are guaranteed to see a consistent prefix of the global transaction
4647
4658
  # history: they observe modifications done by all transactions with a commit
@@ -4655,7 +4666,7 @@ module Google
4655
4666
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
4656
4667
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
4657
4668
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
4658
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
4669
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
4659
4670
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
4660
4671
  # the newest timestamp within the staleness bound that allows execution of the
4661
4672
  # reads at the closest available replica without blocking. All rows yielded are
@@ -4672,51 +4683,54 @@ module Google
4672
4683
  # timestamp negotiation requires up-front knowledge of which rows will be read,
4673
4684
  # it can only be used with single-use read-only transactions. See
4674
4685
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
4675
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
4686
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
4676
4687
  # continuously garbage collects deleted and overwritten data in the background
4677
4688
  # to reclaim storage space. This process is known as "version GC". By default,
4678
4689
  # version GC reclaims versions after they are one hour old. Because of this,
4679
4690
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
4680
4691
  # the past. This restriction also applies to in-progress reads and/or SQL
4681
4692
  # queries whose timestamp become too old while executing. Reads and SQL queries
4682
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4683
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
4684
- # DML statements with a different execution strategy that provides different,
4685
- # and often better, scalability properties for large, table-wide operations than
4686
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
4687
- # workload, should prefer using ReadWrite transactions. Partitioned DML
4688
- # partitions the keyspace and runs the DML statement on each partition in
4689
- # separate, internal transactions. These transactions commit automatically when
4690
- # complete, and run independently from one another. To reduce lock contention,
4691
- # this execution strategy only acquires read locks on rows that match the WHERE
4692
- # clause of the statement. Additionally, the smaller per-partition transactions
4693
- # hold locks for less time. That said, Partitioned DML is not a drop-in
4694
- # replacement for standard DML used in ReadWrite transactions. - The DML
4695
- # statement must be fully-partitionable. Specifically, the statement must be
4696
- # expressible as the union of many statements which each access only a single
4697
- # row of the table. - The statement is not applied atomically to all rows of the
4698
- # table. Rather, the statement is applied atomically to partitions of the table,
4699
- # in independent transactions. Secondary index rows are updated atomically with
4700
- # the base table rows. - Partitioned DML does not guarantee exactly-once
4701
- # execution semantics against a partition. The statement will be applied at
4702
- # least once to each partition. It is strongly recommended that the DML
4703
- # statement should be idempotent to avoid unexpected results. For instance, it
4704
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
4705
- # column + 1` as it could be run multiple times against some rows. - The
4706
- # partitions are committed automatically - there is no support for Commit or
4707
- # Rollback. If the call returns an error, or if the client issuing the
4708
- # ExecuteSql call dies, it is possible that some rows had the statement executed
4709
- # on them successfully. It is also possible that statement was never executed
4710
- # against other rows. - Partitioned DML transactions may only contain the
4711
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
4712
- # If any error is encountered during the execution of the partitioned DML
4713
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4714
- # value that cannot be stored due to schema constraints), then the operation is
4715
- # stopped at that point and an error is returned. It is possible that at this
4716
- # point, some partitions have been committed (or even committed multiple times),
4717
- # and other partitions have not been run at all. Given the above, Partitioned
4718
- # DML is good fit for large, database-wide, operations that are idempotent, such
4719
- # as deleting old rows from a very large table.
4693
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
4694
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
4695
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
4696
+ # one week in the past. Partitioned DML transactions: Partitioned DML
4697
+ # transactions are used to execute DML statements with a different execution
4698
+ # strategy that provides different, and often better, scalability properties for
4699
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
4700
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
4701
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
4702
+ # statement on each partition in separate, internal transactions. These
4703
+ # transactions commit automatically when complete, and run independently from
4704
+ # one another. To reduce lock contention, this execution strategy only acquires
4705
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
4706
+ # the smaller per-partition transactions hold locks for less time. That said,
4707
+ # Partitioned DML is not a drop-in replacement for standard DML used in
4708
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
4709
+ # Specifically, the statement must be expressible as the union of many
4710
+ # statements which each access only a single row of the table. - The statement
4711
+ # is not applied atomically to all rows of the table. Rather, the statement is
4712
+ # applied atomically to partitions of the table, in independent transactions.
4713
+ # Secondary index rows are updated atomically with the base table rows. -
4714
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
4715
+ # partition. The statement will be applied at least once to each partition. It
4716
+ # is strongly recommended that the DML statement should be idempotent to avoid
4717
+ # unexpected results. For instance, it is potentially dangerous to run a
4718
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
4719
+ # multiple times against some rows. - The partitions are committed automatically
4720
+ # - there is no support for Commit or Rollback. If the call returns an error, or
4721
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
4722
+ # had the statement executed on them successfully. It is also possible that
4723
+ # the statement was never executed against other rows. - Partitioned DML
4724
+ # transactions may only contain the execution of a single DML statement via
4725
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
4726
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
4727
+ # violation, division by zero, or a value that cannot be stored due to schema
4728
+ # constraints), then the operation is stopped at that point and an error is
4729
+ # returned. It is possible that at this point, some partitions have been
4730
+ # committed (or even committed multiple times), and other partitions have not
4731
+ # been run at all. Given the above, Partitioned DML is a good fit for large,
4732
+ # database-wide operations that are idempotent, such as deleting old rows from
4733
+ # a very large table.
4720
4734
  # Corresponds to the JSON property `singleUse`
4721
4735
  # @return [Google::Apis::SpannerV1::TransactionOptions]
4722
4736
  attr_accessor :single_use
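
The comment above also covers locking read-write transactions and the advice to bound the total time spent retrying `ABORTED` transactions rather than the number of attempts. A minimal sketch of that pattern with this generated client might look as follows; the generated method and class names (`begin_session_transaction`, `execute_session_sql`, `commit_session`, `ReadWrite`, `CommitRequest`), the table and column names, and the error handling are all assumptions here — in particular the `ABORTED` check on the message text is deliberately crude, and a real application should inspect the returned status.

```ruby
require "google/apis/spanner_v1"
require "googleauth"

spanner = Google::Apis::SpannerV1::SpannerService.new
spanner.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/spanner.data"]
)

session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

# Cap total retry time, per the guidance in the comment above.
deadline = Time.now + 60

begin
  txn = spanner.begin_session_transaction(
    session,
    Google::Apis::SpannerV1::BeginTransactionRequest.new(
      options: Google::Apis::SpannerV1::TransactionOptions.new(
        read_write: Google::Apis::SpannerV1::ReadWrite.new
      )
    )
  )

  spanner.execute_session_sql(
    session,
    Google::Apis::SpannerV1::ExecuteSqlRequest.new(
      transaction: Google::Apis::SpannerV1::TransactionSelector.new(id: txn.id),
      sql: "UPDATE Singers SET LastName = 'Lee' WHERE SingerId = 1",
      seqno: 1  # DML through ExecuteSql carries a sequence number
    )
  )

  spanner.commit_session(
    session,
    Google::Apis::SpannerV1::CommitRequest.new(transaction_id: txn.id)
  )
rescue Google::Apis::ClientError => e
  # Retrying in the same session raises the transaction's lock priority.
  raise if Time.now > deadline
  retry if e.message.to_s.include?("ABORTED")  # crude check; inspect the status properly
  raise
end
```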
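
For the `singleUse` selector itself, timestamp bounds such as bounded staleness can only be supplied this way, as the comment notes for `max_staleness` and `min_read_timestamp`. A sketch of a single-use, read-only query with a bounded-staleness bound follows; the `ReadOnly` class name, the duration-string encoding of `max_staleness`, and the SQL are assumptions to be checked against the generated classes.

```ruby
require "google/apis/spanner_v1"
require "googleauth"

spanner = Google::Apis::SpannerV1::SpannerService.new
spanner.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/spanner.data"]
)

session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

# Single-use, read-only snapshot with bounded staleness: Cloud Spanner picks
# the newest timestamp within the bound that a nearby replica can serve
# without blocking.
result = spanner.execute_session_sql(
  session,
  Google::Apis::SpannerV1::ExecuteSqlRequest.new(
    sql: "SELECT SingerId, LastName FROM Singers",
    transaction: Google::Apis::SpannerV1::TransactionSelector.new(
      single_use: Google::Apis::SpannerV1::TransactionOptions.new(
        read_only: Google::Apis::SpannerV1::ReadOnly.new(
          max_staleness: "15s",        # assumed duration-string encoding
          return_read_timestamp: true
        )
      )
    )
  )
)

result.rows&.each { |row| p row }
```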
@@ -16,13 +16,13 @@ module Google
16
16
  module Apis
17
17
  module SpannerV1
18
18
  # Version of the google-apis-spanner_v1 gem
19
- GEM_VERSION = "0.25.0"
19
+ GEM_VERSION = "0.26.0"
20
20
 
21
21
  # Version of the code generator used to generate this client
22
22
  GENERATOR_VERSION = "0.4.1"
23
23
 
24
24
  # Revision of the discovery document this client was generated from
25
- REVISION = "20220310"
25
+ REVISION = "20220326"
26
26
  end
27
27
  end
28
28
  end
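
A quick way to confirm which generated revision is loaded at runtime is to read the constants from the file above; a minimal check, assuming the gem's standard `lib/` layout for the require path:

```ruby
require "google/apis/spanner_v1/gem_version"

puts Google::Apis::SpannerV1::GEM_VERSION        # "0.26.0" after this release
puts Google::Apis::SpannerV1::GENERATOR_VERSION  # "0.4.1"
puts Google::Apis::SpannerV1::REVISION           # "20220326"
```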
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: google-apis-spanner_v1
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.25.0
4
+ version: 0.26.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Google LLC
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2022-03-21 00:00:00.000000000 Z
11
+ date: 2022-04-04 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
14
  name: google-apis-core
@@ -58,7 +58,7 @@ licenses:
58
58
  metadata:
59
59
  bug_tracker_uri: https://github.com/googleapis/google-api-ruby-client/issues
60
60
  changelog_uri: https://github.com/googleapis/google-api-ruby-client/tree/main/generated/google-apis-spanner_v1/CHANGELOG.md
61
- documentation_uri: https://googleapis.dev/ruby/google-apis-spanner_v1/v0.25.0
61
+ documentation_uri: https://googleapis.dev/ruby/google-apis-spanner_v1/v0.26.0
62
62
  source_code_uri: https://github.com/googleapis/google-api-ruby-client/tree/main/generated/google-apis-spanner_v1
63
63
  post_install_message:
64
64
  rdoc_options: []
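
To pick up the regenerated client described by this metadata, a Gemfile entry pinned to the new minor series might look like the following sketch; adjust the version constraint to your own update policy.

```ruby
# Gemfile
source "https://rubygems.org"

gem "google-apis-spanner_v1", "~> 0.26.0"
```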