google-apis-spanner_v1 0.23.0 → 0.26.0

This diff shows the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
@@ -59,6 +59,15 @@ module Google
  # @return [String]
  attr_accessor :expire_time
 
+ # Output only. The max allowed expiration time of the backup, with microseconds
+ # granularity. A backup's expiration time can be configured in multiple APIs:
+ # CreateBackup, UpdateBackup, CopyBackup. When updating or copying an existing
+ # backup, the expiration time specified must be less than `Backup.
+ # max_expire_time`.
+ # Corresponds to the JSON property `maxExpireTime`
+ # @return [String]
+ attr_accessor :max_expire_time
+
  # Output only for the CreateBackup operation. Required for the UpdateBackup
  # operation. A globally unique identifier for the backup which cannot be changed.
  # Values are of the form `projects//instances//backups/a-z*[a-z0-9]` The final
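The new `max_expire_time` field above caps how far a backup's expiration can be pushed via CreateBackup, UpdateBackup, or CopyBackup: the requested `expire_time` must be less than it. A minimal client-side sketch of that comparison using only the Ruby stdlib (the helper name is made up for illustration; it is not part of the generated client):

```ruby
require "time"

# Hypothetical helper: checks a requested expiration against the
# server-reported `Backup#max_expire_time` (both RFC 3339 strings).
def expire_time_allowed?(requested, max_expire_time)
  Time.iso8601(requested) < Time.iso8601(max_expire_time)
end

puts expire_time_allowed?("2024-01-01T00:00:00Z", "2024-06-01T00:00:00Z") # true
puts expire_time_allowed?("2024-12-31T00:00:00Z", "2024-06-01T00:00:00Z") # false
```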
@@ -70,6 +79,16 @@ module Google
  # @return [String]
  attr_accessor :name
 
+ # Output only. The names of the destination backups being created by copying
+ # this source backup. The backup names are of the form `projects//instances//
+ # backups/`. Referencing backups may exist in different instances. The existence
+ # of any referencing backup prevents the backup from being deleted. When the
+ # copy operation is done (either successfully completed or cancelled or the
+ # destination backup is deleted), the reference to the backup is removed.
+ # Corresponds to the JSON property `referencingBackups`
+ # @return [Array<String>]
+ attr_accessor :referencing_backups
+
  # Output only. The names of the restored databases that reference the backup.
  # The database names are of the form `projects//instances//databases/`.
  # Referencing databases may exist in different instances. The existence of any
@@ -108,7 +127,9 @@ module Google
  @database_dialect = args[:database_dialect] if args.key?(:database_dialect)
  @encryption_info = args[:encryption_info] if args.key?(:encryption_info)
  @expire_time = args[:expire_time] if args.key?(:expire_time)
+ @max_expire_time = args[:max_expire_time] if args.key?(:max_expire_time)
  @name = args[:name] if args.key?(:name)
+ @referencing_backups = args[:referencing_backups] if args.key?(:referencing_backups)
  @referencing_databases = args[:referencing_databases] if args.key?(:referencing_databases)
  @size_bytes = args[:size_bytes] if args.key?(:size_bytes)
  @state = args[:state] if args.key?(:state)
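The assignments above follow the generator's `update!` pattern: a field is copied only when its key was actually passed, so a partial update leaves the other fields untouched. A self-contained stand-in demonstrates the behavior (the `BackupStub` class is hypothetical, not part of the gem):

```ruby
# Minimal stand-in for the generated value classes: update! assigns only
# the keys actually passed, so absent fields keep their previous values.
class BackupStub
  attr_accessor :name, :expire_time

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @name = args[:name] if args.key?(:name)
    @expire_time = args[:expire_time] if args.key?(:expire_time)
  end
end

b = BackupStub.new(name: "projects/p/instances/i/backups/b")
b.update!(expire_time: "2024-06-01T00:00:00Z")
puts b.name # the name survives the partial update
```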
@@ -212,7 +233,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -228,9 +249,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -249,7 +270,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -258,14 +279,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
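The doc comment's advice on aborted transactions is to bound the total time spent retrying rather than cap the attempt count. That policy can be sketched in plain Ruby (the `Aborted` error class and helper name are stand-ins, not the client library's API):

```ruby
# Stand-in for the ABORTED error a real commit would raise.
class Aborted < StandardError; end

# Retry the block on Aborted with exponential backoff, bounded by total
# elapsed time rather than a fixed number of attempts.
def with_abort_retries(deadline_seconds: 10, backoff: 0.001)
  start = Time.now
  begin
    yield
  rescue Aborted
    raise if Time.now - start > deadline_seconds # give up on total time, not count
    sleep(backoff)
    backoff *= 2
    retry
  end
end

attempts = 0
with_abort_retries do
  attempts += 1
  raise Aborted if attempts < 3 # abort twice, then "commit"
end
puts attempts # 3
```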
@@ -281,7 +302,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -291,7 +312,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -305,7 +326,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -322,51 +343,54 @@ module Google
  # timestamp negotiation requires up-front knowledge of which rows will be read,
  # it can only be used with single-use read-only transactions. See
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
  # continuously garbage collects deleted and overwritten data in the background
  # to reclaim storage space. This process is known as "version GC". By default,
  # version GC reclaims versions after they are one hour old. Because of this,
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
  # the past. This restriction also applies to in-progress reads and/or SQL
  # queries whose timestamp become too old while executing. Reads and SQL queries
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
- # DML statements with a different execution strategy that provides different,
- # and often better, scalability properties for large, table-wide operations than
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
- # workload, should prefer using ReadWrite transactions. Partitioned DML
- # partitions the keyspace and runs the DML statement on each partition in
- # separate, internal transactions. These transactions commit automatically when
- # complete, and run independently from one another. To reduce lock contention,
- # this execution strategy only acquires read locks on rows that match the WHERE
- # clause of the statement. Additionally, the smaller per-partition transactions
- # hold locks for less time. That said, Partitioned DML is not a drop-in
- # replacement for standard DML used in ReadWrite transactions. - The DML
- # statement must be fully-partitionable. Specifically, the statement must be
- # expressible as the union of many statements which each access only a single
- # row of the table. - The statement is not applied atomically to all rows of the
- # table. Rather, the statement is applied atomically to partitions of the table,
- # in independent transactions. Secondary index rows are updated atomically with
- # the base table rows. - Partitioned DML does not guarantee exactly-once
- # execution semantics against a partition. The statement will be applied at
- # least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  # Corresponds to the JSON property `options`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :options
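The doc comment's warning about `UPDATE table SET column = column + 1` follows from Partitioned DML's at-least-once semantics: a statement may run more than once against some rows, so only idempotent statements give predictable results. A tiny standalone illustration (plain Ruby hashes standing in for rows, not real DML):

```ruby
# A relative update is not idempotent: a second delivery shifts the value again.
row = { "column" => 10 }
2.times { row["column"] += 1 } # simulates at-least-once delivery
puts row["column"] # 12, not the intended 11

# Setting an absolute value is idempotent: repeats do not change the outcome.
row2 = { "column" => 10 }
2.times { row2["column"] = 11 }
puts row2["column"] # 11 regardless of how many times it ran
```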
@@ -524,7 +548,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -540,9 +564,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -561,7 +585,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -570,14 +594,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -593,7 +617,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -603,7 +627,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -617,7 +641,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -634,51 +658,54 @@ module Google
  # timestamp negotiation requires up-front knowledge of which rows will be read,
  # it can only be used with single-use read-only transactions. See
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
  # continuously garbage collects deleted and overwritten data in the background
  # to reclaim storage space. This process is known as "version GC". By default,
  # version GC reclaims versions after they are one hour old. Because of this,
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
  # the past. This restriction also applies to in-progress reads and/or SQL
  # queries whose timestamp become too old while executing. Reads and SQL queries
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
- # DML statements with a different execution strategy that provides different,
- # and often better, scalability properties for large, table-wide operations than
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
- # workload, should prefer using ReadWrite transactions. Partitioned DML
- # partitions the keyspace and runs the DML statement on each partition in
- # separate, internal transactions. These transactions commit automatically when
- # complete, and run independently from one another. To reduce lock contention,
- # this execution strategy only acquires read locks on rows that match the WHERE
- # clause of the statement. Additionally, the smaller per-partition transactions
- # hold locks for less time. That said, Partitioned DML is not a drop-in
- # replacement for standard DML used in ReadWrite transactions. - The DML
- # statement must be fully-partitionable. Specifically, the statement must be
- # expressible as the union of many statements which each access only a single
- # row of the table. - The statement is not applied atomically to all rows of the
- # table. Rather, the statement is applied atomically to partitions of the table,
- # in independent transactions. Secondary index rows are updated atomically with
- # the base table rows. - Partitioned DML does not guarantee exactly-once
- # execution semantics against a partition. The statement will be applied at
- # least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  # Corresponds to the JSON property `singleUseTransaction`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use_transaction
@@ -793,6 +820,125 @@ module Google
  end
  end
 
+ # Encryption configuration for the copied backup.
+ class CopyBackupEncryptionConfig
+ include Google::Apis::Core::Hashable
+
+ # Required. The encryption type of the backup.
+ # Corresponds to the JSON property `encryptionType`
+ # @return [String]
+ attr_accessor :encryption_type
+
+ # Optional. The Cloud KMS key that will be used to protect the backup. This
+ # field should be set only when encryption_type is `CUSTOMER_MANAGED_ENCRYPTION`.
+ # Values are of the form `projects//locations//keyRings//cryptoKeys/`.
+ # Corresponds to the JSON property `kmsKeyName`
+ # @return [String]
+ attr_accessor :kms_key_name
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @encryption_type = args[:encryption_type] if args.key?(:encryption_type)
+ @kms_key_name = args[:kms_key_name] if args.key?(:kms_key_name)
+ end
+ end
+
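Per the `kms_key_name` doc comment, the key should be set only when `encryption_type` is `CUSTOMER_MANAGED_ENCRYPTION`. A hypothetical client-side validation sketch of that constraint (the helper is illustrative only, not part of the gem):

```ruby
# Checks the documented pairing: a KMS key is required with
# CUSTOMER_MANAGED_ENCRYPTION and must be absent otherwise.
def valid_copy_encryption?(encryption_type:, kms_key_name: nil)
  if encryption_type == "CUSTOMER_MANAGED_ENCRYPTION"
    !kms_key_name.nil?
  else
    kms_key_name.nil?
  end
end

puts valid_copy_encryption?(
  encryption_type: "CUSTOMER_MANAGED_ENCRYPTION",
  kms_key_name: "projects/p/locations/l/keyRings/r/cryptoKeys/k"
) # true
```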
+ # Metadata type for the google.longrunning.Operation returned by CopyBackup.
+ class CopyBackupMetadata
+ include Google::Apis::Core::Hashable
+
+ # The time at which cancellation of CopyBackup operation was received.
+ # Operations.CancelOperation starts asynchronous cancellation on a long-running
+ # operation. The server makes a best effort to cancel the operation, but success
+ # is not guaranteed. Clients can use Operations.GetOperation or other methods to
+ # check whether the cancellation succeeded or whether the operation completed
+ # despite cancellation. On successful cancellation, the operation is not deleted;
+ # instead, it becomes an operation with an Operation.error value with a google.
+ # rpc.Status.code of 1, corresponding to `Code.CANCELLED`.
+ # Corresponds to the JSON property `cancelTime`
+ # @return [String]
+ attr_accessor :cancel_time
+
+ # The name of the backup being created through the copy operation. Values are of
+ # the form `projects//instances//backups/`.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Encapsulates progress related information for a Cloud Spanner long running
+ # operation.
+ # Corresponds to the JSON property `progress`
+ # @return [Google::Apis::SpannerV1::OperationProgress]
+ attr_accessor :progress
+
+ # The name of the source backup that is being copied. Values are of the form `
+ # projects//instances//backups/`.
+ # Corresponds to the JSON property `sourceBackup`
+ # @return [String]
+ attr_accessor :source_backup
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @cancel_time = args[:cancel_time] if args.key?(:cancel_time)
+ @name = args[:name] if args.key?(:name)
+ @progress = args[:progress] if args.key?(:progress)
+ @source_backup = args[:source_backup] if args.key?(:source_backup)
+ end
+ end
+
+ # The request for CopyBackup.
+ class CopyBackupRequest
+ include Google::Apis::Core::Hashable
+
+ # Required. The id of the backup copy. The `backup_id` appended to `parent`
+ # forms the full backup_uri of the form `projects//instances//backups/`.
+ # Corresponds to the JSON property `backupId`
+ # @return [String]
+ attr_accessor :backup_id
+
+ # Encryption configuration for the copied backup.
+ # Corresponds to the JSON property `encryptionConfig`
+ # @return [Google::Apis::SpannerV1::CopyBackupEncryptionConfig]
+ attr_accessor :encryption_config
+
+ # Required. The expiration time of the backup in microsecond granularity. The
+ # expiration time must be at least 6 hours and at most 366 days from the `
+ # create_time` of the source backup. Once the `expire_time` has passed, the
+ # backup is eligible to be automatically deleted by Cloud Spanner to free the
+ # resources used by the backup.
+ # Corresponds to the JSON property `expireTime`
+ # @return [String]
+ attr_accessor :expire_time
+
+ # Required. The source backup to be copied. The source backup needs to be in
+ # READY state for it to be copied. Once CopyBackup is in progress, the source
+ # backup cannot be deleted or cleaned up on expiration until CopyBackup is
+ # finished. Values are of the form: `projects//instances//backups/`.
+ # Corresponds to the JSON property `sourceBackup`
+ # @return [String]
+ attr_accessor :source_backup
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @backup_id = args[:backup_id] if args.key?(:backup_id)
+ @encryption_config = args[:encryption_config] if args.key?(:encryption_config)
+ @expire_time = args[:expire_time] if args.key?(:expire_time)
+ @source_backup = args[:source_backup] if args.key?(:source_backup)
+ end
+ end
+
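The `expire_time` bounds documented above — at least 6 hours and at most 366 days from the source backup's `create_time` — can be sanity-checked client-side before issuing the request. A sketch, assuming both fields carry RFC 3339 timestamp strings (the helper name and sample values are hypothetical):

```ruby
require "time"

# Illustrative check of the documented CopyBackup expire_time window:
# at least 6 hours and at most 366 days after the source create_time.
MIN_TTL = 6 * 60 * 60          # 6 hours, in seconds
MAX_TTL = 366 * 24 * 60 * 60   # 366 days, in seconds

def valid_copy_expire_time?(create_time, expire_time)
  # Time subtraction yields the difference in seconds.
  delta = Time.iso8601(expire_time) - Time.iso8601(create_time)
  delta >= MIN_TTL && delta <= MAX_TTL
end
```

Out-of-range values are rejected by the service anyway; the check just fails faster.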
  # Metadata type for the operation returned by CreateBackup.
  class CreateBackupMetadata
  include Google::Apis::Core::Hashable
@@ -1186,8 +1332,7 @@ module Google
  # A generic empty message that you can re-use to avoid defining duplicated empty
  # messages in your APIs. A typical example is to use it as the request or the
  # response type of an API method. For instance: service Foo ` rpc Bar(google.
- # protobuf.Empty) returns (google.protobuf.Empty); ` The JSON representation for
- # `Empty` is empty JSON object ````.
+ # protobuf.Empty) returns (google.protobuf.Empty); `
  class Empty
  include Google::Apis::Core::Hashable
 
@@ -1657,6 +1802,11 @@ module Google
  # @return [String]
  attr_accessor :config
 
+ # Output only. The time at which the instance was created.
+ # Corresponds to the JSON property `createTime`
+ # @return [String]
+ attr_accessor :create_time
+
  # Required. The descriptive name for this instance as it appears in UIs. Must be
  # unique per project and between 4 and 30 characters in length.
  # Corresponds to the JSON property `displayName`
@@ -1721,6 +1871,11 @@ module Google
  # @return [String]
  attr_accessor :state
 
+ # Output only. The time at which the instance was most recently updated.
+ # Corresponds to the JSON property `updateTime`
+ # @return [String]
+ attr_accessor :update_time
+
  def initialize(**args)
  update!(**args)
  end
@@ -1728,6 +1883,7 @@ module Google
  # Update properties of this object
  def update!(**args)
  @config = args[:config] if args.key?(:config)
+ @create_time = args[:create_time] if args.key?(:create_time)
  @display_name = args[:display_name] if args.key?(:display_name)
  @endpoint_uris = args[:endpoint_uris] if args.key?(:endpoint_uris)
  @labels = args[:labels] if args.key?(:labels)
@@ -1735,6 +1891,7 @@ module Google
  @node_count = args[:node_count] if args.key?(:node_count)
  @processing_units = args[:processing_units] if args.key?(:processing_units)
  @state = args[:state] if args.key?(:state)
+ @update_time = args[:update_time] if args.key?(:update_time)
  end
  end
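The new `create_time` and `update_time` fields above are output-only `String` values. Google API JSON timestamps are encoded as RFC 3339, so they parse directly with Ruby's stdlib; the instance values below are made up for illustration:

```ruby
require "time"

# Hypothetical values of the kind the new Instance fields return.
create_time = "2021-11-15T08:30:00Z"
update_time = "2022-02-01T12:00:00.500Z"

# Age of the most recent update relative to creation, in days.
age_days = (Time.iso8601(update_time) - Time.iso8601(create_time)) / 86_400.0
```

`Time.iso8601` handles the fractional seconds and the `Z` offset that these timestamps may carry.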
 
@@ -4044,7 +4201,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -4060,9 +4217,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -4081,7 +4238,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -4090,14 +4247,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -4113,7 +4270,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -4123,7 +4280,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -4137,7 +4294,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -4154,51 +4311,54 @@ module Google
  # timestamp negotiation requires up-front knowledge of which rows will be read,
  # it can only be used with single-use read-only transactions. See
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
  # continuously garbage collects deleted and overwritten data in the background
  # to reclaim storage space. This process is known as "version GC". By default,
  # version GC reclaims versions after they are one hour old. Because of this,
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
  # the past. This restriction also applies to in-progress reads and/or SQL
  # queries whose timestamp become too old while executing. Reads and SQL queries
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
- # DML statements with a different execution strategy that provides different,
- # and often better, scalability properties for large, table-wide operations than
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
- # workload, should prefer using ReadWrite transactions. Partitioned DML
- # partitions the keyspace and runs the DML statement on each partition in
- # separate, internal transactions. These transactions commit automatically when
- # complete, and run independently from one another. To reduce lock contention,
- # this execution strategy only acquires read locks on rows that match the WHERE
- # clause of the statement. Additionally, the smaller per-partition transactions
- # hold locks for less time. That said, Partitioned DML is not a drop-in
- # replacement for standard DML used in ReadWrite transactions. - The DML
- # statement must be fully-partitionable. Specifically, the statement must be
- # expressible as the union of many statements which each access only a single
- # row of the table. - The statement is not applied atomically to all rows of the
- # table. Rather, the statement is applied atomically to partitions of the table,
- # in independent transactions. Secondary index rows are updated atomically with
- # the base table rows. - Partitioned DML does not guarantee exactly-once
- # execution semantics against a partition. The statement will be applied at
- # least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  class TransactionOptions
  include Google::Apis::Core::Hashable
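The idempotency requirement in the Partitioned DML notes above — each partition's statement may execute more than once — can be seen with a small simulation. This is plain Ruby mimicking at-least-once execution, not Spanner itself:

```ruby
# Simulates a partition's statement running at least once (here: twice),
# the failure mode Partitioned DML permits.
def run_at_least_once(rows, times: 2)
  times.times { rows.each { |row| yield row } }
  rows
end

rows = [{ col: 1 }, { col: 5 }]

# Non-idempotent, like `UPDATE t SET col = col + 1`: re-running it drifts.
incremented = run_at_least_once(rows.map { |r| r.dup }) { |r| r[:col] += 1 }

# Idempotent, like `UPDATE t SET col = 0 WHERE col > 0`: safe to repeat.
zeroed = run_at_least_once(rows.map { |r| r.dup }) { |r| r[:col] = 0 if r[:col] > 0 }
```

After two passes the incremented rows have moved twice, while the zeroed rows are unchanged by the second pass — which is why the docs recommend idempotent statements.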
 
@@ -4240,7 +4400,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -4256,9 +4416,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -4277,7 +4437,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
@@ -4286,14 +4446,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -4309,7 +4469,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -4319,7 +4479,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -4333,7 +4493,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
@@ -4350,51 +4510,54 @@ module Google
4350
4510
  # timestamp negotiation requires up-front knowledge of which rows will be read,
4351
4511
  # it can only be used with single-use read-only transactions. See
4352
4512
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
4353
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
4513
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
4354
4514
  # continuously garbage collects deleted and overwritten data in the background
4355
4515
  # to reclaim storage space. This process is known as "version GC". By default,
4356
4516
  # version GC reclaims versions after they are one hour old. Because of this,
4357
4517
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
4358
4518
  # the past. This restriction also applies to in-progress reads and/or SQL
4359
4519
  # queries whose timestamp become too old while executing. Reads and SQL queries
4360
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4361
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
4362
- # DML statements with a different execution strategy that provides different,
4363
- # and often better, scalability properties for large, table-wide operations than
4364
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
4365
- # workload, should prefer using ReadWrite transactions. Partitioned DML
4366
- # partitions the keyspace and runs the DML statement on each partition in
4367
- # separate, internal transactions. These transactions commit automatically when
4368
- # complete, and run independently from one another. To reduce lock contention,
4369
- # this execution strategy only acquires read locks on rows that match the WHERE
4370
- # clause of the statement. Additionally, the smaller per-partition transactions
4371
- # hold locks for less time. That said, Partitioned DML is not a drop-in
4372
- # replacement for standard DML used in ReadWrite transactions. - The DML
4373
- # statement must be fully-partitionable. Specifically, the statement must be
4374
- # expressible as the union of many statements which each access only a single
4375
- # row of the table. - The statement is not applied atomically to all rows of the
4376
- # table. Rather, the statement is applied atomically to partitions of the table,
4377
- # in independent transactions. Secondary index rows are updated atomically with
4378
- # the base table rows. - Partitioned DML does not guarantee exactly-once
4379
- # execution semantics against a partition. The statement will be applied at
4380
- # least once to each partition. It is strongly recommended that the DML
4381
- # statement should be idempotent to avoid unexpected results. For instance, it
4382
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
4383
- # column + 1` as it could be run multiple times against some rows. - The
4384
- # partitions are committed automatically - there is no support for Commit or
4385
- # Rollback. If the call returns an error, or if the client issuing the
4386
- # ExecuteSql call dies, it is possible that some rows had the statement executed
4387
- # on them successfully. It is also possible that statement was never executed
4388
- # against other rows. - Partitioned DML transactions may only contain the
4389
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
4390
- # If any error is encountered during the execution of the partitioned DML
4391
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4392
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  # Corresponds to the JSON property `begin`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :begin
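The at-least-once, per-partition semantics described in the doc comment above are why it warns against non-idempotent statements such as `UPDATE table SET column = column + 1`. Here is a minimal Ruby sketch of that failure mode using plain in-memory rows (no Spanner client involved; `run_partitioned` and the retry model are illustrative, not the service's actual execution engine):

```ruby
# Simulate Partitioned DML's at-least-once execution: each partition's
# statement may run more than once, e.g. after an internal retry.
def run_partitioned(rows, retried_partitions:, &statement)
  rows.each_slice(2).with_index do |partition, i|
    partition.each(&statement)
    # A retried partition re-applies the same statement to its rows.
    partition.each(&statement) if retried_partitions.include?(i)
  end
end

rows = [{ n: 0 }, { n: 0 }, { n: 0 }, { n: 0 }]

# Non-idempotent: SET n = n + 1 drifts when partition 1 is retried.
run_partitioned(rows, retried_partitions: [1]) { |r| r[:n] += 1 }
p rows.map { |r| r[:n] }  # => [1, 1, 2, 2]

# Idempotent: SET n = 0 gives the same result no matter how often it runs.
run_partitioned(rows, retried_partitions: [1]) { |r| r[:n] = 0 }
p rows.map { |r| r[:n] }  # => [0, 0, 0, 0]
```

The idempotent statement converges to the same state regardless of how many times a partition executes, which is exactly the property the comment recommends.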
@@ -4410,7 +4573,7 @@ module Google
  # count towards the one transaction limit). After the active transaction is
  # completed, the session can immediately be re-used for the next transaction. It
  # is not necessary to create a new session for each transaction. Transaction
- # Modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
  # This type of transaction is the only way to write data into Cloud Spanner.
  # These transactions rely on pessimistic locking and, if necessary, two-phase
  # commit. Locking read-write transactions may abort, requiring the application
@@ -4426,9 +4589,9 @@ module Google
  # simpler semantics and are almost always faster. In particular, read-only
  # transactions do not take locks, so they do not conflict with read-write
  # transactions. As a consequence of not taking locks, they also do not abort, so
- # retry loops are not needed. Transactions may only read/write data in a single
- # database. They may, however, read/write data in different tables within that
- # database. Locking Read-Write Transactions: Locking transactions may be used to
+ # retry loops are not needed. Transactions may only read-write data in a single
+ # database. They may, however, read-write data in different tables within that
+ # database. Locking read-write transactions: Locking transactions may be used to
  # atomically read-modify-write data anywhere in a database. This type of
  # transaction is externally consistent. Clients should attempt to minimize the
  # amount of time a transaction is active. Faster transactions commit with higher
@@ -4447,7 +4610,7 @@ module Google
  # Cloud Spanner makes no guarantees about how long the transaction's locks were
  # held for. It is an error to use Cloud Spanner locks for any sort of mutual
  # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # Aborted Transactions: When a transaction aborts, the application can choose to
+ # aborted transactions: When a transaction aborts, the application can choose to
  # retry the whole transaction again. To maximize the chances of successfully
  # committing the retry, the client should execute the retry in the same session
  # as the original attempt. The original session's lock priority increases with
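The retry guidance in the surrounding comment (retry the whole transaction, cap total time spent rather than attempt count) can be sketched as a small Ruby loop. `Aborted` and the transaction body here are stand-ins for illustration, not the actual client API:

```ruby
# Sketch of the recommended retry policy: retry ABORTED transactions with
# backoff, bounding total elapsed time instead of the number of attempts.
class Aborted < StandardError; end

def run_with_retries(deadline_seconds: 10)
  start = Time.now
  backoff = 0.001
  begin
    yield
  rescue Aborted
    # Give up only once the overall time budget is exhausted.
    raise if Time.now - start > deadline_seconds
    sleep backoff
    backoff *= 2  # exponential backoff between attempts
    retry
  end
end

attempts = 0
result = run_with_retries do
  attempts += 1
  raise Aborted if attempts < 3  # first two attempts abort
  :committed
end
p [result, attempts]  # => [:committed, 3]
```

Re-running the retry in the same session (as the comment describes) is what raises lock priority; this sketch only models the time-bounded loop itself.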
@@ -4456,14 +4619,14 @@ module Google
  # transactions attempting to modify the same row(s)), a transaction can abort
  # many times in a short period before successfully committing. Thus, it is not a
  # good idea to cap the number of retries a transaction can attempt; instead, it
- # is better to limit the total amount of time spent retrying. Idle Transactions:
+ # is better to limit the total amount of time spent retrying. Idle transactions:
  # A transaction is considered idle if it has no outstanding reads or SQL queries
  # and has not started a read or SQL query within the last 10 seconds. Idle
  # transactions can be aborted by Cloud Spanner so that they don't hold on to
  # locks indefinitely. If an idle transaction is aborted, the commit will fail
  # with error `ABORTED`. If this behavior is undesirable, periodically executing
  # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
- # transaction from becoming idle. Snapshot Read-Only Transactions: Snapshot read-
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
  # only transactions provides a simpler method than locking read-write
  # transactions for doing several consistent reads. However, this type of
  # transaction does not support writes. Snapshot transactions do not take locks.
@@ -4479,7 +4642,7 @@ module Google
  # how to choose a read timestamp. The types of timestamp bound are: - Strong (
  # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
  # database to be read is geographically distributed, stale read-only
- # transactions can execute more quickly than strong or read-write transaction,
+ # transactions can execute more quickly than strong or read-write transactions,
  # because they are able to execute far from the leader replica. Each type of
  # timestamp bound is discussed in detail below. Strong: Strong reads are
  # guaranteed to see the effects of all transactions that have committed before
@@ -4489,7 +4652,7 @@ module Google
  # two consecutive strong read-only transactions might return inconsistent
  # results if there are concurrent writes. If consistency across reads is
  # required, the reads should be executed within a transaction or at an exact
- # read timestamp. See TransactionOptions.ReadOnly.strong. Exact Staleness: These
+ # read timestamp. See TransactionOptions.ReadOnly.strong. Exact staleness: These
  # timestamp bounds execute reads at a user-specified timestamp. Reads at a
  # timestamp are guaranteed to see a consistent prefix of the global transaction
  # history: they observe modifications done by all transactions with a commit
@@ -4503,7 +4666,7 @@ module Google
  # equivalent boundedly stale concurrency modes. On the other hand, boundedly
  # stale reads usually return fresher results. See TransactionOptions.ReadOnly.
  # read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded
- # Staleness: Bounded staleness modes allow Cloud Spanner to pick the read
+ # staleness: Bounded staleness modes allow Cloud Spanner to pick the read
  # timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses
  # the newest timestamp within the staleness bound that allows execution of the
  # reads at the closest available replica without blocking. All rows yielded are
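The bounded-staleness behavior described above (pick the newest timestamp within the bound that a nearby replica can serve without blocking) can be pictured with a toy Ruby model. The function name and the "replica safe time" abstraction are illustrative assumptions, not the server's actual algorithm:

```ruby
# Toy model of bounded-staleness read-timestamp selection: choose the
# newest timestamp the replica can serve (its "safe time", capped at now)
# while still satisfying read_ts >= now - max_staleness.
def choose_read_timestamp(now:, replica_safe_time:, max_staleness:)
  oldest_allowed = now - max_staleness
  candidate = [replica_safe_time, now].min
  # If the replica lags past the bound, the read must wait until the
  # replica catches up to the oldest timestamp the bound allows.
  [candidate, oldest_allowed].max
end

now = 1_000.0
# Replica only 3s behind: read at its safe time, no blocking needed.
p choose_read_timestamp(now: now, replica_safe_time: 997.0, max_staleness: 10.0)  # => 997.0
# Replica 15s behind: clamped to now - 10, which requires waiting.
p choose_read_timestamp(now: now, replica_safe_time: 985.0, max_staleness: 10.0)  # => 990.0
```

This is why looser bounds tend to avoid blocking (more replicas qualify) while tighter bounds return fresher data at the cost of possible waits.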
@@ -4520,51 +4683,54 @@ module Google
  # timestamp negotiation requires up-front knowledge of which rows will be read,
  # it can only be used with single-use read-only transactions. See
  # TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.
- # min_read_timestamp. Old Read Timestamps and Garbage Collection: Cloud Spanner
+ # min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner
  # continuously garbage collects deleted and overwritten data in the background
  # to reclaim storage space. This process is known as "version GC". By default,
  # version GC reclaims versions after they are one hour old. Because of this,
  # Cloud Spanner cannot perform reads at read timestamps more than one hour in
  # the past. This restriction also applies to in-progress reads and/or SQL
  # queries whose timestamp become too old while executing. Reads and SQL queries
- # with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
- # Partitioned DML Transactions: Partitioned DML transactions are used to execute
- # DML statements with a different execution strategy that provides different,
- # and often better, scalability properties for large, table-wide operations than
- # DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP
- # workload, should prefer using ReadWrite transactions. Partitioned DML
- # partitions the keyspace and runs the DML statement on each partition in
- # separate, internal transactions. These transactions commit automatically when
- # complete, and run independently from one another. To reduce lock contention,
- # this execution strategy only acquires read locks on rows that match the WHERE
- # clause of the statement. Additionally, the smaller per-partition transactions
- # hold locks for less time. That said, Partitioned DML is not a drop-in
- # replacement for standard DML used in ReadWrite transactions. - The DML
- # statement must be fully-partitionable. Specifically, the statement must be
- # expressible as the union of many statements which each access only a single
- # row of the table. - The statement is not applied atomically to all rows of the
- # table. Rather, the statement is applied atomically to partitions of the table,
- # in independent transactions. Secondary index rows are updated atomically with
- # the base table rows. - Partitioned DML does not guarantee exactly-once
- # execution semantics against a partition. The statement will be applied at
- # least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You
+ # can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a
+ # period as long as one week, which allows Cloud Spanner to perform reads up to
+ # one week in the past. Partitioned DML transactions: Partitioned DML
+ # transactions are used to execute DML statements with a different execution
+ # strategy that provides different, and often better, scalability properties for
+ # large, table-wide operations than DML in a ReadWrite transaction. Smaller
+ # scoped statements, such as an OLTP workload, should prefer using ReadWrite
+ # transactions. Partitioned DML partitions the keyspace and runs the DML
+ # statement on each partition in separate, internal transactions. These
+ # transactions commit automatically when complete, and run independently from
+ # one another. To reduce lock contention, this execution strategy only acquires
+ # read locks on rows that match the WHERE clause of the statement. Additionally,
+ # the smaller per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully-partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The statement
+ # is not applied atomically to all rows of the table. Rather, the statement is
+ # applied atomically to partitions of the table, in independent transactions.
+ # Secondary index rows are updated atomically with the base table rows. -
+ # Partitioned DML does not guarantee exactly-once execution semantics against a
+ # partition. The statement will be applied at least once to each partition. It
+ # is strongly recommended that the DML statement should be idempotent to avoid
+ # unexpected results. For instance, it is potentially dangerous to run a
+ # statement such as `UPDATE table SET column = column + 1` as it could be run
+ # multiple times against some rows. - The partitions are committed automatically
+ # - there is no support for Commit or Rollback. If the call returns an error, or
+ # if the client issuing the ExecuteSql call dies, it is possible that some rows
+ # had the statement executed on them successfully. It is also possible that
+ # statement was never executed against other rows. - Partitioned DML
+ # transactions may only contain the execution of a single DML statement via
+ # ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the
+ # execution of the partitioned DML operation (for instance, a UNIQUE INDEX
+ # violation, division by zero, or a value that cannot be stored due to schema
+ # constraints), then the operation is stopped at that point and an error is
+ # returned. It is possible that at this point, some partitions have been
+ # committed (or even committed multiple times), and other partitions have not
+ # been run at all. Given the above, Partitioned DML is good fit for large,
+ # database-wide, operations that are idempotent, such as deleting old rows from
+ # a very large table.
  # Corresponds to the JSON property `singleUse`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use
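The `VERSION_RETENTION_PERIOD` behavior added in this release (reads older than the retention period fail with `FAILED_PRECONDITION`; retention is configurable up to one week) can be mirrored with a small client-side guard. This is a sketch of the rule, not the service's implementation; `FailedPrecondition` and `check_read_timestamp` are hypothetical names:

```ruby
# Illustrative guard mirroring the server rule: a read timestamp older than
# the database's version retention period fails with FAILED_PRECONDITION.
class FailedPrecondition < StandardError; end

ONE_HOUR = 3600           # default retention: version GC after one hour
ONE_WEEK = 7 * 24 * 3600  # maximum configurable retention

def check_read_timestamp(read_ts, now:, retention_seconds: ONE_HOUR)
  # VERSION_RETENTION_PERIOD can extend retention, capped at one week.
  retention_seconds = [retention_seconds, ONE_WEEK].min
  if now - read_ts > retention_seconds
    raise FailedPrecondition, "read timestamp is older than the retention period"
  end
  read_ts
end

now = Time.now.to_i
check_read_timestamp(now - 1800, now: now)                 # ok: 30 minutes old
check_read_timestamp(now - 2 * 24 * 3600, now: now,
                     retention_seconds: ONE_WEEK)          # ok: 2 days, 1-week retention
begin
  check_read_timestamp(now - 2 * ONE_HOUR, now: now)       # 2h old, default 1h retention
rescue FailedPrecondition => e
  puts "rejected: #{e.message}"
end
```

Extending retention trades extra storage (more old versions kept by version GC) for a wider window of valid stale-read timestamps.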