google-apis-spanner_v1 0.29.0 → 0.32.0

@@ -228,49 +228,193 @@ module Google
  class BeginTransactionRequest
  include Google::Apis::Core::Hashable
 
- # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
- # true, a special value of 2^63 - 2 will be returned in the Transaction message
- # that describes the transaction, instead of a valid read timestamp. This
- # special value should be discarded and not used for any subsequent queries.
- # Please see https://cloud.google.com/spanner/docs/change-streams for more
- # details on how to query the change stream TVFs. Partitioned DML transactions:
- # Partitioned DML transactions are used to execute DML statements with a
- # different execution strategy that provides different, and often better,
- # scalability properties for large, table-wide operations than DML in a
- # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
- # should prefer using ReadWrite transactions. Partitioned DML partitions the
- # keyspace and runs the DML statement on each partition in separate, internal
- # transactions. These transactions commit automatically when complete, and run
- # independently from one another. To reduce lock contention, this execution
- # strategy only acquires read locks on rows that match the WHERE clause of the
- # statement. Additionally, the smaller per-partition transactions hold locks for
- # less time. That said, Partitioned DML is not a drop-in replacement for
- # standard DML used in ReadWrite transactions. - The DML statement must be fully-
- # partitionable. Specifically, the statement must be expressible as the union of
- # many statements which each access only a single row of the table. - The
- # statement is not applied atomically to all rows of the table. Rather, the
- # statement is applied atomically to partitions of the table, in independent
- # transactions. Secondary index rows are updated atomically with the base table
- # rows. - Partitioned DML does not guarantee exactly-once execution semantics
- # against a partition. The statement will be applied at least once to each
- # partition. It is strongly recommended that the DML statement should be
- # idempotent to avoid unexpected results. For instance, it is potentially
- # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
- # it could be run multiple times against some rows. - The partitions are
- # committed automatically - there is no support for Commit or Rollback. If the
- # call returns an error, or if the client issuing the ExecuteSql call dies, it
- # is possible that some rows had the statement executed on them successfully. It
- # is also possible that statement was never executed against other rows. -
- # Partitioned DML transactions may only contain the execution of a single DML
- # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
- # during the execution of the partitioned DML operation (for instance, a UNIQUE
- # INDEX violation, division by zero, or a value that cannot be stored due to
- # schema constraints), then the operation is stopped at that point and an error
- # is returned. It is possible that at this point, some partitions have been
- # committed (or even committed multiple times), and other partitions have not
- # been run at all. Given the above, Partitioned DML is good fit for large,
- # database-wide, operations that are idempotent, such as deleting old rows from
- # a very large table.
+ # Transactions: Each session can have at most one active transaction at a time (
+ # note that standalone reads and queries use a transaction internally and do
+ # count towards the one transaction limit). After the active transaction is
+ # completed, the session can immediately be re-used for the next transaction. It
+ # is not necessary to create a new session for each transaction. Transaction
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # This type of transaction is the only way to write data into Cloud Spanner.
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
+ # commit. Locking read-write transactions may abort, requiring the application
+ # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
+ # guaranteed consistency across several reads, but do not allow writes. Snapshot
+ # read-only transactions can be configured to read at timestamps in the past, or
+ # configured to perform a strong read (where Spanner will select a timestamp
+ # such that the read is guaranteed to see the effects of all transactions that
+ # have committed before the start of the read). Snapshot read-only transactions
+ # do not need to be committed. Queries on change streams must be performed with
+ # the snapshot read-only transaction mode, specifying a strong read. Please see
+ # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
+ # type of transaction is used to execute a single Partitioned DML statement.
+ # Partitioned DML partitions the key space and runs the DML statement over each
+ # partition in parallel using separate, internal transactions that commit
+ # independently. Partitioned DML transactions do not need to be committed. For
+ # transactions that only read, snapshot read-only transactions provide simpler
+ # semantics and are almost always faster. In particular, read-only transactions
+ # do not take locks, so they do not conflict with read-write transactions. As a
+ # consequence of not taking locks, they also do not abort, so retry loops are
+ # not needed. Transactions may only read-write data in a single database. They
+ # may, however, read-write data in different tables within that database.
+ # Locking read-write transactions: Locking transactions may be used to
+ # atomically read-modify-write data anywhere in a database. This type of
+ # transaction is externally consistent. Clients should attempt to minimize the
+ # amount of time a transaction is active. Faster transactions commit with higher
+ # probability and cause less contention. Cloud Spanner attempts to keep read
+ # locks active as long as the transaction continues to do reads, and the
+ # transaction has not been terminated by Commit or Rollback. Long periods of
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
+ # more reads or SQL statements followed by Commit. At any time before Commit,
+ # the client can send a Rollback request to abort the transaction. Semantics:
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
+ # still valid at commit time, and it is able to acquire write locks for all
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
+ # exclusion other than between Cloud Spanner transactions themselves. Retrying
+ # aborted transactions: When a transaction aborts, the application can choose to
+ # retry the whole transaction again. To maximize the chances of successfully
+ # committing the retry, the client should execute the retry in the same session
+ # as the original attempt. The original session's lock priority increases with
+ # each consecutive abort, meaning that each attempt has a slightly better chance
+ # of success than the previous. Under some circumstances (for example, many
+ # transactions attempting to modify the same row(s)), a transaction can abort
+ # many times in a short period before successfully committing. Thus, it is not a
+ # good idea to cap the number of retries a transaction can attempt; instead, it
+ # is better to limit the total amount of time spent retrying. Idle transactions:
+ # A transaction is considered idle if it has no outstanding reads or SQL queries
+ # and has not started a read or SQL query within the last 10 seconds. Idle
+ # transactions can be aborted by Cloud Spanner so that they don't hold on to
+ # locks indefinitely. If an idle transaction is aborted, the commit will fail
+ # with error `ABORTED`. If this behavior is undesirable, periodically executing
+ # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
+ # only transactions provides a simpler method than locking read-write
+ # transactions for doing several consistent reads. However, this type of
+ # transaction does not support writes. Snapshot transactions do not take locks.
+ # Instead, they work by choosing a Cloud Spanner timestamp, then executing all
+ # reads at that timestamp. Since they do not acquire locks, they do not block
+ # concurrent read-write transactions. Unlike locking read-write transactions,
+ # snapshot read-only transactions never abort. They can fail if the chosen read
+ # timestamp is garbage collected; however, the default garbage collection policy
+ # is generous enough that most applications do not need to worry about this in
+ # practice. Snapshot read-only transactions do not need to call Commit or
+ # Rollback (and in fact are not permitted to do so). To execute a snapshot
+ # transaction, the client specifies a timestamp bound, which tells Cloud Spanner
+ # how to choose a read timestamp. The types of timestamp bound are: - Strong (
+ # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
+ # database to be read is geographically distributed, stale read-only
+ # transactions can execute more quickly than strong or read-write transactions,
+ # because they are able to execute far from the leader replica. Each type of
+ # timestamp bound is discussed in detail below. Strong: Strong reads are
+ # guaranteed to see the effects of all transactions that have committed before
+ # the start of the read. Furthermore, all rows yielded by a single read are
+ # consistent with each other -- if any part of the read observes a transaction,
+ # all parts of the read see the transaction. Strong reads are not repeatable:
+ # two consecutive strong read-only transactions might return inconsistent
+ # results if there are concurrent writes. If consistency across reads is
+ # required, the reads should be executed within a transaction or at an exact
+ # read timestamp. Queries on change streams (see below for more details) must
+ # also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.
+ # strong. Exact staleness: These timestamp bounds execute reads at a user-
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
+ # prefix of the global transaction history: they observe modifications done by
+ # all transactions with a commit timestamp less than or equal to the read
+ # timestamp, and observe none of the modifications done by transactions with a
+ # larger commit timestamp. They will block until all conflicting transactions
+ # that may be assigned commit timestamps <= the read timestamp have finished.
+ # The timestamp can either be expressed as an absolute Cloud Spanner commit
+ # timestamp or a staleness relative to the current time. These modes do not
+ # require a "negotiation phase" to pick a timestamp. As a result, they execute
+ # slightly faster than the equivalent boundedly stale concurrency modes. On the
+ # other hand, boundedly stale reads usually return fresher results. See
+ # TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.
+ # exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud
+ # Spanner to pick the read timestamp, subject to a user-provided staleness bound.
+ # Cloud Spanner chooses the newest timestamp within the staleness bound that
+ # allows execution of the reads at the closest available replica without
+ # blocking. All rows yielded are consistent with each other -- if any part of
+ # the read observes a transaction, all parts of the read see the transaction.
+ # Boundedly stale reads are not repeatable: two stale reads, even if they use
+ # the same staleness bound, can execute at different timestamps and thus return
+ # inconsistent results. Boundedly stale reads execute in two phases: the first
+ # phase negotiates a timestamp among all replicas needed to serve the read. In
+ # the second phase, reads are executed at the negotiated timestamp. As a result
+ # of the two phase execution, bounded staleness reads are usually a little
+ # slower than comparable exact staleness reads. However, they are typically able
+ # to return fresher results, and are more likely to execute at the closest
+ # replica. Because the timestamp negotiation requires up-front knowledge of
+ # which rows will be read, it can only be used with single-use read-only
+ # transactions. See TransactionOptions.ReadOnly.max_staleness and
+ # TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and
+ # garbage collection: Cloud Spanner continuously garbage collects deleted and
+ # overwritten data in the background to reclaim storage space. This process is
+ # known as "version GC". By default, version GC reclaims versions after they are
+ # one hour old. Because of this, Cloud Spanner cannot perform reads at read
+ # timestamps more than one hour in the past. This restriction also applies to in-
+ # progress reads and/or SQL queries whose timestamp become too old while
+ # executing. Reads and SQL queries with too-old read timestamps fail with the
+ # error `FAILED_PRECONDITION`. You can configure and extend the `
+ # VERSION_RETENTION_PERIOD` of a database up to a period as long as one week,
+ # which allows Cloud Spanner to perform reads up to one week in the past.
+ # Querying change Streams: A Change Stream is a schema object that can be
+ # configured to watch data changes on the entire database, a set of tables, or a
+ # set of columns in a database. When a change stream is created, Spanner
+ # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
+ # be used to query the change records in the associated change stream using the
+ # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
+ # from the name of the change stream: READ_. All queries on change stream TVFs
+ # must be executed using the ExecuteStreamingSql API with a single-use read-only
+ # transaction with a strong read-only timestamp_bound. The change stream TVF
+ # allows users to specify the start_timestamp and end_timestamp for the time
+ # range of interest. All change records within the retention period is
+ # accessible using the strong read-only timestamp_bound. All other
+ # TransactionOptions are invalid for change stream queries. In addition, if
+ # TransactionOptions.read_only.return_read_timestamp is set to true, a special
+ # value of 2^63 - 2 will be returned in the Transaction message that describes
+ # the transaction, instead of a valid read timestamp. This special value should
+ # be discarded and not used for any subsequent queries. Please see https://cloud.
+ # google.com/spanner/docs/change-streams for more details on how to query the
+ # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
+ # are used to execute DML statements with a different execution strategy that
+ # provides different, and often better, scalability properties for large, table-
+ # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
+ # such as an OLTP workload, should prefer using ReadWrite transactions.
+ # Partitioned DML partitions the keyspace and runs the DML statement on each
+ # partition in separate, internal transactions. These transactions commit
+ # automatically when complete, and run independently from one another. To reduce
+ # lock contention, this execution strategy only acquires read locks on rows that
+ # match the WHERE clause of the statement. Additionally, the smaller per-
+ # partition transactions hold locks for less time. That said, Partitioned DML is
+ # not a drop-in replacement for standard DML used in ReadWrite transactions. -
+ # The DML statement must be fully-partitionable. Specifically, the statement
+ # must be expressible as the union of many statements which each access only a
+ # single row of the table. - The statement is not applied atomically to all rows
+ # of the table. Rather, the statement is applied atomically to partitions of the
+ # table, in independent transactions. Secondary index rows are updated
+ # atomically with the base table rows. - Partitioned DML does not guarantee
+ # exactly-once execution semantics against a partition. The statement will be
+ # applied at least once to each partition. It is strongly recommended that the
+ # DML statement should be idempotent to avoid unexpected results. For instance,
+ # it is potentially dangerous to run a statement such as `UPDATE table SET
+ # column = column + 1` as it could be run multiple times against some rows. -
+ # The partitions are committed automatically - there is no support for Commit or
+ # Rollback. If the call returns an error, or if the client issuing the
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
+ # on them successfully. It is also possible that statement was never executed
+ # against other rows. - Partitioned DML transactions may only contain the
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
+ # If any error is encountered during the execution of the partitioned DML
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+ # value that cannot be stored due to schema constraints), then the operation is
+ # stopped at that point and an error is returned. It is possible that at this
+ # point, some partitions have been committed (or even committed multiple times),
+ # and other partitions have not been run at all. Given the above, Partitioned
+ # DML is good fit for large, database-wide, operations that are idempotent, such
+ # as deleting old rows from a very large table.
  # Corresponds to the JSON property `options`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :options
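
The new doc comment above advises that, because a contended read-write transaction can abort many times in quick succession, retries should be bounded by total elapsed time rather than by attempt count. The following is a minimal Ruby sketch of that pattern; it is illustrative only and not part of the gem diff, and the names `AbortedError` and the simulated commit are hypothetical stand-ins for a real Spanner client call that raises on `ABORTED`:

```ruby
# Hypothetical stand-in for the error a Spanner client raises when a
# read-write transaction aborts and should be retried.
class AbortedError < StandardError; end

# Re-runs the block until it succeeds or the time budget (seconds) is spent,
# per the guidance: cap total retry time, not the number of attempts.
def retry_until_deadline(deadline_seconds)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  attempt = 0
  begin
    attempt += 1
    yield attempt
  rescue AbortedError
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    raise if elapsed >= deadline_seconds
    sleep(0.01 * attempt) # simple backoff before re-running the transaction
    retry
  end
end

# Simulated commit that aborts twice before succeeding, as a transaction
# contending on hot rows might.
aborts_left = 2
result = retry_until_deadline(5) do |attempt|
  raise AbortedError if (aborts_left -= 1) >= 0
  "committed on attempt #{attempt}"
end
puts result # => "committed on attempt 3"
```

With a real client, the whole transaction body (reads, writes, commit) would go inside the block and, per the comment above, be retried in the same session so its lock priority increases with each abort.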
@@ -423,49 +567,193 @@ module Google
  attr_accessor :return_commit_stats
  alias_method :return_commit_stats?, :return_commit_stats
 
- # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
- # true, a special value of 2^63 - 2 will be returned in the Transaction message
- # that describes the transaction, instead of a valid read timestamp. This
- # special value should be discarded and not used for any subsequent queries.
- # Please see https://cloud.google.com/spanner/docs/change-streams for more
- # details on how to query the change stream TVFs. Partitioned DML transactions:
- # Partitioned DML transactions are used to execute DML statements with a
- # different execution strategy that provides different, and often better,
- # scalability properties for large, table-wide operations than DML in a
- # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
- # should prefer using ReadWrite transactions. Partitioned DML partitions the
- # keyspace and runs the DML statement on each partition in separate, internal
- # transactions. These transactions commit automatically when complete, and run
- # independently from one another. To reduce lock contention, this execution
- # strategy only acquires read locks on rows that match the WHERE clause of the
- # statement. Additionally, the smaller per-partition transactions hold locks for
- # less time. That said, Partitioned DML is not a drop-in replacement for
- # standard DML used in ReadWrite transactions. - The DML statement must be fully-
- # partitionable. Specifically, the statement must be expressible as the union of
- # many statements which each access only a single row of the table. - The
- # statement is not applied atomically to all rows of the table. Rather, the
- # statement is applied atomically to partitions of the table, in independent
- # transactions. Secondary index rows are updated atomically with the base table
- # rows. - Partitioned DML does not guarantee exactly-once execution semantics
- # against a partition. The statement will be applied at least once to each
- # partition. It is strongly recommended that the DML statement should be
- # idempotent to avoid unexpected results. For instance, it is potentially
- # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
- # it could be run multiple times against some rows. - The partitions are
- # committed automatically - there is no support for Commit or Rollback. If the
- # call returns an error, or if the client issuing the ExecuteSql call dies, it
- # is possible that some rows had the statement executed on them successfully. It
- # is also possible that statement was never executed against other rows. -
- # Partitioned DML transactions may only contain the execution of a single DML
- # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
- # during the execution of the partitioned DML operation (for instance, a UNIQUE
- # INDEX violation, division by zero, or a value that cannot be stored due to
- # schema constraints), then the operation is stopped at that point and an error
- # is returned. It is possible that at this point, some partitions have been
- # committed (or even committed multiple times), and other partitions have not
- # been run at all. Given the above, Partitioned DML is good fit for large,
- # database-wide, operations that are idempotent, such as deleting old rows from
- # a very large table.
+ # Transactions: Each session can have at most one active transaction at a time (
+ # note that standalone reads and queries use a transaction internally and do
+ # count towards the one transaction limit). After the active transaction is
+ # completed, the session can immediately be re-used for the next transaction. It
+ # is not necessary to create a new session for each transaction. Transaction
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # This type of transaction is the only way to write data into Cloud Spanner.
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
+ # commit. Locking read-write transactions may abort, requiring the application
+ # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
+ # guaranteed consistency across several reads, but do not allow writes. Snapshot
+ # read-only transactions can be configured to read at timestamps in the past, or
+ # configured to perform a strong read (where Spanner will select a timestamp
+ # such that the read is guaranteed to see the effects of all transactions that
+ # have committed before the start of the read). Snapshot read-only transactions
+ # do not need to be committed. Queries on change streams must be performed with
+ # the snapshot read-only transaction mode, specifying a strong read. Please see
+ # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
+ # type of transaction is used to execute a single Partitioned DML statement.
+ # Partitioned DML partitions the key space and runs the DML statement over each
+ # partition in parallel using separate, internal transactions that commit
+ # independently. Partitioned DML transactions do not need to be committed. For
+ # transactions that only read, snapshot read-only transactions provide simpler
+ # semantics and are almost always faster. In particular, read-only transactions
+ # do not take locks, so they do not conflict with read-write transactions. As a
+ # consequence of not taking locks, they also do not abort, so retry loops are
+ # not needed. Transactions may only read-write data in a single database. They
+ # may, however, read-write data in different tables within that database.
+ # Locking read-write transactions: Locking transactions may be used to
+ # atomically read-modify-write data anywhere in a database. This type of
+ # transaction is externally consistent. Clients should attempt to minimize the
+ # amount of time a transaction is active. Faster transactions commit with higher
+ # probability and cause less contention. Cloud Spanner attempts to keep read
+ # locks active as long as the transaction continues to do reads, and the
+ # transaction has not been terminated by Commit or Rollback. Long periods of
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
+ # more reads or SQL statements followed by Commit. At any time before Commit,
+ # the client can send a Rollback request to abort the transaction. Semantics:
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
+ # still valid at commit time, and it is able to acquire write locks for all
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
+ # exclusion other than between Cloud Spanner transactions themselves. Retrying
+ # aborted transactions: When a transaction aborts, the application can choose to
+ # retry the whole transaction again. To maximize the chances of successfully
+ # committing the retry, the client should execute the retry in the same session
+ # as the original attempt. The original session's lock priority increases with
+ # each consecutive abort, meaning that each attempt has a slightly better chance
+ # of success than the previous. Under some circumstances (for example, many
+ # transactions attempting to modify the same row(s)), a transaction can abort
+ # many times in a short period before successfully committing. Thus, it is not a
+ # good idea to cap the number of retries a transaction can attempt; instead, it
+ # is better to limit the total amount of time spent retrying. Idle transactions:
+ # A transaction is considered idle if it has no outstanding reads or SQL queries
+ # and has not started a read or SQL query within the last 10 seconds. Idle
+ # transactions can be aborted by Cloud Spanner so that they don't hold on to
+ # locks indefinitely. If an idle transaction is aborted, the commit will fail
+ # with error `ABORTED`. If this behavior is undesirable, periodically executing
+ # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
+ # only transactions provides a simpler method than locking read-write
+ # transactions for doing several consistent reads. However, this type of
+ # transaction does not support writes. Snapshot transactions do not take locks.
+ # Instead, they work by choosing a Cloud Spanner timestamp, then executing all
+ # reads at that timestamp. Since they do not acquire locks, they do not block
+ # concurrent read-write transactions. Unlike locking read-write transactions,
+ # snapshot read-only transactions never abort. They can fail if the chosen read
+ # timestamp is garbage collected; however, the default garbage collection policy
+ # is generous enough that most applications do not need to worry about this in
+ # practice. Snapshot read-only transactions do not need to call Commit or
+ # Rollback (and in fact are not permitted to do so). To execute a snapshot
+ # transaction, the client specifies a timestamp bound, which tells Cloud Spanner
+ # how to choose a read timestamp. The types of timestamp bound are: - Strong (
+ # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
+ # database to be read is geographically distributed, stale read-only
+ # transactions can execute more quickly than strong or read-write transactions,
+ # because they are able to execute far from the leader replica. Each type of
+ # timestamp bound is discussed in detail below. Strong: Strong reads are
+ # guaranteed to see the effects of all transactions that have committed before
+ # the start of the read. Furthermore, all rows yielded by a single read are
+ # consistent with each other -- if any part of the read observes a transaction,
+ # all parts of the read see the transaction. Strong reads are not repeatable:
+ # two consecutive strong read-only transactions might return inconsistent
+ # results if there are concurrent writes. If consistency across reads is
+ # required, the reads should be executed within a transaction or at an exact
+ # read timestamp. Queries on change streams (see below for more details) must
+ # also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.
+ # strong. Exact staleness: These timestamp bounds execute reads at a user-
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
+ # prefix of the global transaction history: they observe modifications done by
+ # all transactions with a commit timestamp less than or equal to the read
+ # timestamp, and observe none of the modifications done by transactions with a
+ # larger commit timestamp. They will block until all conflicting transactions
+ # that may be assigned commit timestamps <= the read timestamp have finished.
+ # The timestamp can either be expressed as an absolute Cloud Spanner commit
+ # timestamp or a staleness relative to the current time. These modes do not
+ # require a "negotiation phase" to pick a timestamp. As a result, they execute
+ # slightly faster than the equivalent boundedly stale concurrency modes. On the
+ # other hand, boundedly stale reads usually return fresher results. See
+ # TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.
+ # exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud
+ # Spanner to pick the read timestamp, subject to a user-provided staleness bound.
+ # Cloud Spanner chooses the newest timestamp within the staleness bound that
+ # allows execution of the reads at the closest available replica without
+ # blocking. All rows yielded are consistent with each other -- if any part of
+ # the read observes a transaction, all parts of the read see the transaction.
+ # Boundedly stale reads are not repeatable: two stale reads, even if they use
+ # the same staleness bound, can execute at different timestamps and thus return
+ # inconsistent results. Boundedly stale reads execute in two phases: the first
+ # phase negotiates a timestamp among all replicas needed to serve the read. In
+ # the second phase, reads are executed at the negotiated timestamp. As a result
+ # of the two phase execution, bounded staleness reads are usually a little
+ # slower than comparable exact staleness reads. However, they are typically able
+ # to return fresher results, and are more likely to execute at the closest
+ # replica. Because the timestamp negotiation requires up-front knowledge of
+ # which rows will be read, it can only be used with single-use read-only
+ # transactions. See TransactionOptions.ReadOnly.max_staleness and
+ # TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and
+ # garbage collection: Cloud Spanner continuously garbage collects deleted and
+ # overwritten data in the background to reclaim storage space. This process is
+ # known as "version GC". By default, version GC reclaims versions after they are
+ # one hour old. Because of this, Cloud Spanner cannot perform reads at read
+ # timestamps more than one hour in the past. This restriction also applies to in-
+ # progress reads and/or SQL queries whose timestamp become too old while
+ # executing. Reads and SQL queries with too-old read timestamps fail with the
+ # error `FAILED_PRECONDITION`. You can configure and extend the `
700
+ # VERSION_RETENTION_PERIOD` of a database up to a period as long as one week,
701
+ # which allows Cloud Spanner to perform reads up to one week in the past.
702
+ # Querying change Streams: A Change Stream is a schema object that can be
703
+ # configured to watch data changes on the entire database, a set of tables, or a
704
+ # set of columns in a database. When a change stream is created, Spanner
705
+ # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
706
+ # be used to query the change records in the associated change stream using the
707
+ # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
708
+ # from the name of the change stream: READ_. All queries on change stream TVFs
709
+ # must be executed using the ExecuteStreamingSql API with a single-use read-only
710
+ # transaction with a strong read-only timestamp_bound. The change stream TVF
711
+ # allows users to specify the start_timestamp and end_timestamp for the time
712
+ # range of interest. All change records within the retention period is
713
+ # accessible using the strong read-only timestamp_bound. All other
714
+ # TransactionOptions are invalid for change stream queries. In addition, if
715
+ # TransactionOptions.read_only.return_read_timestamp is set to true, a special
716
+ # value of 2^63 - 2 will be returned in the Transaction message that describes
717
+ # the transaction, instead of a valid read timestamp. This special value should
718
+ # be discarded and not used for any subsequent queries. Please see https://cloud.
719
+ # google.com/spanner/docs/change-streams for more details on how to query the
720
+ # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
721
+ # are used to execute DML statements with a different execution strategy that
722
+ # provides different, and often better, scalability properties for large, table-
723
+ # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
724
+ # such as an OLTP workload, should prefer using ReadWrite transactions.
725
+ # Partitioned DML partitions the keyspace and runs the DML statement on each
726
+ # partition in separate, internal transactions. These transactions commit
727
+ # automatically when complete, and run independently from one another. To reduce
728
+ # lock contention, this execution strategy only acquires read locks on rows that
729
+ # match the WHERE clause of the statement. Additionally, the smaller per-
730
+ # partition transactions hold locks for less time. That said, Partitioned DML is
731
+ # not a drop-in replacement for standard DML used in ReadWrite transactions. -
732
+ # The DML statement must be fully-partitionable. Specifically, the statement
733
+ # must be expressible as the union of many statements which each access only a
734
+ # single row of the table. - The statement is not applied atomically to all rows
735
+ # of the table. Rather, the statement is applied atomically to partitions of the
736
+ # table, in independent transactions. Secondary index rows are updated
737
+ # atomically with the base table rows. - Partitioned DML does not guarantee
738
+ # exactly-once execution semantics against a partition. The statement will be
739
+ # applied at least once to each partition. It is strongly recommended that the
740
+ # DML statement should be idempotent to avoid unexpected results. For instance,
741
+ # it is potentially dangerous to run a statement such as `UPDATE table SET
742
+ # column = column + 1` as it could be run multiple times against some rows. -
743
+ # The partitions are committed automatically - there is no support for Commit or
744
+ # Rollback. If the call returns an error, or if the client issuing the
745
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
746
+ # on them successfully. It is also possible that statement was never executed
747
+ # against other rows. - Partitioned DML transactions may only contain the
748
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
749
+ # If any error is encountered during the execution of the partitioned DML
750
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
751
+ # value that cannot be stored due to schema constraints), then the operation is
752
+ # stopped at that point and an error is returned. It is possible that at this
753
+ # point, some partitions have been committed (or even committed multiple times),
754
+ # and other partitions have not been run at all. Given the above, Partitioned
755
+ # DML is good fit for large, database-wide, operations that are idempotent, such
756
+ # as deleting old rows from a very large table.
  # Corresponds to the JSON property `singleUseTransaction`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use_transaction
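The comment above warns that change stream queries with `return_read_timestamp` set to true receive the sentinel 2^63 - 2 in place of a valid read timestamp, and that this value must be discarded rather than reused. A minimal, gem-independent sketch of that check — the helper name and the raw integer representation of the timestamp are illustrative assumptions, not part of this API:

```ruby
# Sentinel documented above: returned instead of a valid read timestamp
# for change stream queries when return_read_timestamp is true.
CHANGE_STREAM_READ_TIMESTAMP_SENTINEL = 2**63 - 2

# Hypothetical helper: map the sentinel to nil so callers cannot
# accidentally reuse it as the timestamp bound of a subsequent query.
def usable_read_timestamp(raw_timestamp)
  raw_timestamp == CHANGE_STREAM_READ_TIMESTAMP_SENTINEL ? nil : raw_timestamp
end

usable_read_timestamp(2**63 - 2)         # => nil (discard the sentinel)
usable_read_timestamp(1_700_000_000_000) # => 1700000000000
```

Returning nil (rather than passing the sentinel through) forces callers to handle the "no usable timestamp" case explicitly.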
@@ -4010,49 +4298,193 @@ module Google
  end
  end
 
- # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
- # true, a special value of 2^63 - 2 will be returned in the Transaction message
- # that describes the transaction, instead of a valid read timestamp. This
- # special value should be discarded and not used for any subsequent queries.
- # Please see https://cloud.google.com/spanner/docs/change-streams for more
- # details on how to query the change stream TVFs. Partitioned DML transactions:
- # Partitioned DML transactions are used to execute DML statements with a
- # different execution strategy that provides different, and often better,
- # scalability properties for large, table-wide operations than DML in a
- # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
- # should prefer using ReadWrite transactions. Partitioned DML partitions the
- # keyspace and runs the DML statement on each partition in separate, internal
- # transactions. These transactions commit automatically when complete, and run
- # independently from one another. To reduce lock contention, this execution
- # strategy only acquires read locks on rows that match the WHERE clause of the
- # statement. Additionally, the smaller per-partition transactions hold locks for
- # less time. That said, Partitioned DML is not a drop-in replacement for
- # standard DML used in ReadWrite transactions. - The DML statement must be fully-
- # partitionable. Specifically, the statement must be expressible as the union of
- # many statements which each access only a single row of the table. - The
- # statement is not applied atomically to all rows of the table. Rather, the
- # statement is applied atomically to partitions of the table, in independent
- # transactions. Secondary index rows are updated atomically with the base table
- # rows. - Partitioned DML does not guarantee exactly-once execution semantics
- # against a partition. The statement will be applied at least once to each
- # partition. It is strongly recommended that the DML statement should be
- # idempotent to avoid unexpected results. For instance, it is potentially
- # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
- # it could be run multiple times against some rows. - The partitions are
- # committed automatically - there is no support for Commit or Rollback. If the
- # call returns an error, or if the client issuing the ExecuteSql call dies, it
- # is possible that some rows had the statement executed on them successfully. It
- # is also possible that statement was never executed against other rows. -
- # Partitioned DML transactions may only contain the execution of a single DML
- # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
- # during the execution of the partitioned DML operation (for instance, a UNIQUE
- # INDEX violation, division by zero, or a value that cannot be stored due to
- # schema constraints), then the operation is stopped at that point and an error
- # is returned. It is possible that at this point, some partitions have been
- # committed (or even committed multiple times), and other partitions have not
- # been run at all. Given the above, Partitioned DML is good fit for large,
- # database-wide, operations that are idempotent, such as deleting old rows from
- # a very large table.
+ # Transactions: Each session can have at most one active transaction at a time (
+ # note that standalone reads and queries use a transaction internally and do
+ # count towards the one transaction limit). After the active transaction is
+ # completed, the session can immediately be re-used for the next transaction. It
+ # is not necessary to create a new session for each transaction. Transaction
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # This type of transaction is the only way to write data into Cloud Spanner.
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
+ # commit. Locking read-write transactions may abort, requiring the application
+ # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
+ # guaranteed consistency across several reads, but do not allow writes. Snapshot
+ # read-only transactions can be configured to read at timestamps in the past, or
+ # configured to perform a strong read (where Spanner will select a timestamp
+ # such that the read is guaranteed to see the effects of all transactions that
+ # have committed before the start of the read). Snapshot read-only transactions
+ # do not need to be committed. Queries on change streams must be performed with
+ # the snapshot read-only transaction mode, specifying a strong read. Please see
+ # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
+ # type of transaction is used to execute a single Partitioned DML statement.
+ # Partitioned DML partitions the key space and runs the DML statement over each
+ # partition in parallel using separate, internal transactions that commit
+ # independently. Partitioned DML transactions do not need to be committed. For
+ # transactions that only read, snapshot read-only transactions provide simpler
+ # semantics and are almost always faster. In particular, read-only transactions
+ # do not take locks, so they do not conflict with read-write transactions. As a
+ # consequence of not taking locks, they also do not abort, so retry loops are
+ # not needed. Transactions may only read-write data in a single database. They
+ # may, however, read-write data in different tables within that database.
+ # Locking read-write transactions: Locking transactions may be used to
+ # atomically read-modify-write data anywhere in a database. This type of
+ # transaction is externally consistent. Clients should attempt to minimize the
+ # amount of time a transaction is active. Faster transactions commit with higher
+ # probability and cause less contention. Cloud Spanner attempts to keep read
+ # locks active as long as the transaction continues to do reads, and the
+ # transaction has not been terminated by Commit or Rollback. Long periods of
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
+ # more reads or SQL statements followed by Commit. At any time before Commit,
+ # the client can send a Rollback request to abort the transaction. Semantics:
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
+ # still valid at commit time, and it is able to acquire write locks for all
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
+ # exclusion other than between Cloud Spanner transactions themselves. Retrying
+ # aborted transactions: When a transaction aborts, the application can choose to
+ # retry the whole transaction again. To maximize the chances of successfully
+ # committing the retry, the client should execute the retry in the same session
+ # as the original attempt. The original session's lock priority increases with
+ # each consecutive abort, meaning that each attempt has a slightly better chance
+ # of success than the previous. Under some circumstances (for example, many
+ # transactions attempting to modify the same row(s)), a transaction can abort
+ # many times in a short period before successfully committing. Thus, it is not a
+ # good idea to cap the number of retries a transaction can attempt; instead, it
+ # is better to limit the total amount of time spent retrying. Idle transactions:
+ # A transaction is considered idle if it has no outstanding reads or SQL queries
+ # and has not started a read or SQL query within the last 10 seconds. Idle
+ # transactions can be aborted by Cloud Spanner so that they don't hold on to
+ # locks indefinitely. If an idle transaction is aborted, the commit will fail
+ # with error `ABORTED`. If this behavior is undesirable, periodically executing
+ # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
+ # only transactions provide a simpler method than locking read-write
+ # transactions for doing several consistent reads. However, this type of
+ # transaction does not support writes. Snapshot transactions do not take locks.
+ # Instead, they work by choosing a Cloud Spanner timestamp, then executing all
+ # reads at that timestamp. Since they do not acquire locks, they do not block
+ # concurrent read-write transactions. Unlike locking read-write transactions,
+ # snapshot read-only transactions never abort. They can fail if the chosen read
+ # timestamp is garbage collected; however, the default garbage collection policy
+ # is generous enough that most applications do not need to worry about this in
+ # practice. Snapshot read-only transactions do not need to call Commit or
+ # Rollback (and in fact are not permitted to do so). To execute a snapshot
+ # transaction, the client specifies a timestamp bound, which tells Cloud Spanner
+ # how to choose a read timestamp. The types of timestamp bound are: - Strong (
+ # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
+ # database to be read is geographically distributed, stale read-only
+ # transactions can execute more quickly than strong or read-write transactions,
+ # because they are able to execute far from the leader replica. Each type of
+ # timestamp bound is discussed in detail below. Strong: Strong reads are
+ # guaranteed to see the effects of all transactions that have committed before
+ # the start of the read. Furthermore, all rows yielded by a single read are
+ # consistent with each other -- if any part of the read observes a transaction,
+ # all parts of the read see the transaction. Strong reads are not repeatable:
+ # two consecutive strong read-only transactions might return inconsistent
+ # results if there are concurrent writes. If consistency across reads is
+ # required, the reads should be executed within a transaction or at an exact
+ # read timestamp. Queries on change streams (see below for more details) must
+ # also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.
+ # strong. Exact staleness: These timestamp bounds execute reads at a user-
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
+ # prefix of the global transaction history: they observe modifications done by
+ # all transactions with a commit timestamp less than or equal to the read
+ # timestamp, and observe none of the modifications done by transactions with a
+ # larger commit timestamp. They will block until all conflicting transactions
+ # that may be assigned commit timestamps <= the read timestamp have finished.
+ # The timestamp can either be expressed as an absolute Cloud Spanner commit
+ # timestamp or a staleness relative to the current time. These modes do not
+ # require a "negotiation phase" to pick a timestamp. As a result, they execute
+ # slightly faster than the equivalent boundedly stale concurrency modes. On the
+ # other hand, boundedly stale reads usually return fresher results. See
+ # TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.
+ # exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud
+ # Spanner to pick the read timestamp, subject to a user-provided staleness bound.
+ # Cloud Spanner chooses the newest timestamp within the staleness bound that
+ # allows execution of the reads at the closest available replica without
+ # blocking. All rows yielded are consistent with each other -- if any part of
+ # the read observes a transaction, all parts of the read see the transaction.
+ # Boundedly stale reads are not repeatable: two stale reads, even if they use
+ # the same staleness bound, can execute at different timestamps and thus return
+ # inconsistent results. Boundedly stale reads execute in two phases: the first
+ # phase negotiates a timestamp among all replicas needed to serve the read. In
+ # the second phase, reads are executed at the negotiated timestamp. As a result
+ # of the two-phase execution, bounded staleness reads are usually a little
+ # slower than comparable exact staleness reads. However, they are typically able
+ # to return fresher results, and are more likely to execute at the closest
+ # replica. Because the timestamp negotiation requires up-front knowledge of
+ # which rows will be read, it can only be used with single-use read-only
+ # transactions. See TransactionOptions.ReadOnly.max_staleness and
+ # TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and
+ # garbage collection: Cloud Spanner continuously garbage collects deleted and
+ # overwritten data in the background to reclaim storage space. This process is
+ # known as "version GC". By default, version GC reclaims versions after they are
+ # one hour old. Because of this, Cloud Spanner cannot perform reads at read
+ # timestamps more than one hour in the past. This restriction also applies to in-
+ # progress reads and/or SQL queries whose timestamps become too old while
+ # executing. Reads and SQL queries with too-old read timestamps fail with the
+ # error `FAILED_PRECONDITION`. You can configure and extend the `
+ # VERSION_RETENTION_PERIOD` of a database up to a period as long as one week,
+ # which allows Cloud Spanner to perform reads up to one week in the past.
+ # Querying change streams: A change stream is a schema object that can be
+ # configured to watch data changes on the entire database, a set of tables, or a
+ # set of columns in a database. When a change stream is created, Spanner
+ # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
+ # be used to query the change records in the associated change stream using the
+ # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
+ # from the name of the change stream: READ_. All queries on change stream TVFs
+ # must be executed using the ExecuteStreamingSql API with a single-use read-only
+ # transaction with a strong read-only timestamp_bound. The change stream TVF
+ # allows users to specify the start_timestamp and end_timestamp for the time
+ # range of interest. All change records within the retention period are
+ # accessible using the strong read-only timestamp_bound. All other
+ # TransactionOptions are invalid for change stream queries. In addition, if
+ # TransactionOptions.read_only.return_read_timestamp is set to true, a special
+ # value of 2^63 - 2 will be returned in the Transaction message that describes
+ # the transaction, instead of a valid read timestamp. This special value should
+ # be discarded and not used for any subsequent queries. Please see https://cloud.
+ # google.com/spanner/docs/change-streams for more details on how to query the
+ # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
+ # are used to execute DML statements with a different execution strategy that
+ # provides different, and often better, scalability properties for large, table-
+ # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
+ # such as an OLTP workload, should prefer using ReadWrite transactions.
+ # Partitioned DML partitions the keyspace and runs the DML statement on each
+ # partition in separate, internal transactions. These transactions commit
+ # automatically when complete, and run independently from one another. To reduce
+ # lock contention, this execution strategy only acquires read locks on rows that
+ # match the WHERE clause of the statement. Additionally, the smaller per-
+ # partition transactions hold locks for less time. That said, Partitioned DML is
+ # not a drop-in replacement for standard DML used in ReadWrite transactions. -
+ # The DML statement must be fully-partitionable. Specifically, the statement
+ # must be expressible as the union of many statements which each access only a
+ # single row of the table. - The statement is not applied atomically to all rows
+ # of the table. Rather, the statement is applied atomically to partitions of the
+ # table, in independent transactions. Secondary index rows are updated
+ # atomically with the base table rows. - Partitioned DML does not guarantee
+ # exactly-once execution semantics against a partition. The statement will be
+ # applied at least once to each partition. It is strongly recommended that the
+ # DML statement should be idempotent to avoid unexpected results. For instance,
+ # it is potentially dangerous to run a statement such as `UPDATE table SET
+ # column = column + 1` as it could be run multiple times against some rows. -
+ # The partitions are committed automatically - there is no support for Commit or
+ # Rollback. If the call returns an error, or if the client issuing the
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
+ # on them successfully. It is also possible that the statement was never executed
+ # against other rows. - Partitioned DML transactions may only contain the
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
+ # If any error is encountered during the execution of the partitioned DML
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+ # value that cannot be stored due to schema constraints), then the operation is
+ # stopped at that point and an error is returned. It is possible that at this
+ # point, some partitions have been committed (or even committed multiple times),
+ # and other partitions have not been run at all. Given the above, Partitioned
+ # DML is a good fit for large, database-wide operations that are idempotent, such
+ # as deleting old rows from a very large table.
  class TransactionOptions
  include Google::Apis::Core::Hashable
 
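The retry guidance in the comment above says not to cap the number of retries after an `ABORTED` commit, but to bound the total time spent retrying instead. A gem-independent sketch of that loop, with capped exponential backoff; `SimulatedAborted` and `commit_with_time_budget` are illustrative names, and the yielded block stands in for the application's read-modify-write plus commit:

```ruby
# Stand-in for the ABORTED status a real Spanner commit can return.
class SimulatedAborted < StandardError; end

# Retry the commit attempt until it succeeds or the total time budget
# elapses -- bounding elapsed time rather than attempt count.
def commit_with_time_budget(budget_seconds)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + budget_seconds
  backoff = 0.001
  begin
    yield
  rescue SimulatedAborted
    raise if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
    sleep(backoff)
    backoff = [backoff * 2, 0.25].min # capped exponential backoff
    retry
  end
end

# Stub transaction that aborts twice and then commits.
attempts = 0
result = commit_with_time_budget(5) do
  attempts += 1
  raise SimulatedAborted if attempts < 3
  :committed
end
# result == :committed after two aborted attempts
```

Retrying in the same loop (and, per the comment, in the same session) benefits from the session's increasing lock priority on each consecutive abort.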
@@ -4089,49 +4521,193 @@ module Google
  class TransactionSelector
  include Google::Apis::Core::Hashable
 
- # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
- # true, a special value of 2^63 - 2 will be returned in the Transaction message
- # that describes the transaction, instead of a valid read timestamp. This
- # special value should be discarded and not used for any subsequent queries.
- # Please see https://cloud.google.com/spanner/docs/change-streams for more
- # details on how to query the change stream TVFs. Partitioned DML transactions:
- # Partitioned DML transactions are used to execute DML statements with a
- # different execution strategy that provides different, and often better,
- # scalability properties for large, table-wide operations than DML in a
- # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
- # should prefer using ReadWrite transactions. Partitioned DML partitions the
- # keyspace and runs the DML statement on each partition in separate, internal
- # transactions. These transactions commit automatically when complete, and run
- # independently from one another. To reduce lock contention, this execution
- # strategy only acquires read locks on rows that match the WHERE clause of the
- # statement. Additionally, the smaller per-partition transactions hold locks for
- # less time. That said, Partitioned DML is not a drop-in replacement for
- # standard DML used in ReadWrite transactions. - The DML statement must be fully-
- # partitionable. Specifically, the statement must be expressible as the union of
- # many statements which each access only a single row of the table. - The
- # statement is not applied atomically to all rows of the table. Rather, the
- # statement is applied atomically to partitions of the table, in independent
- # transactions. Secondary index rows are updated atomically with the base table
- # rows. - Partitioned DML does not guarantee exactly-once execution semantics
- # against a partition. The statement will be applied at least once to each
- # partition. It is strongly recommended that the DML statement should be
- # idempotent to avoid unexpected results. For instance, it is potentially
- # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
- # it could be run multiple times against some rows. - The partitions are
- # committed automatically - there is no support for Commit or Rollback. If the
- # call returns an error, or if the client issuing the ExecuteSql call dies, it
- # is possible that some rows had the statement executed on them successfully. It
- # is also possible that statement was never executed against other rows. -
- # Partitioned DML transactions may only contain the execution of a single DML
- # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
- # during the execution of the partitioned DML operation (for instance, a UNIQUE
- # INDEX violation, division by zero, or a value that cannot be stored due to
- # schema constraints), then the operation is stopped at that point and an error
- # is returned. It is possible that at this point, some partitions have been
- # committed (or even committed multiple times), and other partitions have not
- # been run at all. Given the above, Partitioned DML is good fit for large,
- # database-wide, operations that are idempotent, such as deleting old rows from
- # a very large table.
+ # Transactions: Each session can have at most one active transaction at a time (
+ # note that standalone reads and queries use a transaction internally and do
+ # count towards the one transaction limit). After the active transaction is
+ # completed, the session can immediately be re-used for the next transaction. It
+ # is not necessary to create a new session for each transaction. Transaction
+ # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
+ # This type of transaction is the only way to write data into Cloud Spanner.
+ # These transactions rely on pessimistic locking and, if necessary, two-phase
+ # commit. Locking read-write transactions may abort, requiring the application
+ # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
+ # guaranteed consistency across several reads, but do not allow writes. Snapshot
+ # read-only transactions can be configured to read at timestamps in the past, or
+ # configured to perform a strong read (where Spanner will select a timestamp
+ # such that the read is guaranteed to see the effects of all transactions that
+ # have committed before the start of the read). Snapshot read-only transactions
+ # do not need to be committed. Queries on change streams must be performed with
+ # the snapshot read-only transaction mode, specifying a strong read. Please see
+ # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
+ # type of transaction is used to execute a single Partitioned DML statement.
+ # Partitioned DML partitions the key space and runs the DML statement over each
+ # partition in parallel using separate, internal transactions that commit
+ # independently. Partitioned DML transactions do not need to be committed. For
+ # transactions that only read, snapshot read-only transactions provide simpler
+ # semantics and are almost always faster. In particular, read-only transactions
+ # do not take locks, so they do not conflict with read-write transactions. As a
+ # consequence of not taking locks, they also do not abort, so retry loops are
+ # not needed. Transactions may only read-write data in a single database. They
+ # may, however, read-write data in different tables within that database.
+ # Locking read-write transactions: Locking transactions may be used to
+ # atomically read-modify-write data anywhere in a database. This type of
+ # transaction is externally consistent. Clients should attempt to minimize the
+ # amount of time a transaction is active. Faster transactions commit with higher
+ # probability and cause less contention. Cloud Spanner attempts to keep read
+ # locks active as long as the transaction continues to do reads, and the
+ # transaction has not been terminated by Commit or Rollback. Long periods of
+ # inactivity at the client may cause Cloud Spanner to release a transaction's
+ # locks and abort it. Conceptually, a read-write transaction consists of zero or
+ # more reads or SQL statements followed by Commit. At any time before Commit,
+ # the client can send a Rollback request to abort the transaction. Semantics:
+ # Cloud Spanner can commit the transaction if all read locks it acquired are
+ # still valid at commit time, and it is able to acquire write locks for all
+ # writes. Cloud Spanner can abort the transaction for any reason. If a commit
+ # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
+ # not modified any user data in Cloud Spanner. Unless the transaction commits,
+ # Cloud Spanner makes no guarantees about how long the transaction's locks were
4569
+ # held for. It is an error to use Cloud Spanner locks for any sort of mutual
4570
+ # exclusion other than between Cloud Spanner transactions themselves. Retrying
4571
+ # aborted transactions: When a transaction aborts, the application can choose to
4572
+ # retry the whole transaction again. To maximize the chances of successfully
4573
+ # committing the retry, the client should execute the retry in the same session
4574
+ # as the original attempt. The original session's lock priority increases with
4575
+ # each consecutive abort, meaning that each attempt has a slightly better chance
4576
+ # of success than the previous. Under some circumstances (for example, many
4577
+ # transactions attempting to modify the same row(s)), a transaction can abort
4578
+ # many times in a short period before successfully committing. Thus, it is not a
4579
+ # good idea to cap the number of retries a transaction can attempt; instead, it
4580
+ # is better to limit the total amount of time spent retrying. Idle transactions:
4581
+ # A transaction is considered idle if it has no outstanding reads or SQL queries
4582
+ # and has not started a read or SQL query within the last 10 seconds. Idle
4583
+ # transactions can be aborted by Cloud Spanner so that they don't hold on to
4584
+ # locks indefinitely. If an idle transaction is aborted, the commit will fail
4585
+ # with error `ABORTED`. If this behavior is undesirable, periodically executing
4586
+ # a simple SQL query in the transaction (for example, `SELECT 1`) prevents the
4587
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot read-
4588
+ # only transactions provides a simpler method than locking read-write
4589
+ # transactions for doing several consistent reads. However, this type of
4590
+ # transaction does not support writes. Snapshot transactions do not take locks.
4591
+ # Instead, they work by choosing a Cloud Spanner timestamp, then executing all
4592
+ # reads at that timestamp. Since they do not acquire locks, they do not block
4593
+ # concurrent read-write transactions. Unlike locking read-write transactions,
4594
+ # snapshot read-only transactions never abort. They can fail if the chosen read
4595
+ # timestamp is garbage collected; however, the default garbage collection policy
4596
+ # is generous enough that most applications do not need to worry about this in
4597
+ # practice. Snapshot read-only transactions do not need to call Commit or
4598
+ # Rollback (and in fact are not permitted to do so). To execute a snapshot
4599
+ # transaction, the client specifies a timestamp bound, which tells Cloud Spanner
4600
+ # how to choose a read timestamp. The types of timestamp bound are: - Strong (
4601
+ # the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner
4602
+ # database to be read is geographically distributed, stale read-only
4603
+ # transactions can execute more quickly than strong or read-write transactions,
4604
+ # because they are able to execute far from the leader replica. Each type of
4605
+ # timestamp bound is discussed in detail below. Strong: Strong reads are
4606
+ # guaranteed to see the effects of all transactions that have committed before
4607
+ # the start of the read. Furthermore, all rows yielded by a single read are
4608
+ # consistent with each other -- if any part of the read observes a transaction,
4609
+ # all parts of the read see the transaction. Strong reads are not repeatable:
4610
+ # two consecutive strong read-only transactions might return inconsistent
4611
+ # results if there are concurrent writes. If consistency across reads is
4612
+ # required, the reads should be executed within a transaction or at an exact
4613
+ # read timestamp. Queries on change streams (see below for more details) must
4614
+ # also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.
4615
+ # strong. Exact staleness: These timestamp bounds execute reads at a user-
4616
+ # specified timestamp. Reads at a timestamp are guaranteed to see a consistent
4617
+ # prefix of the global transaction history: they observe modifications done by
4618
+ # all transactions with a commit timestamp less than or equal to the read
4619
+ # timestamp, and observe none of the modifications done by transactions with a
4620
+ # larger commit timestamp. They will block until all conflicting transactions
4621
+ # that may be assigned commit timestamps <= the read timestamp have finished.
4622
+ # The timestamp can either be expressed as an absolute Cloud Spanner commit
4623
+ # timestamp or a staleness relative to the current time. These modes do not
4624
+ # require a "negotiation phase" to pick a timestamp. As a result, they execute
4625
+ # slightly faster than the equivalent boundedly stale concurrency modes. On the
4626
+ # other hand, boundedly stale reads usually return fresher results. See
4627
+ # TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.
4628
+ # exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud
4629
+ # Spanner to pick the read timestamp, subject to a user-provided staleness bound.
4630
+ # Cloud Spanner chooses the newest timestamp within the staleness bound that
4631
+ # allows execution of the reads at the closest available replica without
4632
+ # blocking. All rows yielded are consistent with each other -- if any part of
4633
+ # the read observes a transaction, all parts of the read see the transaction.
4634
+ # Boundedly stale reads are not repeatable: two stale reads, even if they use
4635
+ # the same staleness bound, can execute at different timestamps and thus return
4636
+ # inconsistent results. Boundedly stale reads execute in two phases: the first
4637
+ # phase negotiates a timestamp among all replicas needed to serve the read. In
4638
+ # the second phase, reads are executed at the negotiated timestamp. As a result
4639
+ # of the two phase execution, bounded staleness reads are usually a little
4640
+ # slower than comparable exact staleness reads. However, they are typically able
4641
+ # to return fresher results, and are more likely to execute at the closest
4642
+ # replica. Because the timestamp negotiation requires up-front knowledge of
4643
+ # which rows will be read, it can only be used with single-use read-only
4644
+ # transactions. See TransactionOptions.ReadOnly.max_staleness and
4645
+ # TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and
4646
+ # garbage collection: Cloud Spanner continuously garbage collects deleted and
4647
+ # overwritten data in the background to reclaim storage space. This process is
4648
+ # known as "version GC". By default, version GC reclaims versions after they are
4649
+ # one hour old. Because of this, Cloud Spanner cannot perform reads at read
4650
+ # timestamps more than one hour in the past. This restriction also applies to in-
4651
+ # progress reads and/or SQL queries whose timestamp become too old while
4652
+ # executing. Reads and SQL queries with too-old read timestamps fail with the
4653
+ # error `FAILED_PRECONDITION`. You can configure and extend the `
4654
+ # VERSION_RETENTION_PERIOD` of a database up to a period as long as one week,
4655
+ # which allows Cloud Spanner to perform reads up to one week in the past.
4656
+ # Querying change Streams: A Change Stream is a schema object that can be
4657
+ # configured to watch data changes on the entire database, a set of tables, or a
4658
+ # set of columns in a database. When a change stream is created, Spanner
4659
+ # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
4660
+ # be used to query the change records in the associated change stream using the
4661
+ # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
4662
+ # from the name of the change stream: READ_. All queries on change stream TVFs
4663
+ # must be executed using the ExecuteStreamingSql API with a single-use read-only
4664
+ # transaction with a strong read-only timestamp_bound. The change stream TVF
4665
+ # allows users to specify the start_timestamp and end_timestamp for the time
4666
+ # range of interest. All change records within the retention period is
4667
+ # accessible using the strong read-only timestamp_bound. All other
4668
+ # TransactionOptions are invalid for change stream queries. In addition, if
4669
+ # TransactionOptions.read_only.return_read_timestamp is set to true, a special
4670
+ # value of 2^63 - 2 will be returned in the Transaction message that describes
4671
+ # the transaction, instead of a valid read timestamp. This special value should
4672
+ # be discarded and not used for any subsequent queries. Please see https://cloud.
4673
+ # google.com/spanner/docs/change-streams for more details on how to query the
4674
+ # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
4675
+ # are used to execute DML statements with a different execution strategy that
4676
+ # provides different, and often better, scalability properties for large, table-
4677
+ # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
4678
+ # such as an OLTP workload, should prefer using ReadWrite transactions.
4679
+ # Partitioned DML partitions the keyspace and runs the DML statement on each
4680
+ # partition in separate, internal transactions. These transactions commit
4681
+ # automatically when complete, and run independently from one another. To reduce
4682
+ # lock contention, this execution strategy only acquires read locks on rows that
4683
+ # match the WHERE clause of the statement. Additionally, the smaller per-
4684
+ # partition transactions hold locks for less time. That said, Partitioned DML is
4685
+ # not a drop-in replacement for standard DML used in ReadWrite transactions. -
4686
+ # The DML statement must be fully-partitionable. Specifically, the statement
4687
+ # must be expressible as the union of many statements which each access only a
4688
+ # single row of the table. - The statement is not applied atomically to all rows
4689
+ # of the table. Rather, the statement is applied atomically to partitions of the
4690
+ # table, in independent transactions. Secondary index rows are updated
4691
+ # atomically with the base table rows. - Partitioned DML does not guarantee
4692
+ # exactly-once execution semantics against a partition. The statement will be
4693
+ # applied at least once to each partition. It is strongly recommended that the
4694
+ # DML statement should be idempotent to avoid unexpected results. For instance,
4695
+ # it is potentially dangerous to run a statement such as `UPDATE table SET
4696
+ # column = column + 1` as it could be run multiple times against some rows. -
4697
+ # The partitions are committed automatically - there is no support for Commit or
4698
+ # Rollback. If the call returns an error, or if the client issuing the
4699
+ # ExecuteSql call dies, it is possible that some rows had the statement executed
4700
+ # on them successfully. It is also possible that statement was never executed
4701
+ # against other rows. - Partitioned DML transactions may only contain the
4702
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
4703
+ # If any error is encountered during the execution of the partitioned DML
4704
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4705
+ # value that cannot be stored due to schema constraints), then the operation is
4706
+ # stopped at that point and an error is returned. It is possible that at this
4707
+ # point, some partitions have been committed (or even committed multiple times),
4708
+ # and other partitions have not been run at all. Given the above, Partitioned
4709
+ # DML is good fit for large, database-wide, operations that are idempotent, such
4710
+ # as deleting old rows from a very large table.
4135
4711
  # Corresponds to the JSON property `begin`
4136
4712
  # @return [Google::Apis::SpannerV1::TransactionOptions]
4137
4713
  attr_accessor :begin
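The retry guidance in the doc comment above (limit total time spent retrying, not the number of attempts, since a transaction may abort many times before committing) can be sketched in plain Ruby. This is an illustrative sketch only: the `Aborted` error class and the block-based `with_retries` helper are hypothetical stand-ins for whatever your client wrapper raises and exposes, not part of this gem's generated API.

```ruby
# Hypothetical error standing in for a Cloud Spanner ABORTED result.
class Aborted < StandardError; end

# Retry a transaction block until it commits or a total time budget
# (not an attempt count) is exhausted, per the guidance above.
def with_retries(budget_seconds: 30)
  deadline = Time.now + budget_seconds
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue Aborted
    raise if Time.now >= deadline   # budget exhausted: give up
    sleep(0.01 * attempts)          # simple backoff between attempts
    retry                           # re-run the whole transaction
  end
end

# Demo: a workload that aborts twice before committing.
failures = 2
result = with_retries do |attempt|
  raise Aborted if (failures -= 1) >= 0
  "committed on attempt #{attempt}"
end
puts result # => committed on attempt 3
```

Retrying in the same session (as the comment recommends) is what makes the backoff worthwhile: the session's lock priority grows with each abort, so later attempts are more likely to win.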
@@ -4142,49 +4718,193 @@ module Google
  # @return [String]
  attr_accessor :id

- # In addition, if TransactionOptions.read_only.return_read_timestamp is set to
- # true, a special value of 2^63 - 2 will be returned in the Transaction message
- # that describes the transaction, instead of a valid read timestamp. This
- # special value should be discarded and not used for any subsequent queries.
- # Please see https://cloud.google.com/spanner/docs/change-streams for more
- # details on how to query the change stream TVFs. Partitioned DML transactions:
- # Partitioned DML transactions are used to execute DML statements with a
- # different execution strategy that provides different, and often better,
- # scalability properties for large, table-wide operations than DML in a
- # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
- # should prefer using ReadWrite transactions. Partitioned DML partitions the
- # keyspace and runs the DML statement on each partition in separate, internal
- # transactions. These transactions commit automatically when complete, and run
- # independently from one another. To reduce lock contention, this execution
- # strategy only acquires read locks on rows that match the WHERE clause of the
- # statement. Additionally, the smaller per-partition transactions hold locks for
- # less time. That said, Partitioned DML is not a drop-in replacement for
- # standard DML used in ReadWrite transactions. - The DML statement must be fully-
- # partitionable. Specifically, the statement must be expressible as the union of
- # many statements which each access only a single row of the table. - The
- # statement is not applied atomically to all rows of the table. Rather, the
- # statement is applied atomically to partitions of the table, in independent
- # transactions. Secondary index rows are updated atomically with the base table
- # rows. - Partitioned DML does not guarantee exactly-once execution semantics
- # against a partition. The statement will be applied at least once to each
- # partition. It is strongly recommended that the DML statement should be
- # idempotent to avoid unexpected results. For instance, it is potentially
- # dangerous to run a statement such as `UPDATE table SET column = column + 1` as
- # it could be run multiple times against some rows. - The partitions are
- # committed automatically - there is no support for Commit or Rollback. If the
- # call returns an error, or if the client issuing the ExecuteSql call dies, it
- # is possible that some rows had the statement executed on them successfully. It
- # is also possible that statement was never executed against other rows. -
- # Partitioned DML transactions may only contain the execution of a single DML
- # statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered
- # during the execution of the partitioned DML operation (for instance, a UNIQUE
- # INDEX violation, division by zero, or a value that cannot be stored due to
- # schema constraints), then the operation is stopped at that point and an error
- # is returned. It is possible that at this point, some partitions have been
- # committed (or even committed multiple times), and other partitions have not
- # been run at all. Given the above, Partitioned DML is good fit for large,
- # database-wide, operations that are idempotent, such as deleting old rows from
- # a very large table.
+ # Transactions: Each session can have at most one active transaction at a
+ # time (note that standalone reads and queries use a transaction internally
+ # and do count towards the one-transaction limit). After the active
+ # transaction is completed, the session can immediately be reused for the
+ # next transaction. It is not necessary to create a new session for each
+ # transaction. Transaction modes: Cloud Spanner supports three transaction
+ # modes: 1. Locking read-write. This type of transaction is the only way to
+ # write data into Cloud Spanner. These transactions rely on pessimistic
+ # locking and, if necessary, two-phase commit. Locking read-write
+ # transactions may abort, requiring the application to retry. 2. Snapshot
+ # read-only. Snapshot read-only transactions provide guaranteed consistency
+ # across several reads, but do not allow writes. Snapshot read-only
+ # transactions can be configured to read at timestamps in the past, or
+ # configured to perform a strong read (where Spanner selects a timestamp
+ # such that the read is guaranteed to see the effects of all transactions
+ # that have committed before the start of the read). Snapshot read-only
+ # transactions do not need to be committed. Queries on change streams must
+ # be performed with the snapshot read-only transaction mode, specifying a
+ # strong read. Please see TransactionOptions.ReadOnly.strong for more
+ # details. 3. Partitioned DML. This type of transaction is used to execute a
+ # single Partitioned DML statement. Partitioned DML partitions the key space
+ # and runs the DML statement over each partition in parallel using separate,
+ # internal transactions that commit independently. Partitioned DML
+ # transactions do not need to be committed. For transactions that only read,
+ # snapshot read-only transactions provide simpler semantics and are almost
+ # always faster. In particular, read-only transactions do not take locks, so
+ # they do not conflict with read-write transactions. As a consequence of not
+ # taking locks, they also do not abort, so retry loops are not needed.
+ # Transactions may only read-write data in a single database. They may,
+ # however, read-write data in different tables within that database. Locking
+ # read-write transactions: Locking transactions may be used to atomically
+ # read-modify-write data anywhere in a database. This type of transaction is
+ # externally consistent. Clients should attempt to minimize the amount of
+ # time a transaction is active. Faster transactions commit with higher
+ # probability and cause less contention. Cloud Spanner attempts to keep read
+ # locks active as long as the transaction continues to do reads, and the
+ # transaction has not been terminated by Commit or Rollback. Long periods of
+ # inactivity at the client may cause Cloud Spanner to release a
+ # transaction's locks and abort it. Conceptually, a read-write transaction
+ # consists of zero or more reads or SQL statements followed by Commit. At
+ # any time before Commit, the client can send a Rollback request to abort
+ # the transaction. Semantics: Cloud Spanner can commit the transaction if
+ # all read locks it acquired are still valid at commit time, and it is able
+ # to acquire write locks for all writes. Cloud Spanner can abort the
+ # transaction for any reason. If a commit attempt returns `ABORTED`, Cloud
+ # Spanner guarantees that the transaction has not modified any user data in
+ # Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no
+ # guarantees about how long the transaction's locks were held for. It is an
+ # error to use Cloud Spanner locks for any sort of mutual exclusion other
+ # than between Cloud Spanner transactions themselves. Retrying aborted
+ # transactions: When a transaction aborts, the application can choose to
+ # retry the whole transaction again. To maximize the chances of successfully
+ # committing the retry, the client should execute the retry in the same
+ # session as the original attempt. The original session's lock priority
+ # increases with each consecutive abort, meaning that each attempt has a
+ # slightly better chance of success than the previous one. Under some
+ # circumstances (for example, many transactions attempting to modify the
+ # same row(s)), a transaction can abort many times in a short period before
+ # successfully committing. Thus, it is not a good idea to cap the number of
+ # retries a transaction can attempt; instead, it is better to limit the
+ # total amount of time spent retrying. Idle transactions: A transaction is
+ # considered idle if it has no outstanding reads or SQL queries and has not
+ # started a read or SQL query within the last 10 seconds. Idle transactions
+ # can be aborted by Cloud Spanner so that they don't hold on to locks
+ # indefinitely. If an idle transaction is aborted, the commit fails with
+ # error `ABORTED`. If this behavior is undesirable, periodically executing a
+ # simple SQL query in the transaction (for example, `SELECT 1`) prevents the
+ # transaction from becoming idle. Snapshot read-only transactions: Snapshot
+ # read-only transactions provide a simpler method than locking read-write
+ # transactions for doing several consistent reads. However, this type of
+ # transaction does not support writes. Snapshot transactions do not take
+ # locks. Instead, they work by choosing a Cloud Spanner timestamp, then
+ # executing all reads at that timestamp. Since they do not acquire locks,
+ # they do not block concurrent read-write transactions. Unlike locking
+ # read-write transactions, snapshot read-only transactions never abort. They
+ # can fail if the chosen read timestamp is garbage collected; however, the
+ # default garbage collection policy is generous enough that most
+ # applications do not need to worry about this in practice. Snapshot
+ # read-only transactions do not need to call Commit or Rollback (and in fact
+ # are not permitted to do so). To execute a snapshot transaction, the client
+ # specifies a timestamp bound, which tells Cloud Spanner how to choose a
+ # read timestamp. The types of timestamp bound are: - Strong (the default).
+ # - Bounded staleness. - Exact staleness. If the Cloud Spanner database to
+ # be read is geographically distributed, stale read-only transactions can
+ # execute more quickly than strong or read-write transactions, because they
+ # are able to execute far from the leader replica. Each type of timestamp
+ # bound is discussed in detail below. Strong: Strong reads are guaranteed to
+ # see the effects of all transactions that have committed before the start
+ # of the read. Furthermore, all rows yielded by a single read are consistent
+ # with each other: if any part of the read observes a transaction, all parts
+ # of the read see the transaction. Strong reads are not repeatable: two
+ # consecutive strong read-only transactions might return inconsistent
+ # results if there are concurrent writes. If consistency across reads is
+ # required, the reads should be executed within a transaction or at an exact
+ # read timestamp. Queries on change streams (see below for more details)
+ # must also specify the strong read timestamp bound. See
+ # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp
+ # bounds execute reads at a user-specified timestamp. Reads at a timestamp
+ # are guaranteed to see a consistent prefix of the global transaction
+ # history: they observe modifications done by all transactions with a commit
+ # timestamp less than or equal to the read timestamp, and observe none of
+ # the modifications done by transactions with a larger commit timestamp.
+ # They block until all conflicting transactions that may be assigned commit
+ # timestamps <= the read timestamp have finished. The timestamp can either
+ # be expressed as an absolute Cloud Spanner commit timestamp or a staleness
+ # relative to the current time. These modes do not require a "negotiation
+ # phase" to pick a timestamp. As a result, they execute slightly faster than
+ # the equivalent boundedly stale concurrency modes. On the other hand,
+ # boundedly stale reads usually return fresher results. See
+ # TransactionOptions.ReadOnly.read_timestamp and
+ # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
+ # staleness modes allow Cloud Spanner to pick the read timestamp, subject to
+ # a user-provided staleness bound. Cloud Spanner chooses the newest
+ # timestamp within the staleness bound that allows execution of the reads at
+ # the closest available replica without blocking. All rows yielded are
+ # consistent with each other: if any part of the read observes a
+ # transaction, all parts of the read see the transaction. Boundedly stale
+ # reads are not repeatable: two stale reads, even if they use the same
+ # staleness bound, can execute at different timestamps and thus return
+ # inconsistent results. Boundedly stale reads execute in two phases: the
+ # first phase negotiates a timestamp among all replicas needed to serve the
+ # read; in the second phase, reads are executed at the negotiated timestamp.
+ # As a result of the two-phase execution, bounded staleness reads are
+ # usually a little slower than comparable exact staleness reads. However,
+ # they are typically able to return fresher results, and are more likely to
+ # execute at the closest replica. Because the timestamp negotiation requires
+ # up-front knowledge of which rows will be read, it can only be used with
+ # single-use read-only transactions. See
+ # TransactionOptions.ReadOnly.max_staleness and
+ # TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and
+ # garbage collection: Cloud Spanner continuously garbage collects deleted
+ # and overwritten data in the background to reclaim storage space. This
+ # process is known as "version GC". By default, version GC reclaims versions
+ # after they are one hour old. Because of this, Cloud Spanner cannot perform
+ # reads at read timestamps more than one hour in the past. This restriction
+ # also applies to in-progress reads and/or SQL queries whose timestamp
+ # becomes too old while executing. Reads and SQL queries with too-old read
+ # timestamps fail with the error `FAILED_PRECONDITION`. You can configure
+ # and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as
+ # long as one week, which allows Cloud Spanner to perform reads up to one
+ # week in the past. Querying change streams: A change stream is a schema
+ # object that can be configured to watch data changes on the entire
+ # database, a set of tables, or a set of columns in a database. When a
+ # change stream is created, Spanner automatically defines a corresponding
+ # SQL table-valued function (TVF) that can be used to query the change
+ # records in the associated change stream using the ExecuteStreamingSql API.
+ # The name of the TVF for a change stream is generated from the name of the
+ # change stream: READ_. All queries on change stream TVFs must be executed
+ # using the ExecuteStreamingSql API with a single-use read-only transaction
+ # with a strong read-only timestamp_bound. The change stream TVF allows
+ # users to specify the start_timestamp and end_timestamp for the time range
+ # of interest. All change records within the retention period are accessible
+ # using the strong read-only timestamp_bound. All other TransactionOptions
+ # are invalid for change stream queries. In addition, if
+ # TransactionOptions.read_only.return_read_timestamp is set to true, a
+ # special value of 2^63 - 2 will be returned in the Transaction message that
+ # describes the transaction, instead of a valid read timestamp. This special
+ # value should be discarded and not used for any subsequent queries. Please
+ # see https://cloud.google.com/spanner/docs/change-streams for more details
+ # on how to query the change stream TVFs. Partitioned DML transactions:
+ # Partitioned DML transactions are used to execute DML statements with a
+ # different execution strategy that provides different, and often better,
+ # scalability properties for large, table-wide operations than DML in a
+ # ReadWrite transaction. Smaller scoped statements, such as an OLTP
+ # workload, should prefer using ReadWrite transactions. Partitioned DML
+ # partitions the keyspace and runs the DML statement on each partition in
+ # separate, internal transactions. These transactions commit automatically
+ # when complete, and run independently from one another. To reduce lock
+ # contention, this execution strategy only acquires read locks on rows that
+ # match the WHERE clause of the statement. Additionally, the smaller
+ # per-partition transactions hold locks for less time. That said,
+ # Partitioned DML is not a drop-in replacement for standard DML used in
+ # ReadWrite transactions. - The DML statement must be fully partitionable.
+ # Specifically, the statement must be expressible as the union of many
+ # statements which each access only a single row of the table. - The
+ # statement is not applied atomically to all rows of the table. Rather, the
+ # statement is applied atomically to partitions of the table, in independent
+ # transactions. Secondary index rows are updated atomically with the base
+ # table rows. - Partitioned DML does not guarantee exactly-once execution
+ # semantics against a partition. The statement will be applied at least once
+ # to each partition. It is strongly recommended that the DML statement be
+ # idempotent to avoid unexpected results. For instance, it is potentially
+ # dangerous to run a statement such as `UPDATE table SET column = column +
+ # 1` as it could be run multiple times against some rows. - The partitions
+ # are committed automatically: there is no support for Commit or Rollback.
+ # If the call returns an error, or if the client issuing the ExecuteSql call
+ # dies, it is possible that some rows had the statement executed on them
+ # successfully. It is also possible that the statement was never executed
+ # against other rows. - Partitioned DML transactions may only contain the
+ # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
+ # - If any error is encountered during the execution of the partitioned DML
+ # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+ # value that cannot be stored due to schema constraints), then the operation
+ # is stopped at that point and an error is returned. It is possible that at
+ # this point, some partitions have been committed (or even committed
+ # multiple times), and other partitions have not been run at all. Given the
+ # above, Partitioned DML is a good fit for large, database-wide operations
+ # that are idempotent, such as deleting old rows from a very large table.
  # Corresponds to the JSON property `singleUse`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use
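As a worked check of two specifics in the doc comment above: a change stream's TVF name is derived by prefixing the stream name with `READ_`, and when `return_read_timestamp` is true a change stream query returns the sentinel value 2^63 - 2 in place of a usable read timestamp. A minimal sketch follows; the `read_timestamp_usable?` helper, the example stream name, and the assumption that the sentinel is compared as a plain integer are all illustrative, not part of the generated client.

```ruby
# Sentinel returned in the Transaction message for change stream queries
# when return_read_timestamp is true; it must be discarded, not reused.
CHANGE_STREAM_SENTINEL = 2**63 - 2 # 9223372036854775806

# The TVF for a change stream is named by prefixing the stream name.
def change_stream_tvf(stream_name)
  "READ_#{stream_name}"
end

# Returns false for the sentinel, which must not be used in later queries.
def read_timestamp_usable?(value)
  value != CHANGE_STREAM_SENTINEL
end

puts change_stream_tvf("SongChanges")      # => READ_SongChanges
puts read_timestamp_usable?(2**63 - 2)     # => false
puts read_timestamp_usable?(1_700_000_000) # => true
```

Queries against the resulting TVF must still go through ExecuteStreamingSql with a single-use, strong read-only transaction, as the comment above specifies.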