google-apis-spanner_v1 0.41.0 → 0.43.0

This diff shows the changes between publicly released versions of the package, as they appear in the supported public registries. It is provided for informational purposes only.
@@ -105,8 +105,8 @@ module Google
 
  # Optional. A user-supplied tag associated with the split points. For example, "
  # initial_data_load", "special_event_1". Defaults to "CloudAddSplitPointsAPI" if
- # not specified. The length of the tag must not exceed 50 characters,else will
- # be trimmed. Only valid UTF8 characters are allowed.
+ # not specified. The length of the tag must not exceed 50 characters, or else it
+ # is trimmed. Only valid UTF8 characters are allowed.
 
  # Corresponds to the JSON property `initiator`
  # @return [String]
  attr_accessor :initiator
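
The new wording matches the documented server behavior: a tag longer than 50 characters is trimmed rather than rejected. A minimal client-side sketch of keeping the tag within the limit before sending it; `AddSplitPointsRequest` is an assumed class name here, since only the `initiator` property appears in this hunk:

```ruby
require "google/apis/spanner_v1"

# Documented limit from the comment above; longer tags are trimmed server-side.
MAX_TAG_LENGTH = 50

tag = "initial_data_load_2024_with_a_very_long_description"

# Truncate locally rather than relying on server-side trimming.
# AddSplitPointsRequest is assumed; it is not shown in this diff.
request = Google::Apis::SpannerV1::AddSplitPointsRequest.new(
  initiator: tag[0, MAX_TAG_LENGTH]
)
```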
@@ -797,198 +797,7 @@ module Google
  # @return [Google::Apis::SpannerV1::Mutation]
  attr_accessor :mutation_key
 
- # Transactions: Each session can have at most one active transaction at a time (
- # note that standalone reads and queries use a transaction internally and do
- # count towards the one transaction limit). After the active transaction is
- # completed, the session can immediately be re-used for the next transaction. It
- # is not necessary to create a new session for each transaction. Transaction
- # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
- # This type of transaction is the only way to write data into Cloud Spanner.
- # These transactions rely on pessimistic locking and, if necessary, two-phase
- # commit. Locking read-write transactions may abort, requiring the application
- # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
- # guaranteed consistency across several reads, but do not allow writes. Snapshot
- # read-only transactions can be configured to read at timestamps in the past, or
- # configured to perform a strong read (where Spanner will select a timestamp
- # such that the read is guaranteed to see the effects of all transactions that
- # have committed before the start of the read). Snapshot read-only transactions
- # do not need to be committed. Queries on change streams must be performed with
- # the snapshot read-only transaction mode, specifying a strong read. See
- # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
- # type of transaction is used to execute a single Partitioned DML statement.
- # Partitioned DML partitions the key space and runs the DML statement over each
- # partition in parallel using separate, internal transactions that commit
- # independently. Partitioned DML transactions do not need to be committed. For
- # transactions that only read, snapshot read-only transactions provide simpler
- # semantics and are almost always faster. In particular, read-only transactions
- # do not take locks, so they do not conflict with read-write transactions. As a
- # consequence of not taking locks, they also do not abort, so retry loops are
- # not needed. Transactions may only read-write data in a single database. They
- # may, however, read-write data in different tables within that database.
- # Locking read-write transactions: Locking transactions may be used to
- # atomically read-modify-write data anywhere in a database. This type of
- # transaction is externally consistent. Clients should attempt to minimize the
- # amount of time a transaction is active. Faster transactions commit with higher
- # probability and cause less contention. Cloud Spanner attempts to keep read
- # locks active as long as the transaction continues to do reads, and the
- # transaction has not been terminated by Commit or Rollback. Long periods of
- # inactivity at the client may cause Cloud Spanner to release a transaction's
- # locks and abort it. Conceptually, a read-write transaction consists of zero or
- # more reads or SQL statements followed by Commit. At any time before Commit,
- # the client can send a Rollback request to abort the transaction. Semantics:
- # Cloud Spanner can commit the transaction if all read locks it acquired are
- # still valid at commit time, and it is able to acquire write locks for all
- # writes. Cloud Spanner can abort the transaction for any reason. If a commit
- # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
- # not modified any user data in Cloud Spanner. Unless the transaction commits,
- # Cloud Spanner makes no guarantees about how long the transaction's locks were
- # held for. It is an error to use Cloud Spanner locks for any sort of mutual
- # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # aborted transactions: When a transaction aborts, the application can choose to
- # retry the whole transaction again. To maximize the chances of successfully
- # committing the retry, the client should execute the retry in the same session
- # as the original attempt. The original session's lock priority increases with
- # each consecutive abort, meaning that each attempt has a slightly better chance
- # of success than the previous. Note that the lock priority is preserved per
- # session (not per transaction). Lock priority is set by the first read or write
- # in the first attempt of a read-write transaction. If the application starts a
- # new session to retry the whole transaction, the transaction loses its original
- # lock priority. Moreover, the lock priority is only preserved if the
- # transaction fails with an `ABORTED` error. Under some circumstances (for
- # example, many transactions attempting to modify the same row(s)), a
- # transaction can abort many times in a short period before successfully
- # committing. Thus, it is not a good idea to cap the number of retries a
- # transaction can attempt; instead, it is better to limit the total amount of
- # time spent retrying. Idle transactions: A transaction is considered idle if it
- # has no outstanding reads or SQL queries and has not started a read or SQL
- # query within the last 10 seconds. Idle transactions can be aborted by Cloud
- # Spanner so that they don't hold on to locks indefinitely. If an idle
- # transaction is aborted, the commit will fail with error `ABORTED`. If this
- # behavior is undesirable, periodically executing a simple SQL query in the
- # transaction (for example, `SELECT 1`) prevents the transaction from becoming
- # idle. Snapshot read-only transactions: Snapshot read-only transactions
- # provides a simpler method than locking read-write transactions for doing
- # several consistent reads. However, this type of transaction does not support
- # writes. Snapshot transactions do not take locks. Instead, they work by
- # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
- # Since they do not acquire locks, they do not block concurrent read-write
- # transactions. Unlike locking read-write transactions, snapshot read-only
- # transactions never abort. They can fail if the chosen read timestamp is
- # garbage collected; however, the default garbage collection policy is generous
- # enough that most applications do not need to worry about this in practice.
- # Snapshot read-only transactions do not need to call Commit or Rollback (and in
- # fact are not permitted to do so). To execute a snapshot transaction, the
- # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
- # read timestamp. The types of timestamp bound are: - Strong (the default). -
- # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
- # is geographically distributed, stale read-only transactions can execute more
- # quickly than strong or read-write transactions, because they are able to
- # execute far from the leader replica. Each type of timestamp bound is discussed
- # in detail below. Strong: Strong reads are guaranteed to see the effects of all
- # transactions that have committed before the start of the read. Furthermore,
- # all rows yielded by a single read are consistent with each other -- if any
- # part of the read observes a transaction, all parts of the read see the
- # transaction. Strong reads are not repeatable: two consecutive strong read-only
- # transactions might return inconsistent results if there are concurrent writes.
- # If consistency across reads is required, the reads should be executed within a
- # transaction or at an exact read timestamp. Queries on change streams (see
- # below for more details) must also specify the strong read timestamp bound. See
- # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds
- # execute reads at a user-specified timestamp. Reads at a timestamp are
- # guaranteed to see a consistent prefix of the global transaction history: they
- # observe modifications done by all transactions with a commit timestamp less
- # than or equal to the read timestamp, and observe none of the modifications
- # done by transactions with a larger commit timestamp. They will block until all
- # conflicting transactions that may be assigned commit timestamps <= the read
- # timestamp have finished. The timestamp can either be expressed as an absolute
- # Cloud Spanner commit timestamp or a staleness relative to the current time.
- # These modes do not require a "negotiation phase" to pick a timestamp. As a
- # result, they execute slightly faster than the equivalent boundedly stale
- # concurrency modes. On the other hand, boundedly stale reads usually return
- # fresher results. See TransactionOptions.ReadOnly.read_timestamp and
- # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
- # staleness modes allow Cloud Spanner to pick the read timestamp, subject to a
- # user-provided staleness bound. Cloud Spanner chooses the newest timestamp
- # within the staleness bound that allows execution of the reads at the closest
- # available replica without blocking. All rows yielded are consistent with each
- # other -- if any part of the read observes a transaction, all parts of the read
- # see the transaction. Boundedly stale reads are not repeatable: two stale reads,
- # even if they use the same staleness bound, can execute at different
- # timestamps and thus return inconsistent results. Boundedly stale reads execute
- # in two phases: the first phase negotiates a timestamp among all replicas
- # needed to serve the read. In the second phase, reads are executed at the
- # negotiated timestamp. As a result of the two phase execution, bounded
- # staleness reads are usually a little slower than comparable exact staleness
- # reads. However, they are typically able to return fresher results, and are
- # more likely to execute at the closest replica. Because the timestamp
- # negotiation requires up-front knowledge of which rows will be read, it can
- # only be used with single-use read-only transactions. See TransactionOptions.
- # ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old
- # read timestamps and garbage collection: Cloud Spanner continuously garbage
- # collects deleted and overwritten data in the background to reclaim storage
- # space. This process is known as "version GC". By default, version GC reclaims
- # versions after they are one hour old. Because of this, Cloud Spanner cannot
- # perform reads at read timestamps more than one hour in the past. This
- # restriction also applies to in-progress reads and/or SQL queries whose
- # timestamp become too old while executing. Reads and SQL queries with too-old
- # read timestamps fail with the error `FAILED_PRECONDITION`. You can configure
- # and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long
- # as one week, which allows Cloud Spanner to perform reads up to one week in the
- # past. Querying change Streams: A Change Stream is a schema object that can be
- # configured to watch data changes on the entire database, a set of tables, or a
- # set of columns in a database. When a change stream is created, Spanner
- # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
- # be used to query the change records in the associated change stream using the
- # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
- # from the name of the change stream: READ_. All queries on change stream TVFs
- # must be executed using the ExecuteStreamingSql API with a single-use read-only
- # transaction with a strong read-only timestamp_bound. The change stream TVF
- # allows users to specify the start_timestamp and end_timestamp for the time
- # range of interest. All change records within the retention period is
- # accessible using the strong read-only timestamp_bound. All other
- # TransactionOptions are invalid for change stream queries. In addition, if
- # TransactionOptions.read_only.return_read_timestamp is set to true, a special
- # value of 2^63 - 2 will be returned in the Transaction message that describes
- # the transaction, instead of a valid read timestamp. This special value should
- # be discarded and not used for any subsequent queries. Please see https://cloud.
- # google.com/spanner/docs/change-streams for more details on how to query the
- # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
- # are used to execute DML statements with a different execution strategy that
- # provides different, and often better, scalability properties for large, table-
- # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
- # such as an OLTP workload, should prefer using ReadWrite transactions.
- # Partitioned DML partitions the keyspace and runs the DML statement on each
- # partition in separate, internal transactions. These transactions commit
- # automatically when complete, and run independently from one another. To reduce
- # lock contention, this execution strategy only acquires read locks on rows that
- # match the WHERE clause of the statement. Additionally, the smaller per-
- # partition transactions hold locks for less time. That said, Partitioned DML is
- # not a drop-in replacement for standard DML used in ReadWrite transactions. -
- # The DML statement must be fully-partitionable. Specifically, the statement
- # must be expressible as the union of many statements which each access only a
- # single row of the table. - The statement is not applied atomically to all rows
- # of the table. Rather, the statement is applied atomically to partitions of the
- # table, in independent transactions. Secondary index rows are updated
- # atomically with the base table rows. - Partitioned DML does not guarantee
- # exactly-once execution semantics against a partition. The statement is applied
- # at least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # Options to use for transactions.
  # Corresponds to the JSON property `options`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :options
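
This hunk swaps the long inline transaction primer for a one-line pointer; the full discussion of transaction modes now lives on `TransactionOptions` itself. A minimal sketch of building the `options` value; `ReadWrite` and `ReadOnly` are the option classes this gem generates for the TransactionOptions union, used here on the assumption that their names and properties are unchanged in 0.43.0:

```ruby
require "google/apis/spanner_v1"

# A locking read-write transaction, the only mode that can write data.
rw_options = Google::Apis::SpannerV1::TransactionOptions.new(
  read_write: Google::Apis::SpannerV1::ReadWrite.new
)

# A strong snapshot read-only transaction, the mode the removed text
# required for change stream queries.
ro_options = Google::Apis::SpannerV1::TransactionOptions.new(
  read_only: Google::Apis::SpannerV1::ReadOnly.new(
    strong: true,
    return_read_timestamp: true
  )
)
```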
@@ -1179,6 +988,79 @@ module Google
  end
  end
 
+ # Spanner Change Streams enable customers to capture and stream out changes to
+ # their Spanner databases in real-time. A change stream can be created with
+ # option partition_mode='IMMUTABLE_KEY_RANGE' or partition_mode='
+ # MUTABLE_KEY_RANGE'. This message is only used in Change Streams created with
+ # the option partition_mode='MUTABLE_KEY_RANGE'. Spanner automatically creates a
+ # special Table-Valued Function (TVF) along with each Change Streams. The
+ # function provides access to the change stream's records. The function is named
+ # READ_ (where is the name of the change stream), and it returns a table with
+ # only one column called ChangeRecord.
+ class ChangeStreamRecord
+ include Google::Apis::Core::Hashable
+
+ # A data change record contains a set of changes to a table with the same
+ # modification type (insert, update, or delete) committed at the same commit
+ # timestamp in one change stream partition for the same transaction. Multiple
+ # data change records can be returned for the same transaction across multiple
+ # change stream partitions.
+ # Corresponds to the JSON property `dataChangeRecord`
+ # @return [Google::Apis::SpannerV1::DataChangeRecord]
+ attr_accessor :data_change_record
+
+ # A heartbeat record is returned as a progress indicator, when there are no data
+ # changes or any other partition record types in the change stream partition.
+ # Corresponds to the JSON property `heartbeatRecord`
+ # @return [Google::Apis::SpannerV1::HeartbeatRecord]
+ attr_accessor :heartbeat_record
+
+ # A partition end record serves as a notification that the client should stop
+ # reading the partition. No further records are expected to be retrieved on it.
+ # Corresponds to the JSON property `partitionEndRecord`
+ # @return [Google::Apis::SpannerV1::PartitionEndRecord]
+ attr_accessor :partition_end_record
+
+ # A partition event record describes key range changes for a change stream
+ # partition. The changes to a row defined by its primary key can be captured in
+ # one change stream partition for a specific time range, and then be captured in
+ # a different change stream partition for a different time range. This movement
+ # of key ranges across change stream partitions is a reflection of activities,
+ # such as Spanner's dynamic splitting and load balancing, etc. Processing this
+ # event is needed if users want to guarantee processing of the changes for any
+ # key in timestamp order. If time ordered processing of changes for a primary
+ # key is not needed, this event can be ignored. To guarantee time ordered
+ # processing for each primary key, if the event describes move-ins, the reader
+ # of this partition needs to wait until the readers of the source partitions
+ # have processed all records with timestamps <= this PartitionEventRecord.
+ # commit_timestamp, before advancing beyond this PartitionEventRecord. If the
+ # event describes move-outs, the reader can notify the readers of the
+ # destination partitions that they can continue processing.
+ # Corresponds to the JSON property `partitionEventRecord`
+ # @return [Google::Apis::SpannerV1::PartitionEventRecord]
+ attr_accessor :partition_event_record
+
+ # A partition start record serves as a notification that the client should
+ # schedule the partitions to be queried. PartitionStartRecord returns
+ # information about one or more partitions.
+ # Corresponds to the JSON property `partitionStartRecord`
+ # @return [Google::Apis::SpannerV1::PartitionStartRecord]
+ attr_accessor :partition_start_record
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @data_change_record = args[:data_change_record] if args.key?(:data_change_record)
+ @heartbeat_record = args[:heartbeat_record] if args.key?(:heartbeat_record)
+ @partition_end_record = args[:partition_end_record] if args.key?(:partition_end_record)
+ @partition_event_record = args[:partition_event_record] if args.key?(:partition_event_record)
+ @partition_start_record = args[:partition_start_record] if args.key?(:partition_start_record)
+ end
+ end
+
  # Metadata associated with a parent-child relationship appearing in a PlanNode.
  class ChildLink
  include Google::Apis::Core::Hashable
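
ChangeStreamRecord acts as a union: in practice one of its five record fields is populated and the rest are nil. A small sketch of classifying a record using only the accessors added in this hunk:

```ruby
# Classify a Google::Apis::SpannerV1::ChangeStreamRecord by whichever of
# its five record fields is populated (the generated class leaves the
# others nil).
def record_kind(record)
  if record.data_change_record        then :data_change
  elsif record.heartbeat_record       then :heartbeat
  elsif record.partition_start_record then :partition_start
  elsif record.partition_end_record   then :partition_end
  elsif record.partition_event_record then :partition_event
  else :unknown
  end
end
```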
@@ -1218,6 +1100,46 @@ module Google
  end
  end
 
+ # Metadata for a column.
+ class ColumnMetadata
+ include Google::Apis::Core::Hashable
+
+ # Indicates whether the column is a primary key column.
+ # Corresponds to the JSON property `isPrimaryKey`
+ # @return [Boolean]
+ attr_accessor :is_primary_key
+ alias_method :is_primary_key?, :is_primary_key
+
+ # Name of the column.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Ordinal position of the column based on the original table definition in the
+ # schema starting with a value of 1.
+ # Corresponds to the JSON property `ordinalPosition`
+ # @return [Fixnum]
+ attr_accessor :ordinal_position
+
+ # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+ # table cell or returned from an SQL query.
+ # Corresponds to the JSON property `type`
+ # @return [Google::Apis::SpannerV1::Type]
+ attr_accessor :type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @is_primary_key = args[:is_primary_key] if args.key?(:is_primary_key)
+ @name = args[:name] if args.key?(:name)
+ @ordinal_position = args[:ordinal_position] if args.key?(:ordinal_position)
+ @type = args[:type] if args.key?(:type)
+ end
+ end
+
  # The request for Commit.
  class CommitRequest
  include Google::Apis::Core::Hashable
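
ColumnMetadata is referenced from the `column_metadata` array of DataChangeRecord, added later in this diff. A small sketch of one obvious use, extracting primary-key column names in schema order:

```ruby
# Given a DataChangeRecord's column_metadata array, return the primary-key
# column names ordered by their position in the original table schema.
def primary_key_columns(column_metadata)
  column_metadata
    .select(&:is_primary_key?)   # boolean alias generated above
    .sort_by(&:ordinal_position) # 1-based position in the schema
    .map(&:name)
end
```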
@@ -1256,198 +1178,7 @@ module Google
  attr_accessor :return_commit_stats
  alias_method :return_commit_stats?, :return_commit_stats
 
- # Transactions: Each session can have at most one active transaction at a time (
- # note that standalone reads and queries use a transaction internally and do
- # count towards the one transaction limit). After the active transaction is
- # completed, the session can immediately be re-used for the next transaction. It
- # is not necessary to create a new session for each transaction. Transaction
- # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
- # This type of transaction is the only way to write data into Cloud Spanner.
- # These transactions rely on pessimistic locking and, if necessary, two-phase
- # commit. Locking read-write transactions may abort, requiring the application
- # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
- # guaranteed consistency across several reads, but do not allow writes. Snapshot
- # read-only transactions can be configured to read at timestamps in the past, or
- # configured to perform a strong read (where Spanner will select a timestamp
- # such that the read is guaranteed to see the effects of all transactions that
- # have committed before the start of the read). Snapshot read-only transactions
- # do not need to be committed. Queries on change streams must be performed with
- # the snapshot read-only transaction mode, specifying a strong read. See
- # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
- # type of transaction is used to execute a single Partitioned DML statement.
- # Partitioned DML partitions the key space and runs the DML statement over each
- # partition in parallel using separate, internal transactions that commit
- # independently. Partitioned DML transactions do not need to be committed. For
- # transactions that only read, snapshot read-only transactions provide simpler
- # semantics and are almost always faster. In particular, read-only transactions
- # do not take locks, so they do not conflict with read-write transactions. As a
- # consequence of not taking locks, they also do not abort, so retry loops are
- # not needed. Transactions may only read-write data in a single database. They
- # may, however, read-write data in different tables within that database.
- # Locking read-write transactions: Locking transactions may be used to
- # atomically read-modify-write data anywhere in a database. This type of
- # transaction is externally consistent. Clients should attempt to minimize the
- # amount of time a transaction is active. Faster transactions commit with higher
- # probability and cause less contention. Cloud Spanner attempts to keep read
- # locks active as long as the transaction continues to do reads, and the
- # transaction has not been terminated by Commit or Rollback. Long periods of
- # inactivity at the client may cause Cloud Spanner to release a transaction's
- # locks and abort it. Conceptually, a read-write transaction consists of zero or
- # more reads or SQL statements followed by Commit. At any time before Commit,
- # the client can send a Rollback request to abort the transaction. Semantics:
- # Cloud Spanner can commit the transaction if all read locks it acquired are
- # still valid at commit time, and it is able to acquire write locks for all
- # writes. Cloud Spanner can abort the transaction for any reason. If a commit
- # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
- # not modified any user data in Cloud Spanner. Unless the transaction commits,
- # Cloud Spanner makes no guarantees about how long the transaction's locks were
- # held for. It is an error to use Cloud Spanner locks for any sort of mutual
- # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # aborted transactions: When a transaction aborts, the application can choose to
- # retry the whole transaction again. To maximize the chances of successfully
- # committing the retry, the client should execute the retry in the same session
- # as the original attempt. The original session's lock priority increases with
- # each consecutive abort, meaning that each attempt has a slightly better chance
- # of success than the previous. Note that the lock priority is preserved per
- # session (not per transaction). Lock priority is set by the first read or write
- # in the first attempt of a read-write transaction. If the application starts a
- # new session to retry the whole transaction, the transaction loses its original
- # lock priority. Moreover, the lock priority is only preserved if the
- # transaction fails with an `ABORTED` error. Under some circumstances (for
- # example, many transactions attempting to modify the same row(s)), a
- # transaction can abort many times in a short period before successfully
- # committing. Thus, it is not a good idea to cap the number of retries a
- # transaction can attempt; instead, it is better to limit the total amount of
- # time spent retrying. Idle transactions: A transaction is considered idle if it
- # has no outstanding reads or SQL queries and has not started a read or SQL
- # query within the last 10 seconds. Idle transactions can be aborted by Cloud
- # Spanner so that they don't hold on to locks indefinitely. If an idle
- # transaction is aborted, the commit will fail with error `ABORTED`. If this
- # behavior is undesirable, periodically executing a simple SQL query in the
- # transaction (for example, `SELECT 1`) prevents the transaction from becoming
- # idle. Snapshot read-only transactions: Snapshot read-only transactions
- # provides a simpler method than locking read-write transactions for doing
- # several consistent reads. However, this type of transaction does not support
- # writes. Snapshot transactions do not take locks. Instead, they work by
- # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
- # Since they do not acquire locks, they do not block concurrent read-write
- # transactions. Unlike locking read-write transactions, snapshot read-only
- # transactions never abort. They can fail if the chosen read timestamp is
- # garbage collected; however, the default garbage collection policy is generous
- # enough that most applications do not need to worry about this in practice.
- # Snapshot read-only transactions do not need to call Commit or Rollback (and in
- # fact are not permitted to do so). To execute a snapshot transaction, the
- # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
- # read timestamp. The types of timestamp bound are: - Strong (the default). -
- # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
- # is geographically distributed, stale read-only transactions can execute more
- # quickly than strong or read-write transactions, because they are able to
- # execute far from the leader replica. Each type of timestamp bound is discussed
- # in detail below. Strong: Strong reads are guaranteed to see the effects of all
- # transactions that have committed before the start of the read. Furthermore,
- # all rows yielded by a single read are consistent with each other -- if any
- # part of the read observes a transaction, all parts of the read see the
- # transaction. Strong reads are not repeatable: two consecutive strong read-only
- # transactions might return inconsistent results if there are concurrent writes.
- # If consistency across reads is required, the reads should be executed within a
- # transaction or at an exact read timestamp. Queries on change streams (see
- # below for more details) must also specify the strong read timestamp bound. See
- # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds
- # execute reads at a user-specified timestamp. Reads at a timestamp are
- # guaranteed to see a consistent prefix of the global transaction history: they
- # observe modifications done by all transactions with a commit timestamp less
- # than or equal to the read timestamp, and observe none of the modifications
- # done by transactions with a larger commit timestamp. They will block until all
- # conflicting transactions that may be assigned commit timestamps <= the read
- # timestamp have finished. The timestamp can either be expressed as an absolute
- # Cloud Spanner commit timestamp or a staleness relative to the current time.
- # These modes do not require a "negotiation phase" to pick a timestamp. As a
- # result, they execute slightly faster than the equivalent boundedly stale
- # concurrency modes. On the other hand, boundedly stale reads usually return
- # fresher results. See TransactionOptions.ReadOnly.read_timestamp and
- # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
- # staleness modes allow Cloud Spanner to pick the read timestamp, subject to a
- # user-provided staleness bound. Cloud Spanner chooses the newest timestamp
- # within the staleness bound that allows execution of the reads at the closest
- # available replica without blocking. All rows yielded are consistent with each
- # other -- if any part of the read observes a transaction, all parts of the read
- # see the transaction. Boundedly stale reads are not repeatable: two stale reads,
- # even if they use the same staleness bound, can execute at different
- # timestamps and thus return inconsistent results. Boundedly stale reads execute
- # in two phases: the first phase negotiates a timestamp among all replicas
- # needed to serve the read. In the second phase, reads are executed at the
- # negotiated timestamp. As a result of the two phase execution, bounded
- # staleness reads are usually a little slower than comparable exact staleness
- # reads. However, they are typically able to return fresher results, and are
- # more likely to execute at the closest replica. Because the timestamp
- # negotiation requires up-front knowledge of which rows will be read, it can
- # only be used with single-use read-only transactions. See TransactionOptions.
- # ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old
- # read timestamps and garbage collection: Cloud Spanner continuously garbage
- # collects deleted and overwritten data in the background to reclaim storage
- # space. This process is known as "version GC". By default, version GC reclaims
- # versions after they are one hour old. Because of this, Cloud Spanner cannot
- # perform reads at read timestamps more than one hour in the past. This
- # restriction also applies to in-progress reads and/or SQL queries whose
- # timestamp become too old while executing. Reads and SQL queries with too-old
- # read timestamps fail with the error `FAILED_PRECONDITION`. You can configure
- # and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long
- # as one week, which allows Cloud Spanner to perform reads up to one week in the
- # past. Querying change Streams: A Change Stream is a schema object that can be
- # configured to watch data changes on the entire database, a set of tables, or a
- # set of columns in a database. When a change stream is created, Spanner
- # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
- # be used to query the change records in the associated change stream using the
- # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
- # from the name of the change stream: READ_. All queries on change stream TVFs
- # must be executed using the ExecuteStreamingSql API with a single-use read-only
- # transaction with a strong read-only timestamp_bound. The change stream TVF
- # allows users to specify the start_timestamp and end_timestamp for the time
- # range of interest. All change records within the retention period is
- # accessible using the strong read-only timestamp_bound. All other
- # TransactionOptions are invalid for change stream queries. In addition, if
- # TransactionOptions.read_only.return_read_timestamp is set to true, a special
- # value of 2^63 - 2 will be returned in the Transaction message that describes
- # the transaction, instead of a valid read timestamp. This special value should
- # be discarded and not used for any subsequent queries. Please see https://cloud.
- # google.com/spanner/docs/change-streams for more details on how to query the
- # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
- # are used to execute DML statements with a different execution strategy that
- # provides different, and often better, scalability properties for large, table-
- # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
- # such as an OLTP workload, should prefer using ReadWrite transactions.
- # Partitioned DML partitions the keyspace and runs the DML statement on each
- # partition in separate, internal transactions. These transactions commit
- # automatically when complete, and run independently from one another. To reduce
- # lock contention, this execution strategy only acquires read locks on rows that
- # match the WHERE clause of the statement. Additionally, the smaller per-
- # partition transactions hold locks for less time. That said, Partitioned DML is
- # not a drop-in replacement for standard DML used in ReadWrite transactions. -
- # The DML statement must be fully-partitionable. Specifically, the statement
- # must be expressible as the union of many statements which each access only a
- # single row of the table. - The statement is not applied atomically to all rows
- # of the table. Rather, the statement is applied atomically to partitions of the
- # table, in independent transactions. Secondary index rows are updated
- # atomically with the base table rows. - Partitioned DML does not guarantee
- # exactly-once execution semantics against a partition. The statement is applied
- # at least once to each partition. It is strongly recommended that the DML
- # statement should be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that cannot be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is good fit for large, database-wide, operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # Options to use for transactions.
  # Corresponds to the JSON property `singleUseTransaction`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use_transaction
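
The same primer is removed from CommitRequest's `single_use_transaction` documentation. A hedged sketch of a blind write committed in a single-use transaction; the `mutations` property and the `Write` class are assumed from the underlying Commit API, as they fall outside this hunk:

```ruby
require "google/apis/spanner_v1"

# Commit a write in a single-use transaction: Spanner begins and commits
# the transaction within one Commit RPC, saving a BeginTransaction round
# trip. Table and column names are illustrative.
request = Google::Apis::SpannerV1::CommitRequest.new(
  single_use_transaction: Google::Apis::SpannerV1::TransactionOptions.new(
    read_write: Google::Apis::SpannerV1::ReadWrite.new
  ),
  return_commit_stats: true, # also populate CommitResponse.commit_stats
  mutations: [               # assumed property, not shown in this diff
    Google::Apis::SpannerV1::Mutation.new(
      insert: Google::Apis::SpannerV1::Write.new(
        table: "users",
        columns: ["id", "name"],
        values: [["1", "Ann"]]
      )
    )
  ]
)
```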
@@ -2152,6 +1883,125 @@ module Google
  end
  end
 
+ # A data change record contains a set of changes to a table with the same
+ # modification type (insert, update, or delete) committed at the same commit
+ # timestamp in one change stream partition for the same transaction. Multiple
+ # data change records can be returned for the same transaction across multiple
+ # change stream partitions.
+ class DataChangeRecord
+ include Google::Apis::Core::Hashable
+
+ # Provides metadata describing the columns associated with the mods listed below.
+ # Corresponds to the JSON property `columnMetadata`
+ # @return [Array<Google::Apis::SpannerV1::ColumnMetadata>]
+ attr_accessor :column_metadata
+
+ # Indicates the timestamp in which the change was committed. DataChangeRecord.
+ # commit_timestamps, PartitionStartRecord.start_timestamps, PartitionEventRecord.
+ # commit_timestamps, and PartitionEndRecord.end_timestamps can have the same
+ # value in the same partition.
+ # Corresponds to the JSON property `commitTimestamp`
+ # @return [String]
+ attr_accessor :commit_timestamp
+
+ # Indicates whether this is the last record for a transaction in the current
+ # partition. Clients can use this field to determine when all records for a
+ # transaction in the current partition have been received.
+ # Corresponds to the JSON property `isLastRecordInTransactionInPartition`
+ # @return [Boolean]
+ attr_accessor :is_last_record_in_transaction_in_partition
+ alias_method :is_last_record_in_transaction_in_partition?, :is_last_record_in_transaction_in_partition
+
+ # Indicates whether the transaction is a system transaction. System transactions
+ # include those issued by time-to-live (TTL), column backfill, etc.
+ # Corresponds to the JSON property `isSystemTransaction`
+ # @return [Boolean]
+ attr_accessor :is_system_transaction
+ alias_method :is_system_transaction?, :is_system_transaction
+
+ # Describes the type of change.
+ # Corresponds to the JSON property `modType`
+ # @return [String]
+ attr_accessor :mod_type
+
+ # Describes the changes that were made.
+ # Corresponds to the JSON property `mods`
+ # @return [Array<Google::Apis::SpannerV1::Mod>]
+ attr_accessor :mods
+
+ # Indicates the number of partitions that return data change records for this
+ # transaction. This value can be helpful in assembling all records associated
+ # with a particular transaction.
+ # Corresponds to the JSON property `numberOfPartitionsInTransaction`
+ # @return [Fixnum]
+ attr_accessor :number_of_partitions_in_transaction
+
+ # Indicates the number of data change records that are part of this transaction
+ # across all change stream partitions. This value can be used to assemble all
+ # the records associated with a particular transaction.
+ # Corresponds to the JSON property `numberOfRecordsInTransaction`
+ # @return [Fixnum]
+ attr_accessor :number_of_records_in_transaction
+
+ # Record sequence numbers are unique and monotonically increasing (but not
+ # necessarily contiguous) for a specific timestamp across record types in the
+ # same partition. To guarantee ordered processing, the reader should process
+ # records (of potentially different types) in record_sequence order for a
+ # specific timestamp in the same partition. The record sequence number ordering
+ # across partitions is only meaningful in the context of a specific transaction.
+ # Record sequence numbers are unique across partitions for a specific
+ # transaction. Sort the DataChangeRecords for the same server_transaction_id by
+ # record_sequence to reconstruct the ordering of the changes within the
+ # transaction.
+ # Corresponds to the JSON property `recordSequence`
+ # @return [String]
+ attr_accessor :record_sequence
+
+ # Provides a globally unique string that represents the transaction in which the
+ # change was committed. Multiple transactions can have the same commit timestamp,
+ # but each transaction has a unique server_transaction_id.
+ # Corresponds to the JSON property `serverTransactionId`
+ # @return [String]
+ attr_accessor :server_transaction_id
+
+ # Name of the table affected by the change.
+ # Corresponds to the JSON property `table`
+ # @return [String]
+ attr_accessor :table
+
+ # Indicates the transaction tag associated with this transaction.
+ # Corresponds to the JSON property `transactionTag`
+ # @return [String]
+ attr_accessor :transaction_tag
+
+ # Describes the value capture type that was specified in the change stream
+ # configuration when this change was captured.
+ # Corresponds to the JSON property `valueCaptureType`
+ # @return [String]
+ attr_accessor :value_capture_type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @column_metadata = args[:column_metadata] if args.key?(:column_metadata)
+ @commit_timestamp = args[:commit_timestamp] if args.key?(:commit_timestamp)
+ @is_last_record_in_transaction_in_partition = args[:is_last_record_in_transaction_in_partition] if args.key?(:is_last_record_in_transaction_in_partition)
+ @is_system_transaction = args[:is_system_transaction] if args.key?(:is_system_transaction)
+ @mod_type = args[:mod_type] if args.key?(:mod_type)
+ @mods = args[:mods] if args.key?(:mods)
+ @number_of_partitions_in_transaction = args[:number_of_partitions_in_transaction] if args.key?(:number_of_partitions_in_transaction)
+ @number_of_records_in_transaction = args[:number_of_records_in_transaction] if args.key?(:number_of_records_in_transaction)
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
+ @server_transaction_id = args[:server_transaction_id] if args.key?(:server_transaction_id)
+ @table = args[:table] if args.key?(:table)
+ @transaction_tag = args[:transaction_tag] if args.key?(:transaction_tag)
+ @value_capture_type = args[:value_capture_type] if args.key?(:value_capture_type)
+ end
+ end
+
  # A Cloud Spanner database.
  class Database
  include Google::Apis::Core::Hashable
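
The `record_sequence` documentation above spells out the reassembly rule: group records by `server_transaction_id`, then sort by `record_sequence`. A minimal sketch of that rule, assuming (per the docs) that record sequences are numbers serialized as strings:

```ruby
# Reassemble the per-transaction ordering of DataChangeRecords collected
# across one or more change stream partitions.
def changes_by_transaction(data_change_records)
  data_change_records
    .group_by(&:server_transaction_id)
    .transform_values do |records|
      # record_sequence is a numeric string per the field docs above.
      ordered = records.sort_by { |r| r.record_sequence.to_i }
      # number_of_records_in_transaction signals when the set is complete.
      complete = ordered.size == ordered.first.number_of_records_in_transaction
      { records: ordered, complete: complete }
    end
end
```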
@@ -2267,6 +2117,33 @@ module Google
  end
  end
 
+ # The configuration for each database in the target instance configuration.
+ class DatabaseMoveConfig
+ include Google::Apis::Core::Hashable
+
+ # Required. The unique identifier of the database resource in the Instance. For
+ # example if the database uri is projects/foo/instances/bar/databases/baz, the
+ # id to supply here is baz.
+ # Corresponds to the JSON property `databaseId`
+ # @return [String]
+ attr_accessor :database_id
+
+ # Encryption configuration for a Cloud Spanner database.
+ # Corresponds to the JSON property `encryptionConfig`
+ # @return [Google::Apis::SpannerV1::InstanceEncryptionConfig]
+ attr_accessor :encryption_config
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @database_id = args[:database_id] if args.key?(:database_id)
+ @encryption_config = args[:encryption_config] if args.key?(:encryption_config)
+ end
+ end
+
  # A Cloud Spanner database role.
  class DatabaseRole
  include Google::Apis::Core::Hashable
@@ -3053,6 +2930,29 @@ module Google
  end
  end
 
+ # A heartbeat record is returned as a progress indicator, when there are no data
+ # changes or any other partition record types in the change stream partition.
+ class HeartbeatRecord
+ include Google::Apis::Core::Hashable
+
+ # Indicates the timestamp at which the query has returned all the records in the
+ # change stream partition with timestamp <= heartbeat timestamp. The heartbeat
+ # timestamp will not be the same as the timestamps of other record types in the
+ # same partition.
+ # Corresponds to the JSON property `timestamp`
+ # @return [String]
+ attr_accessor :timestamp
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @timestamp = args[:timestamp] if args.key?(:timestamp)
+ end
+ end
+
  # An `IncludeReplicas` contains a repeated set of `ReplicaSelection` which
  # indicates the order in which replicas should be considered.
  class IncludeReplicas
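
Heartbeats matter because a quiet partition still advances time: per the field docs, everything with a timestamp at or before the heartbeat has already been returned. A small sketch of using that guarantee as a per-partition low watermark, assuming the timestamp string is RFC 3339 as is usual for this API:

```ruby
require "time"

# Track a per-partition low watermark from heartbeat records: all records
# at or before this timestamp have been delivered for the partition.
watermarks = Hash.new { |h, k| h[k] = Time.at(0) }

def advance_watermark(watermarks, partition_token, heartbeat_record)
  ts = Time.parse(heartbeat_record.timestamp) # assumed RFC 3339 string
  watermarks[partition_token] = ts if ts > watermarks[partition_token]
end
```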
@@ -3468,6 +3368,48 @@ module Google
  end
  end
 
+ # Encryption configuration for a Cloud Spanner database.
+ class InstanceEncryptionConfig
+ include Google::Apis::Core::Hashable
+
+ # Optional. This field is maintained for backwards compatibility. For new
+ # callers, we recommend using `kms_key_names` to specify the KMS key. `
+ # kms_key_name` should only be used if the location of the KMS key matches the
+ # database instance’s configuration (location) exactly. E.g. The KMS location is
+ # in us-central1 or nam3 and the database instance is also in us-central1 or
+ # nam3. The Cloud KMS key to be used for encrypting and decrypting the database.
+ # Values are of the form `projects//locations//keyRings//cryptoKeys/`.
+ # Corresponds to the JSON property `kmsKeyName`
+ # @return [String]
+ attr_accessor :kms_key_name
+
+ # Optional. Specifies the KMS configuration for one or more keys used to encrypt
+ # the database. Values are of the form `projects//locations//keyRings//
+ # cryptoKeys/`. The keys referenced by `kms_key_names` must fully cover all
+ # regions of the database's instance configuration. Some examples: * For
+ # regional (single-region) instance configurations, specify a regional location
+ # KMS key. * For multi-region instance configurations of type `GOOGLE_MANAGED`,
+ # either specify a multi-region location KMS key or multiple regional location
+ # KMS keys that cover all regions in the instance configuration. * For an
+ # instance configuration of type `USER_MANAGED`, specify only regional location
+ # KMS keys to cover each region in the instance configuration. Multi-region
+ # location KMS keys aren't supported for `USER_MANAGED` type instance
+ # configurations.
+ # Corresponds to the JSON property `kmsKeyNames`
+ # @return [Array<String>]
+ attr_accessor :kms_key_names
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @kms_key_name = args[:kms_key_name] if args.key?(:kms_key_name)
+ @kms_key_names = args[:kms_key_names] if args.key?(:kms_key_names)
+ end
+ end
+
  # Encapsulates progress related information for a Cloud Spanner long running
  # instance operations.
  class InstanceOperationProgress
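
A short sketch of the two documented shapes: `kms_key_names` covering every region of the instance configuration (the recommended form) versus the legacy single-key `kms_key_name`. The project and key names are illustrative placeholders:

```ruby
require "google/apis/spanner_v1"

# Recommended: kms_key_names covering all regions of the instance config.
multi_region = Google::Apis::SpannerV1::InstanceEncryptionConfig.new(
  kms_key_names: [
    "projects/my-project/locations/us-central1/keyRings/ring/cryptoKeys/key1",
    "projects/my-project/locations/us-east1/keyRings/ring/cryptoKeys/key2"
  ]
)

# Backwards-compatible: a single key whose location exactly matches the
# instance configuration (e.g. both in us-central1).
single_region = Google::Apis::SpannerV1::InstanceEncryptionConfig.new(
  kms_key_name: "projects/my-project/locations/us-central1/keyRings/ring/cryptoKeys/key1"
)
```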
@@ -4430,6 +4372,92 @@ module Google
  end
  end
 
+ # A mod describes all data changes in a watched table row.
+ class Mod
+ include Google::Apis::Core::Hashable
+
+ # Returns the value of the primary key of the modified row.
+ # Corresponds to the JSON property `keys`
+ # @return [Array<Google::Apis::SpannerV1::ModValue>]
+ attr_accessor :keys
+
+ # Returns the new values after the change for the modified columns. Always empty
+ # for DELETE.
+ # Corresponds to the JSON property `newValues`
+ # @return [Array<Google::Apis::SpannerV1::ModValue>]
+ attr_accessor :new_values
+
+ # Returns the old values before the change for the modified columns. Always
+ # empty for INSERT, or if old values are not being captured specified by
+ # value_capture_type.
+ # Corresponds to the JSON property `oldValues`
+ # @return [Array<Google::Apis::SpannerV1::ModValue>]
+ attr_accessor :old_values
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @keys = args[:keys] if args.key?(:keys)
+ @new_values = args[:new_values] if args.key?(:new_values)
+ @old_values = args[:old_values] if args.key?(:old_values)
+ end
+ end
+
+ # Returns the value and associated metadata for a particular field of the Mod.
+ class ModValue
+ include Google::Apis::Core::Hashable
+
+ # Index within the repeated column_metadata field, to obtain the column metadata
+ # for the column that was modified.
+ # Corresponds to the JSON property `columnMetadataIndex`
+ # @return [Fixnum]
+ attr_accessor :column_metadata_index
+
+ # The value of the column.
+ # Corresponds to the JSON property `value`
+ # @return [Object]
+ attr_accessor :value
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @column_metadata_index = args[:column_metadata_index] if args.key?(:column_metadata_index)
+ @value = args[:value] if args.key?(:value)
+ end
+ end
+
+ # Describes move-in of the key ranges into the change stream partition
+ # identified by partition_token. To maintain processing the changes for a
+ # particular key in timestamp order, the query processing the change stream
+ # partition identified by partition_token should not advance beyond the
+ # partition event record commit timestamp until the queries processing the
+ # source change stream partitions have processed all change stream records with
+ # timestamps <= the partition event record commit timestamp.
+ class MoveInEvent
+ include Google::Apis::Core::Hashable
+
+ # An unique partition identifier describing the source change stream partition
+ # that recorded changes for the key range that is moving into this partition.
+ # Corresponds to the JSON property `sourcePartitionToken`
+ # @return [String]
+ attr_accessor :source_partition_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @source_partition_token = args[:source_partition_token] if args.key?(:source_partition_token)
+ end
+ end
+
  # The request for MoveInstance.
  class MoveInstanceRequest
  include Google::Apis::Core::Hashable
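
ModValue carries an index into DataChangeRecord's `column_metadata` array rather than repeating column names. A sketch of resolving a Mod's new values back to column-name/value pairs, combining the classes added in this diff:

```ruby
# Resolve the new values of a Mod into a {column_name => value} hash by
# following each ModValue's column_metadata_index into the record-level
# column_metadata array (see DataChangeRecord earlier in this diff).
def new_values_by_column(data_change_record, mod)
  (mod.new_values || []).each_with_object({}) do |mod_value, row|
    column = data_change_record.column_metadata[mod_value.column_metadata_index]
    row[column.name] = mod_value.value
  end
end
```

The `|| []` guard covers DELETE mods, whose `new_values` are documented above as always empty.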
@@ -4440,6 +4468,12 @@ module Google
  # @return [String]
  attr_accessor :target_config
 
+ # Optional. The configuration for each database in the target instance
+ # configuration.
+ # Corresponds to the JSON property `targetDatabaseMoveConfigs`
+ # @return [Array<Google::Apis::SpannerV1::DatabaseMoveConfig>]
+ attr_accessor :target_database_move_configs
+
  def initialize(**args)
  update!(**args)
  end
@@ -4447,6 +4481,33 @@ module Google
4447
4481
  # Update properties of this object
4448
4482
  def update!(**args)
4449
4483
  @target_config = args[:target_config] if args.key?(:target_config)
4484
+ @target_database_move_configs = args[:target_database_move_configs] if args.key?(:target_database_move_configs)
4485
+ end
4486
+ end
4487
+
4488
+ # Describes move-out of the key ranges out of the change stream partition
4489
+ # identified by partition_token. To maintain processing the changes for a
4490
+ # particular key in timestamp order, the query processing the MoveOutEvent in
4491
+ # the partition identified by partition_token should inform the queries
4492
+ # processing the destination partitions that they can unblock and proceed
4493
+ # processing records past the commit_timestamp.
4494
+ class MoveOutEvent
4495
+ include Google::Apis::Core::Hashable
4496
+
4497
+ # A unique partition identifier describing the destination change stream
4498
+ # partition that will record changes for the key range that is moving out of
4499
+ # this partition.
4500
+ # Corresponds to the JSON property `destinationPartitionToken`
4501
+ # @return [String]
4502
+ attr_accessor :destination_partition_token
4503
+
4504
+ def initialize(**args)
4505
+ update!(**args)
4506
+ end
4507
+
4508
+ # Update properties of this object
4509
+ def update!(**args)
4510
+ @destination_partition_token = args[:destination_partition_token] if args.key?(:destination_partition_token)
4450
4511
  end
4451
4512
  end
4452
4513
 
@@ -4795,6 +4856,137 @@ module Google
4795
4856
  end
4796
4857
  end
4797
4858
 
4859
+ # A partition end record serves as a notification that the client should stop
4860
+ # reading the partition. No further records are expected to be retrieved from it.
4861
+ class PartitionEndRecord
4862
+ include Google::Apis::Core::Hashable
4863
+
4864
+ # End timestamp at which the change stream partition is terminated. All changes
4865
+ # generated by this partition will have timestamps <= end_timestamp.
4866
+ # DataChangeRecord.commit_timestamps, PartitionStartRecord.start_timestamps,
4867
+ # PartitionEventRecord.commit_timestamps, and PartitionEndRecord.end_timestamps
4868
+ # can have the same value in the same partition. PartitionEndRecord is the last
4869
+ # record returned for a partition.
4870
+ # Corresponds to the JSON property `endTimestamp`
4871
+ # @return [String]
4872
+ attr_accessor :end_timestamp
4873
+
4874
+ # Unique partition identifier describing the terminated change stream partition.
4875
+ # partition_token is equal to the partition token of the change stream partition
4876
+ # currently queried to return this PartitionEndRecord.
4877
+ # Corresponds to the JSON property `partitionToken`
4878
+ # @return [String]
4879
+ attr_accessor :partition_token
4880
+
4881
+ # Record sequence numbers are unique and monotonically increasing (but not
4882
+ # necessarily contiguous) for a specific timestamp across record types in the
4883
+ # same partition. To guarantee ordered processing, the reader should process
4884
+ # records (of potentially different types) in record_sequence order for a
4885
+ # specific timestamp in the same partition.
4886
+ # Corresponds to the JSON property `recordSequence`
4887
+ # @return [String]
4888
+ attr_accessor :record_sequence
4889
+
4890
+ def initialize(**args)
4891
+ update!(**args)
4892
+ end
4893
+
4894
+ # Update properties of this object
4895
+ def update!(**args)
4896
+ @end_timestamp = args[:end_timestamp] if args.key?(:end_timestamp)
4897
+ @partition_token = args[:partition_token] if args.key?(:partition_token)
4898
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
4899
+ end
4900
+ end
4901
+
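The `record_sequence` contract above (unique and monotonically increasing per timestamp within a partition) suggests a reader loop like the following sketch. The `timestamp_of` helper is hypothetical, only covers end/event records, and is not part of the gem.

```ruby
require "google/apis/spanner_v1"

# Hypothetical helper: pick the ordering timestamp for a record. Only
# PartitionEndRecord (end_timestamp) and PartitionEventRecord
# (commit_timestamp) are handled in this sketch.
def timestamp_of(record)
  record.respond_to?(:end_timestamp) ? record.end_timestamp : record.commit_timestamp
end

# Process records of potentially different types in record_sequence order
# for each timestamp; a PartitionEndRecord is the last record returned.
def process_in_order(records)
  records.sort_by { |r| [timestamp_of(r), r.record_sequence] }.each do |r|
    if r.is_a?(Google::Apis::SpannerV1::PartitionEndRecord)
      puts "partition #{r.partition_token} ended at #{r.end_timestamp}"
      return
    end
    puts "record #{r.record_sequence} at #{timestamp_of(r)}"
  end
end
```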
4902
+ # A partition event record describes key range changes for a change stream
4903
+ # partition. The changes to a row defined by its primary key can be captured in
4904
+ # one change stream partition for a specific time range, and then be captured in
4905
+ # a different change stream partition for a different time range. This movement
4906
+ # of key ranges across change stream partitions is a reflection of activities,
4907
+ # such as Spanner's dynamic splitting and load balancing, etc. Processing this
4908
+ # event is needed if users want to guarantee processing of the changes for any
4909
+ # key in timestamp order. If time ordered processing of changes for a primary
4910
+ # key is not needed, this event can be ignored. To guarantee time ordered
4911
+ # processing for each primary key, if the event describes move-ins, the reader
4912
+ # of this partition needs to wait until the readers of the source partitions
4913
+ # have processed all records with timestamps <= this PartitionEventRecord.
4914
+ # commit_timestamp, before advancing beyond this PartitionEventRecord. If the
4915
+ # event describes move-outs, the reader can notify the readers of the
4916
+ # destination partitions that they can continue processing.
4917
+ class PartitionEventRecord
4918
+ include Google::Apis::Core::Hashable
4919
+
4920
+ # Indicates the commit timestamp at which the key range change occurred.
4921
+ # DataChangeRecord.commit_timestamps, PartitionStartRecord.start_timestamps,
4922
+ # PartitionEventRecord.commit_timestamps, and PartitionEndRecord.end_timestamps
4923
+ # can have the same value in the same partition.
4924
+ # Corresponds to the JSON property `commitTimestamp`
4925
+ # @return [String]
4926
+ attr_accessor :commit_timestamp
4927
+
4928
+ # Set when one or more key ranges are moved into the change stream partition
4929
+ # identified by partition_token. Example: Two key ranges are moved into
4930
+ # partition (P1) from partition (P2) and partition (P3) in a single transaction
4931
+ # at timestamp T. The PartitionEventRecord returned in P1 will reflect the move
4932
+ # as: PartitionEventRecord ` commit_timestamp: T partition_token: "P1"
4933
+ # move_in_events ` source_partition_token: "P2" ` move_in_events `
4934
+ # source_partition_token: "P3" ` ` The PartitionEventRecord returned in P2 will
4935
+ # reflect the move as: PartitionEventRecord ` commit_timestamp: T
4936
+ # partition_token: "P2" move_out_events ` destination_partition_token: "P1" ` `
4937
+ # The PartitionEventRecord returned in P3 will reflect the move as:
4938
+ # PartitionEventRecord ` commit_timestamp: T partition_token: "P3"
4939
+ # move_out_events ` destination_partition_token: "P1" ` `
4940
+ # Corresponds to the JSON property `moveInEvents`
4941
+ # @return [Array<Google::Apis::SpannerV1::MoveInEvent>]
4942
+ attr_accessor :move_in_events
4943
+
4944
+ # Set when one or more key ranges are moved out of the change stream partition
4945
+ # identified by partition_token. Example: Two key ranges are moved out of
4946
+ # partition (P1) to partition (P2) and partition (P3) in a single transaction at
4947
+ # timestamp T. The PartitionEventRecord returned in P1 will reflect the move as:
4948
+ # PartitionEventRecord ` commit_timestamp: T partition_token: "P1"
4949
+ # move_out_events ` destination_partition_token: "P2" ` move_out_events `
4950
+ # destination_partition_token: "P3" ` ` The PartitionEventRecord returned in P2
4951
+ # will reflect the move as: PartitionEventRecord ` commit_timestamp: T
4952
+ # partition_token: "P2" move_in_events ` source_partition_token: "P1" ` ` The
4953
+ # PartitionEventRecord returned in P3 will reflect the move as:
4954
+ # PartitionEventRecord ` commit_timestamp: T partition_token: "P3"
4955
+ # move_in_events ` source_partition_token: "P1" ` `
4956
+ # Corresponds to the JSON property `moveOutEvents`
4957
+ # @return [Array<Google::Apis::SpannerV1::MoveOutEvent>]
4958
+ attr_accessor :move_out_events
4959
+
4960
+ # Unique partition identifier describing the partition this event occurred on.
4961
+ # partition_token is equal to the partition token of the change stream partition
4962
+ # currently queried to return this PartitionEventRecord.
4963
+ # Corresponds to the JSON property `partitionToken`
4964
+ # @return [String]
4965
+ attr_accessor :partition_token
4966
+
4967
+ # Record sequence numbers are unique and monotonically increasing (but not
4968
+ # necessarily contiguous) for a specific timestamp across record types in the
4969
+ # same partition. To guarantee ordered processing, the reader should process
4970
+ # records (of potentially different types) in record_sequence order for a
4971
+ # specific timestamp in the same partition.
4972
+ # Corresponds to the JSON property `recordSequence`
4973
+ # @return [String]
4974
+ attr_accessor :record_sequence
4975
+
4976
+ def initialize(**args)
4977
+ update!(**args)
4978
+ end
4979
+
4980
+ # Update properties of this object
4981
+ def update!(**args)
4982
+ @commit_timestamp = args[:commit_timestamp] if args.key?(:commit_timestamp)
4983
+ @move_in_events = args[:move_in_events] if args.key?(:move_in_events)
4984
+ @move_out_events = args[:move_out_events] if args.key?(:move_out_events)
4985
+ @partition_token = args[:partition_token] if args.key?(:partition_token)
4986
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
4987
+ end
4988
+ end
4989
+
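A sketch of the coordination the `PartitionEventRecord` docs describe: on move-ins the reader waits for the source partitions, and on move-outs it unblocks the destinations. `wait_until_processed` and `notify_unblocked` are placeholder hooks into whatever shared progress state a pipeline maintains, not gem methods.

```ruby
require "google/apis/spanner_v1"

# Placeholder hooks; a real pipeline would block on / publish to shared
# per-partition progress state instead of printing.
def wait_until_processed(partition_token, timestamp)
  puts "waiting for #{partition_token} to pass #{timestamp}"
end

def notify_unblocked(partition_token, timestamp)
  puts "unblocking #{partition_token} past #{timestamp}"
end

def handle_partition_event(record)
  Array(record.move_in_events).each do |e|
    # Do not advance past this record until every source partition has
    # processed all records with timestamps <= commit_timestamp.
    wait_until_processed(e.source_partition_token, record.commit_timestamp)
  end
  Array(record.move_out_events).each do |e|
    # Tell destination readers they can proceed past commit_timestamp.
    notify_unblocked(e.destination_partition_token, record.commit_timestamp)
  end
end
```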
4798
4990
  # Options for a `PartitionQueryRequest` and `PartitionReadRequest`.
4799
4991
  class PartitionOptions
4800
4992
  include Google::Apis::Core::Hashable
@@ -4971,6 +5163,47 @@ module Google
4971
5163
  end
4972
5164
  end
4973
5165
 
5166
+ # A partition start record serves as a notification that the client should
5167
+ # schedule the partitions to be queried. PartitionStartRecord returns
5168
+ # information about one or more partitions.
5169
+ class PartitionStartRecord
5170
+ include Google::Apis::Core::Hashable
5171
+
5172
+ # Unique partition identifiers to be used in queries.
5173
+ # Corresponds to the JSON property `partitionTokens`
5174
+ # @return [Array<String>]
5175
+ attr_accessor :partition_tokens
5176
+
5177
+ # Record sequence numbers are unique and monotonically increasing (but not
5178
+ # necessarily contiguous) for a specific timestamp across record types in the
5179
+ # same partition. To guarantee ordered processing, the reader should process
5180
+ # records (of potentially different types) in record_sequence order for a
5181
+ # specific timestamp in the same partition.
5182
+ # Corresponds to the JSON property `recordSequence`
5183
+ # @return [String]
5184
+ attr_accessor :record_sequence
5185
+
5186
+ # Start timestamp at which the partitions should be queried to return change
5187
+ # stream records with timestamps >= start_timestamp. DataChangeRecord.
5188
+ # commit_timestamps, PartitionStartRecord.start_timestamps, PartitionEventRecord.
5189
+ # commit_timestamps, and PartitionEndRecord.end_timestamps can have the same
5190
+ # value in the same partition.
5191
+ # Corresponds to the JSON property `startTimestamp`
5192
+ # @return [String]
5193
+ attr_accessor :start_timestamp
5194
+
5195
+ def initialize(**args)
5196
+ update!(**args)
5197
+ end
5198
+
5199
+ # Update properties of this object
5200
+ def update!(**args)
5201
+ @partition_tokens = args[:partition_tokens] if args.key?(:partition_tokens)
5202
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
5203
+ @start_timestamp = args[:start_timestamp] if args.key?(:start_timestamp)
5204
+ end
5205
+ end
5206
+
4974
5207
  # Message type to initiate a Partitioned DML transaction.
4975
5208
  class PartitionedDml
4976
5209
  include Google::Apis::Core::Hashable
@@ -5385,7 +5618,7 @@ module Google
5385
5618
 
5386
5619
  # Executes all reads at the given timestamp. Unlike other modes, reads at a
5387
5620
  # specific timestamp are repeatable; the same read at the same timestamp always
5388
- # returns the same data. If the timestamp is in the future, the read will block
5621
+ # returns the same data. If the timestamp is in the future, the read is blocked
5389
5622
  # until the specified timestamp, modulo the read's deadline. Useful for large
5390
5623
  # scale consistent reads such as mapreduces, or for coordinating many reads
5391
5624
  # against a consistent snapshot of the data. A timestamp in RFC3339 UTC \"Zulu\"
@@ -6260,7 +6493,7 @@ module Google
6260
6493
  end
6261
6494
  end
6262
6495
 
6263
- # The split points of a table/index.
6496
+ # The split points of a table or an index.
6264
6497
  class SplitPoints
6265
6498
  include Google::Apis::Core::Hashable
6266
6499
 
@@ -6277,7 +6510,7 @@ module Google
6277
6510
  # @return [String]
6278
6511
  attr_accessor :index
6279
6512
 
6280
- # Required. The list of split keys, i.e., the split boundaries.
6513
+ # Required. The list of split keys. In essence, the split boundaries.
6281
6514
  # Corresponds to the JSON property `keys`
6282
6515
  # @return [Array<Google::Apis::SpannerV1::Key>]
6283
6516
  attr_accessor :keys
@@ -6482,212 +6715,21 @@ module Google
6482
6715
  end
6483
6716
  end
6484
6717
 
6485
- # Transactions: Each session can have at most one active transaction at a time (
6486
- # note that standalone reads and queries use a transaction internally and do
6487
- # count towards the one transaction limit). After the active transaction is
6488
- # completed, the session can immediately be re-used for the next transaction. It
6489
- # is not necessary to create a new session for each transaction. Transaction
6490
- # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
6491
- # This type of transaction is the only way to write data into Cloud Spanner.
6492
- # These transactions rely on pessimistic locking and, if necessary, two-phase
6493
- # commit. Locking read-write transactions may abort, requiring the application
6494
- # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
6495
- # guaranteed consistency across several reads, but do not allow writes. Snapshot
6496
- # read-only transactions can be configured to read at timestamps in the past, or
6497
- # configured to perform a strong read (where Spanner will select a timestamp
6498
- # such that the read is guaranteed to see the effects of all transactions that
6499
- # have committed before the start of the read). Snapshot read-only transactions
6500
- # do not need to be committed. Queries on change streams must be performed with
6501
- # the snapshot read-only transaction mode, specifying a strong read. See
6502
- # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
6503
- # type of transaction is used to execute a single Partitioned DML statement.
6504
- # Partitioned DML partitions the key space and runs the DML statement over each
6505
- # partition in parallel using separate, internal transactions that commit
6506
- # independently. Partitioned DML transactions do not need to be committed. For
6507
- # transactions that only read, snapshot read-only transactions provide simpler
6508
- # semantics and are almost always faster. In particular, read-only transactions
6509
- # do not take locks, so they do not conflict with read-write transactions. As a
6510
- # consequence of not taking locks, they also do not abort, so retry loops are
6511
- # not needed. Transactions may only read-write data in a single database. They
6512
- # may, however, read-write data in different tables within that database.
6513
- # Locking read-write transactions: Locking transactions may be used to
6514
- # atomically read-modify-write data anywhere in a database. This type of
6515
- # transaction is externally consistent. Clients should attempt to minimize the
6516
- # amount of time a transaction is active. Faster transactions commit with higher
6517
- # probability and cause less contention. Cloud Spanner attempts to keep read
6518
- # locks active as long as the transaction continues to do reads, and the
6519
- # transaction has not been terminated by Commit or Rollback. Long periods of
6520
- # inactivity at the client may cause Cloud Spanner to release a transaction's
6521
- # locks and abort it. Conceptually, a read-write transaction consists of zero or
6522
- # more reads or SQL statements followed by Commit. At any time before Commit,
6523
- # the client can send a Rollback request to abort the transaction. Semantics:
6524
- # Cloud Spanner can commit the transaction if all read locks it acquired are
6525
- # still valid at commit time, and it is able to acquire write locks for all
6526
- # writes. Cloud Spanner can abort the transaction for any reason. If a commit
6527
- # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
6528
- # not modified any user data in Cloud Spanner. Unless the transaction commits,
6529
- # Cloud Spanner makes no guarantees about how long the transaction's locks were
6530
- # held for. It is an error to use Cloud Spanner locks for any sort of mutual
6531
- # exclusion other than between Cloud Spanner transactions themselves. Retrying
6532
- # aborted transactions: When a transaction aborts, the application can choose to
6533
- # retry the whole transaction again. To maximize the chances of successfully
6534
- # committing the retry, the client should execute the retry in the same session
6535
- # as the original attempt. The original session's lock priority increases with
6536
- # each consecutive abort, meaning that each attempt has a slightly better chance
6537
- # of success than the previous. Note that the lock priority is preserved per
6538
- # session (not per transaction). Lock priority is set by the first read or write
6539
- # in the first attempt of a read-write transaction. If the application starts a
6540
- # new session to retry the whole transaction, the transaction loses its original
6541
- # lock priority. Moreover, the lock priority is only preserved if the
6542
- # transaction fails with an `ABORTED` error. Under some circumstances (for
6543
- # example, many transactions attempting to modify the same row(s)), a
6544
- # transaction can abort many times in a short period before successfully
6545
- # committing. Thus, it is not a good idea to cap the number of retries a
6546
- # transaction can attempt; instead, it is better to limit the total amount of
6547
- # time spent retrying. Idle transactions: A transaction is considered idle if it
6548
- # has no outstanding reads or SQL queries and has not started a read or SQL
6549
- # query within the last 10 seconds. Idle transactions can be aborted by Cloud
6550
- # Spanner so that they don't hold on to locks indefinitely. If an idle
6551
- # transaction is aborted, the commit will fail with error `ABORTED`. If this
6552
- # behavior is undesirable, periodically executing a simple SQL query in the
6553
- # transaction (for example, `SELECT 1`) prevents the transaction from becoming
6554
- # idle. Snapshot read-only transactions: Snapshot read-only transactions
6555
- # provides a simpler method than locking read-write transactions for doing
6556
- # several consistent reads. However, this type of transaction does not support
6557
- # writes. Snapshot transactions do not take locks. Instead, they work by
6558
- # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
6559
- # Since they do not acquire locks, they do not block concurrent read-write
6560
- # transactions. Unlike locking read-write transactions, snapshot read-only
6561
- # transactions never abort. They can fail if the chosen read timestamp is
6562
- # garbage collected; however, the default garbage collection policy is generous
6563
- # enough that most applications do not need to worry about this in practice.
6564
- # Snapshot read-only transactions do not need to call Commit or Rollback (and in
6565
- # fact are not permitted to do so). To execute a snapshot transaction, the
6566
- # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
6567
- # read timestamp. The types of timestamp bound are: - Strong (the default). -
6568
- # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
6569
- # is geographically distributed, stale read-only transactions can execute more
6570
- # quickly than strong or read-write transactions, because they are able to
6571
- # execute far from the leader replica. Each type of timestamp bound is discussed
6572
- # in detail below. Strong: Strong reads are guaranteed to see the effects of all
6573
- # transactions that have committed before the start of the read. Furthermore,
6574
- # all rows yielded by a single read are consistent with each other -- if any
6575
- # part of the read observes a transaction, all parts of the read see the
6576
- # transaction. Strong reads are not repeatable: two consecutive strong read-only
6577
- # transactions might return inconsistent results if there are concurrent writes.
6578
- # If consistency across reads is required, the reads should be executed within a
6579
- # transaction or at an exact read timestamp. Queries on change streams (see
6580
- # below for more details) must also specify the strong read timestamp bound. See
6581
- # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds
6582
- # execute reads at a user-specified timestamp. Reads at a timestamp are
6583
- # guaranteed to see a consistent prefix of the global transaction history: they
6584
- # observe modifications done by all transactions with a commit timestamp less
6585
- # than or equal to the read timestamp, and observe none of the modifications
6586
- # done by transactions with a larger commit timestamp. They will block until all
6587
- # conflicting transactions that may be assigned commit timestamps <= the read
6588
- # timestamp have finished. The timestamp can either be expressed as an absolute
6589
- # Cloud Spanner commit timestamp or a staleness relative to the current time.
6590
- # These modes do not require a "negotiation phase" to pick a timestamp. As a
6591
- # result, they execute slightly faster than the equivalent boundedly stale
6592
- # concurrency modes. On the other hand, boundedly stale reads usually return
6593
- # fresher results. See TransactionOptions.ReadOnly.read_timestamp and
6594
- # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
6595
- # staleness modes allow Cloud Spanner to pick the read timestamp, subject to a
6596
- # user-provided staleness bound. Cloud Spanner chooses the newest timestamp
6597
- # within the staleness bound that allows execution of the reads at the closest
6598
- # available replica without blocking. All rows yielded are consistent with each
6599
- # other -- if any part of the read observes a transaction, all parts of the read
6600
- # see the transaction. Boundedly stale reads are not repeatable: two stale reads,
6601
- # even if they use the same staleness bound, can execute at different
6602
- # timestamps and thus return inconsistent results. Boundedly stale reads execute
6603
- # in two phases: the first phase negotiates a timestamp among all replicas
6604
- # needed to serve the read. In the second phase, reads are executed at the
6605
- # negotiated timestamp. As a result of the two phase execution, bounded
6606
- # staleness reads are usually a little slower than comparable exact staleness
6607
- # reads. However, they are typically able to return fresher results, and are
6608
- # more likely to execute at the closest replica. Because the timestamp
6609
- # negotiation requires up-front knowledge of which rows will be read, it can
6610
- # only be used with single-use read-only transactions. See TransactionOptions.
6611
- # ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old
6612
- # read timestamps and garbage collection: Cloud Spanner continuously garbage
6613
- # collects deleted and overwritten data in the background to reclaim storage
6614
- # space. This process is known as "version GC". By default, version GC reclaims
6615
- # versions after they are one hour old. Because of this, Cloud Spanner cannot
6616
- # perform reads at read timestamps more than one hour in the past. This
6617
- # restriction also applies to in-progress reads and/or SQL queries whose
6618
- # timestamp become too old while executing. Reads and SQL queries with too-old
6619
- # read timestamps fail with the error `FAILED_PRECONDITION`. You can configure
6620
- # and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long
6621
- # as one week, which allows Cloud Spanner to perform reads up to one week in the
6622
- # past. Querying change Streams: A Change Stream is a schema object that can be
6623
- # configured to watch data changes on the entire database, a set of tables, or a
6624
- # set of columns in a database. When a change stream is created, Spanner
6625
- # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
6626
- # be used to query the change records in the associated change stream using the
6627
- # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
6628
- # from the name of the change stream: READ_. All queries on change stream TVFs
6629
- # must be executed using the ExecuteStreamingSql API with a single-use read-only
6630
- # transaction with a strong read-only timestamp_bound. The change stream TVF
6631
- # allows users to specify the start_timestamp and end_timestamp for the time
6632
- # range of interest. All change records within the retention period is
6633
- # accessible using the strong read-only timestamp_bound. All other
6634
- # TransactionOptions are invalid for change stream queries. In addition, if
6635
- # TransactionOptions.read_only.return_read_timestamp is set to true, a special
6636
- # value of 2^63 - 2 will be returned in the Transaction message that describes
6637
- # the transaction, instead of a valid read timestamp. This special value should
6638
- # be discarded and not used for any subsequent queries. Please see https://cloud.
6639
- # google.com/spanner/docs/change-streams for more details on how to query the
6640
- # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
6641
- # are used to execute DML statements with a different execution strategy that
6642
- # provides different, and often better, scalability properties for large, table-
6643
- # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
6644
- # such as an OLTP workload, should prefer using ReadWrite transactions.
6645
- # Partitioned DML partitions the keyspace and runs the DML statement on each
6646
- # partition in separate, internal transactions. These transactions commit
6647
- # automatically when complete, and run independently from one another. To reduce
6648
- # lock contention, this execution strategy only acquires read locks on rows that
6649
- # match the WHERE clause of the statement. Additionally, the smaller per-
6650
- # partition transactions hold locks for less time. That said, Partitioned DML is
6651
- # not a drop-in replacement for standard DML used in ReadWrite transactions. -
6652
- # The DML statement must be fully-partitionable. Specifically, the statement
6653
- # must be expressible as the union of many statements which each access only a
6654
- # single row of the table. - The statement is not applied atomically to all rows
6655
- # of the table. Rather, the statement is applied atomically to partitions of the
6656
- # table, in independent transactions. Secondary index rows are updated
6657
- # atomically with the base table rows. - Partitioned DML does not guarantee
6658
- # exactly-once execution semantics against a partition. The statement is applied
6659
- # at least once to each partition. It is strongly recommended that the DML
6660
- # statement should be idempotent to avoid unexpected results. For instance, it
6661
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
6662
- # column + 1` as it could be run multiple times against some rows. - The
6663
- # partitions are committed automatically - there is no support for Commit or
6664
- # Rollback. If the call returns an error, or if the client issuing the
6665
- # ExecuteSql call dies, it is possible that some rows had the statement executed
6666
- # on them successfully. It is also possible that statement was never executed
6667
- # against other rows. - Partitioned DML transactions may only contain the
6668
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
6669
- # If any error is encountered during the execution of the partitioned DML
6670
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6671
- # value that cannot be stored due to schema constraints), then the operation is
6672
- # stopped at that point and an error is returned. It is possible that at this
6673
- # point, some partitions have been committed (or even committed multiple times),
6674
- # and other partitions have not been run at all. Given the above, Partitioned
6675
- # DML is good fit for large, database-wide, operations that are idempotent, such
6676
- # as deleting old rows from a very large table.
6718
+ # Options to use for transactions.
6677
6719
  class TransactionOptions
6678
6720
  include Google::Apis::Core::Hashable
6679
6721
 
6680
- # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from
6681
- # this transaction will not be recorded in change streams with DDL option `
6682
- # allow_txn_exclusion=true` that are tracking columns modified by these
6683
- # transactions. * Modifications from this transaction will be recorded in change
6684
- # streams with DDL option `allow_txn_exclusion=false or not set` that are
6685
- # tracking columns modified by these transactions. When `
6686
- # exclude_txn_from_change_streams` is set to `false` or not set, Modifications
6687
- # from this transaction will be recorded in all change streams that are tracking
6688
- # columns modified by these transactions. `exclude_txn_from_change_streams` may
6689
- # only be specified for read-write or partitioned-dml transactions, otherwise
6690
- # the API will return an `INVALID_ARGUMENT` error.
6722
+ # When `exclude_txn_from_change_streams` is set to `true`, it prevents read or
6723
+ # write transactions from being tracked in change streams. * If the DDL option `
6724
+ # allow_txn_exclusion` is set to `true`, then the updates made within this
6725
+ # transaction aren't recorded in the change stream. * If you don't set the DDL
6726
+ # option `allow_txn_exclusion` or if it's set to `false`, then the updates made
6727
+ # within this transaction are recorded in the change stream. When `
6728
+ # exclude_txn_from_change_streams` is set to `false` or not set, modifications
6729
+ # from this transaction are recorded in all change streams that are tracking
6730
+ # columns modified by these transactions. The `exclude_txn_from_change_streams`
6731
+ # option can only be specified for read-write or partitioned DML transactions,
6732
+ # otherwise the API returns an `INVALID_ARGUMENT` error.
6691
6733
  # Corresponds to the JSON property `excludeTxnFromChangeStreams`
6692
6734
  # @return [Boolean]
6693
6735
  attr_accessor :exclude_txn_from_change_streams
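A sketch of the rewritten semantics above, assuming the gem's usual `Hashable` keyword constructor: a read-write transaction opting out of change streams that were created with `allow_txn_exclusion=true`. Per the docs, the flag is only valid on read-write or partitioned DML transactions.

```ruby
require "google/apis/spanner_v1"

# Modifications from this transaction are not recorded in change streams
# created with the DDL option allow_txn_exclusion=true.
options = Google::Apis::SpannerV1::TransactionOptions.new(
  exclude_txn_from_change_streams: true,
  read_write: Google::Apis::SpannerV1::ReadWrite.new
)
```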
@@ -6733,198 +6775,7 @@ module Google
6733
6775
  class TransactionSelector
6734
6776
  include Google::Apis::Core::Hashable
6735
6777
 
6736
- # Transactions: Each session can have at most one active transaction at a time (
6737
- # note that standalone reads and queries use a transaction internally and do
6738
- # count towards the one transaction limit). After the active transaction is
6739
- # completed, the session can immediately be re-used for the next transaction. It
6740
- # is not necessary to create a new session for each transaction. Transaction
6741
- # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
6742
- # This type of transaction is the only way to write data into Cloud Spanner.
6743
- # These transactions rely on pessimistic locking and, if necessary, two-phase
6744
- # commit. Locking read-write transactions may abort, requiring the application
6745
- # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
6746
- # guaranteed consistency across several reads, but do not allow writes. Snapshot
6747
- # read-only transactions can be configured to read at timestamps in the past, or
6748
- # configured to perform a strong read (where Spanner will select a timestamp
6749
- # such that the read is guaranteed to see the effects of all transactions that
6750
- # have committed before the start of the read). Snapshot read-only transactions
6751
- # do not need to be committed. Queries on change streams must be performed with
6752
- # the snapshot read-only transaction mode, specifying a strong read. See
6753
- # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
6754
- # type of transaction is used to execute a single Partitioned DML statement.
6755
- # Partitioned DML partitions the key space and runs the DML statement over each
6756
- # partition in parallel using separate, internal transactions that commit
6757
- # independently. Partitioned DML transactions do not need to be committed. For
6758
- # transactions that only read, snapshot read-only transactions provide simpler
6759
- # semantics and are almost always faster. In particular, read-only transactions
6760
- # do not take locks, so they do not conflict with read-write transactions. As a
6761
- # consequence of not taking locks, they also do not abort, so retry loops are
6762
- # not needed. Transactions may only read-write data in a single database. They
6763
- # may, however, read-write data in different tables within that database.
6764
- # Locking read-write transactions: Locking transactions may be used to
6765
- # atomically read-modify-write data anywhere in a database. This type of
6766
- # transaction is externally consistent. Clients should attempt to minimize the
6767
- # amount of time a transaction is active. Faster transactions commit with higher
6768
- # probability and cause less contention. Cloud Spanner attempts to keep read
6769
- # locks active as long as the transaction continues to do reads, and the
6770
- # transaction has not been terminated by Commit or Rollback. Long periods of
6771
- # inactivity at the client may cause Cloud Spanner to release a transaction's
6772
- # locks and abort it. Conceptually, a read-write transaction consists of zero or
6773
- # more reads or SQL statements followed by Commit. At any time before Commit,
6774
- # the client can send a Rollback request to abort the transaction. Semantics:
6775
- # Cloud Spanner can commit the transaction if all read locks it acquired are
6776
- # still valid at commit time, and it is able to acquire write locks for all
6777
- # writes. Cloud Spanner can abort the transaction for any reason. If a commit
6778
- # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
6779
- # not modified any user data in Cloud Spanner. Unless the transaction commits,
6780
- # Cloud Spanner makes no guarantees about how long the transaction's locks were
6781
- # held for. It is an error to use Cloud Spanner locks for any sort of mutual
6782
- # exclusion other than between Cloud Spanner transactions themselves. Retrying
6783
- # aborted transactions: When a transaction aborts, the application can choose to
6784
- # retry the whole transaction again. To maximize the chances of successfully
6785
- # committing the retry, the client should execute the retry in the same session
6786
- # as the original attempt. The original session's lock priority increases with
6787
- # each consecutive abort, meaning that each attempt has a slightly better chance
6788
- # of success than the previous. Note that the lock priority is preserved per
6789
- # session (not per transaction). Lock priority is set by the first read or write
6790
- # in the first attempt of a read-write transaction. If the application starts a
6791
- # new session to retry the whole transaction, the transaction loses its original
6792
- # lock priority. Moreover, the lock priority is only preserved if the
6793
- # transaction fails with an `ABORTED` error. Under some circumstances (for
6794
- # example, many transactions attempting to modify the same row(s)), a
6795
- # transaction can abort many times in a short period before successfully
6796
- # committing. Thus, it is not a good idea to cap the number of retries a
6797
- # transaction can attempt; instead, it is better to limit the total amount of
6798
- # time spent retrying. Idle transactions: A transaction is considered idle if it
6799
- # has no outstanding reads or SQL queries and has not started a read or SQL
6800
- # query within the last 10 seconds. Idle transactions can be aborted by Cloud
6801
- # Spanner so that they don't hold on to locks indefinitely. If an idle
6802
- # transaction is aborted, the commit will fail with error `ABORTED`. If this
6803
- # behavior is undesirable, periodically executing a simple SQL query in the
6804
- # transaction (for example, `SELECT 1`) prevents the transaction from becoming
6805
- # idle. Snapshot read-only transactions: Snapshot read-only transactions
6806
- # provides a simpler method than locking read-write transactions for doing
6807
- # several consistent reads. However, this type of transaction does not support
6808
- # writes. Snapshot transactions do not take locks. Instead, they work by
6809
- # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
6810
- # Since they do not acquire locks, they do not block concurrent read-write
6811
- # transactions. Unlike locking read-write transactions, snapshot read-only
6812
- # transactions never abort. They can fail if the chosen read timestamp is
6813
- # garbage collected; however, the default garbage collection policy is generous
6814
- # enough that most applications do not need to worry about this in practice.
6815
- # Snapshot read-only transactions do not need to call Commit or Rollback (and in
6816
- # fact are not permitted to do so). To execute a snapshot transaction, the
6817
- # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
6818
- # read timestamp. The types of timestamp bound are: - Strong (the default). -
6819
- # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
6820
- # is geographically distributed, stale read-only transactions can execute more
6821
- # quickly than strong or read-write transactions, because they are able to
6822
- # execute far from the leader replica. Each type of timestamp bound is discussed
6823
- # in detail below. Strong: Strong reads are guaranteed to see the effects of all
6824
- # transactions that have committed before the start of the read. Furthermore,
6825
- # all rows yielded by a single read are consistent with each other -- if any
6826
- # part of the read observes a transaction, all parts of the read see the
6827
- # transaction. Strong reads are not repeatable: two consecutive strong read-only
6828
- # transactions might return inconsistent results if there are concurrent writes.
6829
- # If consistency across reads is required, the reads should be executed within a
6830
- # transaction or at an exact read timestamp. Queries on change streams (see
6831
- # below for more details) must also specify the strong read timestamp bound. See
6832
- # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds
6833
- # execute reads at a user-specified timestamp. Reads at a timestamp are
6834
- # guaranteed to see a consistent prefix of the global transaction history: they
6835
- # observe modifications done by all transactions with a commit timestamp less
6836
- # than or equal to the read timestamp, and observe none of the modifications
6837
- # done by transactions with a larger commit timestamp. They will block until all
6838
- # conflicting transactions that may be assigned commit timestamps <= the read
6839
- # timestamp have finished. The timestamp can either be expressed as an absolute
6840
- # Cloud Spanner commit timestamp or a staleness relative to the current time.
6841
- # These modes do not require a "negotiation phase" to pick a timestamp. As a
6842
- # result, they execute slightly faster than the equivalent boundedly stale
6843
- # concurrency modes. On the other hand, boundedly stale reads usually return
6844
- # fresher results. See TransactionOptions.ReadOnly.read_timestamp and
6845
- # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
6846
- # staleness modes allow Cloud Spanner to pick the read timestamp, subject to a
6847
- # user-provided staleness bound. Cloud Spanner chooses the newest timestamp
6848
- # within the staleness bound that allows execution of the reads at the closest
6849
- # available replica without blocking. All rows yielded are consistent with each
6850
- # other -- if any part of the read observes a transaction, all parts of the read
6851
- # see the transaction. Boundedly stale reads are not repeatable: two stale reads,
6852
- # even if they use the same staleness bound, can execute at different
6853
- # timestamps and thus return inconsistent results. Boundedly stale reads execute
6854
- # in two phases: the first phase negotiates a timestamp among all replicas
6855
- # needed to serve the read. In the second phase, reads are executed at the
6856
- # negotiated timestamp. As a result of the two phase execution, bounded
6857
- # staleness reads are usually a little slower than comparable exact staleness
6858
- # reads. However, they are typically able to return fresher results, and are
6859
- # more likely to execute at the closest replica. Because the timestamp
6860
- # negotiation requires up-front knowledge of which rows will be read, it can
6861
- # only be used with single-use read-only transactions. See TransactionOptions.
6862
- # ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old
6863
- # read timestamps and garbage collection: Cloud Spanner continuously garbage
6864
- # collects deleted and overwritten data in the background to reclaim storage
6865
- # space. This process is known as "version GC". By default, version GC reclaims
6866
- # versions after they are one hour old. Because of this, Cloud Spanner cannot
6867
- # perform reads at read timestamps more than one hour in the past. This
6868
- # restriction also applies to in-progress reads and/or SQL queries whose
6869
- # timestamp become too old while executing. Reads and SQL queries with too-old
6870
- # read timestamps fail with the error `FAILED_PRECONDITION`. You can configure
6871
- # and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long
6872
- # as one week, which allows Cloud Spanner to perform reads up to one week in the
6873
- # past. Querying change Streams: A Change Stream is a schema object that can be
6874
- # configured to watch data changes on the entire database, a set of tables, or a
6875
- # set of columns in a database. When a change stream is created, Spanner
6876
- # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
6877
- # be used to query the change records in the associated change stream using the
6878
- # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
6879
- # from the name of the change stream: READ_. All queries on change stream TVFs
6880
- # must be executed using the ExecuteStreamingSql API with a single-use read-only
6881
- # transaction with a strong read-only timestamp_bound. The change stream TVF
6882
- # allows users to specify the start_timestamp and end_timestamp for the time
6883
- # range of interest. All change records within the retention period is
6884
- # accessible using the strong read-only timestamp_bound. All other
6885
- # TransactionOptions are invalid for change stream queries. In addition, if
6886
- # TransactionOptions.read_only.return_read_timestamp is set to true, a special
6887
- # value of 2^63 - 2 will be returned in the Transaction message that describes
6888
- # the transaction, instead of a valid read timestamp. This special value should
6889
- # be discarded and not used for any subsequent queries. Please see https://cloud.
6890
- # google.com/spanner/docs/change-streams for more details on how to query the
6891
- # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
6892
- # are used to execute DML statements with a different execution strategy that
6893
- # provides different, and often better, scalability properties for large, table-
6894
- # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
6895
- # such as an OLTP workload, should prefer using ReadWrite transactions.
6896
- # Partitioned DML partitions the keyspace and runs the DML statement on each
6897
- # partition in separate, internal transactions. These transactions commit
6898
- # automatically when complete, and run independently from one another. To reduce
6899
- # lock contention, this execution strategy only acquires read locks on rows that
6900
- # match the WHERE clause of the statement. Additionally, the smaller per-
6901
- # partition transactions hold locks for less time. That said, Partitioned DML is
6902
- # not a drop-in replacement for standard DML used in ReadWrite transactions. -
6903
- # The DML statement must be fully-partitionable. Specifically, the statement
6904
- # must be expressible as the union of many statements which each access only a
6905
- # single row of the table. - The statement is not applied atomically to all rows
6906
- # of the table. Rather, the statement is applied atomically to partitions of the
6907
- # table, in independent transactions. Secondary index rows are updated
6908
- # atomically with the base table rows. - Partitioned DML does not guarantee
6909
- # exactly-once execution semantics against a partition. The statement is applied
6910
- # at least once to each partition. It is strongly recommended that the DML
6911
- # statement should be idempotent to avoid unexpected results. For instance, it
6912
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
6913
- # column + 1` as it could be run multiple times against some rows. - The
6914
- # partitions are committed automatically - there is no support for Commit or
6915
- # Rollback. If the call returns an error, or if the client issuing the
6916
- # ExecuteSql call dies, it is possible that some rows had the statement executed
6917
- # on them successfully. It is also possible that statement was never executed
6918
- # against other rows. - Partitioned DML transactions may only contain the
6919
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
6920
- # If any error is encountered during the execution of the partitioned DML
6921
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6922
- # value that cannot be stored due to schema constraints), then the operation is
6923
- # stopped at that point and an error is returned. It is possible that at this
6924
- # point, some partitions have been committed (or even committed multiple times),
6925
- # and other partitions have not been run at all. Given the above, Partitioned
6926
- # DML is good fit for large, database-wide, operations that are idempotent, such
6927
- # as deleting old rows from a very large table.
6778
+ # Options to use for transactions.
6928
6779
  # Corresponds to the JSON property `begin`
6929
6780
  # @return [Google::Apis::SpannerV1::TransactionOptions]
6930
6781
  attr_accessor :begin
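Since the long transaction comment now collapses to "Options to use for transactions.", a short sketch of the `begin` selector may help: it starts a new transaction with the given options as part of the first read or query. Assumes the gem's `ReadOnly` options class with its `strong` flag.

```ruby
require "google/apis/spanner_v1"

# Begin a strong read-only transaction inline with the first read.
selector = Google::Apis::SpannerV1::TransactionSelector.new(
  begin: Google::Apis::SpannerV1::TransactionOptions.new(
    read_only: Google::Apis::SpannerV1::ReadOnly.new(strong: true)
  )
)
```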
@@ -6935,198 +6786,7 @@ module Google
6935
6786
  # @return [String]
6936
6787
  attr_accessor :id
6937
6788
 
6938
- # Transactions: Each session can have at most one active transaction at a time (
6939
- # note that standalone reads and queries use a transaction internally and do
6940
- # count towards the one transaction limit). After the active transaction is
6941
- # completed, the session can immediately be re-used for the next transaction. It
6942
- # is not necessary to create a new session for each transaction. Transaction
6943
- # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
6944
- # This type of transaction is the only way to write data into Cloud Spanner.
6945
- # These transactions rely on pessimistic locking and, if necessary, two-phase
6946
- # commit. Locking read-write transactions may abort, requiring the application
6947
- # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
6948
- # guaranteed consistency across several reads, but do not allow writes. Snapshot
6949
- # read-only transactions can be configured to read at timestamps in the past, or
6950
- # configured to perform a strong read (where Spanner will select a timestamp
6951
- # such that the read is guaranteed to see the effects of all transactions that
6952
- # have committed before the start of the read). Snapshot read-only transactions
6953
- # do not need to be committed. Queries on change streams must be performed with
6954
- # the snapshot read-only transaction mode, specifying a strong read. See
6955
- # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
6956
- # type of transaction is used to execute a single Partitioned DML statement.
6957
- # Partitioned DML partitions the key space and runs the DML statement over each
6958
- # partition in parallel using separate, internal transactions that commit
6959
- # independently. Partitioned DML transactions do not need to be committed. For
6960
- # transactions that only read, snapshot read-only transactions provide simpler
6961
- # semantics and are almost always faster. In particular, read-only transactions
6962
- # do not take locks, so they do not conflict with read-write transactions. As a
6963
- # consequence of not taking locks, they also do not abort, so retry loops are
6964
- # not needed. Transactions may only read-write data in a single database. They
6965
- # may, however, read-write data in different tables within that database.
6966
- # Locking read-write transactions: Locking transactions may be used to
6967
- # atomically read-modify-write data anywhere in a database. This type of
6968
- # transaction is externally consistent. Clients should attempt to minimize the
6969
- # amount of time a transaction is active. Faster transactions commit with higher
6970
- # probability and cause less contention. Cloud Spanner attempts to keep read
6971
- # locks active as long as the transaction continues to do reads, and the
6972
- # transaction has not been terminated by Commit or Rollback. Long periods of
6973
- # inactivity at the client may cause Cloud Spanner to release a transaction's
6974
- # locks and abort it. Conceptually, a read-write transaction consists of zero or
6975
- # more reads or SQL statements followed by Commit. At any time before Commit,
6976
- # the client can send a Rollback request to abort the transaction. Semantics:
6977
- # Cloud Spanner can commit the transaction if all read locks it acquired are
6978
- # still valid at commit time, and it is able to acquire write locks for all
6979
- # writes. Cloud Spanner can abort the transaction for any reason. If a commit
6980
- # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
6981
- # not modified any user data in Cloud Spanner. Unless the transaction commits,
6982
- # Cloud Spanner makes no guarantees about how long the transaction's locks were
6983
- # held for. It is an error to use Cloud Spanner locks for any sort of mutual
6984
- # exclusion other than between Cloud Spanner transactions themselves. Retrying
6985
- # aborted transactions: When a transaction aborts, the application can choose to
6986
- # retry the whole transaction again. To maximize the chances of successfully
6987
- # committing the retry, the client should execute the retry in the same session
6988
- # as the original attempt. The original session's lock priority increases with
6989
- # each consecutive abort, meaning that each attempt has a slightly better chance
6990
- # of success than the previous. Note that the lock priority is preserved per
6991
- # session (not per transaction). Lock priority is set by the first read or write
6992
- # in the first attempt of a read-write transaction. If the application starts a
6993
- # new session to retry the whole transaction, the transaction loses its original
6994
- # lock priority. Moreover, the lock priority is only preserved if the
6995
- # transaction fails with an `ABORTED` error. Under some circumstances (for
6996
- # example, many transactions attempting to modify the same row(s)), a
6997
- # transaction can abort many times in a short period before successfully
6998
- # committing. Thus, it is not a good idea to cap the number of retries a
6999
- # transaction can attempt; instead, it is better to limit the total amount of
7000
- # time spent retrying. Idle transactions: A transaction is considered idle if it
7001
- # has no outstanding reads or SQL queries and has not started a read or SQL
7002
- # query within the last 10 seconds. Idle transactions can be aborted by Cloud
7003
- # Spanner so that they don't hold on to locks indefinitely. If an idle
7004
- # transaction is aborted, the commit will fail with error `ABORTED`. If this
7005
- # behavior is undesirable, periodically executing a simple SQL query in the
7006
- # transaction (for example, `SELECT 1`) prevents the transaction from becoming
7007
- # idle. Snapshot read-only transactions: Snapshot read-only transactions
7008
- # provides a simpler method than locking read-write transactions for doing
7009
- # several consistent reads. However, this type of transaction does not support
7010
- # writes. Snapshot transactions do not take locks. Instead, they work by
7011
- # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
7012
- # Since they do not acquire locks, they do not block concurrent read-write
7013
- # transactions. Unlike locking read-write transactions, snapshot read-only
7014
- # transactions never abort. They can fail if the chosen read timestamp is
7015
- # garbage collected; however, the default garbage collection policy is generous
7016
- # enough that most applications do not need to worry about this in practice.
7017
- # Snapshot read-only transactions do not need to call Commit or Rollback (and in
7018
- # fact are not permitted to do so). To execute a snapshot transaction, the
7019
- # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
7020
- # read timestamp. The types of timestamp bound are: - Strong (the default). -
7021
- # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
7022
- # is geographically distributed, stale read-only transactions can execute more
7023
- # quickly than strong or read-write transactions, because they are able to
7024
- # execute far from the leader replica. Each type of timestamp bound is discussed
7025
- # in detail below. Strong: Strong reads are guaranteed to see the effects of all
7026
- # transactions that have committed before the start of the read. Furthermore,
7027
- # all rows yielded by a single read are consistent with each other -- if any
7028
- # part of the read observes a transaction, all parts of the read see the
7029
- # transaction. Strong reads are not repeatable: two consecutive strong read-only
7030
- # transactions might return inconsistent results if there are concurrent writes.
7031
- # If consistency across reads is required, the reads should be executed within a
7032
- # transaction or at an exact read timestamp. Queries on change streams (see
7033
- # below for more details) must also specify the strong read timestamp bound. See
7034
- # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds
7035
- # execute reads at a user-specified timestamp. Reads at a timestamp are
7036
- # guaranteed to see a consistent prefix of the global transaction history: they
7037
- # observe modifications done by all transactions with a commit timestamp less
7038
- # than or equal to the read timestamp, and observe none of the modifications
7039
- # done by transactions with a larger commit timestamp. They will block until all
7040
- # conflicting transactions that may be assigned commit timestamps <= the read
7041
- # timestamp have finished. The timestamp can either be expressed as an absolute
7042
- # Cloud Spanner commit timestamp or a staleness relative to the current time.
7043
- # These modes do not require a "negotiation phase" to pick a timestamp. As a
7044
- # result, they execute slightly faster than the equivalent boundedly stale
7045
- # concurrency modes. On the other hand, boundedly stale reads usually return
7046
- # fresher results. See TransactionOptions.ReadOnly.read_timestamp and
7047
- # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
7048
- # staleness modes allow Cloud Spanner to pick the read timestamp, subject to a
7049
- # user-provided staleness bound. Cloud Spanner chooses the newest timestamp
7050
- # within the staleness bound that allows execution of the reads at the closest
7051
- # available replica without blocking. All rows yielded are consistent with each
7052
- # other -- if any part of the read observes a transaction, all parts of the read
7053
- # see the transaction. Boundedly stale reads are not repeatable: two stale reads,
7054
- # even if they use the same staleness bound, can execute at different
7055
- # timestamps and thus return inconsistent results. Boundedly stale reads execute
7056
- # in two phases: the first phase negotiates a timestamp among all replicas
7057
- # needed to serve the read. In the second phase, reads are executed at the
7058
- # negotiated timestamp. As a result of the two-phase execution, bounded
7059
- # staleness reads are usually a little slower than comparable exact staleness
7060
- # reads. However, they are typically able to return fresher results, and are
7061
- # more likely to execute at the closest replica. Because the timestamp
7062
- # negotiation requires up-front knowledge of which rows will be read, it can
7063
- # only be used with single-use read-only transactions. See TransactionOptions.
7064
- # ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old
7065
- # read timestamps and garbage collection: Cloud Spanner continuously garbage
7066
- # collects deleted and overwritten data in the background to reclaim storage
7067
- # space. This process is known as "version GC". By default, version GC reclaims
7068
- # versions after they are one hour old. Because of this, Cloud Spanner cannot
7069
- # perform reads at read timestamps more than one hour in the past. This
7070
- # restriction also applies to in-progress reads and/or SQL queries whose
7071
- # timestamp becomes too old while executing. Reads and SQL queries with too-old
7072
- # read timestamps fail with the error `FAILED_PRECONDITION`. You can configure
7073
- # and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long
7074
- # as one week, which allows Cloud Spanner to perform reads up to one week in the
7075
- # past. Querying Change Streams: A Change Stream is a schema object that can be
7076
- # configured to watch data changes on the entire database, a set of tables, or a
7077
- # set of columns in a database. When a change stream is created, Spanner
7078
- # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
7079
- # be used to query the change records in the associated change stream using the
7080
- # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
7081
- # from the name of the change stream: READ_. All queries on change stream TVFs
7082
- # must be executed using the ExecuteStreamingSql API with a single-use read-only
7083
- # transaction with a strong read-only timestamp_bound. The change stream TVF
7084
- # allows users to specify the start_timestamp and end_timestamp for the time
7085
- # range of interest. All change records within the retention period are
7086
- # accessible using the strong read-only timestamp_bound. All other
7087
- # TransactionOptions are invalid for change stream queries. In addition, if
7088
- # TransactionOptions.read_only.return_read_timestamp is set to true, a special
7089
- # value of 2^63 - 2 will be returned in the Transaction message that describes
7090
- # the transaction, instead of a valid read timestamp. This special value should
7091
- # be discarded and not used for any subsequent queries. Please see https://cloud.
7092
- # google.com/spanner/docs/change-streams for more details on how to query the
7093
- # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
7094
- # are used to execute DML statements with a different execution strategy that
7095
- # provides different, and often better, scalability properties for large, table-
7096
- # wide operations than DML in a ReadWrite transaction. Smaller-scoped statements,
7097
- # such as an OLTP workload, should prefer using ReadWrite transactions.
7098
- # Partitioned DML partitions the keyspace and runs the DML statement on each
7099
- # partition in separate, internal transactions. These transactions commit
7100
- # automatically when complete, and run independently from one another. To reduce
7101
- # lock contention, this execution strategy only acquires read locks on rows that
7102
- # match the WHERE clause of the statement. Additionally, the smaller per-
7103
- # partition transactions hold locks for less time. That said, Partitioned DML is
7104
- # not a drop-in replacement for standard DML used in ReadWrite transactions. -
7105
- # The DML statement must be fully-partitionable. Specifically, the statement
7106
- # must be expressible as the union of many statements which each access only a
7107
- # single row of the table. - The statement is not applied atomically to all rows
7108
- # of the table. Rather, the statement is applied atomically to partitions of the
7109
- # table, in independent transactions. Secondary index rows are updated
7110
- # atomically with the base table rows. - Partitioned DML does not guarantee
7111
- # exactly-once execution semantics against a partition. The statement is applied
7112
- # at least once to each partition. It is strongly recommended that the DML
7113
- # statement be idempotent to avoid unexpected results. For instance, it
7114
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
7115
- # column + 1` as it could be run multiple times against some rows. - The
7116
- # partitions are committed automatically - there is no support for Commit or
7117
- # Rollback. If the call returns an error, or if the client issuing the
7118
- # ExecuteSql call dies, it is possible that some rows had the statement executed
7119
- # on them successfully. It is also possible that the statement was never executed
7120
- # against other rows. - Partitioned DML transactions may only contain the
7121
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
7122
- # If any error is encountered during the execution of the partitioned DML
7123
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
7124
- # value that cannot be stored due to schema constraints), then the operation is
7125
- # stopped at that point and an error is returned. It is possible that at this
7126
- # point, some partitions have been committed (or even committed multiple times),
7127
- # and other partitions have not been run at all. Given the above, Partitioned
7128
- # DML is a good fit for large, database-wide operations that are idempotent, such
7129
- # as deleting old rows from a very large table.
6789
+ # Options to use for transactions.
7130
6790
  # Corresponds to the JSON property `singleUse`
7131
6791
  # @return [Google::Apis::SpannerV1::TransactionOptions]
7132
6792
  attr_accessor :single_use
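
For reference, the transaction semantics documented in the removed comment block above are exercised through the `single_use` selector declared here. Below is a minimal sketch of a single-use snapshot read-only transaction with an exact-staleness bound, assuming application default credentials; the session name and the `Singers` table are placeholders for illustration, not values from this package.

require "google/apis/spanner_v1"
require "googleauth"

spanner = Google::Apis::SpannerV1::SpannerService.new
spanner.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/spanner.data"]
)

# Placeholder: a session previously created with
# create_project_instance_database_session.
session_name = "projects/p/instances/i/databases/d/sessions/s"

# Single-use snapshot read-only transaction reading at (now - 15 seconds).
# It takes no locks, never aborts, and is not committed or rolled back.
selector = Google::Apis::SpannerV1::TransactionSelector.new(
  single_use: Google::Apis::SpannerV1::TransactionOptions.new(
    read_only: Google::Apis::SpannerV1::ReadOnly.new(
      exact_staleness: "15s",       # Duration, JSON-encoded as a string
      return_read_timestamp: true
    )
  )
)

request = Google::Apis::SpannerV1::ExecuteSqlRequest.new(
  sql: "SELECT SingerId, FirstName FROM Singers",  # hypothetical table
  transaction: selector
)

result = spanner.execute_session_sql(session_name, request)
puts result.metadata.transaction.read_timestamp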
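
The change stream pattern described in the removed text (a strong, single-use read-only transaction querying the generated READ_ TVF through the streaming API) might look roughly like the sketch below, reusing `spanner` and `session_name` from the previous sketch. The change stream name `SingerStream` and the literal timestamps are assumptions for illustration.

# Strong, single-use read-only transaction: the only mode change stream
# queries accept.
selector = Google::Apis::SpannerV1::TransactionSelector.new(
  single_use: Google::Apis::SpannerV1::TransactionOptions.new(
    read_only: Google::Apis::SpannerV1::ReadOnly.new(strong: true)
  )
)

# READ_SingerStream is the TVF Spanner generates for a (hypothetical) change
# stream named SingerStream; the arguments bound the time range of interest.
request = Google::Apis::SpannerV1::ExecuteSqlRequest.new(
  sql: "SELECT ChangeRecord FROM READ_SingerStream(" \
       " start_timestamp => TIMESTAMP '2024-01-01T00:00:00Z'," \
       " end_timestamp => TIMESTAMP '2024-01-01T01:00:00Z'," \
       " partition_token => NULL," \
       " heartbeat_milliseconds => 10000)",
  transaction: selector
)

# Change stream TVFs must be queried through the streaming API.
partial_result = spanner.execute_session_streaming_sql(session_name, request)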
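
Finally, a sketch of the Partitioned DML flow described above: begin a transaction with `partitioned_dml` options, execute a single idempotent, fully partitionable DML statement, and issue no Commit or Rollback. The `Events` table and the cutoff timestamp are invented for illustration.

# Begin a Partitioned DML transaction. There is no Commit or Rollback:
# each internal per-partition transaction commits automatically.
txn = spanner.begin_session_transaction(
  session_name,
  Google::Apis::SpannerV1::BeginTransactionRequest.new(
    options: Google::Apis::SpannerV1::TransactionOptions.new(
      partitioned_dml: Google::Apis::SpannerV1::PartitionedDml.new
    )
  )
)

# Exactly one idempotent, fully partitionable statement per transaction.
# A statement like "SET Views = Views + 1" would be unsafe here, since a
# partition may have the statement applied more than once.
request = Google::Apis::SpannerV1::ExecuteSqlRequest.new(
  sql: "DELETE FROM Events WHERE CreatedAt < TIMESTAMP '2023-01-01T00:00:00Z'",
  transaction: Google::Apis::SpannerV1::TransactionSelector.new(id: txn.id),
  seqno: 1  # required for DML statements on this API
)

result = spanner.execute_session_sql(session_name, request)
puts result.stats.row_count_lower_bound  # Partitioned DML reports a lower bound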