google-apis-spanner_v1 0.42.0 → 0.43.0

This diff compares publicly available package versions as they appear in their respective public registries. It is provided for informational purposes only.
@@ -797,198 +797,7 @@ module Google
  # @return [Google::Apis::SpannerV1::Mutation]
  attr_accessor :mutation_key
 
- # Transactions: Each session can have at most one active transaction at a time (
- # note that standalone reads and queries use a transaction internally and do
- # count towards the one transaction limit). After the active transaction is
- # completed, the session can immediately be re-used for the next transaction. It
- # is not necessary to create a new session for each transaction. Transaction
- # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
- # This type of transaction is the only way to write data into Cloud Spanner.
- # These transactions rely on pessimistic locking and, if necessary, two-phase
- # commit. Locking read-write transactions may abort, requiring the application
- # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
- # guaranteed consistency across several reads, but do not allow writes. Snapshot
- # read-only transactions can be configured to read at timestamps in the past, or
- # configured to perform a strong read (where Spanner selects a timestamp such
- # that the read is guaranteed to see the effects of all transactions that have
- # committed before the start of the read). Snapshot read-only transactions do
- # not need to be committed. Queries on change streams must be performed with the
- # snapshot read-only transaction mode, specifying a strong read. See
- # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
- # type of transaction is used to execute a single Partitioned DML statement.
- # Partitioned DML partitions the key space and runs the DML statement over each
- # partition in parallel using separate, internal transactions that commit
- # independently. Partitioned DML transactions do not need to be committed. For
- # transactions that only read, snapshot read-only transactions provide simpler
- # semantics and are almost always faster. In particular, read-only transactions
- # do not take locks, so they do not conflict with read-write transactions. As a
- # consequence of not taking locks, they also do not abort, so retry loops are
- # not needed. Transactions may only read-write data in a single database. They
- # may, however, read-write data in different tables within that database.
- # Locking read-write transactions: Locking transactions may be used to
- # atomically read-modify-write data anywhere in a database. This type of
- # transaction is externally consistent. Clients should attempt to minimize the
- # amount of time a transaction is active. Faster transactions commit with higher
- # probability and cause less contention. Cloud Spanner attempts to keep read
- # locks active as long as the transaction continues to do reads, and the
- # transaction has not been terminated by Commit or Rollback. Long periods of
- # inactivity at the client may cause Cloud Spanner to release a transaction's
- # locks and abort it. Conceptually, a read-write transaction consists of zero or
- # more reads or SQL statements followed by Commit. At any time before Commit,
- # the client can send a Rollback request to abort the transaction. Semantics:
- # Cloud Spanner can commit the transaction if all read locks it acquired are
- # still valid at commit time, and it is able to acquire write locks for all
- # writes. Cloud Spanner can abort the transaction for any reason. If a commit
- # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
- # not modified any user data in Cloud Spanner. Unless the transaction commits,
- # Cloud Spanner makes no guarantees about how long the transaction's locks were
- # held for. It is an error to use Cloud Spanner locks for any sort of mutual
- # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # aborted transactions: When a transaction aborts, the application can choose to
- # retry the whole transaction again. To maximize the chances of successfully
- # committing the retry, the client should execute the retry in the same session
- # as the original attempt. The original session's lock priority increases with
- # each consecutive abort, meaning that each attempt has a slightly better chance
- # of success than the previous. Note that the lock priority is preserved per
- # session (not per transaction). Lock priority is set by the first read or write
- # in the first attempt of a read-write transaction. If the application starts a
- # new session to retry the whole transaction, the transaction loses its original
- # lock priority. Moreover, the lock priority is only preserved if the
- # transaction fails with an `ABORTED` error. Under some circumstances (for
- # example, many transactions attempting to modify the same row(s)), a
- # transaction can abort many times in a short period before successfully
- # committing. Thus, it is not a good idea to cap the number of retries a
- # transaction can attempt; instead, it is better to limit the total amount of
- # time spent retrying. Idle transactions: A transaction is considered idle if it
- # has no outstanding reads or SQL queries and has not started a read or SQL
- # query within the last 10 seconds. Idle transactions can be aborted by Cloud
- # Spanner so that they don't hold on to locks indefinitely. If an idle
- # transaction is aborted, the commit fails with error `ABORTED`. If this
- # behavior is undesirable, periodically executing a simple SQL query in the
- # transaction (for example, `SELECT 1`) prevents the transaction from becoming
- # idle. Snapshot read-only transactions: Snapshot read-only transactions
- # provide a simpler method than locking read-write transactions for doing
- # several consistent reads. However, this type of transaction does not support
- # writes. Snapshot transactions do not take locks. Instead, they work by
- # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
- # Since they do not acquire locks, they do not block concurrent read-write
- # transactions. Unlike locking read-write transactions, snapshot read-only
- # transactions never abort. They can fail if the chosen read timestamp is
- # garbage collected; however, the default garbage collection policy is generous
- # enough that most applications do not need to worry about this in practice.
- # Snapshot read-only transactions do not need to call Commit or Rollback (and in
- # fact are not permitted to do so). To execute a snapshot transaction, the
- # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
- # read timestamp. The types of timestamp bound are: - Strong (the default). -
- # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
- # is geographically distributed, stale read-only transactions can execute more
- # quickly than strong or read-write transactions, because they are able to
- # execute far from the leader replica. Each type of timestamp bound is discussed
- # in detail below. Strong: Strong reads are guaranteed to see the effects of all
- # transactions that have committed before the start of the read. Furthermore,
- # all rows yielded by a single read are consistent with each other -- if any
- # part of the read observes a transaction, all parts of the read see the
- # transaction. Strong reads are not repeatable: two consecutive strong read-only
- # transactions might return inconsistent results if there are concurrent writes.
- # If consistency across reads is required, the reads should be executed within a
- # transaction or at an exact read timestamp. Queries on change streams (see
- # below for more details) must also specify the strong read timestamp bound. See
- # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds
- # execute reads at a user-specified timestamp. Reads at a timestamp are
- # guaranteed to see a consistent prefix of the global transaction history: they
- # observe modifications done by all transactions with a commit timestamp less
- # than or equal to the read timestamp, and observe none of the modifications
- # done by transactions with a larger commit timestamp. They block until all
- # conflicting transactions that can be assigned commit timestamps <= the read
- # timestamp have finished. The timestamp can either be expressed as an absolute
- # Cloud Spanner commit timestamp or a staleness relative to the current time.
- # These modes do not require a "negotiation phase" to pick a timestamp. As a
- # result, they execute slightly faster than the equivalent boundedly stale
- # concurrency modes. On the other hand, boundedly stale reads usually return
- # fresher results. See TransactionOptions.ReadOnly.read_timestamp and
- # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
- # staleness modes allow Cloud Spanner to pick the read timestamp, subject to a
- # user-provided staleness bound. Cloud Spanner chooses the newest timestamp
- # within the staleness bound that allows execution of the reads at the closest
- # available replica without blocking. All rows yielded are consistent with each
- # other -- if any part of the read observes a transaction, all parts of the read
- # see the transaction. Boundedly stale reads are not repeatable: two stale reads,
- # even if they use the same staleness bound, can execute at different
- # timestamps and thus return inconsistent results. Boundedly stale reads execute
- # in two phases: the first phase negotiates a timestamp among all replicas
- # needed to serve the read. In the second phase, reads are executed at the
- # negotiated timestamp. As a result of the two phase execution, bounded
- # staleness reads are usually a little slower than comparable exact staleness
- # reads. However, they are typically able to return fresher results, and are
- # more likely to execute at the closest replica. Because the timestamp
- # negotiation requires up-front knowledge of which rows are read, it can only be
- # used with single-use read-only transactions. See TransactionOptions.ReadOnly.
- # max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read
- # timestamps and garbage collection: Cloud Spanner continuously garbage collects
- # deleted and overwritten data in the background to reclaim storage space. This
- # process is known as "version GC". By default, version GC reclaims versions
- # after they are one hour old. Because of this, Cloud Spanner can't perform
- # reads at read timestamps more than one hour in the past. This restriction also
- # applies to in-progress reads and/or SQL queries whose timestamps become too old
- # while executing. Reads and SQL queries with too-old read timestamps fail with
- # the error `FAILED_PRECONDITION`. You can configure and extend the `
- # VERSION_RETENTION_PERIOD` of a database up to a period as long as one week,
- # which allows Cloud Spanner to perform reads up to one week in the past.
- # Querying change streams: A change stream is a schema object that can be
- # configured to watch data changes on the entire database, a set of tables, or a
- # set of columns in a database. When a change stream is created, Spanner
- # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
- # be used to query the change records in the associated change stream using the
- # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
- # from the name of the change stream: READ_. All queries on change stream TVFs
- # must be executed using the ExecuteStreamingSql API with a single-use read-only
- # transaction with a strong read-only timestamp_bound. The change stream TVF
- # allows users to specify the start_timestamp and end_timestamp for the time
- # range of interest. All change records within the retention period are
- # accessible using the strong read-only timestamp_bound. All other
- # TransactionOptions are invalid for change stream queries. In addition, if
- # TransactionOptions.read_only.return_read_timestamp is set to true, a special
- # value of 2^63 - 2 is returned in the Transaction message that describes the
- # transaction, instead of a valid read timestamp. This special value should be
- # discarded and not used for any subsequent queries. Please see https://cloud.
- # google.com/spanner/docs/change-streams for more details on how to query the
- # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
- # are used to execute DML statements with a different execution strategy that
- # provides different, and often better, scalability properties for large, table-
- # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
- # such as an OLTP workload, should prefer using ReadWrite transactions.
- # Partitioned DML partitions the keyspace and runs the DML statement on each
- # partition in separate, internal transactions. These transactions commit
- # automatically when complete, and run independently from one another. To reduce
- # lock contention, this execution strategy only acquires read locks on rows that
- # match the WHERE clause of the statement. Additionally, the smaller per-
- # partition transactions hold locks for less time. That said, Partitioned DML is
- # not a drop-in replacement for standard DML used in ReadWrite transactions. -
- # The DML statement must be fully-partitionable. Specifically, the statement
- # must be expressible as the union of many statements which each access only a
- # single row of the table. - The statement is not applied atomically to all rows
- # of the table. Rather, the statement is applied atomically to partitions of the
- # table, in independent transactions. Secondary index rows are updated
- # atomically with the base table rows. - Partitioned DML does not guarantee
- # exactly-once execution semantics against a partition. The statement is applied
- # at least once to each partition. It is strongly recommended that the DML
- # statement be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that the statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that can't be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is a good fit for large, database-wide operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # Options to use for transactions.
  # Corresponds to the JSON property `options`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :options
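
The removed block above is the generated reference documentation for transaction modes; in this release it collapses to a one-line pointer, since the same text lives on TransactionOptions. As a minimal sketch of what that documentation describes — the classes and the service call below come from the released gem rather than this diff excerpt, and the session name is a placeholder — a strong snapshot read-only transaction could be begun like this:

    require "google/apis/spanner_v1"

    spanner = Google::Apis::SpannerV1::SpannerService.new

    # Strong snapshot read-only mode: Spanner picks a timestamp that sees all
    # transactions committed before the read starts; no Commit/Rollback needed.
    options = Google::Apis::SpannerV1::TransactionOptions.new(
      read_only: Google::Apis::SpannerV1::ReadOnly.new(
        strong: true,
        return_read_timestamp: true
      )
    )
    request = Google::Apis::SpannerV1::BeginTransactionRequest.new(options: options)
    txn = spanner.begin_session_transaction(
      "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session",
      request
    )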
@@ -1179,6 +988,79 @@ module Google
  end
  end
 
+ # Spanner Change Streams enable customers to capture and stream out changes to
+ # their Spanner databases in real time. A change stream can be created with
+ # option partition_mode='IMMUTABLE_KEY_RANGE' or partition_mode='
+ # MUTABLE_KEY_RANGE'. This message is only used in Change Streams created with
+ # the option partition_mode='MUTABLE_KEY_RANGE'. Spanner automatically creates a
+ # special Table-Valued Function (TVF) along with each change stream. The
+ # function provides access to the change stream's records. The function is named
+ # READ_ (where is the name of the change stream), and it returns a table with
+ # only one column called ChangeRecord.
+ class ChangeStreamRecord
+ include Google::Apis::Core::Hashable
+
+ # A data change record contains a set of changes to a table with the same
+ # modification type (insert, update, or delete) committed at the same commit
+ # timestamp in one change stream partition for the same transaction. Multiple
+ # data change records can be returned for the same transaction across multiple
+ # change stream partitions.
+ # Corresponds to the JSON property `dataChangeRecord`
+ # @return [Google::Apis::SpannerV1::DataChangeRecord]
+ attr_accessor :data_change_record
+
+ # A heartbeat record is returned as a progress indicator when there are no data
+ # changes or any other partition record types in the change stream partition.
+ # Corresponds to the JSON property `heartbeatRecord`
+ # @return [Google::Apis::SpannerV1::HeartbeatRecord]
+ attr_accessor :heartbeat_record
+
+ # A partition end record serves as a notification that the client should stop
+ # reading the partition. No further records are expected to be retrieved on it.
+ # Corresponds to the JSON property `partitionEndRecord`
+ # @return [Google::Apis::SpannerV1::PartitionEndRecord]
+ attr_accessor :partition_end_record
+
+ # A partition event record describes key range changes for a change stream
+ # partition. The changes to a row defined by its primary key can be captured in
+ # one change stream partition for a specific time range, and then be captured in
+ # a different change stream partition for a different time range. This movement
+ # of key ranges across change stream partitions is a reflection of activities,
+ # such as Spanner's dynamic splitting and load balancing, etc. Processing this
+ # event is needed if users want to guarantee processing of the changes for any
+ # key in timestamp order. If time ordered processing of changes for a primary
+ # key is not needed, this event can be ignored. To guarantee time ordered
+ # processing for each primary key, if the event describes move-ins, the reader
+ # of this partition needs to wait until the readers of the source partitions
+ # have processed all records with timestamps <= this PartitionEventRecord.
+ # commit_timestamp, before advancing beyond this PartitionEventRecord. If the
+ # event describes move-outs, the reader can notify the readers of the
+ # destination partitions that they can continue processing.
+ # Corresponds to the JSON property `partitionEventRecord`
+ # @return [Google::Apis::SpannerV1::PartitionEventRecord]
+ attr_accessor :partition_event_record
+
+ # A partition start record serves as a notification that the client should
+ # schedule the partitions to be queried. PartitionStartRecord returns
+ # information about one or more partitions.
+ # Corresponds to the JSON property `partitionStartRecord`
+ # @return [Google::Apis::SpannerV1::PartitionStartRecord]
+ attr_accessor :partition_start_record
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @data_change_record = args[:data_change_record] if args.key?(:data_change_record)
+ @heartbeat_record = args[:heartbeat_record] if args.key?(:heartbeat_record)
+ @partition_end_record = args[:partition_end_record] if args.key?(:partition_end_record)
+ @partition_event_record = args[:partition_event_record] if args.key?(:partition_event_record)
+ @partition_start_record = args[:partition_start_record] if args.key?(:partition_start_record)
+ end
+ end
+
  # Metadata associated with a parent-child relationship appearing in a PlanNode.
  class ChildLink
  include Google::Apis::Core::Hashable
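
ChangeStreamRecord behaves like a union: each record is expected to carry exactly one of the five sub-records above. A sketch of a per-record dispatch loop — the handler method names are hypothetical; only the accessors come from the class in this hunk:

    # Hypothetical dispatcher; exactly one branch should match per record.
    def handle(record)
      if record.data_change_record
        apply_data_change(record.data_change_record)
      elsif record.heartbeat_record
        advance_watermark(record.heartbeat_record.timestamp)
      elsif record.partition_start_record
        schedule_partitions(record.partition_start_record)
      elsif record.partition_end_record
        finish_partition # no further records arrive on this partition
      elsif record.partition_event_record
        handle_key_range_moves(record.partition_event_record)
      end
    end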
@@ -1218,6 +1100,46 @@ module Google
  end
  end
 
+ # Metadata for a column.
+ class ColumnMetadata
+ include Google::Apis::Core::Hashable
+
+ # Indicates whether the column is a primary key column.
+ # Corresponds to the JSON property `isPrimaryKey`
+ # @return [Boolean]
+ attr_accessor :is_primary_key
+ alias_method :is_primary_key?, :is_primary_key
+
+ # Name of the column.
+ # Corresponds to the JSON property `name`
+ # @return [String]
+ attr_accessor :name
+
+ # Ordinal position of the column based on the original table definition in the
+ # schema starting with a value of 1.
+ # Corresponds to the JSON property `ordinalPosition`
+ # @return [Fixnum]
+ attr_accessor :ordinal_position
+
+ # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+ # table cell or returned from an SQL query.
+ # Corresponds to the JSON property `type`
+ # @return [Google::Apis::SpannerV1::Type]
+ attr_accessor :type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @is_primary_key = args[:is_primary_key] if args.key?(:is_primary_key)
+ @name = args[:name] if args.key?(:name)
+ @ordinal_position = args[:ordinal_position] if args.key?(:ordinal_position)
+ @type = args[:type] if args.key?(:type)
+ end
+ end
+
  # The request for Commit.
  class CommitRequest
  include Google::Apis::Core::Hashable
@@ -1256,198 +1178,7 @@ module Google
  attr_accessor :return_commit_stats
  alias_method :return_commit_stats?, :return_commit_stats
 
- # Transactions: Each session can have at most one active transaction at a time (
- # note that standalone reads and queries use a transaction internally and do
- # count towards the one transaction limit). After the active transaction is
- # completed, the session can immediately be re-used for the next transaction. It
- # is not necessary to create a new session for each transaction. Transaction
- # modes: Cloud Spanner supports three transaction modes: 1. Locking read-write.
- # This type of transaction is the only way to write data into Cloud Spanner.
- # These transactions rely on pessimistic locking and, if necessary, two-phase
- # commit. Locking read-write transactions may abort, requiring the application
- # to retry. 2. Snapshot read-only. Snapshot read-only transactions provide
- # guaranteed consistency across several reads, but do not allow writes. Snapshot
- # read-only transactions can be configured to read at timestamps in the past, or
- # configured to perform a strong read (where Spanner selects a timestamp such
- # that the read is guaranteed to see the effects of all transactions that have
- # committed before the start of the read). Snapshot read-only transactions do
- # not need to be committed. Queries on change streams must be performed with the
- # snapshot read-only transaction mode, specifying a strong read. See
- # TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This
- # type of transaction is used to execute a single Partitioned DML statement.
- # Partitioned DML partitions the key space and runs the DML statement over each
- # partition in parallel using separate, internal transactions that commit
- # independently. Partitioned DML transactions do not need to be committed. For
- # transactions that only read, snapshot read-only transactions provide simpler
- # semantics and are almost always faster. In particular, read-only transactions
- # do not take locks, so they do not conflict with read-write transactions. As a
- # consequence of not taking locks, they also do not abort, so retry loops are
- # not needed. Transactions may only read-write data in a single database. They
- # may, however, read-write data in different tables within that database.
- # Locking read-write transactions: Locking transactions may be used to
- # atomically read-modify-write data anywhere in a database. This type of
- # transaction is externally consistent. Clients should attempt to minimize the
- # amount of time a transaction is active. Faster transactions commit with higher
- # probability and cause less contention. Cloud Spanner attempts to keep read
- # locks active as long as the transaction continues to do reads, and the
- # transaction has not been terminated by Commit or Rollback. Long periods of
- # inactivity at the client may cause Cloud Spanner to release a transaction's
- # locks and abort it. Conceptually, a read-write transaction consists of zero or
- # more reads or SQL statements followed by Commit. At any time before Commit,
- # the client can send a Rollback request to abort the transaction. Semantics:
- # Cloud Spanner can commit the transaction if all read locks it acquired are
- # still valid at commit time, and it is able to acquire write locks for all
- # writes. Cloud Spanner can abort the transaction for any reason. If a commit
- # attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has
- # not modified any user data in Cloud Spanner. Unless the transaction commits,
- # Cloud Spanner makes no guarantees about how long the transaction's locks were
- # held for. It is an error to use Cloud Spanner locks for any sort of mutual
- # exclusion other than between Cloud Spanner transactions themselves. Retrying
- # aborted transactions: When a transaction aborts, the application can choose to
- # retry the whole transaction again. To maximize the chances of successfully
- # committing the retry, the client should execute the retry in the same session
- # as the original attempt. The original session's lock priority increases with
- # each consecutive abort, meaning that each attempt has a slightly better chance
- # of success than the previous. Note that the lock priority is preserved per
- # session (not per transaction). Lock priority is set by the first read or write
- # in the first attempt of a read-write transaction. If the application starts a
- # new session to retry the whole transaction, the transaction loses its original
- # lock priority. Moreover, the lock priority is only preserved if the
- # transaction fails with an `ABORTED` error. Under some circumstances (for
- # example, many transactions attempting to modify the same row(s)), a
- # transaction can abort many times in a short period before successfully
- # committing. Thus, it is not a good idea to cap the number of retries a
- # transaction can attempt; instead, it is better to limit the total amount of
- # time spent retrying. Idle transactions: A transaction is considered idle if it
- # has no outstanding reads or SQL queries and has not started a read or SQL
- # query within the last 10 seconds. Idle transactions can be aborted by Cloud
- # Spanner so that they don't hold on to locks indefinitely. If an idle
- # transaction is aborted, the commit fails with error `ABORTED`. If this
- # behavior is undesirable, periodically executing a simple SQL query in the
- # transaction (for example, `SELECT 1`) prevents the transaction from becoming
- # idle. Snapshot read-only transactions: Snapshot read-only transactions
- # provide a simpler method than locking read-write transactions for doing
- # several consistent reads. However, this type of transaction does not support
- # writes. Snapshot transactions do not take locks. Instead, they work by
- # choosing a Cloud Spanner timestamp, then executing all reads at that timestamp.
- # Since they do not acquire locks, they do not block concurrent read-write
- # transactions. Unlike locking read-write transactions, snapshot read-only
- # transactions never abort. They can fail if the chosen read timestamp is
- # garbage collected; however, the default garbage collection policy is generous
- # enough that most applications do not need to worry about this in practice.
- # Snapshot read-only transactions do not need to call Commit or Rollback (and in
- # fact are not permitted to do so). To execute a snapshot transaction, the
- # client specifies a timestamp bound, which tells Cloud Spanner how to choose a
- # read timestamp. The types of timestamp bound are: - Strong (the default). -
- # Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read
- # is geographically distributed, stale read-only transactions can execute more
- # quickly than strong or read-write transactions, because they are able to
- # execute far from the leader replica. Each type of timestamp bound is discussed
- # in detail below. Strong: Strong reads are guaranteed to see the effects of all
- # transactions that have committed before the start of the read. Furthermore,
- # all rows yielded by a single read are consistent with each other -- if any
- # part of the read observes a transaction, all parts of the read see the
- # transaction. Strong reads are not repeatable: two consecutive strong read-only
- # transactions might return inconsistent results if there are concurrent writes.
- # If consistency across reads is required, the reads should be executed within a
- # transaction or at an exact read timestamp. Queries on change streams (see
- # below for more details) must also specify the strong read timestamp bound. See
- # TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds
- # execute reads at a user-specified timestamp. Reads at a timestamp are
- # guaranteed to see a consistent prefix of the global transaction history: they
- # observe modifications done by all transactions with a commit timestamp less
- # than or equal to the read timestamp, and observe none of the modifications
- # done by transactions with a larger commit timestamp. They block until all
- # conflicting transactions that can be assigned commit timestamps <= the read
- # timestamp have finished. The timestamp can either be expressed as an absolute
- # Cloud Spanner commit timestamp or a staleness relative to the current time.
- # These modes do not require a "negotiation phase" to pick a timestamp. As a
- # result, they execute slightly faster than the equivalent boundedly stale
- # concurrency modes. On the other hand, boundedly stale reads usually return
- # fresher results. See TransactionOptions.ReadOnly.read_timestamp and
- # TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded
- # staleness modes allow Cloud Spanner to pick the read timestamp, subject to a
- # user-provided staleness bound. Cloud Spanner chooses the newest timestamp
- # within the staleness bound that allows execution of the reads at the closest
- # available replica without blocking. All rows yielded are consistent with each
- # other -- if any part of the read observes a transaction, all parts of the read
- # see the transaction. Boundedly stale reads are not repeatable: two stale reads,
- # even if they use the same staleness bound, can execute at different
- # timestamps and thus return inconsistent results. Boundedly stale reads execute
- # in two phases: the first phase negotiates a timestamp among all replicas
- # needed to serve the read. In the second phase, reads are executed at the
- # negotiated timestamp. As a result of the two phase execution, bounded
- # staleness reads are usually a little slower than comparable exact staleness
- # reads. However, they are typically able to return fresher results, and are
- # more likely to execute at the closest replica. Because the timestamp
- # negotiation requires up-front knowledge of which rows are read, it can only be
- # used with single-use read-only transactions. See TransactionOptions.ReadOnly.
- # max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read
- # timestamps and garbage collection: Cloud Spanner continuously garbage collects
- # deleted and overwritten data in the background to reclaim storage space. This
- # process is known as "version GC". By default, version GC reclaims versions
- # after they are one hour old. Because of this, Cloud Spanner can't perform
- # reads at read timestamps more than one hour in the past. This restriction also
- # applies to in-progress reads and/or SQL queries whose timestamps become too old
- # while executing. Reads and SQL queries with too-old read timestamps fail with
- # the error `FAILED_PRECONDITION`. You can configure and extend the `
- # VERSION_RETENTION_PERIOD` of a database up to a period as long as one week,
- # which allows Cloud Spanner to perform reads up to one week in the past.
- # Querying change streams: A change stream is a schema object that can be
- # configured to watch data changes on the entire database, a set of tables, or a
- # set of columns in a database. When a change stream is created, Spanner
- # automatically defines a corresponding SQL Table-Valued Function (TVF) that can
- # be used to query the change records in the associated change stream using the
- # ExecuteStreamingSql API. The name of the TVF for a change stream is generated
- # from the name of the change stream: READ_. All queries on change stream TVFs
- # must be executed using the ExecuteStreamingSql API with a single-use read-only
- # transaction with a strong read-only timestamp_bound. The change stream TVF
- # allows users to specify the start_timestamp and end_timestamp for the time
- # range of interest. All change records within the retention period are
- # accessible using the strong read-only timestamp_bound. All other
- # TransactionOptions are invalid for change stream queries. In addition, if
- # TransactionOptions.read_only.return_read_timestamp is set to true, a special
- # value of 2^63 - 2 is returned in the Transaction message that describes the
- # transaction, instead of a valid read timestamp. This special value should be
- # discarded and not used for any subsequent queries. Please see https://cloud.
- # google.com/spanner/docs/change-streams for more details on how to query the
- # change stream TVFs. Partitioned DML transactions: Partitioned DML transactions
- # are used to execute DML statements with a different execution strategy that
- # provides different, and often better, scalability properties for large, table-
- # wide operations than DML in a ReadWrite transaction. Smaller scoped statements,
- # such as an OLTP workload, should prefer using ReadWrite transactions.
- # Partitioned DML partitions the keyspace and runs the DML statement on each
- # partition in separate, internal transactions. These transactions commit
- # automatically when complete, and run independently from one another. To reduce
- # lock contention, this execution strategy only acquires read locks on rows that
- # match the WHERE clause of the statement. Additionally, the smaller per-
- # partition transactions hold locks for less time. That said, Partitioned DML is
- # not a drop-in replacement for standard DML used in ReadWrite transactions. -
- # The DML statement must be fully-partitionable. Specifically, the statement
- # must be expressible as the union of many statements which each access only a
- # single row of the table. - The statement is not applied atomically to all rows
- # of the table. Rather, the statement is applied atomically to partitions of the
- # table, in independent transactions. Secondary index rows are updated
- # atomically with the base table rows. - Partitioned DML does not guarantee
- # exactly-once execution semantics against a partition. The statement is applied
- # at least once to each partition. It is strongly recommended that the DML
- # statement be idempotent to avoid unexpected results. For instance, it
- # is potentially dangerous to run a statement such as `UPDATE table SET column =
- # column + 1` as it could be run multiple times against some rows. - The
- # partitions are committed automatically - there is no support for Commit or
- # Rollback. If the call returns an error, or if the client issuing the
- # ExecuteSql call dies, it is possible that some rows had the statement executed
- # on them successfully. It is also possible that the statement was never executed
- # against other rows. - Partitioned DML transactions may only contain the
- # execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. -
- # If any error is encountered during the execution of the partitioned DML
- # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
- # value that can't be stored due to schema constraints), then the operation is
- # stopped at that point and an error is returned. It is possible that at this
- # point, some partitions have been committed (or even committed multiple times),
- # and other partitions have not been run at all. Given the above, Partitioned
- # DML is a good fit for large, database-wide operations that are idempotent, such
- # as deleting old rows from a very large table.
+ # Options to use for transactions.
  # Corresponds to the JSON property `singleUseTransaction`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use_transaction
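
Like `options` above, `single_use_transaction` now carries the short TransactionOptions pointer. A sketch of the field in use for a blind-write commit — `spanner`, `session_name`, and `mutation` are placeholders, while CommitRequest, TransactionOptions, ReadWrite, and commit_session are the gem's generated names:

    commit = Google::Apis::SpannerV1::CommitRequest.new(
      # Single-use read-write transaction: begin, write, and commit in one RPC.
      single_use_transaction: Google::Apis::SpannerV1::TransactionOptions.new(
        read_write: Google::Apis::SpannerV1::ReadWrite.new
      ),
      mutations: [mutation] # a Google::Apis::SpannerV1::Mutation, placeholder
    )
    response = spanner.commit_session(session_name, commit)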
@@ -2152,6 +1883,125 @@ module Google
  end
  end
 
+ # A data change record contains a set of changes to a table with the same
+ # modification type (insert, update, or delete) committed at the same commit
+ # timestamp in one change stream partition for the same transaction. Multiple
+ # data change records can be returned for the same transaction across multiple
+ # change stream partitions.
+ class DataChangeRecord
+ include Google::Apis::Core::Hashable
+
+ # Provides metadata describing the columns associated with the mods listed below.
+ # Corresponds to the JSON property `columnMetadata`
+ # @return [Array<Google::Apis::SpannerV1::ColumnMetadata>]
+ attr_accessor :column_metadata
+
+ # Indicates the timestamp at which the change was committed. DataChangeRecord.
+ # commit_timestamps, PartitionStartRecord.start_timestamps, PartitionEventRecord.
+ # commit_timestamps, and PartitionEndRecord.end_timestamps can have the same
+ # value in the same partition.
+ # Corresponds to the JSON property `commitTimestamp`
+ # @return [String]
+ attr_accessor :commit_timestamp
+
+ # Indicates whether this is the last record for a transaction in the current
+ # partition. Clients can use this field to determine when all records for a
+ # transaction in the current partition have been received.
+ # Corresponds to the JSON property `isLastRecordInTransactionInPartition`
+ # @return [Boolean]
+ attr_accessor :is_last_record_in_transaction_in_partition
+ alias_method :is_last_record_in_transaction_in_partition?, :is_last_record_in_transaction_in_partition
+
+ # Indicates whether the transaction is a system transaction. System transactions
+ # include those issued by time-to-live (TTL), column backfill, etc.
+ # Corresponds to the JSON property `isSystemTransaction`
+ # @return [Boolean]
+ attr_accessor :is_system_transaction
+ alias_method :is_system_transaction?, :is_system_transaction
+
+ # Describes the type of change.
+ # Corresponds to the JSON property `modType`
+ # @return [String]
+ attr_accessor :mod_type
+
+ # Describes the changes that were made.
+ # Corresponds to the JSON property `mods`
+ # @return [Array<Google::Apis::SpannerV1::Mod>]
+ attr_accessor :mods
+
+ # Indicates the number of partitions that return data change records for this
+ # transaction. This value can be helpful in assembling all records associated
+ # with a particular transaction.
+ # Corresponds to the JSON property `numberOfPartitionsInTransaction`
+ # @return [Fixnum]
+ attr_accessor :number_of_partitions_in_transaction
+
+ # Indicates the number of data change records that are part of this transaction
+ # across all change stream partitions. This value can be used to assemble all
+ # the records associated with a particular transaction.
+ # Corresponds to the JSON property `numberOfRecordsInTransaction`
+ # @return [Fixnum]
+ attr_accessor :number_of_records_in_transaction
+
+ # Record sequence numbers are unique and monotonically increasing (but not
+ # necessarily contiguous) for a specific timestamp across record types in the
+ # same partition. To guarantee ordered processing, the reader should process
+ # records (of potentially different types) in record_sequence order for a
+ # specific timestamp in the same partition. The record sequence number ordering
+ # across partitions is only meaningful in the context of a specific transaction.
+ # Record sequence numbers are unique across partitions for a specific
+ # transaction. Sort the DataChangeRecords for the same server_transaction_id by
+ # record_sequence to reconstruct the ordering of the changes within the
+ # transaction.
+ # Corresponds to the JSON property `recordSequence`
+ # @return [String]
+ attr_accessor :record_sequence
+
+ # Provides a globally unique string that represents the transaction in which the
+ # change was committed. Multiple transactions can have the same commit timestamp,
+ # but each transaction has a unique server_transaction_id.
+ # Corresponds to the JSON property `serverTransactionId`
+ # @return [String]
+ attr_accessor :server_transaction_id
+
+ # Name of the table affected by the change.
+ # Corresponds to the JSON property `table`
+ # @return [String]
+ attr_accessor :table
+
+ # Indicates the transaction tag associated with this transaction.
+ # Corresponds to the JSON property `transactionTag`
+ # @return [String]
+ attr_accessor :transaction_tag
+
+ # Describes the value capture type that was specified in the change stream
+ # configuration when this change was captured.
+ # Corresponds to the JSON property `valueCaptureType`
+ # @return [String]
+ attr_accessor :value_capture_type
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @column_metadata = args[:column_metadata] if args.key?(:column_metadata)
+ @commit_timestamp = args[:commit_timestamp] if args.key?(:commit_timestamp)
+ @is_last_record_in_transaction_in_partition = args[:is_last_record_in_transaction_in_partition] if args.key?(:is_last_record_in_transaction_in_partition)
+ @is_system_transaction = args[:is_system_transaction] if args.key?(:is_system_transaction)
+ @mod_type = args[:mod_type] if args.key?(:mod_type)
+ @mods = args[:mods] if args.key?(:mods)
+ @number_of_partitions_in_transaction = args[:number_of_partitions_in_transaction] if args.key?(:number_of_partitions_in_transaction)
+ @number_of_records_in_transaction = args[:number_of_records_in_transaction] if args.key?(:number_of_records_in_transaction)
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
+ @server_transaction_id = args[:server_transaction_id] if args.key?(:server_transaction_id)
+ @table = args[:table] if args.key?(:table)
+ @transaction_tag = args[:transaction_tag] if args.key?(:transaction_tag)
+ @value_capture_type = args[:value_capture_type] if args.key?(:value_capture_type)
+ end
+ end
+
  # A Cloud Spanner database.
  class Database
  include Google::Apis::Core::Hashable
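
The recordSequence and numberOfRecordsInTransaction comments above suggest a straightforward reassembly scheme for multi-partition transactions. A sketch under assumptions: `records` is an already-buffered collection of DataChangeRecord objects and `process` is a hypothetical callback:

    records.group_by(&:server_transaction_id).each do |txn_id, txn_records|
      # record_sequence is a numeric string; compare as integers to be safe.
      ordered = txn_records.sort_by { |r| r.record_sequence.to_i }
      # The transaction is complete once the expected record count has arrived.
      if txn_records.size == ordered.first.number_of_records_in_transaction
        process(txn_id, ordered)
      end
    end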
@@ -3080,6 +2930,29 @@ module Google
  end
  end
 
+ # A heartbeat record is returned as a progress indicator when there are no data
+ # changes or any other partition record types in the change stream partition.
+ class HeartbeatRecord
+ include Google::Apis::Core::Hashable
+
+ # Indicates the timestamp at which the query has returned all the records in the
+ # change stream partition with timestamp <= heartbeat timestamp. The heartbeat
+ # timestamp will not be the same as the timestamps of other record types in the
+ # same partition.
+ # Corresponds to the JSON property `timestamp`
+ # @return [String]
+ attr_accessor :timestamp
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @timestamp = args[:timestamp] if args.key?(:timestamp)
+ end
+ end
+
  # An `IncludeReplicas` contains a repeated set of `ReplicaSelection` which
  # indicates the order in which replicas should be considered.
  class IncludeReplicas
@@ -4499,6 +4372,92 @@ module Google
  end
  end
 
+ # A mod describes all data changes in a watched table row.
+ class Mod
+ include Google::Apis::Core::Hashable
+
+ # Returns the value of the primary key of the modified row.
+ # Corresponds to the JSON property `keys`
+ # @return [Array<Google::Apis::SpannerV1::ModValue>]
+ attr_accessor :keys
+
+ # Returns the new values after the change for the modified columns. Always empty
+ # for DELETE.
+ # Corresponds to the JSON property `newValues`
+ # @return [Array<Google::Apis::SpannerV1::ModValue>]
+ attr_accessor :new_values
+
+ # Returns the old values before the change for the modified columns. Always
+ # empty for INSERT, or if old values are not being captured, as specified by
+ # value_capture_type.
+ # Corresponds to the JSON property `oldValues`
+ # @return [Array<Google::Apis::SpannerV1::ModValue>]
+ attr_accessor :old_values
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @keys = args[:keys] if args.key?(:keys)
+ @new_values = args[:new_values] if args.key?(:new_values)
+ @old_values = args[:old_values] if args.key?(:old_values)
+ end
+ end
+
+ # Returns the value and associated metadata for a particular field of the Mod.
+ class ModValue
+ include Google::Apis::Core::Hashable
+
+ # Index within the repeated column_metadata field, to obtain the column metadata
+ # for the column that was modified.
+ # Corresponds to the JSON property `columnMetadataIndex`
+ # @return [Fixnum]
+ attr_accessor :column_metadata_index
+
+ # The value of the column.
+ # Corresponds to the JSON property `value`
+ # @return [Object]
+ attr_accessor :value
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @column_metadata_index = args[:column_metadata_index] if args.key?(:column_metadata_index)
+ @value = args[:value] if args.key?(:value)
+ end
+ end
+
+ # Describes move-in of the key ranges into the change stream partition
+ # identified by partition_token. To maintain processing the changes for a
+ # particular key in timestamp order, the query processing the change stream
+ # partition identified by partition_token should not advance beyond the
+ # partition event record commit timestamp until the queries processing the
+ # source change stream partitions have processed all change stream records with
+ # timestamps <= the partition event record commit timestamp.
+ class MoveInEvent
+ include Google::Apis::Core::Hashable
+
+ # A unique partition identifier describing the source change stream partition
+ # that recorded changes for the key range that is moving into this partition.
+ # Corresponds to the JSON property `sourcePartitionToken`
+ # @return [String]
+ attr_accessor :source_partition_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @source_partition_token = args[:source_partition_token] if args.key?(:source_partition_token)
+ end
+ end
+
  # The request for MoveInstance.
  class MoveInstanceRequest
  include Google::Apis::Core::Hashable
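
ModValue stores an index into the owning DataChangeRecord's column_metadata array rather than repeating the metadata per value. A sketch of resolving a Mod's new values into a column-name-to-value hash (the helper name is hypothetical; the accessors come from the classes above):

    # Maps each ModValue back to its ColumnMetadata entry by index.
    def new_values_hash(data_change_record, mod)
      mod.new_values.to_h do |mod_value|
        meta = data_change_record.column_metadata[mod_value.column_metadata_index]
        [meta.name, mod_value.value]
      end
    end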
@@ -4526,6 +4485,32 @@ module Google
  end
  end
 
+ # Describes move-out of the key ranges out of the change stream partition
+ # identified by partition_token. To maintain processing the changes for a
+ # particular key in timestamp order, the query processing the MoveOutEvent in
+ # the partition identified by partition_token should inform the queries
+ # processing the destination partitions that they can unblock and proceed
+ # processing records past the commit_timestamp.
+ class MoveOutEvent
+ include Google::Apis::Core::Hashable
+
+ # A unique partition identifier describing the destination change stream
+ # partition that will record changes for the key range that is moving out of
+ # this partition.
+ # Corresponds to the JSON property `destinationPartitionToken`
+ # @return [String]
+ attr_accessor :destination_partition_token
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @destination_partition_token = args[:destination_partition_token] if args.key?(:destination_partition_token)
+ end
+ end
+
  # When a read-write transaction is executed on a multiplexed session, this
  # precommit token is sent back to the client as a part of the Transaction
  # message in the BeginTransaction response and also as a part of the ResultSet
@@ -4871,6 +4856,137 @@ module Google
4871
4856
  end
4872
4857
  end
4873
4858
 
4859
+ # A partition end record serves as a notification that the client should stop
4860
+ # reading the partition. No further records are expected to be retrieved on it.
4861
+ class PartitionEndRecord
4862
+ include Google::Apis::Core::Hashable
4863
+
4864
+ # End timestamp at which the change stream partition is terminated. All changes
4865
+ # generated by this partition will have timestamps <= end_timestamp.
4866
+ # DataChangeRecord.commit_timestamps, PartitionStartRecord.start_timestamps,
4867
+ # PartitionEventRecord.commit_timestamps, and PartitionEndRecord.end_timestamps
4868
+ # can have the same value in the same partition. PartitionEndRecord is the last
4869
+ # record returned for a partition.
4870
+ # Corresponds to the JSON property `endTimestamp`
4871
+ # @return [String]
4872
+ attr_accessor :end_timestamp
4873
+
4874
+ # Unique partition identifier describing the terminated change stream partition.
4875
+ # partition_token is equal to the partition token of the change stream partition
4876
+ # currently queried to return this PartitionEndRecord.
4877
+ # Corresponds to the JSON property `partitionToken`
4878
+ # @return [String]
4879
+ attr_accessor :partition_token
4880
+
4881
+ # Record sequence numbers are unique and monotonically increasing (but not
4882
+ # necessarily contiguous) for a specific timestamp across record types in the
4883
+ # same partition. To guarantee ordered processing, the reader should process
4884
+ # records (of potentially different types) in record_sequence order for a
4885
+ # specific timestamp in the same partition.
4886
+ # Corresponds to the JSON property `recordSequence`
4887
+ # @return [String]
4888
+ attr_accessor :record_sequence
4889
+
4890
+ def initialize(**args)
4891
+ update!(**args)
4892
+ end
4893
+
4894
+ # Update properties of this object
4895
+ def update!(**args)
4896
+ @end_timestamp = args[:end_timestamp] if args.key?(:end_timestamp)
4897
+ @partition_token = args[:partition_token] if args.key?(:partition_token)
4898
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
4899
+ end
4900
+ end
4901
+
+ # A partition event record describes key range changes for a change stream
+ # partition. The changes to a row defined by its primary key can be captured in
+ # one change stream partition for a specific time range, and then be captured in
+ # a different change stream partition for a different time range. This movement
+ # of key ranges across change stream partitions is a reflection of activities
+ # such as Spanner's dynamic splitting and load balancing. Processing this
+ # event is needed if users want to guarantee processing of the changes for any
+ # key in timestamp order. If time-ordered processing of changes for a primary
+ # key is not needed, this event can be ignored. To guarantee time-ordered
+ # processing for each primary key, if the event describes move-ins, the reader
+ # of this partition needs to wait until the readers of the source partitions
+ # have processed all records with timestamps <= this PartitionEventRecord.
+ # commit_timestamp, before advancing beyond this PartitionEventRecord. If the
+ # event describes move-outs, the reader can notify the readers of the
+ # destination partitions that they can continue processing.
+ class PartitionEventRecord
+ include Google::Apis::Core::Hashable
+
+ # Indicates the commit timestamp at which the key range change occurred.
+ # DataChangeRecord.commit_timestamps, PartitionStartRecord.start_timestamps,
+ # PartitionEventRecord.commit_timestamps, and PartitionEndRecord.end_timestamps
+ # can have the same value in the same partition.
+ # Corresponds to the JSON property `commitTimestamp`
+ # @return [String]
+ attr_accessor :commit_timestamp
+
+ # Set when one or more key ranges are moved into the change stream partition
+ # identified by partition_token. Example: Two key ranges are moved into
+ # partition (P1) from partition (P2) and partition (P3) in a single transaction
+ # at timestamp T. The PartitionEventRecord returned in P1 will reflect the move
+ # as: PartitionEventRecord ` commit_timestamp: T partition_token: "P1"
+ # move_in_events ` source_partition_token: "P2" ` move_in_events `
+ # source_partition_token: "P3" ` ` The PartitionEventRecord returned in P2 will
+ # reflect the move as: PartitionEventRecord ` commit_timestamp: T
+ # partition_token: "P2" move_out_events ` destination_partition_token: "P1" ` `
+ # The PartitionEventRecord returned in P3 will reflect the move as:
+ # PartitionEventRecord ` commit_timestamp: T partition_token: "P3"
+ # move_out_events ` destination_partition_token: "P1" ` `
+ # Corresponds to the JSON property `moveInEvents`
+ # @return [Array<Google::Apis::SpannerV1::MoveInEvent>]
+ attr_accessor :move_in_events
+
+ # Set when one or more key ranges are moved out of the change stream partition
+ # identified by partition_token. Example: Two key ranges are moved out of
+ # partition (P1) to partition (P2) and partition (P3) in a single transaction at
+ # timestamp T. The PartitionEventRecord returned in P1 will reflect the move as:
+ # PartitionEventRecord ` commit_timestamp: T partition_token: "P1"
+ # move_out_events ` destination_partition_token: "P2" ` move_out_events `
+ # destination_partition_token: "P3" ` ` The PartitionEventRecord returned in P2
+ # will reflect the move as: PartitionEventRecord ` commit_timestamp: T
+ # partition_token: "P2" move_in_events ` source_partition_token: "P1" ` ` The
+ # PartitionEventRecord returned in P3 will reflect the move as:
+ # PartitionEventRecord ` commit_timestamp: T partition_token: "P3"
+ # move_in_events ` source_partition_token: "P1" ` `
+ # Corresponds to the JSON property `moveOutEvents`
+ # @return [Array<Google::Apis::SpannerV1::MoveOutEvent>]
+ attr_accessor :move_out_events
+
+ # Unique partition identifier describing the partition this event occurred on.
+ # partition_token is equal to the partition token of the change stream partition
+ # currently queried to return this PartitionEventRecord.
+ # Corresponds to the JSON property `partitionToken`
+ # @return [String]
+ attr_accessor :partition_token
+
+ # Record sequence numbers are unique and monotonically increasing (but not
+ # necessarily contiguous) for a specific timestamp across record types in the
+ # same partition. To guarantee ordered processing, the reader should process
+ # records (of potentially different types) in record_sequence order for a
+ # specific timestamp in the same partition.
+ # Corresponds to the JSON property `recordSequence`
+ # @return [String]
+ attr_accessor :record_sequence
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @commit_timestamp = args[:commit_timestamp] if args.key?(:commit_timestamp)
+ @move_in_events = args[:move_in_events] if args.key?(:move_in_events)
+ @move_out_events = args[:move_out_events] if args.key?(:move_out_events)
+ @partition_token = args[:partition_token] if args.key?(:partition_token)
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
+ end
+ end
+
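A sketch (application-side, not part of the gem) of acting on a PartitionEventRecord, following the ordering rule in the comment above. The move_in_events and move_out_events accessors are from this diff; source_partition_token and destination_partition_token follow the field names shown in the examples; the two callbacks are hypothetical application code:

    # Process one PartitionEventRecord for the partition being read. Do not
    # advance past event.commit_timestamp until every move-in's source
    # partition has been processed up to that timestamp.
    def handle_partition_event(event, wait_for_timestamp:, notify_partition:)
      Array(event.move_in_events).each do |move_in|
        # Hypothetical callback: block until the source partition's reader has
        # processed all records with timestamps <= event.commit_timestamp.
        wait_for_timestamp.call(move_in.source_partition_token, event.commit_timestamp)
      end
      Array(event.move_out_events).each do |move_out|
        # Hypothetical callback: tell the destination partition's reader it
        # can continue processing.
        notify_partition.call(move_out.destination_partition_token)
      end
    end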
  # Options for a `PartitionQueryRequest` and `PartitionReadRequest`.
  class PartitionOptions
  include Google::Apis::Core::Hashable
@@ -5047,6 +5163,47 @@ module Google
  end
  end
 
+ # A partition start record serves as a notification that the client should
+ # schedule the partitions to be queried. PartitionStartRecord returns
+ # information about one or more partitions.
+ class PartitionStartRecord
+ include Google::Apis::Core::Hashable
+
+ # Unique partition identifiers to be used in queries.
+ # Corresponds to the JSON property `partitionTokens`
+ # @return [Array<String>]
+ attr_accessor :partition_tokens
+
+ # Record sequence numbers are unique and monotonically increasing (but not
+ # necessarily contiguous) for a specific timestamp across record types in the
+ # same partition. To guarantee ordered processing, the reader should process
+ # records (of potentially different types) in record_sequence order for a
+ # specific timestamp in the same partition.
+ # Corresponds to the JSON property `recordSequence`
+ # @return [String]
+ attr_accessor :record_sequence
+
+ # Start timestamp at which the partitions should be queried to return change
+ # stream records with timestamps >= start_timestamp. DataChangeRecord.
+ # commit_timestamps, PartitionStartRecord.start_timestamps, PartitionEventRecord.
+ # commit_timestamps, and PartitionEndRecord.end_timestamps can have the same
+ # value in the same partition.
+ # Corresponds to the JSON property `startTimestamp`
+ # @return [String]
+ attr_accessor :start_timestamp
+
+ def initialize(**args)
+ update!(**args)
+ end
+
+ # Update properties of this object
+ def update!(**args)
+ @partition_tokens = args[:partition_tokens] if args.key?(:partition_tokens)
+ @record_sequence = args[:record_sequence] if args.key?(:record_sequence)
+ @start_timestamp = args[:start_timestamp] if args.key?(:start_timestamp)
+ end
+ end
+
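A companion sketch (again hypothetical application code) of scheduling the partitions a PartitionStartRecord announces; schedule_partition_query stands in for whatever issues the change stream query for one token:

    def handle_partition_start(record, &schedule_partition_query)
      Array(record.partition_tokens).each do |token|
        # Query each announced partition for change records with
        # timestamps >= record.start_timestamp.
        schedule_partition_query.call(token, record.start_timestamp)
      end
    end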
  # Message type to initiate a Partitioned DML transaction.
  class PartitionedDml
  include Google::Apis::Core::Hashable
@@ -6558,198 +6715,7 @@ module Google
  end
  end
 
- # Transactions: ... (same multi-page transaction documentation comment as earlier in this diff)
+ # Options to use for transactions.
  class TransactionOptions
  include Google::Apis::Core::Hashable
 
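Only the comment shrinks in this hunk; the class keeps its fields. A minimal sketch of the three option shapes the removed comment described, using the read_write, read_only, and partitioned_dml accessors that exist in this gem (values illustrative):

    # Locking read-write transaction options.
    rw = Google::Apis::SpannerV1::TransactionOptions.new(
      read_write: Google::Apis::SpannerV1::ReadWrite.new
    )

    # Strong snapshot read-only transaction options.
    ro = Google::Apis::SpannerV1::TransactionOptions.new(
      read_only: Google::Apis::SpannerV1::ReadOnly.new(strong: true)
    )

    # Partitioned DML transaction options.
    pdml = Google::Apis::SpannerV1::TransactionOptions.new(
      partitioned_dml: Google::Apis::SpannerV1::PartitionedDml.new
    )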
@@ -6809,198 +6775,7 @@ module Google
  class TransactionSelector
  include Google::Apis::Core::Hashable
 
- # Transactions: ... (same multi-page transaction documentation comment as earlier in this diff)
+ # Options to use for transactions.
  # Corresponds to the JSON property `begin`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :begin
@@ -7011,198 +6786,7 @@ module Google
  # @return [String]
  attr_accessor :id
 
- # Transactions: ... (same multi-page transaction documentation comment as earlier in this diff)
+ # Options to use for transactions.
  # Corresponds to the JSON property `singleUse`
  # @return [Google::Apis::SpannerV1::TransactionOptions]
  attr_accessor :single_use
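A minimal sketch tying the selector together: a single-use, bounded-staleness read-only transaction attached to a query. TransactionSelector's single_use accessor and ReadOnly's max_staleness and return_read_timestamp fields exist in this gem; the staleness value and SQL are illustrative:

    selector = Google::Apis::SpannerV1::TransactionSelector.new(
      single_use: Google::Apis::SpannerV1::TransactionOptions.new(
        read_only: Google::Apis::SpannerV1::ReadOnly.new(
          max_staleness: "10s",          # staleness bound as a duration string
          return_read_timestamp: true
        )
      )
    )

    # Attach the selector to a query request.
    request = Google::Apis::SpannerV1::ExecuteSqlRequest.new(
      sql: "SELECT 1",
      transaction: selector
    )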