google-cloud-bigquery 1.39.0 → 1.40.0

This diff shows the content of publicly available package versions that have been released to one of the supported registries, and reflects the changes between those versions as they appear in their public registries. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 599d3411dcba02466b7d161bdf01c9b1d2b6f8874c98b5bf309afb4e4f6e324a
- data.tar.gz: 2a186e09cd38a6140c92d1df3fef992091b4448211625e4a564ea3f29802ec9a
+ metadata.gz: a0cd69b913db71840e3b893b15edfefe3c18bb204c8a8e542a82afcee37c0eeb
+ data.tar.gz: cfd00e65e67aac9e07e1eada563f221c462004319c7fa2a6d38915434ef21f19
  SHA512:
- metadata.gz: 2f1eb30d1ed33b4e961eb4bfd5714ad3e92afc25ca22e1668c6a2d00fc0ad50319bc634b3e2d957fb826ee2991b15802c18a1fcf50bd96a7c3c846030485adda
- data.tar.gz: f439398806448a13ba5f8b70f7c2846e68e6594c379b75145837d67e81fcb8b59eaaed3512569b553b93562c0aaae96a5570e8b95521a21d0c7f7854056f5214
+ metadata.gz: b9027b585d39714f8977ccdc397d04f8caf4f036f6be86ff091f0b1268379ef0c7410247d4186b6a164ceece002ade98bdea6efb3473747e14b1b11bf90649a5
+ data.tar.gz: 40d261fc0e31e07b05aa619099588a3341f74cd3ec0cf7a3e0ba46efc629a94069076e190987484b3450212c310f6230abd5e8186679bfbe16dedc50509efc03
data/AUTHENTICATION.md CHANGED
@@ -106,15 +106,6 @@ To configure your system for this, simply:
  **NOTE:** This is _not_ recommended for running in production. The Cloud SDK
  *should* only be used during development.

- [gce-how-to]: https://cloud.google.com/compute/docs/authentication#using
- [dev-console]: https://console.cloud.google.com/project
-
- [enable-apis]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/enable-apis.png
-
- [create-new-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account.png
- [create-new-service-account-existing-keys]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account-existing-keys.png
- [reuse-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/reuse-service-account.png
-
  ## Creating a Service Account

  Google Cloud requires a **Project ID** and **Service Account Credentials** to
@@ -124,31 +115,22 @@ connect to most services with google-cloud-bigquery.
  If you are not running this client on Google Compute Engine, you need a Google
  Developers service account.

- 1. Visit the [Google Developers Console][dev-console].
+ 1. Visit the [Google Cloud Console](https://console.cloud.google.com/project).
  1. Create a new project or click on an existing project.
- 1. Activate the slide-out navigation tray and select **API Manager**. From
+ 1. Activate the menu in the upper left and select **APIs & Services**. From
  here, you will enable the APIs that your application requires.

- ![Enable the APIs that your application requires][enable-apis]
-
  *Note: You may need to enable billing in order to use these services.*

  1. Select **Credentials** from the side navigation.

- You should see a screen like one of the following.
-
- ![Create a new service account][create-new-service-account]
-
- ![Create a new service account With Existing Keys][create-new-service-account-existing-keys]
-
- Find the "Add credentials" drop down and select "Service account" to be
- guided through downloading a new JSON key file.
-
- If you want to re-use an existing service account, you can easily generate a
- new key file. Just select the account you wish to re-use, and click "Generate
- new JSON key":
+ Find the "Create credentials" drop-down near the top of the page, and select
+ "Service account" to be guided through downloading a new JSON key file.

- ![Re-use an existing service account][reuse-service-account]
+ If you want to re-use an existing service account, you can easily generate
+ a new key file. Just select the account you wish to re-use, click the pencil
+ tool on the right side to edit the service account, select the **Keys** tab,
+ and then select **Add Key**.

  The key file you download will be used by this library to authenticate API
  requests and should be stored in a secure location.
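Once downloaded, the key file can be passed straight to the client constructor. A minimal sketch, using a placeholder project ID and key path:

```ruby
require "google/cloud/bigquery"

# Placeholder project ID and key file path; substitute your own values.
bigquery = Google::Cloud::Bigquery.new(
  project_id:  "my-project",
  credentials: "/path/to/keyfile.json"
)
```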
data/CHANGELOG.md CHANGED
@@ -1,5 +1,11 @@
  # Release History

+ ### 1.40.0 (2022-12-14)
+
+ #### Features
+
+ * support table snapshot and clone ([#19354](https://github.com/googleapis/google-cloud-ruby/issues/19354))
+
  ### 1.39.0 (2022-07-27)

  #### Features
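For orientation, a minimal sketch of the new feature, adapted from the examples added to `Table` later in this diff (dataset and table IDs are placeholders):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"
table    = dataset.table "my_table"

# Both methods run blocking copy jobs with the new operation types.
table.snapshot "my_dataset.my_table_snapshot"
table.clone "my_dataset.my_table_clone"
```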
data/LOGGING.md CHANGED
@@ -4,7 +4,7 @@ To enable logging for this library, set the logger for the underlying [Google
  API
  Client](https://github.com/google/google-api-ruby-client/blob/master/README.md#logging)
  library. The logger that you set may be a Ruby stdlib
- [`Logger`](https://ruby-doc.org/stdlib/libdoc/logger/rdoc/Logger.html) as
+ [`Logger`](https://ruby-doc.org/current/stdlibs/logger/Logger.html) as
  shown below, or a
  [`Google::Cloud::Logging::Logger`](https://googleapis.dev/ruby/google-cloud-logging/latest)
  that will write logs to [Stackdriver
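The setup LOGGING.md describes looks roughly like this sketch, assuming the `Google::Apis.logger` accessor exposed by the underlying client library:

```ruby
require "logger"
require "google/cloud/bigquery"

# Send the underlying Google API Client's logs to STDOUT, warnings and up.
my_logger = Logger.new $stdout
my_logger.level = Logger::WARN
Google::Apis.logger = my_logger
```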
data/OVERVIEW.md CHANGED
@@ -276,7 +276,7 @@ To follow along with these examples, you will need to set up billing on the
  [Google Developers Console](https://console.developers.google.com).

  In addition to CSV, data can be imported from files that are formatted as
- [Newline-delimited JSON](http://jsonlines.org/),
+ [Newline-delimited JSON](https://jsonlines.org/),
  [Avro](http://avro.apache.org/),
  [ORC](https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-orc),
  [Parquet](https://parquet.apache.org/) or from a Google Cloud Datastore backup.
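To illustrate the formats listed above, a newline-delimited JSON file can be loaded by passing `format: "json"`. A sketch with placeholder IDs and file name:

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"

# Load a newline-delimited JSON file into a table, blocking until done.
File.open "data.json" do |file|
  dataset.load "my_table", file, format: "json"
end
```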
@@ -17,6 +17,24 @@ require "google/cloud/bigquery/encryption_configuration"
  module Google
  module Cloud
  module Bigquery
+ module OperationType
+ # Different operation types supported in a table copy job.
+ # https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#operationtype
+
+ # The source and destination tables have the same table type.
+ COPY = "COPY".freeze
+
+ # The source table type is TABLE and the destination table type is SNAPSHOT.
+ SNAPSHOT = "SNAPSHOT".freeze
+
+ # The source table type is SNAPSHOT and the destination table type is TABLE.
+ RESTORE = "RESTORE".freeze
+
+ # The source and destination tables have the same table type, but only
+ # unique data is billed.
+ CLONE = "CLONE".freeze
+ end
+
  ##
  # # CopyJob
  #
@@ -157,7 +175,8 @@ module Google
  job_ref = service.job_ref_from options[:job_id], options[:prefix]
  copy_cfg = Google::Apis::BigqueryV2::JobConfigurationTableCopy.new(
  source_table: source,
- destination_table: target
+ destination_table: target,
+ operation_type: options[:operation_type]
  )
  req = Google::Apis::BigqueryV2::Job.new(
  job_reference: job_ref,
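With `operation_type` threaded into the copy configuration, the generic job API can request a snapshot directly. A sketch with placeholder IDs (the `copy_job` keyword is confirmed later in this diff; error handling follows the standard `Job` API):

```ruby
# Request a snapshot through the generic copy job API.
job = table.copy_job "my_dataset.my_table_snapshot",
                     operation_type: Google::Cloud::Bigquery::OperationType::SNAPSHOT
job.wait_until_done!
raise job.error["message"] if job.failed?
```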
@@ -1816,7 +1816,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # * `sheets` - Google Sheets
  # * `datastore_backup` - Cloud Datastore backup
@@ -1879,7 +1879,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # * `orc` - [ORC](https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-orc)
  # * `parquet` - [Parquet](https://parquet.apache.org/)
@@ -2141,7 +2141,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # * `orc` - [ORC](https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-orc)
  # * `parquet` - [Parquet](https://parquet.apache.org/)
@@ -108,7 +108,7 @@ module Google

  ##
  # Checks if the destination format for the table data is [newline-delimited
- # JSON](http://jsonlines.org/). The default is `false`. Not applicable when
+ # JSON](https://jsonlines.org/). The default is `false`. Not applicable when
  # extracting models.
  #
  # @return [Boolean] `true` when `NEWLINE_DELIMITED_JSON`, `false` if not
@@ -362,7 +362,7 @@ module Google
  # Supported values for tables:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  #
  # Supported values for models:
@@ -188,7 +188,7 @@ module Google

  ##
  # Checks if the format of the source data is [newline-delimited
- # JSON](http://jsonlines.org/). The default is `false`.
+ # JSON](https://jsonlines.org/). The default is `false`.
  #
  # @return [Boolean] `true` when the source format is
  # `NEWLINE_DELIMITED_JSON`, `false` otherwise.
@@ -1269,7 +1269,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # * `orc` - [ORC](https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-orc)
  # * `parquet` - [Parquet](https://parquet.apache.org/)
@@ -961,7 +961,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # * `sheets` - Google Sheets
  # * `datastore_backup` - Cloud Datastore backup
@@ -1554,7 +1554,7 @@ module Google
  # Supported values for tables:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  #
  # Supported values for models:
@@ -1683,7 +1683,7 @@ module Google
  # Supported values for tables:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  #
  # Supported values for models:
@@ -154,6 +154,45 @@ module Google
  @gapi.table_reference.project_id
  end

+ ##
+ # The type of the table, such as TABLE, VIEW, or SNAPSHOT.
+ #
+ # @return [String, nil] Type of the table, or
+ # `nil` if the object is a reference (see {#reference?}).
+ #
+ # @!group Attributes
+ #
+ def type
+ return nil if reference?
+ @gapi.type
+ end
+
+ ##
+ # Information about the base table and the snapshot time of the table.
+ #
+ # @return [Google::Apis::BigqueryV2::SnapshotDefinition, nil] Snapshot definition of the table snapshot, or
+ # `nil` if the table is not a snapshot or the object is a reference (see {#reference?}).
+ #
+ # @!group Attributes
+ #
+ def snapshot_definition
+ return nil if reference?
+ @gapi.snapshot_definition
+ end
+
+ ##
+ # Information about the base table and the clone time of the table.
+ #
+ # @return [Google::Apis::BigqueryV2::CloneDefinition, nil] Clone definition of the table clone, or
+ # `nil` if the table is not a clone or the object is a reference (see {#reference?}).
+ #
+ # @!group Attributes
+ #
+ def clone_definition
+ return nil if reference?
+ @gapi.clone_definition
+ end
+
  ##
  # @private The gapi fragment containing the Project ID, Dataset ID, and
  # Table ID.
@@ -820,6 +859,40 @@ module Google
  @gapi.type == "VIEW"
  end

+ ##
+ # Checks if the table's type is `SNAPSHOT`, indicating that the table
+ # represents a BigQuery table snapshot.
+ #
+ # @see https://cloud.google.com/bigquery/docs/table-snapshots-intro
+ #
+ # @return [Boolean, nil] `true` when the type is `SNAPSHOT`, `false`
+ # otherwise, if the object is a resource (see {#resource?}); `nil` if
+ # the object is a reference (see {#reference?}).
+ #
+ # @!group Attributes
+ #
+ def snapshot?
+ return nil if reference?
+ @gapi.type == "SNAPSHOT"
+ end
+
+ ##
+ # Checks if the table is a clone, indicating that the table represents
+ # a BigQuery table clone. A clone reports its type as `TABLE`, so this
+ # checks for the presence of {#clone_definition}.
+ #
+ # @see https://cloud.google.com/bigquery/docs/table-clones-intro
+ #
+ # @return [Boolean, nil] `true` when the table is a clone, `false`
+ # otherwise, if the object is a resource (see {#resource?}); `nil` if
+ # the object is a reference (see {#reference?}).
+ #
+ # @!group Attributes
+ #
+ def clone?
+ return nil if reference?
+ !@gapi.clone_definition.nil?
+ end
+
  ##
  # Checks if the table's type is `MATERIALIZED_VIEW`, indicating that
  # the table represents a BigQuery materialized view.
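Together with `#snapshot_definition` and `#clone_definition` above, these predicates let callers branch on table kind. A sketch with placeholder IDs, assuming the generated definition classes expose `base_table_reference` and a time field:

```ruby
table = dataset.table "my_table_snapshot"

if table.snapshot?
  defn = table.snapshot_definition
  puts "Base table:    #{defn.base_table_reference.table_id}"
  puts "Snapshot time: #{defn.snapshot_time}"
elsif table.clone?
  puts "Clone of #{table.clone_definition.base_table_reference.table_id}"
end
```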
@@ -1697,9 +1770,16 @@ module Google
  #
  # @!group Data
  #
- def copy_job destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil
+ def copy_job destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil,
+ operation_type: nil
  ensure_service!
- options = { create: create, write: write, dryrun: dryrun, labels: labels, job_id: job_id, prefix: prefix }
+ options = { create: create,
+ write: write,
+ dryrun: dryrun,
+ labels: labels,
+ job_id: job_id,
+ prefix: prefix,
+ operation_type: operation_type }
  updater = CopyJob::Updater.from_options(
  service,
  table_ref,
@@ -1780,10 +1860,195 @@ module Google
  # @!group Data
  #
  def copy destination_table, create: nil, write: nil, &block
- job = copy_job destination_table, create: create, write: write, &block
- job.wait_until_done!
- ensure_job_succeeded! job
- true
+ copy_job_with_operation_type destination_table,
+ create: create,
+ write: write,
+ operation_type: OperationType::COPY,
+ &block
+ end
+
+ ##
+ # Clones the data from the table to another table using a synchronous
+ # method that blocks for a response.
+ # The source and destination tables have the same table type, but only
+ # unique data is billed.
+ # Timeouts and transient errors are generally handled as needed to complete the job.
+ # See also {#copy_job}.
+ #
+ # The geographic location for the job ("US", "EU", etc.) can be set via
+ # {CopyJob::Updater#location=} in a block passed to this method. If the
+ # table is a full resource representation (see {#resource_full?}), the
+ # location of the job will be automatically set to the location of the
+ # table.
+ #
+ # @param [Table, String] destination_table The destination for the
+ # copied data. This can also be a string identifier as specified by
+ # the [Standard SQL Query
+ # Reference](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#from-clause)
+ # (`project-name.dataset_id.table_id`) or the [Legacy SQL Query
+ # Reference](https://cloud.google.com/bigquery/query-reference#from)
+ # (`project-name:dataset_id.table_id`). This is useful for referencing
+ # tables in other projects and datasets.
+ #
+ # @yield [job] a job configuration object
+ # @yieldparam [Google::Cloud::Bigquery::CopyJob::Updater] job a job
+ # configuration object for setting additional options.
+ #
+ # @return [Boolean] Returns `true` if the copy operation succeeded.
+ #
+ # @example
+ # require "google/cloud/bigquery"
+ #
+ # bigquery = Google::Cloud::Bigquery.new
+ # dataset = bigquery.dataset "my_dataset"
+ # table = dataset.table "my_table"
+ # destination_table = dataset.table "my_destination_table"
+ #
+ # table.clone destination_table
+ #
+ # @example Passing a string identifier for the destination table:
+ # require "google/cloud/bigquery"
+ #
+ # bigquery = Google::Cloud::Bigquery.new
+ # dataset = bigquery.dataset "my_dataset"
+ # table = dataset.table "my_table"
+ #
+ # table.clone "other-project:other_dataset.other_table"
+ #
+ # @!group Data
+ #
+ def clone destination_table, &block
+ copy_job_with_operation_type destination_table,
+ operation_type: OperationType::CLONE,
+ &block
+ end
+
+ ##
+ # Takes a snapshot of the data from the table to another table using a synchronous
+ # method that blocks for a response.
+ # The source table type is TABLE and the destination table type is SNAPSHOT.
+ # Timeouts and transient errors are generally handled as needed to complete the job.
+ # See also {#copy_job}.
+ #
+ # The geographic location for the job ("US", "EU", etc.) can be set via
+ # {CopyJob::Updater#location=} in a block passed to this method. If the
+ # table is a full resource representation (see {#resource_full?}), the
+ # location of the job will be automatically set to the location of the
+ # table.
+ #
+ # @param [Table, String] destination_table The destination for the
+ # copied data. This can also be a string identifier as specified by
+ # the [Standard SQL Query
+ # Reference](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#from-clause)
+ # (`project-name.dataset_id.table_id`) or the [Legacy SQL Query
+ # Reference](https://cloud.google.com/bigquery/query-reference#from)
+ # (`project-name:dataset_id.table_id`). This is useful for referencing
+ # tables in other projects and datasets.
+ #
+ # @yield [job] a job configuration object
+ # @yieldparam [Google::Cloud::Bigquery::CopyJob::Updater] job a job
+ # configuration object for setting additional options.
+ #
+ # @return [Boolean] Returns `true` if the copy operation succeeded.
+ #
+ # @example
+ # require "google/cloud/bigquery"
+ #
+ # bigquery = Google::Cloud::Bigquery.new
+ # dataset = bigquery.dataset "my_dataset"
+ # table = dataset.table "my_table"
+ # destination_table = dataset.table "my_destination_table"
+ #
+ # table.snapshot destination_table
+ #
+ # @example Passing a string identifier for the destination table:
+ # require "google/cloud/bigquery"
+ #
+ # bigquery = Google::Cloud::Bigquery.new
+ # dataset = bigquery.dataset "my_dataset"
+ # table = dataset.table "my_table"
+ #
+ # table.snapshot "other-project:other_dataset.other_table"
+ #
+ # @!group Data
+ #
+ def snapshot destination_table, &block
+ copy_job_with_operation_type destination_table,
+ operation_type: OperationType::SNAPSHOT,
+ &block
+ end
+
+ ##
+ # Restores the data from the table to another table using a synchronous
+ # method that blocks for a response.
+ # The source table type is SNAPSHOT and the destination table type is TABLE.
+ # Timeouts and transient errors are generally handled as needed to complete the job.
+ # See also {#copy_job}.
+ #
+ # The geographic location for the job ("US", "EU", etc.) can be set via
+ # {CopyJob::Updater#location=} in a block passed to this method. If the
+ # table is a full resource representation (see {#resource_full?}), the
+ # location of the job will be automatically set to the location of the
+ # table.
+ #
+ # @param [Table, String] destination_table The destination for the
+ # copied data. This can also be a string identifier as specified by
+ # the [Standard SQL Query
+ # Reference](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#from-clause)
+ # (`project-name.dataset_id.table_id`) or the [Legacy SQL Query
+ # Reference](https://cloud.google.com/bigquery/query-reference#from)
+ # (`project-name:dataset_id.table_id`). This is useful for referencing
+ # tables in other projects and datasets.
+ # @param [String] create Specifies whether the job is allowed to create
+ # new tables. The default value is `needed`.
+ #
+ # The following values are supported:
+ #
+ # * `needed` - Create the table if it does not exist.
+ # * `never` - The table must already exist. A 'notFound' error is
+ # raised if the table does not exist.
+ # @param [String] write Specifies how to handle data already present in
+ # the destination table. The default value is `empty`.
+ #
+ # The following values are supported:
+ #
+ # * `truncate` - BigQuery overwrites the table data.
+ # * `append` - BigQuery appends the data to the table.
+ # * `empty` - An error will be returned if the destination table
+ # already contains data.
+ # @yield [job] a job configuration object
+ # @yieldparam [Google::Cloud::Bigquery::CopyJob::Updater] job a job
+ # configuration object for setting additional options.
+ #
+ # @return [Boolean] Returns `true` if the copy operation succeeded.
+ #
+ # @example
+ # require "google/cloud/bigquery"
+ #
+ # bigquery = Google::Cloud::Bigquery.new
+ # dataset = bigquery.dataset "my_dataset"
+ # table = dataset.table "my_table"
+ # destination_table = dataset.table "my_destination_table"
+ #
+ # table.restore destination_table
+ #
+ # @example Passing a string identifier for the destination table:
+ # require "google/cloud/bigquery"
+ #
+ # bigquery = Google::Cloud::Bigquery.new
+ # dataset = bigquery.dataset "my_dataset"
+ # table = dataset.table "my_table"
+ #
+ # table.restore "other-project:other_dataset.other_table"
+ #
+ # @!group Data
+ #
+ def restore destination_table, create: nil, write: nil, &block
+ copy_job_with_operation_type destination_table,
+ create: create,
+ write: write,
+ operation_type: OperationType::RESTORE,
+ &block
  end

  ##
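A snapshot-and-restore round trip with the new convenience methods might look like this sketch (placeholder IDs; `write: "truncate"` overwrites any existing destination data):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"
table    = dataset.table "my_table"

# Take a snapshot, then restore it over a fresh table.
table.snapshot "my_dataset.my_table_snapshot"
snapshot = dataset.table "my_table_snapshot"
snapshot.restore "my_dataset.my_table_restored", write: "truncate"
```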
@@ -1812,7 +2077,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # @param [String] compression The compression type to use for exported
  # files. Possible values include `GZIP` and `NONE`. The default value
@@ -1915,7 +2180,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # @param [String] compression The compression type to use for exported
  # files. Possible values include `GZIP` and `NONE`. The default value
@@ -1986,7 +2251,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # * `orc` - [ORC](https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-orc)
  # * `parquet` - [Parquet](https://parquet.apache.org/)
@@ -2199,7 +2464,7 @@ module Google
  # The following values are supported:
  #
  # * `csv` - CSV
- # * `json` - [Newline-delimited JSON](http://jsonlines.org/)
+ # * `json` - [Newline-delimited JSON](https://jsonlines.org/)
  # * `avro` - [Avro](http://avro.apache.org/)
  # * `orc` - [ORC](https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-orc)
  # * `parquet` - [Parquet](https://parquet.apache.org/)
@@ -2739,6 +3004,17 @@ module Google

  protected

+ def copy_job_with_operation_type destination_table, create: nil, write: nil, operation_type: nil, &block
+ job = copy_job destination_table,
+ create: create,
+ write: write,
+ operation_type: operation_type,
+ &block
+ job.wait_until_done!
+ ensure_job_succeeded! job
+ true
+ end
+
  ##
  # Raise an error unless an active service is available.
  def ensure_service!
@@ -16,7 +16,7 @@
  module Google
  module Cloud
  module Bigquery
- VERSION = "1.39.0".freeze
+ VERSION = "1.40.0".freeze
  end
  end
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: google-cloud-bigquery
  version: !ruby/object:Gem::Version
- version: 1.39.0
+ version: 1.40.0
  platform: ruby
  authors:
  - Mike Moore
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2022-07-28 00:00:00.000000000 Z
+ date: 2022-12-14 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: concurrent-ruby