google-cloud-bigtable 2.7.0 → 2.8.0
- checksums.yaml +4 -4
- data/AUTHENTICATION.md +8 -26
- data/CHANGELOG.md +12 -0
- data/LOGGING.md +1 -1
- data/lib/google/cloud/bigtable/backup.rb +67 -0
- data/lib/google/cloud/bigtable/instance/cluster_map.rb +1 -1
- data/lib/google/cloud/bigtable/instance.rb +2 -2
- data/lib/google/cloud/bigtable/project.rb +62 -1
- data/lib/google/cloud/bigtable/service.rb +24 -11
- data/lib/google/cloud/bigtable/table.rb +1 -1
- data/lib/google/cloud/bigtable/version.rb +1 -1
- metadata +3 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 735e0a2c4fcb24227bbdf5e7f1ee76d6e8f718b6d23cf30bdf56fd8c6f13f830
+  data.tar.gz: fd7c5cb8fa45cd44a7e0a39c78a88bf63e52214c133505dbfbc2977a86b89cd0
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2ee38cb00b572386d02f7b7f4f7abd593b0d8022e4dd366a54b2789bf8f4e8ee04b73a9ece3286707a81ce68006fb8f2f84685a721057c16b607911e4d8da5e2
+  data.tar.gz: a9686ba49ac1a4761761570f0486edcbb6d628eaaa5c8183df65b6f697c9f50022abf8b61f7420f6b79b5f7e00ad4b9a99f1f0f45266ca248fc47ae8e874ec5a
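These digests cover the two archives packed inside the `.gem` file (a `.gem` is a plain tar archive containing `metadata.gz` and `data.tar.gz`). A minimal sketch for checking them locally, assuming the downloaded gem has already been unpacked with `tar -xf google-cloud-bigtable-2.8.0.gem` into the current directory:

```ruby
require "digest"

# Print SHA256 and SHA512 digests of the unpacked archives so they can be
# compared against the values published in checksums.yaml above.
%w[metadata.gz data.tar.gz].each do |file|
  puts "#{file} SHA256: #{Digest::SHA256.file(file).hexdigest}"
  puts "#{file} SHA512: #{Digest::SHA512.file(file).hexdigest}"
end
```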
data/AUTHENTICATION.md
CHANGED
@@ -124,15 +124,6 @@ To configure your system for this, simply:
 **NOTE:** This is _not_ recommended for running in production. The Cloud SDK
 *should* only be used during development.
 
-[gce-how-to]: https://cloud.google.com/compute/docs/authentication#using
-[dev-console]: https://console.cloud.google.com/project
-
-[enable-apis]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/enable-apis.png
-
-[create-new-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account.png
-[create-new-service-account-existing-keys]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account-existing-keys.png
-[reuse-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/reuse-service-account.png
-
 ## Creating a Service Account
 
 Google Cloud requires a **Project ID** and **Service Account Credentials** to
@@ -143,31 +134,22 @@ If you are not running this client within [Google Cloud Platform
 environments](#google-cloud-platform-environments), you need a Google
 Developers service account.
 
-1. Visit the [Google
+1. Visit the [Google Cloud Console](https://console.cloud.google.com/project).
 1. Create a new project or click on an existing project.
-1. Activate the
+1. Activate the menu in the upper left and select **APIs & Services**. From
    here, you will enable the APIs that your application requires.
 
-   ![Enable the APIs that your application requires][enable-apis]
-
    *Note: You may need to enable billing in order to use these services.*
 
 1. Select **Credentials** from the side navigation.
 
-
-
-   ![Create a new service account][create-new-service-account]
-
-   ![Create a new service account With Existing Keys][create-new-service-account-existing-keys]
-
-   Find the "Add credentials" drop down and select "Service account" to be
-   guided through downloading a new JSON key file.
-
-   If you want to re-use an existing service account, you can easily generate a
-   new key file. Just select the account you wish to re-use, and click "Generate
-   new JSON key":
+   Find the "Create credentials" drop down near the top of the page, and select
+   "Service account" to be guided through downloading a new JSON key file.
 
-
+   If you want to re-use an existing service account, you can easily generate
+   a new key file. Just select the account you wish to re-use click the pencil
+   tool on the right side to edit the service account, select the **Keys** tab,
+   and then select **Add Key**.
 
 The key file you download will be used by this library to authenticate API
 requests and should be stored in a secure location.
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,17 @@
 # Release History
 
+### 2.8.0 (2023-08-17)
+
+#### Features
+
+* Added support for copybackup ([#19341](https://github.com/googleapis/google-cloud-ruby/issues/19341))
+
+### 2.7.1 (2023-06-16)
+
+#### Documentation
+
+* Fixed broken links in authentication documentation ([#21619](https://github.com/googleapis/google-cloud-ruby/issues/21619))
+
 ### 2.7.0 (2022-07-01)
 
 #### Features
data/LOGGING.md
CHANGED
@@ -3,7 +3,7 @@
 To enable logging for this library, set the logger for the underlying
 [gRPC](https://github.com/grpc/grpc/tree/master/src/ruby) library. The logger
 that you set may be a Ruby stdlib
-[`Logger`](https://ruby-doc.org/
+[`Logger`](https://ruby-doc.org/current/stdlibs/logger/Logger.html) as
 shown below, or a
 [`Google::Cloud::Logging::Logger`](https://googleapis.dev/ruby/google-cloud-logging/latest)
 that will write logs to [Stackdriver
data/lib/google/cloud/bigtable/backup.rb
CHANGED
@@ -109,6 +109,18 @@ module Google
         @grpc.name
       end
 
+      ##
+      # The name of the backup from which this backup is copied.
+      # Value will be empty if its not a copied backup and of the form -
+      # `projects/<project>/instances/<instance>/clusters/<cluster>/backups/<source_backup>`
+      # if this is a copied backup.
+      #
+      # @return [String]
+      #
+      def source_backup
+        @grpc.source_backup
+      end
+
       ##
       # The table from which this backup was created.
       #
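Because `source_backup` is documented to return an empty string for backups that were not produced by a copy, callers can use it as a quick "is this a copy?" check. A small sketch with placeholder resource names:

```ruby
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
backup   = bigtable.instance("my-instance").cluster("my-cluster").backup("my-backup")

if backup.source_backup.empty?
  puts "original backup"
else
  puts "copied from #{backup.source_backup}"
end
```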
@@ -412,6 +424,61 @@ module Google
         Table::RestoreJob.from_grpc grpc, service
       end
 
+      ##
+      # Creates a copy of the backup at the desired location. Copy of the backup won't be created
+      # if the backup is already a copied one.
+      #
+      # @param dest_project_id [String] Existing project ID. Copy of the backup
+      #   will be created in this project. Required.
+      # @param dest_instance_id [Instance, String] Existing instance ID. Copy
+      #   of the backup will be created in this instance. Required.
+      # @param dest_cluster_id [String] Existing cluster ID. Copy of the backup
+      #   will be created in this cluster. Required.
+      # @param new_backup_id [String] The id of the copy of the backup to be created. This string must
+      #   be between 1 and 50 characters in length and match the regex
+      #   `[_a-zA-Z0-9][-_.a-zA-Z0-9]*`. Required.
+      # @param expire_time [Time] The expiration time of the copy of the backup, with microseconds
+      #   granularity that must be at least 6 hours and at most 30 days from the time the request
+      #   is received. Once the `expire_time` has passed, Cloud Bigtable will delete the backup
+      #   and free the resources used by the backup. Required.
+      #
+      # @return [Google::Cloud::Bigtable::Backup::Job] The job representing the long-running, asynchronous
+      #   processing of a copy backup operation.
+      #
+      # @example Create a copy of the backup at a specific location
+      #   require "google/cloud/bigtable"
+      #
+      #   bigtable = Google::Cloud::Bigtable.new
+      #   instance = bigtable.instance "my-instance"
+      #   cluster = instance.cluster "my-cluster"
+      #
+      #   backup = cluster.backup "my-backup"
+      #
+      #   job = backup.copy dest_project_id:"my-project-2",
+      #                     dest_instance_id:"my-instance-2",
+      #                     dest_cluster_id:"my-cluster-2",
+      #                     new_backup_id:"my-backup-2",
+      #                     expire_time: Time.now + 60 * 60 * 7
+      #
+      #   job.wait_until_done!
+      #   job.done? #=> true
+      #
+      #   if job.error?
+      #     status = job.error
+      #   else
+      #     backup = job.backup
+      #   end
+      #
+      def copy dest_project_id:, dest_instance_id:, dest_cluster_id:, new_backup_id:, expire_time:
+        grpc = service.copy_backup project_id: dest_project_id,
+                                   instance_id: dest_instance_id,
+                                   cluster_id: dest_cluster_id,
+                                   backup_id: new_backup_id,
+                                   source_backup: service.backup_path(instance_id, cluster_id, backup_id),
+                                   expire_time: expire_time
+        Backup::Job.from_grpc grpc, service
+      end
+
       ##
       # Updates the backup.
       #
data/lib/google/cloud/bigtable/instance.rb
CHANGED
@@ -429,7 +429,7 @@ module Google
           serve_nodes: nodes,
           default_storage_type: storage_type,
           location: location
-        }.
+        }.compact
 
         cluster = Google::Cloud::Bigtable::Admin::V2::Cluster.new attrs
         grpc = service.create_cluster instance_id, cluster_id, cluster
@@ -721,7 +721,7 @@ module Google
           single_cluster_routing: single_cluster_routing,
           description: description,
           etag: etag
-        }.
+        }.compact
 
         grpc = service.create_app_profile(
           instance_id,
data/lib/google/cloud/bigtable/project.rb
CHANGED
@@ -226,7 +226,7 @@ module Google
       def create_instance instance_id, display_name: nil, type: nil, labels: nil, clusters: nil
         labels = labels.to_h { |k, v| [String(k), String(v)] } if labels
 
-        instance_attrs = { display_name: display_name, type: type, labels: labels }.
+        instance_attrs = { display_name: display_name, type: type, labels: labels }.compact
         instance = Google::Cloud::Bigtable::Admin::V2::Instance.new instance_attrs
         clusters ||= Instance::ClusterMap.new
         yield clusters if block_given?
@@ -457,6 +457,67 @@ module Google
         true
       end
 
+      ##
+      # Creates a copy of the backup at the desired location. Copy of the backup won't be created
+      # if the backup is already a copied one.
+      #
+      # @param dest_project_id [String] Existing project ID. Copy of the backup
+      #   will be created in this project. Required.
+      # @param dest_instance_id [Instance, String] Existing instance ID. Copy
+      #   of the backup will be created in this instance. Required.
+      # @param dest_cluster_id [String] Existing cluster ID. Copy of the backup
+      #   will be created in this cluster. Required.
+      # @param new_backup_id [String] The id of the copy of the backup to be created. This string must
+      #   be between 1 and 50 characters in length and match the regex
+      #   `[_a-zA-Z0-9][-_.a-zA-Z0-9]*`. Required.
+      # @param source_instance_id [String] Existing instance ID. Backup will be copied from this instance. Required.
+      # @param source_cluster_id [String] Existing cluster ID. Backup will be copied from this cluster. Required.
+      # @param source_backup_id [Instance, String] Existing backup ID. This backup will be copied. Required.
+      # @param expire_time [Time] The expiration time of the copy of the backup, with microseconds
+      #   granularity that must be at least 6 hours and at most 30 days from the time the request
+      #   is received. Once the `expire_time` has passed, Cloud Bigtable will delete the backup
+      #   and free the resources used by the backup. Required.
+      #
+      # @return [Google::Cloud::Bigtable::Backup::Job] The job representing the long-running, asynchronous
+      #   processing of a copy backup operation.
+      #
+      # @example Create a copy of the specified backup at a specific location
+      #   require "google/cloud/bigtable"
+      #
+      #   bigtable = Google::Cloud::Bigtable.new
+      #
+      #   job = bigtable.copy_backup dest_project_id:"my-project-2",
+      #                              dest_instance_id:"my-instance-2",
+      #                              dest_cluster_id:"my-cluster-2",
+      #                              new_backup_id:"my-backup-2",
+      #                              source_instance_id:"my-instance",
+      #                              source_cluster_id:"my-cluster",
+      #                              source_backup_id:"my-backup",
+      #                              expire_time: Time.now + 60 * 60 * 7
+      #
+      #   job.wait_until_done!
+      #   job.done? #=> true
+      #
+      #   if job.error?
+      #     status = job.error
+      #   else
+      #     backup = job.backup
+      #   end
+      #
+      def copy_backup dest_project_id:, dest_instance_id:, dest_cluster_id:, new_backup_id:,
+                      source_instance_id:, source_cluster_id:, source_backup_id:, expire_time:
+        ensure_service!
+        grpc = service.copy_backup project_id: dest_project_id,
+                                   instance_id: dest_instance_id,
+                                   cluster_id: dest_cluster_id,
+                                   backup_id: new_backup_id,
+                                   source_backup: service.backup_path(source_instance_id,
+                                                                      source_cluster_id,
+                                                                      source_backup_id),
+                                   expire_time: expire_time
+        Backup::Job.from_grpc grpc, service
+      end
+
       protected
 
       # @private
data/lib/google/cloud/bigtable/service.rb
CHANGED
@@ -282,12 +282,12 @@ module Google
         initial_splits = initial_splits.map { |key| { key: key } } if initial_splits
 
         tables.create_table(
-          {
+          **{
            parent: instance_path(instance_id),
            table_id: table_id,
            table: table,
            initial_splits: initial_splits
-          }.
+          }.compact
         )
       end
 
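The recurring `{ ... }` → `**{ ... }.compact` rewrites in this file all follow one pattern: the request hash is compacted so optional fields left as `nil` are dropped, then double-splatted so the generated Gapic client receives keyword arguments rather than a positional hash, likely to sidestep Ruby 3's stricter separation of positional hashes and keyword arguments. A standalone illustration of the idiom, not this library's code:

```ruby
# Generic illustration of the **hash.compact idiom used throughout service.rb.
def create_widget(name:, color: "blue", size: nil)
  { name: name, color: color, size: size }
end

args = { name: "w1", color: nil, size: nil }.compact  # => { name: "w1" }
p create_widget(**args)  # nil fields are omitted, so keyword defaults apply
```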
@@ -650,7 +650,7 @@ module Google
 
       def read_rows instance_id, table_id, app_profile_id: nil, rows: nil, filter: nil, rows_limit: nil
         client.read_rows(
-          {
+          **{
            table_name: table_path(instance_id, table_id),
            rows: rows,
            filter: filter,
@@ -666,22 +666,22 @@ module Google
 
       def mutate_row table_name, row_key, mutations, app_profile_id: nil
         client.mutate_row(
-          {
+          **{
            table_name: table_name,
            app_profile_id: app_profile_id,
            row_key: row_key,
            mutations: mutations
-          }.
+          }.compact
         )
       end
 
       def mutate_rows table_name, entries, app_profile_id: nil
         client.mutate_rows(
-          {
+          **{
            table_name: table_name,
            app_profile_id: app_profile_id,
            entries: entries
-          }.
+          }.compact
         )
       end
 
@@ -692,25 +692,25 @@ module Google
                                true_mutations: nil,
                                false_mutations: nil
         client.check_and_mutate_row(
-          {
+          **{
            table_name: table_name,
            app_profile_id: app_profile_id,
            row_key: row_key,
            predicate_filter: predicate_filter,
            true_mutations: true_mutations,
            false_mutations: false_mutations
-          }.
+          }.compact
         )
       end
 
       def read_modify_write_row table_name, row_key, rules, app_profile_id: nil
         client.read_modify_write_row(
-          {
+          **{
            table_name: table_name,
            app_profile_id: app_profile_id,
            row_key: row_key,
            rules: rules
-          }.
+          }.compact
         )
       end
 
@@ -725,6 +725,19 @@ module Google
         tables.create_backup parent: cluster_path(instance_id, cluster_id), backup_id: backup_id, backup: backup
       end
 
+      ##
+      # Starts copying the selected backup to the chosen location.
+      # The underlying Google::Longrunning::Operation tracks the copying of backup.
+      #
+      # @return [Gapic::Operation]
+      #
+      def copy_backup project_id:, instance_id:, cluster_id:, backup_id:, source_backup:, expire_time:
+        tables.copy_backup parent: "projects/#{project_id}/instances/#{instance_id}/clusters/#{cluster_id}",
+                           backup_id: backup_id,
+                           source_backup: source_backup,
+                           expire_time: expire_time
+      end
+
       ##
       # @return [Google::Cloud::Bigtable::Admin::V2::Backup]
       #
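For reference, `tables.copy_backup` returns a `Gapic::Operation` wrapping the long-running copy; the `Backup::Job` objects returned by the higher-level methods above are built from it. Normally you would go through `Backup#copy` or `Project#copy_backup`, but a rough sketch of driving this internal helper directly (placeholder resource names, assuming `service` is an instance of this `Service` class) might look like:

```ruby
op = service.copy_backup project_id:    "my-project-2",
                         instance_id:   "my-instance-2",
                         cluster_id:    "my-cluster-2",
                         backup_id:     "my-backup-copy",
                         source_backup: "projects/my-project/instances/my-instance/clusters/my-cluster/backups/my-backup",
                         expire_time:   Time.now + 24 * 60 * 60

op.wait_until_done!                    # poll the long-running operation
copied = op.response unless op.error?  # Admin::V2::Backup on success
```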
data/lib/google/cloud/bigtable/table.rb
CHANGED
@@ -470,7 +470,7 @@ module Google
         table = Google::Cloud::Bigtable::Admin::V2::Table.new({
           column_families: column_families.to_grpc_hash,
           granularity: granularity
-        }.
+        }.compact)
 
         grpc = service.create_table instance_id, table_id, table, initial_splits: initial_splits
         from_grpc grpc, service, view: :SCHEMA_VIEW
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: google-cloud-bigtable
 version: !ruby/object:Gem::Version
-  version: 2.
+  version: 2.8.0
 platform: ruby
 authors:
 - Google LLC
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2023-08-17 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: google-cloud-bigtable-admin-v2
@@ -249,7 +249,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.4.2
 signing_key:
 specification_version: 4
 summary: API Client library for Cloud Bigtable API