google-cloud-bigquery 1.21.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (44)
  1. checksums.yaml +7 -0
  2. data/.yardopts +16 -0
  3. data/AUTHENTICATION.md +158 -0
  4. data/CHANGELOG.md +397 -0
  5. data/CODE_OF_CONDUCT.md +40 -0
  6. data/CONTRIBUTING.md +188 -0
  7. data/LICENSE +201 -0
  8. data/LOGGING.md +27 -0
  9. data/OVERVIEW.md +463 -0
  10. data/TROUBLESHOOTING.md +31 -0
  11. data/lib/google-cloud-bigquery.rb +139 -0
  12. data/lib/google/cloud/bigquery.rb +145 -0
  13. data/lib/google/cloud/bigquery/argument.rb +197 -0
  14. data/lib/google/cloud/bigquery/convert.rb +383 -0
  15. data/lib/google/cloud/bigquery/copy_job.rb +316 -0
  16. data/lib/google/cloud/bigquery/credentials.rb +50 -0
  17. data/lib/google/cloud/bigquery/data.rb +526 -0
  18. data/lib/google/cloud/bigquery/dataset.rb +2845 -0
  19. data/lib/google/cloud/bigquery/dataset/access.rb +1021 -0
  20. data/lib/google/cloud/bigquery/dataset/list.rb +162 -0
  21. data/lib/google/cloud/bigquery/encryption_configuration.rb +123 -0
  22. data/lib/google/cloud/bigquery/external.rb +2432 -0
  23. data/lib/google/cloud/bigquery/extract_job.rb +368 -0
  24. data/lib/google/cloud/bigquery/insert_response.rb +180 -0
  25. data/lib/google/cloud/bigquery/job.rb +657 -0
  26. data/lib/google/cloud/bigquery/job/list.rb +162 -0
  27. data/lib/google/cloud/bigquery/load_job.rb +1704 -0
  28. data/lib/google/cloud/bigquery/model.rb +740 -0
  29. data/lib/google/cloud/bigquery/model/list.rb +164 -0
  30. data/lib/google/cloud/bigquery/project.rb +1655 -0
  31. data/lib/google/cloud/bigquery/project/list.rb +161 -0
  32. data/lib/google/cloud/bigquery/query_job.rb +1695 -0
  33. data/lib/google/cloud/bigquery/routine.rb +1108 -0
  34. data/lib/google/cloud/bigquery/routine/list.rb +165 -0
  35. data/lib/google/cloud/bigquery/schema.rb +564 -0
  36. data/lib/google/cloud/bigquery/schema/field.rb +668 -0
  37. data/lib/google/cloud/bigquery/service.rb +589 -0
  38. data/lib/google/cloud/bigquery/standard_sql.rb +495 -0
  39. data/lib/google/cloud/bigquery/table.rb +3340 -0
  40. data/lib/google/cloud/bigquery/table/async_inserter.rb +520 -0
  41. data/lib/google/cloud/bigquery/table/list.rb +172 -0
  42. data/lib/google/cloud/bigquery/time.rb +65 -0
  43. data/lib/google/cloud/bigquery/version.rb +22 -0
  44. metadata +297 -0
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: f38543236358fc319ecfcc2058ffa499e24034027aa56644924d2cf496815550
+   data.tar.gz: 37196fa1c3db03e48df4cb0ae5ae5b9b1ee07b9d112063467ebc57d25b34551a
+ SHA512:
+   metadata.gz: a234359fb8a04b42f22725f5540cf0378b7e51b3d847ddd4f24ba41f2a54b4de85068bb9aee10089995c29794c0a64694223ac6bf88202eda6a646ce02c85275
+   data.tar.gz: 99c6c915df70afde18de06907802c60d18df253238e6f3ab752827ea3cfc0aee0ab9bef8dc0816f5980db26334dd2b15ce0a2ecf7db3b026e2f0caa101dcafc8
@@ -0,0 +1,16 @@
+ --no-private
+ --title=Google Cloud BigQuery
+ --markup markdown
+ --markup-provider redcarpet
+ --main OVERVIEW.md
+
+ ./lib/**/*.rb
+ -
+ OVERVIEW.md
+ AUTHENTICATION.md
+ LOGGING.md
+ CONTRIBUTING.md
+ TROUBLESHOOTING.md
+ CHANGELOG.md
+ CODE_OF_CONDUCT.md
+ LICENSE
@@ -0,0 +1,158 @@
+ # Authentication
+
+ In general, the google-cloud-bigquery library uses [Service
+ Account](https://cloud.google.com/iam/docs/creating-managing-service-accounts)
+ credentials to connect to Google Cloud services. When running on Google Cloud
+ Platform (GCP), including Google Compute Engine (GCE), Google Kubernetes Engine
+ (GKE), Google App Engine (GAE), Google Cloud Functions (GCF) and Cloud Run,
+ the credentials will be discovered automatically. When running in other
+ environments, the Service Account credentials can be specified by providing the
+ path to the [JSON
+ keyfile](https://cloud.google.com/iam/docs/managing-service-account-keys) for
+ the account (or the JSON itself) in environment variables. Additionally, Cloud
+ SDK credentials can be discovered automatically, but this is only
+ recommended during development.
+
+ ## Project and Credential Lookup
+
+ The google-cloud-bigquery library aims to make authentication as simple as
+ possible, and provides several mechanisms to configure your system without
+ providing **Project ID** and **Service Account Credentials** directly in code.
+
+ **Project ID** is discovered in the following order:
+
+ 1. Specify project ID in method arguments
+ 2. Specify project ID in configuration
+ 3. Discover project ID in environment variables
+ 4. Discover GCE project ID
+
+ **Credentials** are discovered in the following order:
+
+ 1. Specify credentials in method arguments
+ 2. Specify credentials in configuration
+ 3. Discover credentials path in environment variables
+ 4. Discover credentials JSON in environment variables
+ 5. Discover credentials file in the Cloud SDK's path
+ 6. Discover GCE credentials
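+
+ For example, the first option in each list is to pass the values directly in
+ method arguments. A minimal sketch (the project ID and keyfile path shown are
+ placeholders):
+
+ ```ruby
+ require "google/cloud/bigquery"
+
+ bigquery = Google::Cloud::Bigquery.new(
+   project_id:  "my-project-id",        # 1. project ID in method arguments
+   credentials: "path/to/keyfile.json"  # 1. credentials in method arguments
+ )
+ ```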
+
+ ### Google Cloud Platform environments
+
+ When running on Google Cloud Platform (GCP), including Google Compute Engine (GCE),
+ Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud Functions
+ (GCF) and Cloud Run, the **Project ID** and **Credentials** are discovered
+ automatically. Code should be written as if already authenticated.
+
+ ### Environment Variables
+
+ The **Project ID** and **Credentials JSON** can be placed in environment
+ variables instead of declaring them directly in code. Each service has its own
+ environment variable, allowing for different service accounts to be used for
+ different services. (See the READMEs for the individual service gems for
+ details.) The path to the **Credentials JSON** file can be stored in the
+ environment variable, or the **Credentials JSON** itself can be stored for
+ environments such as Docker containers where writing files is difficult or not
+ encouraged.
+
+ The environment variables that BigQuery checks for project ID are:
+
+ 1. `BIGQUERY_PROJECT`
+ 2. `GOOGLE_CLOUD_PROJECT`
+
+ The environment variables that BigQuery checks for credentials are configured on {Google::Cloud::Bigquery::Credentials}:
+
+ 1. `BIGQUERY_CREDENTIALS` - Path to JSON file, or JSON contents
+ 2. `BIGQUERY_KEYFILE` - Path to JSON file, or JSON contents
+ 3. `GOOGLE_CLOUD_CREDENTIALS` - Path to JSON file, or JSON contents
+ 4. `GOOGLE_CLOUD_KEYFILE` - Path to JSON file, or JSON contents
+ 5. `GOOGLE_APPLICATION_CREDENTIALS` - Path to JSON file
+
+ ```ruby
+ require "google/cloud/bigquery"
+
+ ENV["BIGQUERY_PROJECT"] = "my-project-id"
+ ENV["BIGQUERY_CREDENTIALS"] = "path/to/keyfile.json"
+
+ bigquery = Google::Cloud::Bigquery.new
+ ```
+
+ ### Configuration
+
+ The **Project ID** and **Credentials JSON** can be configured instead of placing them in environment variables or providing them as arguments.
+
+ ```ruby
+ require "google/cloud/bigquery"
+
+ Google::Cloud::Bigquery.configure do |config|
+   config.project_id = "my-project-id"
+   config.credentials = "path/to/keyfile.json"
+ end
+
+ bigquery = Google::Cloud::Bigquery.new
+ ```
+
+ ### Cloud SDK
+
+ This option allows for an easy way to authenticate during development. If
+ credentials are not provided in code or in environment variables, then Cloud SDK
+ credentials are discovered.
+
+ To configure your system for this, simply:
+
+ 1. [Download and install the Cloud SDK](https://cloud.google.com/sdk)
+ 2. Authenticate using OAuth 2.0: `$ gcloud auth login`
+ 3. Write code as if already authenticated.
+
+ **NOTE:** This is _not_ recommended for running in production. The Cloud SDK
+ *should* only be used during development.
+
+ [gce-how-to]: https://cloud.google.com/compute/docs/authentication#using
+ [dev-console]: https://console.cloud.google.com/project
+
+ [enable-apis]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/enable-apis.png
+
+ [create-new-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account.png
+ [create-new-service-account-existing-keys]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/create-new-service-account-existing-keys.png
+ [reuse-service-account]: https://raw.githubusercontent.com/GoogleCloudPlatform/gcloud-common/master/authentication/reuse-service-account.png
+
+ ## Creating a Service Account
+
+ Google Cloud requires a **Project ID** and **Service Account Credentials** to
+ connect to the APIs. You will use the **Project ID** and **JSON key file** to
+ connect to most services with google-cloud-bigquery.
+
+ If you are not running this client on Google Compute Engine, you need a Google
+ Developers service account.
+
+ 1. Visit the [Google Developers Console][dev-console].
+ 1. Create a new project or click on an existing project.
+ 1. Activate the slide-out navigation tray and select **API Manager**. From
+    here, you will enable the APIs that your application requires.
+
+    ![Enable the APIs that your application requires][enable-apis]
+
+    *Note: You may need to enable billing in order to use these services.*
+
+ 1. Select **Credentials** from the side navigation.
+
+    You should see a screen like one of the following.
+
+    ![Create a new service account][create-new-service-account]
+
+    ![Create a new service account with existing keys][create-new-service-account-existing-keys]
+
+    Find the "Add credentials" drop-down and select "Service account" to be
+    guided through downloading a new JSON key file.
+
+    If you want to re-use an existing service account, you can easily generate a
+    new key file. Just select the account you wish to re-use, and click "Generate
+    new JSON key":
+
+    ![Re-use an existing service account][reuse-service-account]
+
+ The key file you download will be used by this library to authenticate API
+ requests and should be stored in a secure location.
+
+ ## Troubleshooting
+
+ If you're having trouble authenticating, you can ask for help by following the
+ {file:TROUBLESHOOTING.md Troubleshooting Guide}.
@@ -0,0 +1,397 @@
+ # Release History
+
+ ### 1.21.2 / 2020-07-21
+
+ #### Documentation
+
+ * Update Data#each samples
+
+ ### 1.21.1 / 2020-05-28
+
+ #### Documentation
+
+ * Fix a few broken links
+
+ ### 1.21.0 / 2020-03-31
+
+ #### Features
+
+ * Add Job#parent_job_id and Job#script_statistics
+   * Add parent_job to Project#jobs
+   * Add Job#num_child_jobs
+   * Add Job#parent_job_id
+   * Add Job#script_statistics
+
+ ### 1.20.0 / 2020-03-11
+
+ #### Features
+
+ * Add Range Partitioning
+   * Add range partitioning methods to Table and Table::Updater
+   * Add range partitioning methods to LoadJob
+   * Add range partitioning methods to QueryJob
+
+ ### 1.19.0 / 2020-02-11
+
+ #### Features
+
+ * Add Routine
+   * Add Dataset#create_routine
+   * Add Argument
+   * Update StandardSql classes to expose public initializer
+   * Add Data#ddl_target_routine and QueryJob#ddl_target_routine
+ * Allow row inserts to skip insert_id generation (see the sketch after this list)
+   * Streaming inserts that use an insert_id cannot be performed as fast as inserts without an insert_id
+   * Add the ability for users to skip insert_id generation in order to speed up the inserts
+   * The default behavior continues to generate insert_id values for each row inserted
+ * Add yield documentation for Dataset#insert
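+
+ A minimal sketch of skipping `insert_id` generation (the dataset, table, and
+ rows are placeholders):
+
+ ```ruby
+ require "google/cloud/bigquery"
+
+ bigquery = Google::Cloud::Bigquery.new
+ dataset  = bigquery.dataset "my_dataset"
+
+ rows = [{ name: "Alice" }, { name: "Bob" }]
+ # `insert_ids: :skip` disables insert_id generation, trading best-effort
+ # de-duplication for faster streaming inserts.
+ dataset.insert "my_table", rows, insert_ids: :skip
+ ```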
+
+ ### 1.18.1 / 2019-12-18
+
+ #### Bug Fixes
+
+ * Fix MonitorMixin usage on Ruby 2.7
+   * Ruby 2.7 will error if new_cond is called before super().
+   * Make the call to super() be the first call in initialize
+
+ ### 1.18.0 / 2019-11-06
+
+ #### Features
+
+ * Add optional query parameter types (see the sketch after this list)
+   * Allow query parameters to be nil/NULL when providing an optional type
+   * Add types argument to the following methods:
+     * Project#query
+     * Project#query_job
+     * Dataset#query
+     * Dataset#query_job
+ * Add param types helper methods
+   * Return the BigQuery field type code, using the same format as the types argument
+   * Add Schema::Field#param_type
+   * Add Schema#param_types
+   * Add Data#param_types
+   * Add Table#param_types
+   * Add External::CsvSource#param_types
+   * Add External::JsonSource#param_types
+ * Add support for all_users special role in Dataset access
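+
+ A minimal sketch of passing a NULL query parameter with an explicit type (the
+ query and names are placeholders):
+
+ ```ruby
+ require "google/cloud/bigquery"
+
+ bigquery = Google::Cloud::Bigquery.new
+
+ # A nil param needs an explicit entry in `types` so BigQuery can bind the NULL.
+ data = bigquery.query "SELECT @value AS value",
+                       params: { value: nil },
+                       types:  { value: :STRING }
+ ```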
+
+ ### 1.17.0 / 2019-10-29
+
+ This release requires Ruby 2.4 or later.
+
+ #### Documentation
+
+ * Clarify which Google Cloud Platform environments support automatic authentication
+
+ ### 1.16.0 / 2019-10-03
+
+ #### Features
+
+ * Add Dataset default_encryption
+   * Add Dataset#default_encryption
+   * Add Dataset#default_encryption=
+
+ ### 1.15.0 / 2019-09-30
+
+ #### Features
+
+ * Add Model encryption
+   * Add Model#encryption
+   * Add Model#encryption=
+ * Add range support for Google Sheets
+   * Add External::SheetsSource#range
+   * Add External::SheetsSource#range=
+ * Support use_avro_logical_types on extract jobs
+   * Add ExtractJob#use_avro_logical_types?
+   * Add ExtractJob::Updater#use_avro_logical_types=
+
+ ### 1.14.1 / 2019-09-04
+
+ #### Documentation
+
+ * Add note about streaming insert issues
+   * Acknowledge tradeoffs when inserting rows soon after
+     table metadata has been changed.
+   * Add link to BigQuery Troubleshooting guide.
+
+ ### 1.14.0 / 2019-08-23
+
+ #### Features
+
+ * Support overriding of service endpoint
+
+ #### Performance Improvements
+
+ * Use MiniMime to detect content types
+
+ #### Documentation
+
+ * Update documentation
+
+ ### 1.13.0 / 2019-07-31
+
+ * Add Table#require_partition_filter
+ * List jobs using min and max created_at
+ * Reduce thread usage at startup
+   * Allocate threads in pool as needed, not all up front
+ * Update documentation links
+
+ ### 1.12.0 / 2019-07-10
+
+ * Add BigQuery Model API
+   * Add Model
+   * Add StandardSql Field, DataType, StructType
+   * Add Dataset#model and Dataset#models
+ * Correct Float value conversion
+   * Ensure that NaN, Infinity, and -Infinity are converted correctly.
+
+ ### 1.11.2 / 2019-06-11
+
+ * Update "Loading data" link
+
+ ### 1.11.1 / 2019-05-21
+
+ * Declare explicit dependency on mime-types
+
+ ### 1.11.0 / 2019-02-01
+
+ * Make use of Credentials#project_id
+   * Use Credentials#project_id
+     If a project_id is not provided, use the value on the Credentials object.
+     This value was added in googleauth 0.7.0.
+   * Loosen googleauth dependency
+     Allow for new releases up to 0.10.
+     The googleauth devs have committed to maintaining the current API
+     and will not make backwards-incompatible changes before 0.10.
+
+ ### 1.10.0 / 2018-12-06
+
+ * Add dryrun param to Project#query_job and Dataset#query_job
+ * Add copy and extract methods to Project
+   * Add Project#extract and Project#extract_job
+   * Add Project#copy and Project#copy_job
+ * Deprecate dryrun param in Table#copy_job, Table#extract_job and
+   Table#load_job
+ * Fix memoization in Dataset#exists? and Table#exists?
+ * Add force param to Dataset#exists? and Table#exists?
+
+ ### 1.9.0 / 2018-10-25
+
+ * Add clustering fields to LoadJob, QueryJob and Table
+ * Add DDL/DML support
+   * Update QueryJob#data to not return table rows for DDL/DML
+   * Add DDL/DML statistics attrs to QueryJob and Data
+ * Add #numeric to Table::Updater and LoadJob::Updater (@leklund)
+
+ ### 1.8.2 / 2018-09-20
+
+ * Update documentation.
+   * Change documentation URL to googleapis GitHub org.
+ * Fix circular require warning.
+
+ ### 1.8.1 / 2018-09-12
+
+ * Add missing documentation files to package.
+
+ ### 1.8.0 / 2018-09-10
+
+ * Add support for ORC format.
+ * Update documentation.
+
+ ### 1.7.1 / 2018-08-21
+
+ * Update documentation.
+
+ ### 1.7.0 / 2018-06-29
+
+ * Add #schema_update_options to LoadJob and #schema_update_options= to LoadJob::Updater.
+ * Add time partitioning for the target table to LoadJob and QueryJob.
+ * Add #statement_type, #ddl_operation_performed, #ddl_target_table to QueryJob.
+
+ ### 1.6.0 / 2018-06-22
+
+ * Documentation updates.
+ * Updated dependencies.
+
+ ### 1.5.0 / 2018-05-21
+
+ * Add Schema.load and Schema.dump to read/write a table schema from/to a JSON file or other IO source. The JSON file schema is the same as for the bq CLI. (See the sketch after this list.)
+ * Add support for the NUMERIC data type.
+ * Add documentation for enabling logging.
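+
+ A minimal sketch of the `Schema.load`/`Schema.dump` round trip named above
+ (the file names are placeholders):
+
+ ```ruby
+ require "google/cloud/bigquery"
+
+ # Read a bq CLI-style JSON schema file into a Schema object...
+ schema = Google::Cloud::Bigquery::Schema.load "schema.json"
+ # ...and write it back out to another file or IO source.
+ Google::Cloud::Bigquery::Schema.dump schema, "schema_copy.json"
+ ```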
+
+ ### 1.4.0 / 2018-05-07
+
+ * Add Parquet support to #load and #load_job.
+
+ ### 1.3.0 / 2018-04-05
+
+ * Add insert_ids option to #insert in Dataset, Table, and AsyncInserter.
+ * Add BigQuery Project#service_account_email.
+ * Add support for setting Job location to nil in blocks for Job properties.
+
+ ### 1.2.0 / 2018-03-31
+
+ * Add geo-regionalization (location) support to Jobs.
+ * Add Project#encryption support to Jobs.
+ * Rename Encryption to EncryptionConfiguration.
+ * Add blocks for setting Job properties to all Job creation methods.
+ * Add support for lists of URLs to #load and #load_job. (jeremywadsack)
+ * Fix Schema::Field type helpers.
+ * Fix Table#load example in README.
+
+ ### 1.1.0 / 2018-02-27
+
+ * Support table partitioning by field.
+ * Support Shared Configuration.
+ * Improve AsyncInserter performance.
+
+ ### 1.0.0 / 2018-01-10
+
+ * Release 1.0.0
+ * Update authentication documentation
+ * Update Data documentation and code examples
+ * Remove reference to sync and async queries
+ * Allow use of URI objects for Dataset#load, Table#load, and Table#load_job
+
+ ### 0.30.0 / 2017-11-14
+
+ * Add `Google::Cloud::Bigquery::Credentials` class.
+ * Rename constructor arguments to `project_id` and `credentials`.
+   (The previous arguments `project` and `keyfile` are still supported.)
+ * Support creating `Dataset` and `Table` objects without making API calls using
+   `skip_lookup` argument.
+ * Add `Dataset#reference?` and `Dataset#resource?` helper methods.
+ * Add `Table#reference?` and `Table#resource?` and `Table#resource_partial?`
+   and `Table#resource_full?` helper methods.
+ * `Dataset#insert_async` and `Table#insert_async` now yield a
+   `Table::AsyncInserter::Result` object.
+ * `View` is removed, now uses `Table` class.
+   * Needed to support `skip_lookup` argument.
+   * Calling `Table#data` on a view now raises (breaking change).
+ * Performance improvements for queries.
+ * Updated `google-api-client`, `googleauth` dependencies.
+
+ ### 0.29.0 / 2017-10-09
+
+ This is a major release with many new features and several breaking changes.
+
+ #### Major Changes
+
+ * All queries now use a new implementation, using a job and polling for results.
+ * The copy, load, extract methods now all have high-level and low-level versions, similar to `query` and `query_job`.
+ * Added asynchronous row insertion, allowing data to be collected and inserted in batches.
+ * Support external data sources for both queries and table views.
+ * Added create-on-insert support for tables.
+ * Allow for customizing job IDs to aid in organizing jobs.
+
+ #### Change Details
+
+ * Update high-level queries as follows:
+   * Update `QueryJob#wait_until_done!` to use `getQueryResults`.
+   * Update `Project#query` and `Dataset#query` with breaking changes:
+     * Remove `timeout` and `dryrun` parameters.
+     * Change return type from `QueryData` to `Data`.
+   * Add `QueryJob#data`
+   * Alias `QueryJob#query_results` to `QueryJob#data` with breaking changes:
+     * Remove the `timeout` parameter.
+     * Change the return type from `QueryData` to `Data`.
+   * Update `View#data` with breaking changes:
+     * Remove the `timeout` and `dryrun` parameters.
+     * Change the return type from `QueryData` to `Data`.
+   * Remove `QueryData`.
+   * Update `Project#query` and `Dataset#query` with improved errors, replacing the previous simple error with one that contains all available information for why the job failed.
+ * Rename `Dataset#load` to `Dataset#load_job`; add high-level, synchronous version as `Dataset#load`.
+ * Rename `Table#copy` to `Table#copy_job`; add high-level, synchronous version as `Table#copy`.
+ * Rename `Table#extract` to `Table#extract_job`; add high-level, synchronous version as `Table#extract`.
+ * Rename `Table#load` to `Table#load_job`; add high-level, synchronous version as `Table#load`.
+ * Add support for querying external data sources with `External`.
+ * Add `Table::AsyncInserter`, `Dataset#insert_async` and `Table#insert_async` to collect and insert rows in batches.
+ * Add `Dataset#insert` to support creating a table while inserting rows if the table does not exist.
+ * Update retry logic to conform to the [BigQuery SLA](https://cloud.google.com/bigquery/sla). (See the back-off sketch after this list.)
+   * Use a minimum back-off interval of 1 second; for each consecutive error, increase the back-off interval exponentially up to 32 seconds.
+   * Retry only if all error reasons are retriable, not if any of the error reasons are retriable.
+ * Add support for labels to `Dataset`, `Table`, `View` and `Job`.
+ * Add `filter` option to `Project#datasets` and `Project#jobs`.
+ * Add support for user-defined functions to `Project#query_job`, `Dataset#query_job`, `QueryJob` and `View`.
+ * In `Dataset`, `Table`, and `View` updates, add the use of ETags for optimistic concurrency control.
+ * Update `Dataset#load` and `Table#load`:
+   * Add `null_marker` option and `LoadJob#null_marker`.
+   * Add `autodetect` option and `LoadJob#autodetect?`.
+   * Fix the default value for `LoadJob#quoted_newlines?`.
+ * Add `job_id` and `prefix` options for controlling client-side job ID generation to `Project#query_job`, `Dataset#load`, `Dataset#query_job`, `Table#copy`, `Table#extract`, and `Table#load`.
+ * Add `Job#user_email`.
+ * Set the maximum delay of `Job#wait_until_done!` polling to 60 seconds.
+ * Automatically retry `Job#cancel`.
+ * Allow users to specify if a `View` query is using Standard vs. Legacy SQL.
+ * Add `project` option to `Project#query_job`.
+ * Add `QueryJob#query_plan`, `QueryJob::Stage` and `QueryJob::Step` to expose query plan information.
+ * Add `Table#buffer_bytes`, `Table#buffer_rows` and `Table#buffer_oldest_at` to expose streaming buffer information.
+ * Update `Dataset#insert` and `Table#insert` to raise an error if `rows` is empty.
+ * Update `Error` with a mapping from code 412 to `FailedPreconditionError`.
+ * Update `Data#schema` to freeze the returned `Schema` object (as in `View` and `LoadJob`).
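+
+ A generic sketch of the back-off schedule described in the retry-logic entry
+ above, not the gem's internal implementation: 1s, 2s, 4s, ... capped at 32s.
+
+ ```ruby
+ # Interval before retry `attempt` (0-based): doubles each time, capped at 32s.
+ def backoff_interval attempt
+   [2**attempt, 32].min
+ end
+
+ (0..6).each { |attempt| puts backoff_interval(attempt) } # 1 2 4 8 16 32 32
+ ```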
+
+ ### 0.28.0 / 2017-09-28
+
+ * Update Google API Client dependency to 0.14.x.
+
+ ### 0.27.1 / 2017-07-11
+
+ * Add `InsertResponse::InsertError#index` (zedalaye)
+
+ ### 0.27.0 / 2017-06-28
+
+ * Add `maximum_billing_tier` and `maximum_bytes_billed` to `QueryJob`, `Project#query_job` and `Dataset#query_job`.
+ * Add `Dataset#load` to support creating, configuring and loading a table in one API call.
+ * Add `Project#schema`.
+ * Upgrade dependency on Google API Client.
+ * Update gem spec homepage links.
+ * Update examples of field access to use symbols instead of strings in the documentation.
+
+ ### 0.26.0 / 2017-04-05
+
+ * Upgrade dependency on Google API Client
+
+ ### 0.25.0 / 2017-03-31
+
+ * Add `#cancel` to `Job`
+ * Updated documentation
+
+ ### 0.24.0 / 2017-03-03
+
+ Major release, several new features, some breaking changes.
+
+ * Standard SQL is now the default syntax.
+ * Legacy SQL syntax can be enabled by providing `legacy_sql: true`.
+ * Several fixes to how data values are formatted when returned from BigQuery.
+ * Returned data rows are now hashes with Symbol keys instead of String keys.
+ * Several fixes to how data values are formatted when importing to BigQuery.
+ * Several improvements to manipulating table schema fields.
+ * Removal of `Schema#fields=` and `Data#raw` methods.
+ * Removal of `fields` argument from `Dataset#create_table` method.
+ * Dependency on Google API Client has been updated to 0.10.x.
+
+ ### 0.23.0 / 2016-12-08
+
+ * Support Query Parameters using `params` method arguments to `query` and `query_job`
+ * Add `standard_sql`/`legacy_sql` method arguments to `query` and `query_job`
+ * Add `standard_sql?`/`legacy_sql?` attributes to `QueryJob`
+ * Many documentation improvements
+
+ ### 0.21.0 / 2016-10-20
+
+ * New service constructor Google::Cloud::Bigquery.new
+
+ ### 0.20.2 / 2016-09-30
+
+ * Add list of projects that the current credentials can access. (remi)
+
+ ### 0.20.1 / 2016-09-02
+
+ * Fix for timeout on uploads.
+
+ ### 0.20.0 / 2016-08-26
+
+ This gem contains the Google BigQuery service implementation for the `google-cloud` gem. The `google-cloud` gem replaces the old `gcloud` gem. Legacy code can continue to use the `gcloud` gem.
+
+ * Namespace is now `Google::Cloud`
+ * The `google-cloud` gem is now an umbrella package for individual gems