bigbroda 0.0.6 → 0.0.7
- checksums.yaml +4 -4
- data/README.md +25 -23
- data/lib/google_bigquery/auth.rb +1 -1
- data/lib/google_bigquery/version.rb +1 -1
- metadata +1 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: ee5bfec85d610513c98a0859842ce00827fc6447
+  data.tar.gz: 5ac1a4183d960946834461d5baa585b4510e1e1e
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 930b75e37fd5402f3468651d2d2f3574ee82675fd952290c8a6bc613df221c60eae84c69361d1a5ff216a1f8d333545c5aea35f5e2cd377ff6acd5dde55090dc
+  data.tar.gz: bda1e4134eac5eeb36be1ed4ed9e7b47a0e9d2384ee3bcaf8b3778afe7bd1ec02485215fc36d7c4abb579f83913989e200f935ff429571b6f6369d4366812fb4
data/README.md
CHANGED
@@ -6,9 +6,9 @@ GoogleBigQuery ActiveRecord Adapter & standalone API client
 
 https://developers.google.com/bigquery/what-is-bigquery
 
-BigQuery is fantastic for running ad hoc aggregate queries across a very very large dataset - large web logs, ad analysis, sensor data, sales data... etc. Basically, many kinds of "full table scan" queries. Queries are written in a SQL-style language (you don't have to write custom MapReduce functions).
+BigQuery is fantastic for running ad hoc aggregate queries across a very very large dataset - large web logs, ad analysis, sensor data, sales data... etc. Basically, many kinds of "full table scan" queries. Queries are written in a SQL-style language (you don't have to write custom MapReduce functions).
 
-But!, Bigquery has a constraint to consider before diving in,
+But!, Bigquery has a constraint to consider before diving in,
 BQ is append only , that means that you can't update records or delete them.
 
 So, use BigQuery as an OLAP (Online Analytical Processing) service, not as OLTP (Online Transactional Processing). In other words, use BigQuery as a DataWareHouse.
@@ -21,6 +21,8 @@ Add 'google_bigquery' to your application's Gemfile or install it yourself as:
 
 ## Rails / ActiveRecord:
 
+This gem supports ActiveRecord >= 4
+
 #### Configure GoogleBigQuery:
 
 rails g google_bigquery:install
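The generated initializer itself is not part of this diff. As a purely hypothetical sketch, a service-account setup for this kind of adapter usually looks something like the following (the `GoogleBigquery::Config.setup` block and every field name are assumptions, not taken from this diff; check the file the generator actually writes):

```ruby
# config/initializers/bigquery.rb -- hypothetical sketch; field names are
# illustrative and should be checked against the generator's real output.
GoogleBigquery::Config.setup do |config|
  config.client_id   = "XXXXX.apps.googleusercontent.com"    # service account client id
  config.email       = "XXXXX@developer.gserviceaccount.com" # service account email
  config.key_file    = "/path/to/key.p12"                    # downloaded private key
  config.pass_phrase = "notasecret"                          # Google's default key passphrase
  config.scope       = "https://www.googleapis.com/auth/bigquery"
end
```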
@@ -47,7 +49,7 @@ ActiveRecord connection in plain ruby:
 
 ```ruby
 ActiveRecord::Base.establish_connection(
-  :adapter => 'bigquery',
+  :adapter => 'bigquery',
   :project => "MyBigQueryProject",
   :database => "MyBigTable"
 )
@@ -83,7 +85,7 @@ Then you will have to execute the migration programaticly. like this:
 ```ruby
 UserMigration.up
 ```
-or
+or
 
 ```ruby
 AddPublishedToUser.change
@@ -133,7 +135,7 @@ User.create([{name: "miki"}, {name: "jara"}])
 
 ```
 
-NOTE: by default the adapter will set Id values as an SecureRandom.hex, and for now all the foreign keys are created as a STRING type
+NOTE: by default the adapter will set Id values as an SecureRandom.hex, and for now all the foreign keys are created as a STRING type
 
 #### Deletion and edition of single rows:
 
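A minimal sketch of what that NOTE means in practice (the returned id below is illustrative, and `Post` is a hypothetical associated model):

```ruby
user = User.create(name: "frank")
user.id   # => "4f2d8c1ab35e9d07" - a SecureRandom.hex string, not an auto-increment integer

post = Post.create(user_id: user.id)  # foreign keys are STRING columns, so they hold the hex id as-is
```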
@@ -156,10 +158,10 @@ User.bigquery_export(destination)
 ```
 where destination should be a valid google cloud store uri. The adapter will manage that , so you only need to pass the file name. Example:
 
-User.bigquery_export("file.json")
+User.bigquery_export("file.json")
 
 the adapter will convert that option to gs://[configured_database]/[file.json]. Just be sure to create the bucket propperly in Cloud Storage panel.
-Also if you don't pass the file argument you will get an generated uri like: gs://[configured_database]/[table_name].json.
+Also if you don't pass the file argument you will get an generated uri like: gs://[configured_database]/[table_name].json.
 
 #### Import
 
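Putting those conversion rules together, and assuming a configured database named `my_big_table` backing a `users` table (both names illustrative):

```ruby
User.bigquery_export("file.json")  # exports to gs://my_big_table/file.json
User.bigquery_export               # no argument: exports to gs://my_big_table/users.json
```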
@@ -172,12 +174,12 @@ User.bigquery_import([an_array_with_paths_to_gs_uris])
 ```
 
 From multipart/related post:
-
+
 PENDING
 
 ### Migrations:
 
-This adapter has migration support migrations built in, but
+This adapter has migration support migrations built in, but
 
 ```ruby
 class CreateUsers < ActiveRecord::Migration
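The hunk cuts the migration off at its class line. A plausible completion under ordinary ActiveRecord conventions (the columns are illustrative, not from this diff):

```ruby
class CreateUsers < ActiveRecord::Migration
  def self.up
    create_table :users do |t|
      t.string  :name
      t.integer :age
      t.timestamps
    end
  end

  def self.down
    drop_table :users
  end
end

# Run programmatically, as the README showed earlier:
CreateUsers.up
```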
@@ -204,8 +206,8 @@ end
 
 ```
 
-Note:
-+ Big query does not provide a way to update columns nor delete, so update_column, or remove_column migration are cancelled with an catched exception.
+Note:
++ Big query does not provide a way to update columns nor delete, so update_column, or remove_column migration are cancelled with an catched exception.
 + Also the schema_migrations table is not created in DB, is created as a json file in db/schema_migrations.json instead. Be sure to add the file in your version control.
 
 ## Standalone Client:
@@ -278,13 +280,13 @@ GoogleBigquery::Dataset.list(@project_id)
 
 #### Create/Insert:
 
-```ruby
+```ruby
 GoogleBigquery::Dataset.create(@project, {"datasetReference"=> { "datasetId" => @dataset_id }} )
 ```
 
 #### Delete:
 
-```ruby
+```ruby
 GoogleBigquery::Dataset.delete(@project, @dataset_id }} )
 ```
 
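The stray `}}` in that delete snippet looks like a copy-paste leftover from the create example; judging by the surrounding calls, the intended invocation is presumably just:

```ruby
GoogleBigquery::Dataset.delete(@project, @dataset_id)
```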
@@ -292,10 +294,10 @@ GoogleBigquery::Dataset.delete(@project, @dataset_id }} )
 
 Updates information in an existing dataset. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset resource.
 
-```ruby
+```ruby
 GoogleBigquery::Dataset.update(@project, @dataset_id,
 {"datasetReference"=> {
-"datasetId" =>@dataset_id },
+"datasetId" =>@dataset_id },
 "description"=> "foobar"} )
 ```
 
@@ -305,7 +307,7 @@ GoogleBigquery::Dataset.update(@project, @dataset_id,
 ```ruby
 GoogleBigquery::Dataset.patch(@project, @dataset_id,
 {"datasetReference"=> {
-"datasetId" =>@dataset_id },
+"datasetId" =>@dataset_id },
 "description"=> "foobar"} )
 ```
 
@@ -320,8 +322,8 @@ GoogleBigquery::Dataset.patch(@project, @dataset_id,
 @table_body = { "tableReference"=> {
 "projectId"=> @project,
 "datasetId"=> @dataset_id,
-"tableId"=> @table_name},
-"schema"=> [fields:
+"tableId"=> @table_name},
+"schema"=> [fields:
 {:name=> "name", :type=> "string", :mode => "REQUIRED"},
 {:name=> "age", :type=> "integer"},
 {:name=> "weight", :type=> "float"},
@@ -330,17 +332,17 @@ GoogleBigquery::Dataset.patch(@project, @dataset_id,
 }
 
 GoogleBigquery::Table.create(@project, @dataset_id, @table_body
-```
+```
 
 #### Update:
 
 ```ruby
 GoogleBigquery::Table.update(@project, @dataset_id, @table_name,
 {"tableReference"=> {
-"projectId" => @project, "datasetId" =>@dataset_id, "tableId" => @table_name },
+"projectId" => @project, "datasetId" =>@dataset_id, "tableId" => @table_name },
 "description"=> "foobar"} )
 ```
-
+
 #### Delete:
 
 ```ruby
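The create snippet in the hunk above is missing its closing parenthesis, and `"schema"=> [fields: ...]` wraps the field list in an array, whereas the BigQuery v2 tables resource defines `schema` as an object with a `fields` array. A corrected sketch of the same call:

```ruby
@table_body = {
  "tableReference" => {
    "projectId" => @project,
    "datasetId" => @dataset_id,
    "tableId"   => @table_name
  },
  "schema" => {
    # BigQuery v2 expects an object with a "fields" array here.
    "fields" => [
      { "name" => "name",   "type" => "STRING", "mode" => "REQUIRED" },
      { "name" => "age",    "type" => "INTEGER" },
      { "name" => "weight", "type" => "FLOAT" }
    ]
  }
}

GoogleBigquery::Table.create(@project, @dataset_id, @table_body)
```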
@@ -354,7 +356,7 @@ GoogleBigquery::Table.delete(@project, @dataset_id, @table_name )
 ```
 
 ### Table Data
-
+
 https://developers.google.com/bigquery/docs/reference/v2/tabledata
 
 #### InsertAll
@@ -406,7 +408,7 @@ GoogleBigquery::TableData.list(@project, @dataset_id, @table_name)
 4. Push to the branch (`git push origin my-new-feature`)
 5. Create new Pull Request
 
-
+
 #TODO:
 
 ActiveRecord:
data/lib/google_bigquery/auth.rb
CHANGED
@@ -11,7 +11,7 @@ module GoogleBigquery
   end
 
   def authorize
-    @client = Google::APIClient.new
+    @client = Google::APIClient.new(application_name: "BigBroda", application_version: GoogleBigquery::VERSION )
     @client.authorization = @asserter.authorize
    @client.retries = @config.retries.to_i if @config.retries.to_i > 1
    @api = @client.discovered_api('bigquery', 'v2')