gd_bam 0.0.2 → 0.0.3

data/README.md CHANGED
@@ -1,73 +1,73 @@
  #BAsh Machinery = BAM
 
- This thing is fresh from the oven. It is 0.0.1 so there are lots of rough edges. On the other hand there is enough to make you dangerous. Play with it, break it, let me know.
+ BAM is a tool that helps you be more productive while creating and maintaining projects.
 
- ###What are goals of BAM
- * Be able to spin up a predefined project in hours not days
- * Everything implemented using CloudConnect so we can still leverage the infrastructure and deploy to secure
- * Make some specific modifications easy (new fields from sources)
 
- ###What are not goals of BAM
- * Supersede Clover/CC
- * Provide templates for development. The generated grapsh are meant not to be tampered with.
- * define a whole processing language. This might be the next extension but right now the goal is to be able to spin up predefined projects as fast and easily as possible. I am not defining primitives for joins, reformats etc.
+ ##Installation
 
- ##Overview
+ Make sure you have Ruby (1.9 and 1.8.7 are currently supported) and that you have gem installed. On a Mac this is already on your machine.
 
- BAM is consisting of two parts. The underlying layer that allows you to build ETLs from prebuild constructs. Second part should make possible to express different configurations in user comprehensible way and configure first layer for specific projects so you do not need to deal with low level stuff when you decide that you want to use Amount instead of Total Price in GoodSales project.
+ `gem install gd_bam`
 
- ####1st layer
+ Done.
 
- There are 3 basic pieces that you will be playing around. Let's have a look at those
+ <!-- ##Spin up a new project
 
- 1) Tap
- This is a fancy name for source of data. It can be downloader from SF or CSV file. Tap configurations are source specific. Currently there is SF implemented.
+ Here project refers to the GoodData project.
 
- 2) Sink
- This is the target of your data. The only sink we have currently is GD.
+ `bam project`
 
- 3) Graph
- This is a clover graph. So it plays well with Ultra it needs to be created in a specific way. You can use graphs from the library or those that you provide locally (N/A yet).
+ This spins up a completely new empty project. You can also spin up one of several predefined templates that bam knows about; currently it is just goodsales. You do not need to worry about versions etc. BAM does this for you and spins up the latest available.
 
+ ###Scaffolding
+ Scaffolding helps you create the files and provides the initial structure. Project here refers to the BAM project.
 
- 4) Flow
- This is something that describes how the data are flowing. The previous three pieces are the things that you can use in the flow.
+ Create a project: `bam scaffold project test`
 
+ This will create a project. You can also scaffold a project from a known template; currently it is again just goodsales. The goal is that after spinning up your GoodData and BAM projects you are good to run. If you want the defaults there is nothing you need to do besides filling in credentials.
 
- ###2nd layer
- TBD
+ -->
+ ##Sample project
 
- ##Installation
+ We will spin up a goodsales project and load it with data. A prerequisite for this is a functioning Salesforce project, which you can grab at force.com.
 
- make sure you have ruby (1.9 and 1.8 is supported) and that you have gem installed
+ `bam scaffold project test --blueprint goodsales`
 
- `gem install gd_bam`
+ Now you can go inside (`cd test`). You will notice several directories and files. We will get to `flows`, `taps` and `sinks` later. For now focus just on `params.json`.
+ If you open it you will see several parameters which you have to fill in. Common ones should be predefined and empty. BAM should warn you when it needs something it does not have.
 
- create a project `bam scaffold project test`
+ One of the parameters is project_pid. To get that you need a project.
 
- this will create a project
+ `bam project --blueprint goodsales`
+ This should spin for a while and eventually give you a project ID. Fill it into your params.json.
 
- ##Sample project
+ Now go ahead and generate the downloaders.
+ `bam generate_downloaders`
+
+ We will talk in detail later about why we split a project's ETL into downloaders and the rest of the ETL. For now just trust us. By default it is generated into the downloader_project folder. You can go ahead and run it on the platform.
+
+ `bam run project_downloaders --email joe@example.com`
+
+ You can watch the progress in the CloudConnect console.
+
+ Now generate the ETL.
 
- now you can go inside `cd test` and generate it
  `bam generate`
 
- This tap will download users from sf (you have to provide credentials in params.json). It then runs graph called "process user" (this is part of the distribution). This graph concatenates first name and last name together.
+ This works the same as with downloaders but its default target is clover_project.
 
- GoodData::CloverGenerator::DSL::flow("user") do |f|
-   tap(:id => "user")
-
-   graph("process_owner")
-   metadata("user") do |m|
-     m.remove("FirstName")
-     m.remove("LastName")
-     m.add(:name => "Name")
-   end
-
-   sink(:id => "user")
- end
+ `bam run clover_project --email joe@example.com`
 
- Now you have to provide it the definition of tap which you can do like this.
+ After it is finished, log in to GoodData, go into your project and celebrate. You just did a project using BAM.
+
+ ##Taps
+ Taps are sources of data. There are various types (:-)) but right now you can use just the salesforce tap.
+
+ ###Common properties
+ Every tap has a source and an id. The source tells it where to go to grab data: DB, CSV, SalesForce. Depending on the type the definition might differ. The id holds a name with which you can reference a particular tap; obviously it needs to be unique.
+
+ ###Salesforce
+ This is a tap that connects to SalesForce using the credentials mentioned in params.json. Object tells the downloader which SF object to grab.
 
  {
  "source" : "salesforce"
@@ -92,6 +92,83 @@ Now you have to provide it the definition of tap which you can do like this.
  ]
  }
 
+ ####Limit
+ Sometimes it is useful to limit the number of grabbed values, for example for testing purposes.
+
+ {
+   "source" : "salesforce"
+   ,"object" : "User"
+   ,"id" : "user"
+   ,"fields" : [
+     {
+       "name" : "Id"
+     }
+     .
+     .
+     .
+   ]
+   ,"limit": 100
+ }
+
+ ####Acts as
+ Sometimes you need to use one field several times in a source of data, or you want to "call" a certain field differently because the ETL relies on a particular name. Both cases are handled using `acts_as`.
+
+ {
+   "source" : "salesforce"
+   ,"object" : "User"
+   ,"id" : "user"
+   ,"fields" : [
+     {
+       "name" : "Id", "acts_as" : ["Id", "Name"]
+     },
+     {
+       "name" : "Custom_Amount__c", "acts_as" : ["RenamedAmount"]
+     }
+   ]
+ }
+
+ Id will be routed to both Id and Name. Custom_Amount__c will be called RenamedAmount.
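
The routing described above can be sketched in plain Ruby. This is a hypothetical helper, not part of BAM; it only illustrates how a field fans out into one output per `acts_as` name:

```ruby
# Hypothetical sketch of the acts_as expansion described above; not BAM's actual code.
# Each declared field is emitted once per name in its acts_as list
# (defaulting to its own name when acts_as is absent).
def expand_acts_as(fields)
  fields.flat_map do |f|
    (f["acts_as"] || [f["name"]]).map do |target|
      { "source" => f["name"], "target" => target }
    end
  end
end

fields = [
  { "name" => "Id", "acts_as" => ["Id", "Name"] },
  { "name" => "Custom_Amount__c", "acts_as" => ["RenamedAmount"] }
]
expand_acts_as(fields)
# => [{"source"=>"Id", "target"=>"Id"}, {"source"=>"Id", "target"=>"Name"},
#     {"source"=>"Custom_Amount__c", "target"=>"RenamedAmount"}]
```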
+
+ ####Condition
+ You can also specify a condition during download. I recommend using it only if it drastically lowers the amount of data that goes over the wire. Otherwise implement it elsewhere.
+
+ ####Incremental
+ It is wasteful to download everything over and over again. If you specify incremental=true you are telling BAM to download only incrementally. This means several things; even if BAM tries to hide away as much as it can, it might be confusing. An incremental tap means
+
+ * the tap is treated differently. If you generate the ETL with BAM generate the taps will not reach into SF but into an intermediary store.
+ * with generate_downloaders you can generate graphs that will handle the incremental nature of downloaders and all the bookkeeping, and make sure that nothing gets lost. It stores the data in an intermediary store.
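
The bookkeeping an incremental downloader has to do can be sketched roughly like this. The class and field names are illustrative, not BAM's; the point is the watermark that makes sure nothing is fetched twice or lost:

```ruby
# Illustrative sketch of incremental-download bookkeeping; not BAM's actual code.
# A watermark records the newest modification time seen so far; each run fetches
# only records modified after it, then advances the watermark.
class IncrementalStore
  attr_reader :records, :watermark

  def initialize
    @watermark = Time.at(0)
    @records = []
  end

  # source must respond to fetch_since(time) and return [{:id =>, :modified =>}, ...]
  def run(source)
    fresh = source.fetch_since(@watermark)
    @records.concat(fresh)
    @watermark = fresh.map { |r| r[:modified] }.max || @watermark
    fresh
  end
end

# A fake source standing in for the Salesforce downloader
FakeSource = Struct.new(:data) do
  def fetch_since(t)
    data.select { |r| r[:modified] > t }
  end
end
```

Running it twice fetches only the records added between the runs, which is why frequent incremental runs are cheap.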
+
+ ####Why split graphs?
+ The reason for this is simple. When you download only incrementally you do not stress the wires that much, which means you can run it pretty often. By running it often, even if something horrible happens once, it will probably run successfully next time. And as we mentioned, this is cheap. On the other hand, running the main ETL is often very expensive and recovering from failure is usually different, so splitting them simplifies development of each. Since they are independent they can even be developed by different people, which is sometimes useful.
+
+ ####Taps validation
+ Fail early. There is nothing more frustrating than when the ETL fails during execution. When you develop the taps you can ask BAM to connect to SF and validate that the fields are present. This is not bulletproof since some fields can go away at any time, but it gives you a good idea whether you misspelled any fields.
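
Conceptually the check boils down to a set difference between the fields a tap declares and the fields the source actually exposes. This is an illustrative sketch, not BAM's validator:

```ruby
# Illustrative sketch of tap validation; not BAM's actual validator.
# Compare the fields a tap declares with the fields the source actually
# exposes and report anything missing, so the failure happens before the run.
def missing_fields(tap, available_fields)
  tap["fields"].map { |f| f["name"] } - available_fields
end

tap = { "id" => "user", "fields" => [{ "name" => "Id" }, { "name" => "FirstName" }] }
missing_fields(tap, ["Id", "LastName"])   # => ["FirstName"]
```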
+
+ ####Mandatory fields
+ Sometimes it is necessary to move fields around in SF, in which case the tap will fail. If you know this upfront you can tell BAM that a field is not mandatory and it will silently go along, filling the missing field with ''.
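
The behaviour just described can be sketched as follows. The helper name and field shape are hypothetical, not BAM's:

```ruby
# Hypothetical sketch of the behaviour described above; not BAM's actual code.
# Mandatory fields must be present; non-mandatory fields that disappeared
# from the source are silently filled with ''.
def fill_missing(record, fields)
  fields.each_with_object({}) do |f, out|
    if record.key?(f[:name])
      out[f[:name]] = record[f[:name]]
    elsif f[:mandatory]
      raise "Mandatory field #{f[:name]} is missing"
    else
      out[f[:name]] = ''
    end
  end
end

fields = [{ :name => "Id", :mandatory => true }, { :name => "Region", :mandatory => false }]
fill_missing({ "Id" => "123" }, fields)   # => {"Id"=>"123", "Region"=>""}
```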
+
+ ###Sinks
+
+ This tap will download users from SF (you have to provide credentials in params.json). It then runs a graph called "process user" (this is part of the distribution). This graph concatenates first name and last name together.
+
+ GoodData::CloverGenerator::DSL::flow("user") do |f|
+   tap(:id => "user")
+
+   graph("process_owner")
+   metadata("user") do |m|
+     m.remove("FirstName")
+     m.remove("LastName")
+     m.add(:name => "Name")
+   end
+
+   sink(:id => "user")
+ end
+
+ Now you have to provide the definition of the tap, which you can do like this.
+
 
  Also you need to provide a definition for sink which can look somehow like this.
 
  {
@@ -113,7 +190,7 @@ For this example to work you need to provide SF and gd credentials. Provide them
  Now run `bam generate` and there will be a folder with the clover project generated. Open it in CC, find main.grf and run it. After crunching for a while you should see data in the project.
 
  ### Runtime commands
- Part of the distribution is the bam executable which lets you do several neat things.
+ Part of the distribution is the bam executable which lets you do several neat things on the command line.
 
  Run `bam` to get the list of commands
  Run `bam help command` to get help about the command
@@ -148,4 +225,8 @@ sinks
  Currently works only for SF. Validates that the target SF instance has all the fields in the objects that are specified in the taps definitions.
 
  ### validate_datasets
- Vallidates the sinks (currently only GD) with the definitions in the proeject. It looks for fields that are defined inside sinks and are not in the projects missing references etc. More description needed.
+ Validates the sinks (currently only GD) against the definitions in the project. It looks for fields that are defined inside sinks and are not in the project, missing references etc. More description needed.
+
+ ##Roadmap
+ * Allow different storage than ES (Vertica)
+ * Contract checkers
data/bin/bam CHANGED
@@ -86,19 +86,114 @@ command :sinks_validate do |c|
    end
  end
 
+ desc 'Creates project'
+ command :project do |c|
+
+   c.desc 'blueprint name. Currently support goodsales'
+   c.arg_name 'blueprint'
+   c.flag :blueprint
+
+   c.desc 'token'
+   c.arg_name 'token'
+   c.flag :token
+
+   c.action do |global_options,options,args|
+
+     pid = case options[:blueprint]
+       when "goodsales"
+         "i49w4c73c2mh75iiehte3fv3fbos8h2k"
+       when nil
+         fail "Empty project not supported now"
+     end
+     GoodData::CloverGenerator.connect_to_gd
+     with_users = options[:with_users]
+
+     export = {
+       :exportProject => {
+         :exportUsers => with_users ? 1 : 0,
+         :exportData => 1
+       }
+     }
+
+     result = GoodData.post("/gdc/md/#{pid}/maintenance/export", export)
+     token = result["exportArtifact"]["token"]
+     status_url = result["exportArtifact"]["status"]["uri"]
+
+     state = GoodData.get(status_url)["taskState"]["status"]
+     while state == "RUNNING"
+       sleep 5
+       result = GoodData.get(status_url)
+       state = result["taskState"]["status"]
+     end
+
+     old_project = GoodData::Project[pid]
+
+     pr = {
+       :project => {
+         :content => {
+           :guidedNavigation => 1,
+           :driver => "Pg",
+           :authorizationToken => options[:token]
+         },
+         :meta => {
+           :title => "Test Project",
+           :summary => "Testing Project"
+         }
+       }
+     }
+     result = GoodData.post("/gdc/projects/", pr)
+     uri = result["uri"]
+     while(GoodData.get(uri)["project"]["content"]["state"] == "LOADING")
+       sleep(5)
+     end
+
+     new_project = GoodData::Project[uri]
+
+     import = {
+       :importProject => {
+         :token => token
+       }
+     }
+
+     result = GoodData.post("/gdc/md/#{new_project.obj_id}/maintenance/import", import)
+     status_url = result["uri"]
+     state = GoodData.get(status_url)["taskState"]["status"]
+     while state == "RUNNING"
+       sleep 5
+       result = GoodData.get(status_url)
+       state = result["taskState"]["status"]
+     end
+
+   end
+
+ end
+
 
  desc 'Generates structures'
  arg_name 'what you want to generate project, tap, flow, dataset'
  command :scaffold do |c|
+
+   c.desc 'blueprint name. Currently support goodsales'
+   c.arg_name 'blueprint'
+   c.flag :blueprint
+
    c.action do |global_options,options,args|
      command = args.first
      fail "You did not provide what I should scaffold. I can generate project, tap, flow, sink nothing else" unless ["project", "tap", "flow", "sink"].include?(command)
      case command
      when "project"
-       puts "project"
        directory = args[1]
        fail "Directory has to be provided as an argument. See help" if directory.nil?
-       GoodData::CloverGenerator.setup_bash_structure(directory)
+       if options[:blueprint].nil?
+         GoodData::CloverGenerator.setup_bash_structure(directory)
+       else
+         case options[:blueprint]
+         when "goodsales"
+           system "git clone git@github.com:gooddata/goodsales_base.git #{directory}"
+         end
+       end
      when "flow"
        name = args[1]
        fail "Name of the flow has to be provided as an argument. See help" if name.nil?
@@ -164,10 +259,8 @@ command :run do |c|
    verbose = global_options[:v]
 
    GoodData::CloverGenerator.connect_to_gd
-   GoodData::CloverGenerator.create_email_channel
    options = global_options.merge({:name => "temporary"})
-   GoodData::CloverGenerator.deploy(args.first, options) do |deploy_response|
-
+   GoodData::CloverGenerator.deploy(dir, options) do |deploy_response|
      puts HighLine::color("Executing", HighLine::BOLD) if verbose
      GoodData::CloverGenerator.create_email_channel do
        GoodData::CloverGenerator.execute_process(deploy_response["cloverTransformation"]["links"]["executions"], dir)
data/lib/bam/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Bam
-   VERSION = '0.0.2'
+   VERSION = '0.0.3'
  end
@@ -1,11 +1,11 @@
  <?xml version="1.0" encoding="UTF-8"?>
- <Graph author="fluke" created="Tue Feb 05 15:38:24 PST 2013" guiVersion="3.3.1" id="1360179808937" licenseType="Commercial" modified="Fri Feb 22 13:36:18 PST 2013" modifiedBy="fluke" name="process_name" revision="1.12" showComponentDetails="true">
+ <Graph author="fluke" created="Tue Feb 05 15:38:24 PST 2013" guiVersion="3.3.2" id="1360179808937" licenseType="Commercial" modified="Wed Apr 17 09:21:46 PDT 2013" modifiedBy="josefvitek" name="process_name" revision="1.18" showComponentDetails="true">
  <Global>
  <Metadata fileURL="${PROJECT}/metadata/${FLOW}/${NAME}/1_in.xml" id="Metadata0"/>
  <Metadata fileURL="${PROJECT}/metadata/${FLOW}/${NAME}/1_out.xml" id="Metadata1"/>
  <MetadataGroup id="ComponentGroup0" name="metadata"/>
  <Property fileURL="params.txt" id="GraphParameter0"/>
- <Property fileURL="workspace.prm" id="GraphParameter1"/>
+ <Property fileURL="workspace.prm" id="GraphParameter2"/>
  <Dictionary/>
  </Global>
  <Phase number="0">
@@ -20,9 +20,9 @@ function integer transform() {
 
  $out.0.* = $in.0.*;
  $out.0.status = status;
- $out.0.SortOrder = "0";
- $out.0.label = $in.0.Id;
-
+ $out.0.label = $in.0.Id;
+ $out.0.SortOrder = nvl($in.0.SortOrder, "9999");
+
  return ALL;
  }
 
data/lib/runtime.rb CHANGED
@@ -60,7 +60,7 @@ module GoodData
      FileUtils::mkdir_p base + dir
    end
    FileUtils::touch(base + 'params.txt')
-   render_template("project.erb", PARAMS, :to_file => base + '.project')
+   render_template("project.erb", PARAMS.merge(:project_name => "downloader-#{PARAMS[:project_name]}"), :to_file => base + '.project')
    render_template("workspace.prm.erb", PARAMS, :to_file => base + 'workspace.prm')
  end
 
@@ -165,38 +165,48 @@ module GoodData
    dry_run = options[:dry]
    project = build_project
    datasets = project.get_datasets
-   datasets.each do |ds|
-     dataset_path = "cl_file_#{ds[:id]}"
-     File.open(dataset_path, "w") do |temp|
-       builder = Builder::XmlMarkup.new(:target=>temp, :indent=>2)
-       builder.schema do |builder|
-         builder.name(ds[:gd_name])
-         builder.title(ds[:gd_name])
-         builder.columns do |b|
-           ds[:fields].each do |f|
-             builder.column do |builder|
-               builder.title(f[:name])
-               builder.ldmType(f[:type].upcase)
-               builder.reference(f[:for]) if f.has_key?(:for)
-               builder.reference(f[:ref]) if f.has_key?(:ref)
-               builder.schemaReference(f[:schema]) if f.has_key?(:schema)
-               if f[:type] == "date"
-                 builder.schemaReference("#{f[:dd]}")
-                 builder.name("#{f[:name]}")
-               else
-                 builder.name(f[:name] || f[:ref])
+   model_update_dir = Pathname('model_update')
+   cl_home = ENV['CL_HOME'] || PARAMS['CL_HOME'] || fail("Home of cl tool cannot be found. Either set up CL_HOME in your env with 'export CL_HOME=path/to/cl or set it up in your params.json. Point to the directory of CL not to the bin dir.'")
+   cl_home = Pathname(cl_home) + 'bin/gdi.sh'
+
+   FileUtils::mkdir_p(model_update_dir)
+   File.open(model_update_dir + 'xy', 'w')
+   FileUtils::cd(model_update_dir) do
+     datasets.each do |ds|
+       dataset_path = Pathname("cl_file_#{ds[:id]}")
+       File.open(dataset_path, "w") do |temp|
+         builder = Builder::XmlMarkup.new(:target=>temp, :indent=>2)
+         builder.schema do |builder|
+           builder.name(ds[:gd_name])
+           builder.title(ds[:gd_name])
+           builder.columns do |b|
+             ds[:fields].each do |f|
+               builder.column do |builder|
+                 builder.title(f[:name])
+                 builder.ldmType(f[:type].upcase)
+                 builder.reference(f[:for]) if f.has_key?(:for)
+                 builder.reference(f[:ref]) if f.has_key?(:ref)
+                 builder.schemaReference(f[:schema]) if f.has_key?(:schema)
+                 if f[:type] == "date"
+                   builder.schemaReference("#{f[:dd]}")
+                   builder.name("#{f[:name]}")
+                 else
+                   builder.name(f[:name] || f[:ref])
+                 end
                end
              end
           end
         end
       end
+       template_name = dry_run ? "update_dataset_dry.script.erb" : "update_dataset.script.erb"
+       render_template(template_name, PARAMS.merge({"config_file" => dataset_path.expand_path}), :to_file => 'update_dataset.script')
+       puts "Generate #{ds[:id]}"
+
+       system("#{cl_home} update_dataset.script --username #{PARAMS[:gd_login]} --password #{PARAMS[:gd_pass]}")
+       File.delete(dataset_path)
     end
-     template_name = dry_run ? "update_dataset_dry.script.erb" : "update_dataset.script.erb"
-     render_template(template_name, PARAMS.merge({"config_file" => dataset_path}), :to_file => 'update_dataset.script')
-     puts "Generate #{ds[:id]}"
-     system("~/Downloads/gooddata-cli-1.2.65/bin/gdi.sh update_dataset.script --username #{PARAMS[:gd_login]} --password #{PARAMS[:gd_pass]}")
-     File.delete(dataset_path)
   end
+   FileUtils::rm_rf(model_update_dir)
  end
 
  def self.generate_downloaders(options={})
@@ -216,6 +226,7 @@ module GoodData
 
 
  def self.execute_process(link, dir)
+   binding.pry
    result = GoodData.post(link, {
      :graphExecution => {
        :graph => "./#{dir}/graphs/main.grf",
@@ -228,7 +239,7 @@
  def self.connect_to_gd
    GoodData.logger = Logger.new(STDOUT)
    GoodData.connect(PARAMS[:gd_login], PARAMS[:gd_pass])
-   GoodData.project = PARAMS[:project_pid]
+   GoodData.project = PARAMS[:project_pid] if !PARAMS[:project_pid].nil? && !PARAMS[:project_pid].empty?
  end
 
  def self.create_email_channel(&block)
@@ -262,6 +273,7 @@ module GoodData
    deploy_name = options[:name]
    verbose = options[:verbose] || false
    puts HighLine::color("Deploying #{dir}", HighLine::BOLD) if verbose
+   res = nil
 
    Tempfile.open("deploy-graph-archive") do |temp|
      Zip::ZipOutputStream.open(temp.path) do |zio|
@@ -274,7 +286,6 @@
      end
    end
 
-
    GoodData.connection.upload(temp.path)
    process_id = options[:process]
 
@@ -291,6 +302,7 @@
      end
    end
    puts HighLine::color("Deploy DONE #{dir}", HighLine::BOLD) if verbose
+   res
  end
 
  def self.deploy(dir, options={}, &block)
@@ -1,4 +1,4 @@
  OpenProject(id="<%= project_pid %>");
- UseCsv(csvDataFile="xy", configFile="<%= config_file %>", hasHeader="true", separator = ",");
+ UseCsv(csvDataFile="model_update/xy", configFile="<%= config_file %>", hasHeader="true", separator = ",");
  GenerateUpdateMaql(maqlFile="<%= config_file %>.maql");
  ExecuteMaql(maqlFile="<%= config_file %>.maql");
@@ -1,3 +1,3 @@
  OpenProject(id="<%= project_pid %>");
- UseCsv(csvDataFile="xy", configFile="<%= config_file %>", hasHeader="true", separator = ",");
+ UseCsv(csvDataFile="model_update/xy", configFile="<%= config_file %>", hasHeader="true", separator = ",");
  GenerateUpdateMaql(maqlFile="<%= config_file %>.maql");
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: gd_bam
  version: !ruby/object:Gem::Version
-   version: 0.0.2
+   version: 0.0.3
  prerelease:
  platform: ruby
  authors:
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2013-04-12 00:00:00.000000000 Z
+ date: 2013-04-18 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: rake
@@ -256,17 +256,17 @@ dependencies:
  requirement: !ruby/object:Gem::Requirement
    none: false
    requirements:
-     - - ! '>='
+     - - '='
      - !ruby/object:Gem::Version
-       version: '0'
+       version: 0.5.9
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    none: false
    requirements:
-     - - ! '>='
+     - - '='
      - !ruby/object:Gem::Version
-       version: '0'
+       version: 0.5.9
  - !ruby/object:Gem::Dependency
    name: highline
    requirement: !ruby/object:Gem::Requirement
@@ -393,7 +393,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
      version: '0'
    segments:
    - 0
- hash: -632830288147751058
+ hash: -596246612218275016
  required_rubygems_version: !ruby/object:Gem::Requirement
    none: false
    requirements:
@@ -402,7 +402,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
    segments:
    - 0
- hash: -632830288147751058
+ hash: -596246612218275016
  requirements: []
  rubyforge_project:
  rubygems_version: 1.8.25