active_data_frame 0.1.1

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 5c1042e5d6a9e65c386a0dc5353e6fcc3e065a84
+   data.tar.gz: 13ee5f0520a97c563dc5bfedb4408010464b2e5f
+ SHA512:
+   metadata.gz: 7e9f1118a5c18a0aed0bc933ec2e9bfc7d443412e762da9cee9707bdf4084922e9d2904f3668088ff2d3866bcebf9a90b0b8c9e21c90e023c9fb2ca19d1c57c3
+   data.tar.gz: f957ad5532cfcd4a5d635d278a0a5ee6de2c4b6179e5e82f785f6aa9961ae5ea64846d3c138816c1d2e5b2443dc766bcfe7576e2403c7d21fcf77b8e37315b6a
data/.gitignore ADDED
@@ -0,0 +1,9 @@
+ /.bundle/
+ /.yardoc
+ /Gemfile.lock
+ /_yardoc/
+ /coverage/
+ /doc/
+ /pkg/
+ /spec/reports/
+ /tmp/
data/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,74 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ In the interest of fostering an open and welcoming environment, we as
+ contributors and maintainers pledge to making participation in our project and
+ our community a harassment-free experience for everyone, regardless of age, body
+ size, disability, ethnicity, gender identity and expression, level of experience,
+ nationality, personal appearance, race, religion, or sexual identity and
+ orientation.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to creating a positive environment
+ include:
+
+ * Using welcoming and inclusive language
+ * Being respectful of differing viewpoints and experiences
+ * Gracefully accepting constructive criticism
+ * Focusing on what is best for the community
+ * Showing empathy towards other community members
+
+ Examples of unacceptable behavior by participants include:
+
+ * The use of sexualized language or imagery and unwelcome sexual attention or
+   advances
+ * Trolling, insulting/derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or electronic
+   address, without explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Our Responsibilities
+
+ Project maintainers are responsible for clarifying the standards of acceptable
+ behavior and are expected to take appropriate and fair corrective action in
+ response to any instances of unacceptable behavior.
+
+ Project maintainers have the right and responsibility to remove, edit, or
+ reject comments, commits, code, wiki edits, issues, and other contributions
+ that are not aligned to this Code of Conduct, or to ban temporarily or
+ permanently any contributor for other behaviors that they deem inappropriate,
+ threatening, offensive, or harmful.
+
+ ## Scope
+
+ This Code of Conduct applies both within project spaces and in public spaces
+ when an individual is representing the project or its community. Examples of
+ representing a project or community include using an official project e-mail
+ address, posting via an official social media account, or acting as an appointed
+ representative at an online or offline event. Representation of a project may be
+ further defined and clarified by project maintainers.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported by contacting the project team at TODO: Write your email address. All
+ complaints will be reviewed and investigated and will result in a response that
+ is deemed necessary and appropriate to the circumstances. The project team is
+ obligated to maintain confidentiality with regard to the reporter of an incident.
+ Further details of specific enforcement policies may be posted separately.
+
+ Project maintainers who do not follow or enforce the Code of Conduct in good
+ faith may face temporary or permanent repercussions as determined by other
+ members of the project's leadership.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+ available at [http://contributor-covenant.org/version/1/4][version]
+
+ [homepage]: http://contributor-covenant.org
+ [version]: http://contributor-covenant.org/version/1/4/
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in active_data_frame.gemspec
+ gemspec
data/README.md ADDED
@@ -0,0 +1,36 @@
+ # ActiveDataFrame
+
+ Welcome to your new gem! In this directory, you'll find the files you need to be able to package up your Ruby library into a gem. Put your Ruby code in the file `lib/active_data_frame`. To experiment with that code, run `bin/console` for an interactive prompt.
+
+ TODO: Delete this and the text above, and describe your gem
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'active_data_frame'
+ ```
+
+ And then execute:
+
+     $ bundle
+
+ Or install it yourself as:
+
+     $ gem install active_data_frame
+
+ ## Usage
+
+ TODO: Write usage instructions here
+
+ ## Development
+
+ After checking out the repo, run `bin/setup` to install dependencies. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+
+ To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/active_data_frame. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
+
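The README's usage section is still a TODO. The following is a rough sketch only, inferred from the generator and proxy classes further down in this release; the `Unit` model and `reading` frame names are hypothetical:

```ruby
# Generate a float-typed frame called "reading" with 512 columns per block and
# inject the concern into the Unit model (invocation shape inferred from the
# generator's declared arguments):
#   $ rails generate active_data_frame:install reading float 512 unit

unit = Unit.first

# Per-record Row proxy: write a run of values starting at column 0, then
# read it back as an RMatrix vector.
unit.reading[0] = [1.5, 2.0, 2.5, 3.0]
unit.reading[0...4]

# Class-level Table proxy across the current scope, with SQL-side aggregates.
Unit.readings[0...4]
Unit.readings.avg[0...4]
```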
data/Rakefile ADDED
@@ -0,0 +1,2 @@
+ require "bundler/gem_tasks"
+ task :default => :spec
data/active_data_frame.gemspec ADDED
@@ -0,0 +1,28 @@
+ # coding: utf-8
+ lib = File.expand_path('../lib', __FILE__)
+ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+ require 'active_data_frame/version'
+
+ Gem::Specification.new do |spec|
+   spec.name = "active_data_frame"
+   spec.version = ActiveDataFrame::VERSION
+   spec.authors = ["Wouter Coppieters"]
+   spec.email = ["wc@pico.net.nz"]
+
+   spec.summary = 'An active data frame helper'
+   spec.description = 'An active data frame helper'
+
+   spec.files = `git ls-files -z`.split("\x0").reject do |f|
+     f.match(%r{^(test|spec|features)/})
+   end
+   spec.bindir = "exe"
+   spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
+   spec.require_paths = ["lib"]
+
+   spec.add_development_dependency "bundler", "~> 1.13"
+   spec.add_development_dependency "rake", "~> 10.0"
+   spec.add_development_dependency "pry-byebug", "~> 3.4.0", '>= 3.4.0'
+   spec.add_development_dependency 'pry', '~> 0.10.2', '>= 0.10.0'
+   spec.add_runtime_dependency 'activerecord', '~> 5.0.0'
+   spec.add_runtime_dependency 'rmatrix', '~> 0.1.10', '>=0.1.10'
+ end
data/active_data_frame.todo ADDED
@@ -0,0 +1,83 @@
+ Refactor:
+ ✔ Is Engine necessary? @done (17-03-31 08:20)
+ ✔ Add Typecode expectations @done (17-04-06 08:16)
+ ☐ Add enum capabilities
+ ☐ Better errors when using bad indices in RMatrix
+ ☐ Better printing in RMatrix
+ ☐ Refactor + Tidy
+ ☐ Tests
+ ☐ Experiment with MonetDB speed
+ ☐ Check support for different numeric/string/bool, etc. types
+ ✔ Experiment with single precision @done (17-03-31 08:18)
+ ActiveRecordMonetDBAdapter:
+ ☐ Work on support for MonetDB
+
+ ActiveDataFrame:
+ ✔ Refactor grouping/summing code @done (17-03-31 08:20)
+ ✔ Allow includes to combine frames @done (17-03-27 10:36)
+ ✔ Performance test on ICP data @done (17-03-27 08:41)
+ ✔ Alternate RDBMS support (SQLite, MySQL) @done (17-03-27 09:58)
+
+
+ Utilities:
+ ☐ KMeans clustering and DBScan built in to multi-d array
+
+ Later:
+ ☐ Build generic Merge/Cache structure which will either cache infinite columns or rows
+ - class Unit
+ -   df_cache :all_loads, ::loads, direction: :row
+ - end
+
+ Ruby dataframe library inspiration:
+ - Integration with Nyaplot
+ - Integration with Statsample
+
+ ✔ Generator creates a migration and data_frame and block classes. Block/DataFrame classes have a type, a period unit and a period length @done (17-01-12 10:29)
+ ✔ Type is: @done (17-01-12 10:29)
+ ✔ Bit @done (17-01-12 10:29)
+ ✔ Short @done (17-01-12 10:29)
+ ✔ Int @done (17-01-12 10:29)
+ ✔ Long @done (17-01-12 10:29)
+ ✔ Float @done (17-01-12 10:29)
+ ✔ Double @done (17-01-12 10:29)
+
+ ✔ Insert useful metadata into block type class. @done (17-01-12 19:43)
+ ✔ Number of columns @done (17-01-12 19:43)
+ ✔ Column getters @done (17-01-12 19:43)
+ ✔ Column setters @done (17-01-12 19:43)
+ ✔ Select SQL @done (17-01-12 19:43)
+
+ ✔ DataBlock and DataDataFrame provides: @done (17-01-12 19:43)
+ ✔ #[] @done (17-01-12 19:43)
+ ✔ #[]= @done (17-01-12 19:43)
+ ✔ #self.matrix(columns:, *time, period_unit: period_length: default derive from first) @done (17-01-12 19:43)
+ ✔ #self.avg @done (17-01-12 19:43)
+ ✔ #self.sum @done (17-01-12 19:43)
+ ✔ #self.count_zero @done (17-01-12 19:43)
+ ✔ #self.where @done (17-01-12 19:43)
+ ✔ #self.max @done (17-01-12 19:43)
+ ✔ #self.min @done (17-01-12 19:43)
+ ✔ Bulk service can bulk insert and update (Implement for PostgreSQL first) @done (17-01-12 19:43)
+ ✔ Time helper service @done (17-01-12 19:43)
+
+ Thoughts:
+ ✔ Can extract non-contiguous columns at once (Optimise) @done (17-01-14 09:40)
+ ✔ Extract doesn't use from, count. Use either: @done (17-01-14 09:40)
+ ✔ [from..to] @done (17-01-14 09:40)
+ ✔ [from1, from2, from3] @done (17-01-14 09:40)
+ ✔ Class can define column mapper array (create reverse object to index hash from this) @done (17-01-14 21:46)
+ ✔ Column mapper function is used in column_mapper for results @done (17-01-14 21:46)
+ ✔ Row mapper will translate ActiveRecord items to indices @done (17-01-14 21:46)
+ ✔ Add column and row maps to results @done (17-01-14 21:46)
+
+ ✔ Add where queries based on column names. @done (17-01-16 09:21)
+ ✔ E.g. @done (17-01-16 09:21)
+ ✔ Iris.where(Iris.columns.sepal_length == 3).or(Iris.columns.petal_length < 3) @done (17-01-16 09:21)
+
+ ✔ Rename to active data frame @done (17-01-24 18:28)
+ ✔ Add option to name columns and rows RMatrix (For printing) @done (17-02-27 09:20)
+ ✔ Finish RMatrix @done (17-03-02 09:01)
+
+ RMatrix:
+ ✔ Ensure assignment works @done (17-03-21 09:56)
+ ✘ Raw is simply a copy of self without mappings @cancelled (17-03-21 09:56)
data/bin/console ADDED
@@ -0,0 +1,14 @@
+ #!/usr/bin/env ruby
+
+ require "bundler/setup"
+ require "active_data_frame"
+
+ # You can add fixtures and/or initialization code here to make experimenting
+ # with your gem easier. You can also use a different console, if you like.
+
+ # (If you use this, don't forget to add pry to your Gemfile!)
+ # require "pry"
+ # Pry.start
+
+ require "irb"
+ IRB.start
data/bin/setup ADDED
@@ -0,0 +1,8 @@
+ #!/usr/bin/env bash
+ set -euo pipefail
+ IFS=$'\n\t'
+ set -vx
+
+ bundle install
+
+ # Do any other automated setup that you need to do here
data/lib/active_data_frame.rb ADDED
@@ -0,0 +1,5 @@
+ require 'active_data_frame/data_frame_proxy'
+ require 'active_data_frame/table'
+ require 'active_data_frame/row'
+ require 'active_data_frame/has_data_frame'
+ require 'rmatrix'
data/lib/active_data_frame/data_frame_proxy.rb ADDED
@@ -0,0 +1,112 @@
+ module ActiveDataFrame
+   class DataFrameProxy
+     attr_accessor :block_type, :data_frame_type, :block_type_name
+     def initialize(block_type, data_frame_type)
+       self.block_type = block_type
+       self.data_frame_type = data_frame_type
+       self.block_type_name = block_type.table_name.gsub(/_blocks$/,'').gsub(/^blocks_/,'')
+     end
+
+     def [](*ranges)
+       get(extract_ranges(ranges))
+     end
+
+     def []=(from, values)
+       from = column_map[from] if column_map && column_map[from]
+       set(from, M[values, typecode: block_type::TYPECODE].to_a.flatten)
+     end
+
+     def column_map
+       data_frame_type.column_map(self.block_type_name)
+     end
+
+     def column_name_map
+       data_frame_type.column_name_map(self.block_type_name)
+     end
+
+     def reverse_column_map
+       data_frame_type.reverse_column_map(self.block_type_name)
+     end
+
+     def method_missing(name, *args, &block)
+       if column_name_map && column_map[name]
+         self[name]
+       else
+         super
+       end
+     end
+
+     def extract_ranges(ranges)
+       ranges = unmap_ranges(ranges, column_map) if column_map
+       ranges.map do |range|
+         case range
+         when Range then range
+         when Fixnum then range..range
+         else raise "Unexpected index #{range}"
+         end
+       end
+     end
+
+     def range_size
+       0
+     end
+
+     def flatten_ranges(ranges)
+     end
+
+     def unmap_ranges(ranges, map)
+       ranges.map do |range|
+         case range
+         when Range
+           first = (map[range.first] rescue nil) || range.first
+           ends = (map[range.end] rescue nil) || range.end
+           range.exclude_end? ? first...ends : first..ends
+         else map[range] || range
+         end
+       end
+     end
+
+     def get_bounds(from, to, index=0)
+       from_block_index = from / block_type::BLOCK_SIZE
+       from_block_offset = from % block_type::BLOCK_SIZE
+       to_block_index = to / block_type::BLOCK_SIZE
+       to_block_offset = to % block_type::BLOCK_SIZE
+       return Struct.new(:from, :to, :length, :index).new(
+         Struct.new(:index, :offset, :position).new(from_block_index, from_block_offset, from),
+         Struct.new(:index, :offset, :position).new(to_block_index, to_block_offset, to),
+         (to - from) + 1,
+         index
+       )
+     end
+
+     def self.suppress_logs
+       ActiveRecord::Base.logger, old_logger = nil, ActiveRecord::Base.logger
+       yield.tap do
+         ActiveRecord::Base.logger = old_logger
+       end
+     end
+
+     def iterate_bounds(all_bounds)
+       cursor = 0
+       all_bounds.each do |bounds|
+         index = bounds.from.index
+         while index <= bounds.to.index
+           left = index == bounds.from.index ? bounds.from.offset : 0
+           right = index == bounds.to.index ? bounds.to.offset : block_type::BLOCK_SIZE - 1
+           size = (right - left) + 1
+           yield index, left, right, cursor, size
+           cursor += size
+           index += 1
+         end
+       end
+     end
+
+     def blocks_between(bounds, block_scope: scope)
+       bounds[1..-1].reduce(
+         block_scope.where(block_type.table_name => { period_index: (bounds[0].from.index..bounds[0].to.index) })
+       ) do |or_chain, bound|
+         or_chain.or(block_scope.where(block_type.table_name => { period_index: (bound.from.index..bound.to.index) }))
+       end
+     end
+   end
+ end
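Worth spelling out the block arithmetic above: a frame is stored as fixed-width block rows (columns `t1..tN` plus a `period_index`), and `get_bounds` converts absolute column positions into block index/offset pairs before any SQL is built. A worked sketch, assuming the generator's default block width of 512:

```ruby
# Illustration only, assuming BLOCK_SIZE = 512 (the generator default).
BLOCK_SIZE = 512
from, to = 1000, 1535

from / BLOCK_SIZE   # => 1   block index holding column 1000
from % BLOCK_SIZE   # => 488 offset within that block (block column t489)
to / BLOCK_SIZE     # => 2
to % BLOCK_SIZE     # => 511 (block column t512)
(to - from) + 1     # => 536 values requested in total

# iterate_bounds then yields block 1 with offsets 488..511 (24 values) and
# block 2 with offsets 0..511 (512 values), advancing a cursor through the
# flat value array by each slice's size.
```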
data/lib/active_data_frame/has_data_frame.rb ADDED
@@ -0,0 +1,152 @@
+ require 'active_support/concern'
+
+
+ module ActiveDataFrame
+   class GroupProxy
+     attr_accessor :groups
+     def initialize(groups)
+       self.groups = groups
+     end
+
+     def min(column_name)
+       aggregate('minimum', column_name)
+     end
+
+     def max(column_name)
+       aggregate('maximum', column_name)
+     end
+
+     def sum(column_name)
+       aggregate('sum', column_name)
+     end
+
+     def average(column_name)
+       aggregate('average', column_name)
+     end
+
+     def count
+       aggregate('count')
+     end
+
+     private
+     def aggregate *agg
+       counts = self.groups.send(*agg)
+       grouped = {}
+       counts.each do |keys, value|
+         keys = Array(keys)
+         child = keys[0..-2].reduce(grouped){|parent, key| parent[key] ||= {}}
+         child[keys[-1]] = value
+       end
+       grouped
+     end
+   end
+
+   def self.HasDataFrame(singular_table_name, table_name, block_type)
+     to_inject = Module.new
+     to_inject.extend ActiveSupport::Concern
+     to_inject.included do
+       define_method(singular_table_name){
+         @data_frame_proxies ||= {}
+         @data_frame_proxies[singular_table_name] ||= Row.new(block_type, self.class, self)
+       }
+
+       define_method(:inspect){
+         inspection = "not initialized"
+         if defined?(@attributes) && @attributes
+           inspection = @attributes.keys.collect { |name|
+             if has_attribute?(name)
+               "#{name}: #{attribute_for_inspect(name)}"
+             end
+           }.compact.join(", ")
+         end
+         "<#{self.class} #{inspection}>"
+       }
+     end
+
+     to_inject.class_methods do
+       define_method(:df_column_names){
+         @@column_names ||= {}
+       }
+
+       define_method(:df_column_maps){
+         @@column_maps ||= {}
+       }
+
+       define_method(:df_reverse_column_maps){
+         @@reverse_column_maps ||= {}
+       }
+
+       define_method(:with_groups){|*groups|
+         GroupProxy.new(group(*groups))
+       }
+
+       define_method(table_name){
+         Table.new(block_type, all)
+       }
+
+       define_method("include_#{table_name}"){|*dimensions, unmap: true|
+         scope = self.all
+         blocks_for_tables = scope.instance_eval{ @blocks_for_tables ||= {} }
+         included_blocks = blocks_for_tables[singular_table_name] ||= {}
+         dimensions.flatten.each do |key|
+           if unmap && column_map(singular_table_name)
+             idx = column_map(singular_table_name)[key]
+           else
+             idx = key
+             key = "t#{key}"
+           end
+           block_index = idx / block_type::BLOCK_SIZE
+           block_offset = (idx % block_type::BLOCK_SIZE).succ
+           included_blocks[block_index] ||= []
+           included_blocks[block_index] << {name: key, idx: block_offset}
+         end
+         query = "(SELECT * FROM #{self.table_name} " + blocks_for_tables.reduce('') do |aggregate, (table_name, included_blocks)|
+           aggregate +
+             included_blocks.reduce('') do |aggregate, (block_idx, blocks)|
+               blocks_table_name = "#{table_name}_blocks"
+               aggregate + " LEFT JOIN(SELECT #{blocks_table_name}.data_frame_type, #{blocks_table_name}.data_frame_id, " + blocks.map{|block| "#{blocks_table_name}.t#{block[:idx]} as \"#{block[:name]}\""}.join(', ') + " FROM #{table_name}_blocks " +
+                 " WHERE #{blocks_table_name}.period_index = #{block_idx}" + ") b#{table_name}#{block_idx} ON b#{table_name}#{block_idx}.data_frame_type = '#{self.name}' AND b#{table_name}#{block_idx}.data_frame_id = #{self.table_name}.id"
+             end
+         end + ") as #{self.table_name}"
+         scope.from(query)
+       }
+
+       define_method("#{singular_table_name}_column_names") do |names|
+         df_column_names[singular_table_name] ||= {}
+         df_column_maps[singular_table_name] ||= {}
+         df_column_names[singular_table_name][self] = names
+         df_column_maps[singular_table_name][self] = names.map.with_index.to_h
+       end
+
+       define_method("#{singular_table_name}_column_map") do |column_map|
+         df_column_names[singular_table_name] = nil
+         df_column_maps[singular_table_name] ||= {}
+         df_column_maps[singular_table_name][self] = column_map
+       end
+
+       define_method("#{singular_table_name}_reverse_column_map"){|reverse_column_map|
+         df_reverse_column_maps[singular_table_name] ||= {}
+         df_reverse_column_maps[singular_table_name][self] = reverse_column_map
+       }
+
+       define_method(:include_data_blocks){|table_name, *args|
+         send("include_#{table_name}", *args)
+       }
+
+       define_method(:column_map){|table_name|
+         df_column_maps[table_name][self] if defined? df_column_maps[table_name] rescue nil
+       }
+
+       define_method(:column_name_map){|table_name|
+         df_column_names[table_name][self] if defined? df_column_names[table_name]
+       }
+
+       define_method(:reverse_column_map){|table_name|
+         df_reverse_column_maps[table_name] ||= {}
+         df_reverse_column_maps[table_name][self] ||= column_map(table_name).invert if column_map(table_name)
+       }
+     end
+
+     return to_inject
+   end
+ end
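`HasDataFrame` is a module factory rather than a plain concern: the generated `Has*` concern (see `templates/has_concern.rb` further down) includes its return value, which is what gives a model its per-record Row proxy and class-level Table proxy. A sketch with hypothetical names:

```ruby
# Sketch only; Unit, reading/readings and Blocks::ReadingBlock are hypothetical.
module HasReading
  extend ActiveSupport::Concern
  include ActiveDataFrame::HasDataFrame('reading', 'readings', Blocks::ReadingBlock)
end

class Unit < ApplicationRecord
  include HasReading
end

Unit.first.reading                                # instance side: ActiveDataFrame::Row
Unit.readings                                     # class side: ActiveDataFrame::Table over Unit.all
Unit.reading_column_names %i[voltage current]     # optional named columns
Unit.include_readings(:voltage)                   # LEFT JOINs the backing block column into the scope
```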
data/lib/active_data_frame/row.rb ADDED
@@ -0,0 +1,134 @@
+ module ActiveDataFrame
+   class Row < DataFrameProxy
+
+     attr_accessor :instance
+
+     def initialize(block_type, data_frame_type, instance)
+       super(block_type, data_frame_type)
+       self.instance = instance
+     end
+
+     def inspect
+       "#{data_frame_type.name} Row(#{instance.id})"
+     end
+
+     def set(from, values)
+       to = (from + values.length) - 1
+       bounds = get_bounds(from, to)
+
+       self.class.suppress_logs do
+         new_blocks = Hash.new do |h, k|
+           h[k] = [[0] * block_type::BLOCK_SIZE]
+         end
+
+         existing = blocks_between([bounds]).pluck(:id, :period_index, *block_type::COLUMNS).map do |id, period_index, *block_values|
+           [period_index, [block_values, id]]
+         end.to_h
+
+         iterate_bounds([bounds]) do |index, left, right, cursor, size|
+           chunk = values[cursor...cursor + size]
+           block = existing[index] || new_blocks[index]
+           block.first[left..right] = chunk.to_a
+         end
+
+         bulk_update(existing) unless existing.size.zero?
+         bulk_insert(new_blocks) unless new_blocks.size.zero?
+         values
+       end
+     end
+
+     def get(ranges)
+       all_bounds = ranges.map.with_index do |range, index|
+         get_bounds(range.first, range.exclude_end? ? range.end - 1 : range.end, index)
+       end
+
+       existing = blocks_between(all_bounds).pluck(:period_index, *block_type::COLUMNS).map{|pi, *values| [pi, values]}.to_h
+       result = M.blank(typecode: block_type::TYPECODE, columns: all_bounds.map(&:length).sum)
+
+       iterate_bounds(all_bounds) do |index, left, right, cursor, size|
+         if block = existing[index]
+           chunk = block[left..right]
+           result.narray[cursor...cursor + size] = chunk.length == 1 ? chunk.first : chunk
+         end
+       end
+
+       if column_map && !column_map.default_proc
+         total = 0
+         range_sizes = ranges.map do |range, memo|
+           last_total = total
+           total += range.size
+           [range.first, range.size, last_total]
+         end
+         index_of = ->(column){
+           selected = range_sizes.find{|start, size, total| start <= column && start + size >= column}
+           if selected
+             start, size, total = selected
+             (column - start) + total
+           else
+             nil
+           end
+         }
+         result.column_map = column_map.map do |name, column|
+           [name, index_of[column_map[name]]]
+         end.to_h
+       end
+       result
+     end
+
+     private
+     ##
+     # Update block data for all blocks in a single call
+     ##
+     def bulk_update(existing)
+       case ActiveRecord::Base.connection_config[:adapter]
+       when 'postgresql'
+         # Fast bulk update
+         updates = ''
+         existing.each do |period_index, (values, id)|
+           updates << "(#{id}, #{values.map{|v| v.inspect.gsub('"',"'") }.join(',')}),"
+         end
+         perform_update(updates)
+       else
+         ids = existing.map {|_, (_, id)| id}
+         updates = block_type::COLUMNS.map.with_index do |column, column_idx|
+           [column, "CASE period_index\n#{existing.map{|period_index, (values, id)| "WHEN #{period_index} then #{values[column_idx]}"}.join("\n")} \nEND\n"]
+         end.to_h
+         update_statement = updates.map{|cl, up| "#{cl} = #{up}" }.join(', ')
+         block_type.connection.execute("UPDATE #{block_type.table_name} SET #{update_statement} WHERE #{block_type.table_name}.id IN (#{ids.join(',')});")
+       end
+     end
+
+     ##
+     # Insert block data for all blocks in a single call
+     ##
+     def bulk_insert(new_blocks)
+       inserts = ''
+       new_blocks.each do |period_index, (values)|
+         inserts << \
+           case ActiveRecord::Base.connection_config[:adapter]
+           when 'postgresql', 'mysql2' then "(#{values.map{|v| v.inspect.gsub('"',"'") }.join(',')}, #{instance.id}, #{period_index}, '#{data_frame_type.name}', now(), now()),"
+           else "(#{values.map{|v| v.inspect.gsub('"',"'") }.join(',')}, #{instance.id}, #{period_index}, '#{data_frame_type.name}', datetime(), datetime()),"
+           end
+       end
+       perform_insert(inserts)
+     end
+
+     def perform_update(updates)
+       block_type.transaction do
+         block_type.connection.execute(
+           "UPDATE #{block_type.table_name} SET #{block_type::COLUMNS.map{|col| "#{col} = t.#{col}" }.join(", ")} FROM(VALUES #{updates[0..-2]}) as t(id, #{block_type::COLUMNS.join(',')}) WHERE #{block_type.table_name}.id = t.id"
+         )
+       end
+       true
+     end
+
+     def perform_insert(inserts)
+       sql = "INSERT INTO #{block_type.table_name} (#{block_type::COLUMNS.join(',')}, data_frame_id, period_index, data_frame_type, created_at, updated_at) VALUES #{inserts[0..-2]}"
+       block_type.connection.execute sql
+     end
+
+     def scope
+       @scope ||= block_type.where(data_frame_type: data_frame_type.name, data_frame_id: instance.id)
+     end
+   end
+ end
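`Row#set` accepts a starting column plus a flat array, splits it across the blocks it touches, and issues at most one bulk UPDATE and one bulk INSERT; `Row#get` accepts one or more ranges. Continuing the hypothetical example from earlier:

```ruby
unit = Unit.first

# 600 values starting at column 0: with a block width of 512 this touches
# period_index 0 (t1..t512) and period_index 1 (t1..t88). Blocks that already
# exist are bulk-updated, the rest are bulk-inserted.
unit.reading[0] = Array.new(600) { rand }

unit.reading[0...600]          # RMatrix vector of 600 values
unit.reading[0..9, 512..599]   # non-contiguous ranges fetched in one query
```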
data/lib/active_data_frame/table.rb ADDED
@@ -0,0 +1,253 @@
+ module ActiveDataFrame
+   class Table < DataFrameProxy
+
+     def set(from, values)
+       data_frame_type.find_each do |instance|
+         Row.new(self.block_type, self.data_frame_type, instance).set(from, values)
+       end
+     end
+
+     def inspect
+       "#{data_frame_type.name} Table"
+     end
+
+     def build_case_map(all_bounds)
+       map = block_type::COLUMNS.map{|col| [col, []]}.to_h
+
+       all_bounds.each do |bound|
+         case bound.from.index
+         when bound.to.index
+           (bound.from.offset+1..bound.to.offset+1).each do |col_idx|
+             map["t#{col_idx}"] << (bound.from.index..bound.from.index)
+           end
+         else
+           (bound.from.offset+1..block_type::COLUMNS.size).each do |col_idx|
+             map["t#{col_idx}"] << (bound.from.index..bound.from.index)
+           end
+           (1..block_type::COLUMNS.size).each do |col_idx|
+             map["t#{col_idx}"] << (bound.from.index.succ..bound.to.index-1)
+           end if bound.from.index.succ != bound.to.index
+           (1..bound.to.offset+1).each do |col_idx|
+             map["t#{col_idx}"] << (bound.to.index..bound.to.index)
+           end
+         end
+       end
+       map
+     end
+
+     def column_cases(cases, agg=nil)
+       block_type::COLUMNS.map do |col|
+         col_cases = cases[col].sort_by(&:begin).reduce([]) do |agg, col_case|
+           if agg.empty?
+             agg << col_case
+             agg
+           else
+             if agg[-1].end.succ == col_case.begin
+               agg[-1] = (agg[-1].begin..col_case.end)
+             else
+               agg << col_case
+             end
+             agg
+           end
+         end
+
+         if agg
+           case col_cases.length
+           when 0 then "NULL as #{col}"
+           else
+             case_str = col_cases.map do |match|
+               case
+               when match.begin == match.end then "period_index = #{match.begin}"
+               else "period_index BETWEEN #{match.begin} AND #{match.end}"
+               end
+             end.join(" OR ")
+             "CASE WHEN #{case_str} THEN #{agg}(#{col}) ELSE NULL END"
+           end
+         else
+           case col_cases.length
+           when 0 then "NULL as #{col}"
+           else
+             case_str = col_cases.map do |match|
+               case
+               when match.begin == match.end then "period_index = #{match.begin}"
+               else "period_index BETWEEN #{match.begin} AND #{match.end}"
+               end
+             end.join(" OR ")
+             "CASE WHEN #{case_str} THEN #{col} ELSE NULL END"
+           end
+         end
+       end
+     end
+
+     def get(ranges)
+       ranges = extract_ranges(ranges)
+       all_bounds = ranges.map.with_index do |range, index|
+         get_bounds(range.first, range.exclude_end? ? range.end - 1 : range.end, index)
+       end
+
+       case_map = build_case_map(all_bounds)
+
+       existing_blocks = Hash.new{|h, index| h[index] = {}}
+
+       index_map = {}
+       res = ActiveRecord::Base.transaction do
+         ids = data_frame_type.pluck(:id)
+
+         as_sql = blocks_between(
+           all_bounds,
+           block_scope: data_frame_type.unscoped
+             .joins("LEFT JOIN #{block_type.table_name} ON #{data_frame_type.table_name}.id = #{block_type.table_name}.data_frame_id")
+             .joins("RIGHT JOIN (#{data_frame_type.select(:id).to_sql}) as ref ON ref.id = #{block_type.table_name}.data_frame_id")
+
+         ).where(
+           block_type.table_name => {data_frame_type: data_frame_type.name }
+         ).select(:period_index, :data_frame_id, *column_cases(case_map)).to_sql
+
+         index_map = ids.each_with_index.to_h
+         ActiveRecord::Base.connection.execute(as_sql)
+       end
+
+       res.each_row do |pi, data_frame_id, *values|
+         existing_blocks[pi][data_frame_id] = values
+       end
+
+       result = M.blank(typecode: block_type::TYPECODE, columns: all_bounds.map(&:length).sum, rows: index_map.size)
+       iterate_bounds(all_bounds) do |index, left, right, cursor, size|
+         if blocks = existing_blocks[index]
+           blocks.each do |data_frame_id, block|
+             row = index_map[data_frame_id]
+             next unless row
+             chunk = block[left..right]
+             result.narray[cursor...cursor + size, row] = chunk
+           end
+         end
+       end
+       if column_map && !column_map.default_proc
+         total = 0
+         range_sizes = ranges.map do |range, memo|
+           last_total = total
+           total += range.size
+           [range.first, range.size, last_total]
+         end
+         index_of = ->(column){
+           selected = range_sizes.find{|start, size, total| start <= column && start + size >= column}
+           if selected
+             start, size, total = selected
+             (column - start) + total
+           else
+             nil
+           end
+         }
+         result.column_map = column_map.map do |name, column|
+           [name, index_of[column_map[name]]]
+         end.to_h
+       end
+       result.row_map = Hash.new do |h, k|
+         h[k] = begin
+           case k
+           when ActiveRecord::Base then index_map[k.id]
+           when ActiveRecord::Relation then k.pluck(:id).map{|i| index_map[i] }
+           when ->(list){ list.kind_of?(Array) && list.all?{|v| v.kind_of?(ActiveRecord::Base)}} then k.map{|i| index_map[i.id] }
+           when Numeric then index_map[k]
+           end
+         end
+       end
+       result
+     end
+
+     def idx_where_sum_gte(*ranges, max)
+       select_agg_indices(extract_ranges(ranges), 'SUM', ->(x, y){ x <= y }, 'SUM(%) > :max', max: max)
+     end
+
+     def idx_where_sum_lte(*ranges, min)
+       select_agg_indices(extract_ranges(ranges), 'SUM', ->(x, y){ x >= y }, 'SUM(%) < :min', min: min)
+     end
+
+     def AggregateProxy(agg)
+       proxy = Object.new
+       aggregate, extract_ranges = method(:aggregate), method(:extract_ranges)
+       proxy.define_singleton_method(:[]) do |*ranges|
+         aggregate[extract_ranges[ranges], agg]
+       end
+       proxy
+     end
+
+     def avg
+       @avg ||= AggregateProxy('AVG')
+     end
+
+     def sum
+       @sum ||= AggregateProxy('SUM')
+     end
+
+     def max
+       @max ||= AggregateProxy('MAX')
+     end
+
+     def min
+       @min ||= AggregateProxy('MIN')
+     end
+
+     private
+
+     def scope
+       @scope ||= block_type.where(data_frame_type: data_frame_type.name, data_frame_id: data_frame_type.select(:id))
+     end
+
+     def select_agg_indices(ranges, agg, filter, condition, **args)
+       all_bounds = ranges.map.with_index do |range, index|
+         get_bounds(range.first, range.exclude_end? ? range.end - 1 : range.end, index)
+       end
+       existing = blocks_between(all_bounds)
+         .group(:period_index)
+         .having(
+           block_type::COLUMNS.map do |cl|
+             condition.gsub('%', cl)
+           end.join(" OR "),
+           **args
+         )
+         .pluck(
+           :period_index,
+           *block_type::COLUMNS.map do |cl|
+             "#{agg}(#{cl}) as #{cl}"
+           end
+         )
+         .map{|pi, *values| [pi, values]}.to_h
+       indices = existing.flat_map do |period_index, *values|
+         index = block_type::BLOCK_SIZE * period_index - 1
+         M[values, typecode: block_type::TYPECODE].mask{|x|
+           index += 1
+           !all_bounds.any?{|b| (b.from.position..b.to.position).include?(index) } || filter[x, args.values.first]
+         }.where.to_a.map{|v| block_type::BLOCK_SIZE * period_index + v}.to_a
+       end
+
+       if column_map
+         indices.map{|i| reverse_column_map[i.to_i] || i.to_i }
+       else
+         indices
+       end
+     end
+
+     def aggregate(ranges, agg)
+       all_bounds = ranges.map.with_index do |range, index|
+         get_bounds(range.first, range.exclude_end? ? range.end - 1 : range.end, index)
+       end
+
+       case_map = build_case_map(all_bounds)
+       existing = blocks_between(all_bounds)
+         .group(:period_index)
+         .pluck(:period_index, *column_cases(case_map, agg))
+         .map{|pi, *values| [pi, values]}.to_h
+       result = M.blank(columns: all_bounds.map(&:length).sum, typecode: block_type::TYPECODE)
+
+       iterate_bounds(all_bounds) do |index, left, right, cursor, size|
+         if block = existing[index]
+           chunk = block[left..right]
+           result.narray[cursor...cursor + size] = chunk.length == 1 ? chunk.first : chunk
+         end
+       end
+       result.column_map = column_map if column_map
+       result
+     end
+   end
+ end
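The Table proxy answers the same kinds of reads for every record in the scope at once, and its aggregate proxies push AVG/SUM/MAX/MIN into SQL `CASE` expressions grouped by `period_index`. A sketch, again with the hypothetical names used above:

```ruby
table = Unit.readings          # Table proxy over Unit.all

table[0...24]                  # matrix: one row per Unit, 24 columns
table.avg[0...24]              # 24 per-column averages across the scope
table.sum[0...24]

# Column indices whose SUM across the scope clears a threshold:
table.idx_where_sum_gte(0...24, 100.0)
```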
data/lib/active_data_frame/version.rb ADDED
@@ -0,0 +1,3 @@
+ module ActiveDataFrame
+   VERSION = "0.1.1"
+ end
data/lib/generators/active_data_frame/install_generator.rb ADDED
@@ -0,0 +1,101 @@
+ require 'rails/generators/active_record'
+
+ module ActiveDataFrame
+   class InstallGenerator < ActiveRecord::Generators::Base
+     desc "Generates a new data_frame type"
+
+     STREAM_TYPES = %w(bit byte int long float double)
+     # Commandline options can be defined here using Thor-like options:
+     argument :type, :type => :string, :default => 'float', :desc => "DataFrame type. One of(#{STREAM_TYPES*" ,"})"
+     argument :columns, :type => :numeric, :default => 512, :desc => "Number of columns"
+     argument :inject, type: :array, default: []
+
+     def self.source_root
+       @source_root ||= File.join(File.dirname(__FILE__), 'templates')
+     end
+
+     def generate_model
+       invoke "active_record:model", ["blocks/#{singular_block_table_name}"], migration: false
+     end
+
+     def block_type
+       "#{singular_table_name}_block".camelize
+     end
+
+     def block_table_name
+       "#{singular_table_name}_blocks"
+     end
+
+     def singular_block_table_name
+       "#{singular_table_name}_block"
+     end
+
+     def concern_name
+       "Has#{singular_table_name.camelize}"
+     end
+
+     def concern_file_name
+       "has_#{singular_table_name}"
+     end
+
+     def inject_concern_content
+       inject.each do |inject_into|
+         content = " include #{concern_name}\n"
+         class_name = inject_into.camelize
+         inject_into_class(self.class.path_for_model(inject_into), class_name, content) if self.class.model_exists?(inject_into, destination_root)
+       end
+     end
+
+     def inject_data_frame_helpers
+       content = \
+ <<RUBY
+ BLOCK_SIZE = #{columns}
+ COLUMNS = %w(#{columns.times.map{|i| "t#{i+1}" }.join(" ")})
+ TYPECODE = M::Typecode::FLOAT
+ self.table_name = '#{block_table_name}'
+ RUBY
+       class_name = "Blocks::#{singular_block_table_name.camelize}"
+       inject_into_class(self.class.path_for_model(singular_block_table_name), class_name, content) if self.class.model_exists?(singular_block_table_name, destination_root)
+     end
+
+     def copy_concern
+       template "has_concern.rb", "app/models/concerns/#{concern_file_name}.rb"
+     end
+
+     def self.path_for_model(model)
+       File.join("app", "models", "blocks", "#{model.underscore}.rb")
+     end
+
+     def self.model_exists?(model, destination_root)
+       File.exist?(File.join(destination_root, self.path_for_model(model)))
+     end
+
+     def copy_migration
+       migration_template "migration.rb", "db/migrate/active_data_frame_create_#{table_name}.rb", migration_version: migration_version
+     end
+
+     def migration_data
+       <<RUBY
+ t.integer :data_frame_id, index: true
+ t.string :data_frame_type, index: true
+ t.integer :period_index, index: true
+ #{
+   columns.times.map do |i|
+     " t.#{type} :t#{i+1}"
+   end.join("\n")
+ }
+ RUBY
+     end
+
+     def migration_version
+       if rails5?
+         "[#{Rails::VERSION::MAJOR}.#{Rails::VERSION::MINOR}]"
+       end
+     end
+
+     def rails5?
+       Rails.version.start_with? '5'
+     end
+
+   end
+ end
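The generator above is the entry point: it creates a block model under `app/models/blocks/`, a `Has*` concern and a wide migration. A hypothetical invocation, following the `type`, `columns` and `inject` arguments it declares (the exact output paths are the generator's defaults, not documented elsewhere in this release):

```ruby
#   $ rails generate active_data_frame:install reading float 512 unit
#
# Expected results, roughly:
#   app/models/blocks/reading_block.rb   # Blocks::ReadingBlock, with BLOCK_SIZE,
#                                        # COLUMNS t1..t512 and TYPECODE injected
#   app/models/concerns/has_reading.rb   # HasReading (from has_concern.rb below)
#   db/migrate/*_active_data_frame_create_readings.rb
#   plus `include HasReading` injected into the listed models when found
```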
data/lib/generators/active_data_frame/templates/has_concern.rb ADDED
@@ -0,0 +1,6 @@
+ require 'active_support/concern'
+
+ module <%= concern_name %>
+   extend ActiveSupport::Concern
+   include ActiveDataFrame::HasDataFrame('<%= singular_table_name %>', '<%= table_name %>', Blocks::<%= block_type %>)
+ end
data/lib/generators/active_data_frame/templates/migration.rb ADDED
@@ -0,0 +1,11 @@
+ class ActiveDataFrameCreate<%= table_name.camelize %> < ActiveRecord::Migration<%= migration_version %>
+   def change
+     create_table :<%= block_table_name %> do |t|
+       <%= migration_data -%>
+       t.timestamps null: false
+     end
+
+
+     add_index :<%= block_table_name %>, [:data_frame_type, :data_frame_id, :period_index], :unique => true, name: 'index_<%= block_table_name %>_on_type_id_and_index'
+   end
+ end
metadata ADDED
@@ -0,0 +1,163 @@
+ --- !ruby/object:Gem::Specification
+ name: active_data_frame
+ version: !ruby/object:Gem::Version
+   version: 0.1.1
+ platform: ruby
+ authors:
+ - Wouter Coppieters
+ autorequire:
+ bindir: exe
+ cert_chain: []
+ date: 2017-08-01 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: bundler
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.13'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.13'
+ - !ruby/object:Gem::Dependency
+   name: rake
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '10.0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '10.0'
+ - !ruby/object:Gem::Dependency
+   name: pry-byebug
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 3.4.0
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 3.4.0
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 3.4.0
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 3.4.0
+ - !ruby/object:Gem::Dependency
+   name: pry
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.10.2
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.10.0
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.10.2
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.10.0
+ - !ruby/object:Gem::Dependency
+   name: activerecord
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 5.0.0
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 5.0.0
+ - !ruby/object:Gem::Dependency
+   name: rmatrix
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.1.10
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.1.10
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: 0.1.10
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 0.1.10
+ description: An active data frame helper
+ email:
+ - wc@pico.net.nz
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".gitignore"
+ - CODE_OF_CONDUCT.md
+ - Gemfile
+ - README.md
+ - Rakefile
+ - active_data_frame.gemspec
+ - active_data_frame.todo
+ - bin/console
+ - bin/setup
+ - lib/active_data_frame.rb
+ - lib/active_data_frame/data_frame_proxy.rb
+ - lib/active_data_frame/has_data_frame.rb
+ - lib/active_data_frame/row.rb
+ - lib/active_data_frame/table.rb
+ - lib/active_data_frame/version.rb
+ - lib/generators/active_data_frame/install_generator.rb
+ - lib/generators/active_data_frame/templates/has_concern.rb
+ - lib/generators/active_data_frame/templates/migration.rb
+ homepage:
+ licenses: []
+ metadata: {}
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.5.1
+ signing_key:
+ specification_version: 4
+ summary: An active data frame helper
+ test_files: []