brian 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/.gitignore ADDED
@@ -0,0 +1,17 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in brian.gemspec
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,46 @@
+ This project is licensed as follows under the MIT license (Expat):
+
+ Copyright (c) 2012 Adam Watkins
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ The software is a derivative work of 'brain.js',
+ also licensed as follows under the MIT license (Expat):
+ Copyright (c) 2010 Heather Arthur
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
data/README.md ADDED
@@ -0,0 +1,94 @@
+ # brian
+
+ `brian` is a Ruby port of a [neural network](http://en.wikipedia.org/wiki/Artificial_neural_network) library, implementing a [multilayer perceptron](http://en.wikipedia.org/wiki/Multilayer_perceptron). It is based on the JavaScript library `brain.js`, which can be found [here](https://github.com/harthur/brain).
+
+ Here's an example of using it to approximate the XOR function:
+ ```ruby
+ net = Brian::NeuralNetwork.new
+
+ net.train([{input: [0, 0], output: [0]},
+            {input: [0, 1], output: [1]},
+            {input: [1, 0], output: [1]},
+            {input: [1, 1], output: [0]}])
+
+ output = net.run([1, 0]) # => [0.931]
+ ```
+
+ The author of the original JavaScript library provides a more involved, realistic example of using a perceptron:
+ [Demo: training a neural network to recognize color contrast](http://harthur.github.com/brain/)
+
+
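Outputs are sigmoid activations, so they fall strictly between `0` and `1`; to get a hard `0`/`1` classification you can round them, as the gem's own specs do. A minimal sketch, reusing the `net` trained above:

```ruby
net.run([1, 0]).map {|x| x.round} # => [1]
```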
+ # Training
+ Use `train()` to train the network with an array of training data. The network has to be trained with all the data in bulk, in a single call to `train()`. The more training patterns there are, the longer training takes, but the better the network becomes at classifying new patterns.
+
+ ## Data format
+ Each training pattern should have an `input` and an `output`, both of which can be either an array of numbers from `0` to `1` or a hash of numbers from `0` to `1`. For a Ruby port of the [color contrast demo](http://harthur.github.com/brain/) it would look something like this:
+
+ ```ruby
+ net = Brian::NeuralNetwork.new
+
+ net.train([{input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1.0 }},
+            {input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1.0 }},
+            {input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1.0 }}])
+
+ output = net.run({ r: 1, g: 0.4, b: 0 }) # => {:black=>0.024, :white=>0.976}
+ ```
+
+ ## Options
+ `train()` takes a hash of options as its second argument:
+
+ ```ruby
+ net.train(data, {
+   error_thresh: 0.004, # error threshold to reach
+   iterations: 20000,   # maximum training iterations
+   log: true,           # puts progress periodically
+   log_period: 10       # number of iterations between logging
+ })
+ ```
+
+ The network will train until the training error has gone below the threshold (default `0.005`) or the maximum number of iterations (default `20000`) has been reached, whichever comes first.
+
+ By default training won't let you know how it's doing until the end, but set `log` to `true` to get periodic updates on the current training error of the network. The training error should generally decrease with each update.
+
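The implementation (in `lib/brian/neural_network.rb` below) also accepts a `callback` option that the README above doesn't mention: a proc called every `callback_period` iterations (default `10`) with the current `:error` and `:iterations`. A minimal sketch:

```ruby
net.train(data, {
  callback: lambda {|status| puts "#{status[:iterations]}: #{status[:error]}" },
  callback_period: 100  # call the proc every 100 iterations
})
```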
+ ## Output
+ The output of `train()` is a hash of information about how the training went:
+
+ ```ruby
+ {
+   error: 0.0039139985510105032, # training error
+   iterations: 406               # training iterations
+ }
+ ```
+
+ # Serialisation
+
+ The state of a trained network can be stored with `#to_hash` and restored with `::new_with_hash`:
+
+ ```ruby
+ net = Brian::NeuralNetwork.new
+ net.train(data)
+
+ saved_state = net.to_hash
+
+ #...
+
+ net = Brian::NeuralNetwork.new_with_hash(saved_state)
+ ```
+
+ ## JSON
+
+ Calling `#to_json` on the hash produced by `#to_hash` should produce JSON compatible with [the original JavaScript library](https://github.com/harthur/brain):
+
+ ```ruby
+ require 'json'
+
+ net = Brian::NeuralNetwork.new
+ net.train(data)
+
+ net_json = net.to_hash.to_json # => "{\"layers\":...
+ ```
+
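The reverse direction is not documented, but for networks trained on hash-style (named) inputs and outputs, parsing the JSON back with symbolised keys and handing it to `::new_with_hash` should reconstruct the network, since the node names survive the round trip as symbols. A sketch under that assumption (networks trained on plain arrays serialise their integer node indices as JSON strings, so those keys would need converting back first):

```ruby
require 'json'

restored = Brian::NeuralNetwork.new_with_hash(
  JSON.parse(net_json, symbolize_names: true))
restored.run({ r: 1, g: 0.4, b: 0 })
```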
+ # Licensing
+
+ In keeping with the licensing of the JavaScript `brain.js` library, this gem is licensed under the MIT license (Expat).
data/Rakefile ADDED
@@ -0,0 +1,2 @@
+ #!/usr/bin/env rake
+ require "bundler/gem_tasks"
data/brian.gemspec ADDED
@@ -0,0 +1,20 @@
+ # -*- encoding: utf-8 -*-
+ require File.expand_path('../lib/brian/version', __FILE__)
+
+ Gem::Specification.new do |gem|
+   gem.authors       = ["Adam Watkins"]
+   gem.email         = ["adam@stupidpupil.co.uk"]
+   gem.description   = %q{A port of the brain.js library, implementing a multilayer perceptron - a neural network for supervised learning.}
+   gem.summary       = %q{Multilayer perceptron (neural network) library.}
+   gem.homepage      = ""
+
+   gem.files         = `git ls-files`.split($\)
+   gem.executables   = gem.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
+   gem.test_files    = gem.files.grep(%r{^(test|spec|features)/})
+   gem.name          = "brian"
+   gem.require_paths = ["lib"]
+   gem.version       = Brian::VERSION
+   gem.license       = "MIT"
+
+   gem.add_development_dependency "rspec", "~> 2.11"
+ end
data/lib/brian.rb ADDED
@@ -0,0 +1,4 @@
+ require "brian/version"
+ require "brian/lookup"
+ require "brian/neural_network"
+ require "brian/hash"
data/lib/brian/hash.rb ADDED
@@ -0,0 +1,77 @@
+ module Brian
+   class NeuralNetwork
+
+     # Serialise the network's weights and biases to a brain.js-compatible hash.
+     def to_hash
+       layers = []
+       @sizes.count.times do |layer|
+         layers[layer] = {}
+
+         if layer == 0 and @input_lookup
+           nodes = @input_lookup.keys
+         elsif (layer == @output_layer) and @output_lookup
+           nodes = @output_lookup.keys
+         else
+           nodes = (0..@sizes[layer]-1).to_a
+         end
+
+         nodes.each_with_index do |node,j|
+           layers[layer][node] = {}
+
+           next if layer == 0 # the input layer has no biases or weights
+           layers[layer][node][:bias] = @biases[layer][j]
+
+           layers[layer][node][:weights] = {}
+
+           layers[layer-1].keys.each do |k|
+             index = k
+             index = @input_lookup[k] if (layer == 1) and @input_lookup
+
+             layers[layer][node][:weights][k] = @weights[layer][j][index]
+           end
+         end
+       end
+
+       return {layers:layers}
+     end
+
+     # Rebuild a network from a hash produced by #to_hash.
+     def self.new_with_hash(hash)
+       net = NeuralNetwork.new
+
+       net.instance_eval do
+         size = hash[:layers].count
+         @output_layer = size - 1
+
+         @sizes = Array.new(size)
+         @weights = Array.new(size)
+         @biases = Array.new(size)
+         @outputs = Array.new(size)
+
+         hash[:layers].each_with_index do |layer, i|
+           # Layers keyed by names rather than integer indices get lookups.
+           if i == 0 and layer[0].nil?
+             @input_lookup = Brian::Lookup.lookup_from_hash(layer)
+           end
+
+           if i == @output_layer and layer[0].nil?
+             @output_lookup = Brian::Lookup.lookup_from_hash(layer)
+           end
+
+           nodes = layer.keys
+
+           @sizes[i] = nodes.count
+           @weights[i] = []
+           @biases[i] = []
+           @outputs[i] = []
+
+           nodes.each_with_index do |node, j|
+             @biases[i][j] = layer[node][:bias]
+             @weights[i][j] = layer[node][:weights].nil? ? nil : layer[node][:weights].values
+           end
+         end
+
+       end
+       return net
+     end
+
+   end
+ end
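For reference, for the colour-contrast network from the README, `to_hash` produces a structure along these lines (the bias and weight values here are illustrative, and the default hidden layer would have three nodes, abridged to two):

```ruby
{layers: [
  {r: {}, g: {}, b: {}},                 # input layer: node names only
  {0 => {bias: -0.18, weights: {r: 0.07, g: -0.12, b: 0.03}},
   1 => {bias: 0.11,  weights: {r: -0.02, g: 0.19, b: -0.08}}},
  {black: {bias: 0.05,  weights: {0 => 0.14,  1 => -0.09}},
   white: {bias: -0.03, weights: {0 => -0.11, 1 => 0.16}}}
]}
```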
data/lib/brian/lookup.rb ADDED
@@ -0,0 +1,43 @@
+ module Brian
+   # Maps between named (hash) inputs/outputs and the array indices
+   # used internally by the network.
+   module Lookup
+     def self.build_lookup(hashes)
+       hash = hashes.reduce do |memo, hash|
+         memo.merge(hash)
+       end
+
+       return Brian::Lookup.lookup_from_hash(hash)
+     end
+
+     def self.lookup_from_hash(hash)
+       lookup = {}
+       index = 0
+
+       hash.keys.each do |k|
+         lookup[k] = index
+         index += 1
+       end
+
+       return lookup
+     end
+
+     # Convert a hash of values to an array; missing keys default to 0.
+     def self.to_array(lookup, hash)
+       array = []
+
+       lookup.each_pair do |k,v|
+         array[v] = hash.has_key?(k) ? hash[k] : 0
+       end
+
+       return array
+     end
+
+     def self.to_hash(lookup, array)
+       hash = {}
+
+       lookup.each_pair do |k,v|
+         hash[k] = array[v]
+       end
+
+       return hash
+     end
+   end
+ end
data/lib/brian/neural_network.rb ADDED
@@ -0,0 +1,223 @@
+ module Brian
+   class NeuralNetwork
+
+     # Initial weights are drawn uniformly from (-0.2, 0.2).
+     def self.random_weight
+       rand()*0.4 - 0.2
+     end
+
+     # Mean squared error.
+     def self.mse(errors)
+       errors.map {|e| e**2}.inject(:+)/errors.length
+     end
+
+     # Logistic sigmoid.
+     def self.activation_function(sum)
+       1.0 / (1.0 + Math.exp(-sum))
+     end
+
+     def initialize
+       @learning_rate = 0.3
+       @momentum = 0.1
+     end
+
+
+     def initialize_layers(sizes)
+       @sizes = sizes
+       @output_layer = @sizes.length - 1
+
+       @biases = []
+       @weights = []
+       @outputs = []
+
+       @deltas = []
+       @changes = []
+       @errors = []
+
+       @sizes.length.times do |layer|
+         size = @sizes[layer]
+
+         @deltas[layer] = Array.new(size) {0}
+         @errors[layer] = Array.new(size) {0}
+         @outputs[layer] = Array.new(size) {0}
+
+         next if layer == 0 # the input layer has no weights or biases
+
+         @biases[layer] = Array.new(size) {NeuralNetwork.random_weight}
+         @weights[layer] = Array.new(size)
+         @changes[layer] = Array.new(size)
+
+         size.times do |node|
+           prev_size = @sizes[layer - 1]
+           @weights[layer][node] = Array.new(prev_size) {NeuralNetwork.random_weight}
+           @changes[layer][node] = Array.new(prev_size) {0}
+         end
+       end
+     end
+
+     def run(input)
+       input = Brian::Lookup.to_array(@input_lookup, input) if @input_lookup
+
+       output = self.run_input(input)
+
+       output = Brian::Lookup.to_hash(@output_lookup, output) if @output_lookup
+
+       return output
+     end
+
+     # Forward pass: feed the input through each layer in turn.
+     def run_input(input)
+       @outputs[0] = input
+
+       @sizes.count.times do |layer|
+         next if layer == 0
+         @sizes[layer].times do |node|
+           weights = @weights[layer][node]
+           sum = @biases[layer][node]
+
+           weights.each_with_index {|w,k| sum += w*input[k]}
+
+           @outputs[layer][node] = NeuralNetwork.activation_function(sum)
+         end
+
+         input = @outputs[layer]
+       end
+
+       return @outputs[@output_layer]
+     end
+
+     # Replace hash-style inputs/outputs with arrays, building lookups as needed.
+     def format_data(data)
+       if not data[0][:input].is_a?(Array)
+         if @input_lookup.nil?
+           inputs = data.map {|d| d[:input]}
+           @input_lookup = Brian::Lookup.build_lookup(inputs)
+         end
+
+         data.each do |d|
+           d[:input] = Brian::Lookup.to_array(@input_lookup,d[:input])
+         end
+       end
+
+       if not data[0][:output].is_a?(Array)
+         if @output_lookup.nil?
+           outputs = data.map {|d| d[:output]}
+           @output_lookup = Brian::Lookup.build_lookup(outputs)
+         end
+
+         data.each do |d|
+           d[:output] = Brian::Lookup.to_array(@output_lookup,d[:output])
+         end
+       end
+
+       return data
+     end
+
+     def train(data, options = {})
+       data = self.format_data(data)
+
+       options = ({
+         iterations:20000,
+         error_thresh:0.005,
+         log:false,
+         log_period:10,
+         callback_period:10
+       }).merge(options)
+
+       input_size = data[0][:input].size
+       output_size = data[0][:output].size
+
+       hidden_sizes = @hidden_sizes
+
+       # Default to a single hidden layer of max(3, input_size/2) nodes.
+       if hidden_sizes.nil?
+         hidden_sizes = [[3,(input_size.to_f/2).floor].max]
+       end
+
+       sizes = [input_size,hidden_sizes,output_size].flatten
+       self.initialize_layers(sizes)
+
+       error = 1
+
+       iterations = 0
+       options[:iterations].times do |i|
+         sum = 0
+         iterations = i
+         data.each do |d|
+           err = self.train_pattern(d[:input],d[:output])
+           sum += err
+         end
+
+         error = sum/data.count
+
+         if options[:log] and (i % options[:log_period] == 0)
+           puts "iterations:#{i} training_error:#{error}"
+         end
+
+         if options[:callback] and (i % options[:callback_period] == 0)
+           options[:callback].call({error:error, iterations:i})
+         end
+
+         break if error <= options[:error_thresh]
+       end
+
+       return {error:error, iterations:iterations}
+     end
+
+
+     def train_pattern(input, target)
+       # Forward propagate
+       self.run_input(input)
+
+       # Back propagate
+       self.calculate_deltas(target)
+       self.adjust_weights()
+
+       error = Brian::NeuralNetwork.mse(@errors[@output_layer])
+
+       return error
+     end
+
+     # Work backwards from the output layer, computing each node's error
+     # and its delta (the error scaled by the sigmoid derivative).
+     def calculate_deltas(target)
+       @sizes.length.times do |layer|
+         layer = -(layer+1) # iterate from the output layer backwards
+         @sizes[layer].times do |node|
+           output = @outputs[layer][node]
+           error = 0
+
+           if layer == -1 # output layer
+             error = (target[node] - output).to_f
+           else
+             deltas = @deltas[layer+1]
+             deltas.each_with_index do |d,k|
+               error += d * @weights[layer+1][k][node]
+             end
+           end
+
+           @errors[layer][node] = error
+           @deltas[layer][node] = error*output*(1.0-output)
+         end
+       end
+     end
+
+     # Gradient descent with momentum.
+     def adjust_weights
+       @sizes.count.times do |layer|
+         next if layer == 0
+         incoming = @outputs[layer-1]
+
+         @sizes[layer].times do |node|
+           delta = @deltas[layer][node]
+
+           incoming.each_with_index do |i,k|
+             change = @changes[layer][node][k]
+
+             change *= @momentum
+             change += @learning_rate * delta * i
+
+             @changes[layer][node][k] = change
+             @weights[layer][node][k] += change
+           end
+
+           @biases[layer][node] += @learning_rate * delta
+         end
+       end
+     end
+
+   end
+ end
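To make the update rule concrete: `calculate_deltas` sets each output node's delta to `(target - output) * output * (1 - output)`, hidden deltas back-propagate the weighted deltas of the layer above, and `adjust_weights` then applies gradient descent with momentum. A worked one-weight example using the constructor's defaults (the input values are illustrative):

```ruby
learning_rate = 0.3  # @learning_rate default from NeuralNetwork#initialize
momentum      = 0.1  # @momentum default from NeuralNetwork#initialize

# One weight update, as applied in adjust_weights:
old_change, delta, incoming_output = 0.05, 0.2, 0.9
change = momentum * old_change + learning_rate * delta * incoming_output
# => 0.005 + 0.054 ≈ 0.059; the weight is incremented by this change,
# and the node's bias by learning_rate * delta
```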
data/lib/brian/version.rb ADDED
@@ -0,0 +1,3 @@
+ module Brian
+   VERSION = "0.1.0"
+ end
data/spec/hash_spec.rb ADDED
@@ -0,0 +1,40 @@
+ require 'brian'
+
+ describe Brian::NeuralNetwork do
+   describe "Can store and retrieve XOR-network" do
+     before do
+       @net = Brian::NeuralNetwork.new
+       @net.train([
+         {input: [0, 0], output: [0]},
+         {input: [0, 1], output: [1]},
+         {input: [1, 0], output: [1]},
+         {input: [1, 1], output: [0]}])
+     end
+
+     it "Produces identical output" do
+       net2 = Brian::NeuralNetwork.new_with_hash(@net.to_hash)
+
+       input = [1,0]
+       @net.run(input).should eql net2.run(input)
+     end
+   end
+
+   describe "Can store and retrieve the Colour Contrast-network" do
+     before do
+       @net = Brian::NeuralNetwork.new
+       @net.train([
+         {input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1.0 }},
+         {input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1.0 }},
+         {input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1.0 }}])
+     end
+
+     it "Produces identical output" do
+       net2 = Brian::NeuralNetwork.new_with_hash(@net.to_hash)
+
+       input = { r: 1, g: 0.4, b: 0 }
+       @net.run(input).should eql net2.run(input)
+     end
+
+   end
+ end
data/spec/lookup_spec.rb ADDED
@@ -0,0 +1,19 @@
+ require 'brian'
+
+ describe Brian::Lookup do
+   it "Build Lookup" do
+     Brian::Lookup.build_lookup([{a: 1}, {b: 6, c: 7}]).should eql({a:0, b:1, c:2})
+   end
+
+   it "Lookup from Hash" do
+     Brian::Lookup.lookup_from_hash({a: 6, b: 7}).should eql({a:0, b:1})
+   end
+
+   it "Hash to Array" do
+     Brian::Lookup.to_array({a: 0, b: 1}, {a: 6}).should eql([6,0])
+   end
+
+   it "Array to Hash" do
+     Brian::Lookup.to_hash({a: 0, b: 1}, [6, 7]).should eql({a: 6, b: 7})
+   end
+ end
data/spec/neural_network_spec.rb ADDED
@@ -0,0 +1,47 @@
+ require 'brian'
+
+ describe Brian::NeuralNetwork do
+   describe "Can learn XOR" do
+
+     before do
+       @net = Brian::NeuralNetwork.new
+       @net.train([
+         {input: [0, 0], output: [0]},
+         {input: [0, 1], output: [1]},
+         {input: [1, 0], output: [1]},
+         {input: [1, 1], output: [0]}])
+     end
+
+     it "[1,0] => [1]" do
+       @net.run([1, 0]).map {|x| x.round}.should eql([1])
+     end
+
+     it "[1,1] => [0]" do
+       @net.run([1, 1]).map {|x| x.round}.should eql([0])
+     end
+
+     it "[0,0] => [0]" do
+       @net.run([0, 0]).map {|x| x.round}.should eql([0])
+     end
+   end
+
+   describe "Can learn the Colour Contrast demo" do
+
+     before do
+       @net = Brian::NeuralNetwork.new
+       @net.train([
+         {input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1.0 }},
+         {input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1.0 }},
+         {input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1.0 }}])
+     end
+
+     it "{r:1,g:0.4,b:0} => White" do
+       result = @net.run({ r: 1, g: 0.4, b: 0 })
+       result[:white].round.should eql(1)
+       result[:black].round.should eql(0)
+     end
+
+   end
+
+ end
metadata ADDED
@@ -0,0 +1,97 @@
+ --- !ruby/object:Gem::Specification
+ name: brian
+ version: !ruby/object:Gem::Version
+   hash: 27
+   prerelease: false
+   segments:
+   - 0
+   - 1
+   - 0
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Adam Watkins
+ autorequire:
+ bindir: bin
+ cert_chain: []
+
+ date: 2012-09-23 00:00:00 +01:00
+ default_executable:
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: rspec
+   prerelease: false
+   requirement: &id001 !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ~>
+       - !ruby/object:Gem::Version
+         hash: 21
+         segments:
+         - 2
+         - 11
+         version: "2.11"
+   type: :development
+   version_requirements: *id001
+ description: A port of the brain.js library, implementing a multilayer perceptron - a neural network for supervised learning.
+ email:
+ - adam@stupidpupil.co.uk
+ executables: []
+
+ extensions: []
+
+ extra_rdoc_files: []
+
+ files:
+ - .gitignore
+ - Gemfile
+ - LICENSE
+ - README.md
+ - Rakefile
+ - brian.gemspec
+ - lib/brian.rb
+ - lib/brian/hash.rb
+ - lib/brian/lookup.rb
+ - lib/brian/neural_network.rb
+ - lib/brian/version.rb
+ - spec/hash_spec.rb
+ - spec/lookup_spec.rb
+ - spec/neural_network_spec.rb
+ has_rdoc: true
+ homepage: ""
+ licenses:
+ - MIT
+ post_install_message:
+ rdoc_options: []
+
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       hash: 3
+       segments:
+       - 0
+       version: "0"
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       hash: 3
+       segments:
+       - 0
+       version: "0"
+ requirements: []
+
+ rubyforge_project:
+ rubygems_version: 1.3.7
+ signing_key:
+ specification_version: 3
+ summary: Multilayer perceptron (neural network) library.
+ test_files:
+ - spec/hash_spec.rb
+ - spec/lookup_spec.rb
+ - spec/neural_network_spec.rb