rb-brain 0.0.1

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: c16b832f4b21365c93b698c01c14fbea54eb2e74
+   data.tar.gz: e65a2a8c1c76c8b83c00972184942d02fbbeefa3
+ SHA512:
+   metadata.gz: cabeba60da560694cb1343f0c0f384533e563ea4f829838783a5cea8ac30c81cf487e0cc2faede4d832ae6e1ea8da3156eca6b4dd7da66c9b29ae2c5be7d6a4f
+   data.tar.gz: 80aab9ccc24424d6b1763c985ff693f81e09b6c41aa1d613d4dc46f431b438d8ce83a4468d78da67df16a5b75858c14747b67e0132b4ccc34df8de3438fab9f7
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in rb-brain.gemspec
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,22 @@
+ Copyright (c) 2014 Qinix
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,105 @@
+ # rb-brain
+
+ `rb-brain` is an easy-to-use [neural network](http://en.wikipedia.org/wiki/Artificial_neural_network) library written in Ruby. Here's an example of using it to approximate the XOR function:
+
+ ```ruby
+ require 'brain'
+
+ net = Brain::NeuralNetwork.new
+
+ net.train([{input: [0, 0], output: [0]},
+            {input: [0, 1], output: [1]},
+            {input: [1, 0], output: [1]},
+            {input: [1, 1], output: [0]}])
+
+ output = net.run([1, 0]) # [0.948]
+ ```
+
+ ## Installation
+
+ Install the latest release:
+
+     gem install rb-brain
+
+ Require it in your code:
+
+     require 'brain'
+
+ Or, in Rails, you can add it to your Gemfile:
+
+     gem 'rb-brain', :require => 'brain'
+
+ ## Training
+ Use `train()` to train the network with an array of training data. The network has to be trained with all the data in bulk, in one call to `train()`. The more training patterns you provide, the longer training will probably take, but the better the network will be at classifying new patterns.
+
+ #### Data format
+ Each training pattern should have an `input` and an `output`, both of which can be either an array of numbers from `0` to `1` or a hash of numbers from `0` to `1`. With hashes it looks something like this:
+
+ ```ruby
+ require 'brain'
+
+ net = Brain::NeuralNetwork.new
+
+ net.train([{input: {x: 0, y: 0}, output: {result: 0}},
+            {input: {x: 0, y: 1}, output: {result: 1}},
+            {input: {x: 1, y: 0}, output: {result: 1}},
+            {input: {x: 1, y: 1}, output: {result: 0}}])
+
+ output = net.run({x: 1, y: 0}) # {result: 0.948}
+ ```
+
+ #### Options
+ `train()` takes a hash of options as its second argument:
+
+ ```ruby
+ net.train(data,
+           error_thresh: 0.005, # error threshold to reach
+           iterations: 20000,   # maximum training iterations
+           log: true,           # print progress periodically
+           log_period: 10,      # number of iterations between logging
+           learning_rate: 0.3   # learning rate
+ )
+ ```
+
+ The network will train until the training error has gone below the threshold (default `0.003`) or the maximum number of iterations (default `20000`) has been reached, whichever comes first.
+
+ By default training won't let you know how it's doing until the end, but set `log` to `true` to get periodic updates on the current training error of the network. The training error should decrease every time.
+
+ The learning rate is a parameter that influences how quickly the network trains. It's a number from `0` to `1`. If the learning rate is close to `0`, training will take longer. If the learning rate is closer to `1`, training will be faster, but it risks settling into a local minimum and performing badly on new data. The default learning rate is `0.3`.
+
+ #### Output
+ The output of `train()` is a hash of information about how the training went:
+
+ ```ruby
+ {
+   error: 0.0039139985510105032, # training error
+   iterations: 406               # training iterations
+ }
+ ```
+
+ #### Failing
+ If the network failed to train, the error will be above the error threshold. This could happen because the training data is too noisy (most likely), the network doesn't have enough hidden layers or nodes to handle the complexity of the data, or it hasn't trained for enough iterations.
+
+ If the training error is still something huge like `0.4` after 20000 iterations, it's a good sign that the network can't make sense of the data you're giving it.
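Since `train()` reports the final error and iteration count, the failure case described above can be detected programmatically. A minimal sketch (the `training_failed?` helper and the threshold value are illustrative, not part of the gem's API):

```ruby
# Hypothetical result hash, in the shape train() is documented to return.
result = { error: 0.41, iterations: 20000 }

# train() stops at the error threshold or the iteration cap, whichever
# comes first, so a final error still above the threshold means failure.
def training_failed?(result, error_thresh)
  result[:error] > error_thresh
end

puts training_failed?(result, 0.005) # => true
```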
+
+ ## Options
+ `NeuralNetwork` takes a hash of options:
+
+ ```ruby
+ net = Brain::NeuralNetwork.new(
+   hidden_layers: [4],
+   learning_rate: 0.6 # global learning rate, useful when training using streams
+ )
+ ```
+
+ #### hidden_layers
+ Specify the number of hidden layers in the network and the size of each layer. For example, if you want two hidden layers - the first with 3 nodes and the second with 4 nodes - you'd give:
+
+ ```
+ hidden_layers: [3, 4]
+ ```
+
+ By default `rb-brain` uses one hidden layer with size proportionate to the size of the input array.
+
+ ## Acknowledgement
+ I learned a lot from [harthur/brain](https://github.com/harthur/brain); most of the code is rewritten from that repo. I would like to thank its author.
data/lib/brain.rb ADDED
@@ -0,0 +1 @@
+ require 'brain/neuralnetwork'
data/lib/brain/lookup.rb ADDED
@@ -0,0 +1,52 @@
+ module Brain
+   class Lookup
+     class << self
+       def build_lookup(hashes)
+         # [{a: 1}, {b: 6, c: 7}] -> {a: 0, b: 1, c: 2}
+         h = hashes.reduce do |memo, hash|
+           memo.merge hash
+         end
+         lookup_from_hash h
+       end
+
+       def lookup_from_hash(hash)
+         # {a: 6, b: 7} -> {a: 0, b: 1}
+         lookup = {}
+         index = 0
+         hash.each do |k, v|
+           lookup[k] = index
+           index += 1
+         end
+         lookup
+       end
+
+       def to_array(lookup, hash)
+         # {a: 0, b: 1}, {a: 6} -> [6, 0]
+         array = []
+         lookup.each do |k, v|
+           array[lookup[k]] = hash[k] || 0
+         end
+         array
+       end
+
+       def to_hash(lookup, array)
+         # {a: 0, b: 1}, [6, 7] -> {a: 6, b: 7}
+         hash = {}
+         lookup.each do |k, v|
+           hash[k] = array[lookup[k]]
+         end
+         hash
+       end
+
+       def lookup_from_array(array)
+         lookup = {}
+         z = 0
+         array.reverse.each do |i|
+           lookup[i] = z
+           z += 1
+         end
+         lookup
+       end
+     end
+   end
+ end
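The transformation comments in `Lookup` can be checked end to end. The sketch below re-implements the hash/array mapping inline so it runs without installing the gem; the helper names mirror the class methods above but are standalone re-implementations, not the gem's code:

```ruby
# Minimal inline re-implementation of the Lookup mapping.
def lookup_from_hash(hash)
  # {a: 6, b: 7} -> {a: 0, b: 1}
  hash.keys.each_with_index.to_h
end

def to_array(lookup, hash)
  # {a: 0, b: 1}, {a: 6} -> [6, 0]; missing keys are filled with 0
  array = Array.new(lookup.size, 0)
  lookup.each { |k, i| array[i] = hash[k] || 0 }
  array
end

def to_hash(lookup, array)
  # {a: 0, b: 1}, [6, 7] -> {a: 6, b: 7}
  lookup.to_h { |k, i| [k, array[i]] }
end

lookup = lookup_from_hash({a: 6, b: 7})
p lookup                   # => {:a=>0, :b=>1}
p to_array(lookup, {a: 6}) # => [6, 0]
p to_hash(lookup, [6, 7])  # => {:a=>6, :b=>7}
```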
data/lib/brain/neuralnetwork.rb ADDED
@@ -0,0 +1,209 @@
+ require 'brain/lookup'
+
+ module Brain
+   class NeuralNetwork
+     def initialize(options = {})
+       @learning_rate = options[:learning_rate] || 0.3
+       @momentum = options[:momentum] || 0.1
+       @hidden_sizes = options[:hidden_layers]
+       @binary_thresh = options[:binary_thresh] || 0.5
+     end
+
+     def init(sizes)
+       @sizes = sizes
+       @output_layer = @sizes.length - 1
+
+       @biases = [] # weights for bias nodes
+       @weights = []
+       @outputs = []
+
+       # state for training
+       @deltas = []
+       @changes = [] # for momentum
+       @errors = []
+
+       (0..@output_layer).each do |layer|
+         size = @sizes[layer]
+         @deltas[layer] = Array.new size, 0
+         @errors[layer] = Array.new size, 0
+         @outputs[layer] = Array.new size, 0
+
+         if layer > 0
+           @biases[layer] = randos size
+           @weights[layer] = Array.new size
+           @changes[layer] = Array.new size
+
+           (0...size).each do |node|
+             prev_size = @sizes[layer - 1]
+             @weights[layer][node] = randos prev_size
+             @changes[layer][node] = Array.new prev_size, 0
+           end
+         end
+       end
+     end
44
+
45
+ def run(input)
46
+ input = Lookup.to_array(@input_lookup, input) if @input_lookup
47
+
48
+ output = run_input input
49
+ output = Lookup.to_hash(@output_lookup, output) if @output_lookup
50
+
51
+ output
52
+ end
53
+
54
+ def run_input(input)
55
+ @outputs[0] = input
56
+ output = 0
57
+
58
+ (1..@output_layer).each do |layer|
59
+ (0...@sizes[layer]).each do |node|
60
+ weights = @weights[layer][node]
61
+
62
+ sum = @biases[layer][node]
63
+ (0...weights.length).each do |k|
64
+ sum += weights[k] * input[k]
65
+ end
66
+ @outputs[layer][node] = 1 / (1 + Math.exp(-sum))
67
+ end
68
+ output = input = @outputs[layer]
69
+ end
70
+
71
+ output
72
+ end
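Each node in `run_input` computes a weighted sum of its inputs plus a bias, then squashes it through the logistic sigmoid `1 / (1 + e^-x)`. A standalone sketch of that per-node computation (the weights and bias below are arbitrary illustrative values, not taken from the gem):

```ruby
# Logistic sigmoid, the activation function used in run_input.
def sigmoid(x)
  1.0 / (1 + Math.exp(-x))
end

# One node's output: weighted sum of inputs plus bias, then sigmoid.
def node_output(weights, bias, input)
  sum = bias
  weights.each_with_index { |w, k| sum += w * input[k] }
  sigmoid(sum)
end

puts sigmoid(0)                            # => 0.5
puts node_output([0.5, -0.5], 0.0, [1, 1]) # => 0.5 (the weighted sum is exactly 0)
```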
+
+     def train(data, options = {})
+       data = format_data data
+
+       iterations = options[:iterations] || 20000
+       error_thresh = options[:error_thresh] || 0.003
+       log = options[:log] || false
+       log_period = options[:log_period] || 10
+       learning_rate = options[:learning_rate] || @learning_rate || 0.3
+
+       input_size = data[0][:input].length
+       output_size = data[0][:output].length
+
+       hidden_sizes = @hidden_sizes
+       hidden_sizes = [[3, (input_size / 2.0).floor].max] unless hidden_sizes
+       sizes = [input_size, hidden_sizes, output_size].flatten
+       init sizes
+
+       error = 1
+       done_iterations = iterations
+       (0...iterations).each do |i|
+         unless error > error_thresh
+           done_iterations = i
+           break
+         end
+         sum = 0
+         data.each do |d|
+           err = train_pattern d[:input], d[:output], learning_rate
+           sum += err
+         end
+         error = sum / data.length
+
+         puts "iterations: #{i}, training error: #{error}" if log and (i % log_period == 0)
+       end
+
+       {
+         error: error,
+         iterations: done_iterations
+       }
+     end
+
+     def train_pattern(input, target, learning_rate)
+       learning_rate ||= @learning_rate
+
+       # forward propagate
+       run_input input
+       calculate_deltas target
+       adjust_weights learning_rate
+
+       mse @errors[@output_layer]
+     end
+
+     def calculate_deltas(target)
+       (0..@output_layer).to_a.reverse.each do |layer|
+         (0...@sizes[layer]).each do |node|
+           output = @outputs[layer][node]
+
+           error = 0
+           if layer == @output_layer
+             error = target[node] - output
+           else
+             deltas = @deltas[layer + 1]
+             (0...deltas.length).each do |k|
+               error += deltas[k] * @weights[layer + 1][k][node]
+             end
+           end
+           @errors[layer][node] = error
+           @deltas[layer][node] = error * output * (1 - output)
+         end
+       end
+     end
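For an output node, the delta computed above reduces to the raw error times the derivative of the sigmoid, `output * (1 - output)`. A worked example with illustrative values:

```ruby
# Delta for an output node, as in calculate_deltas when layer == @output_layer.
def output_delta(target, output)
  error = target - output
  error * output * (1 - output)
end

# A node that should output 1.0 but currently outputs 0.75:
puts output_delta(1.0, 0.75) # => 0.046875 (0.25 * 0.75 * 0.25)
```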
+
+     def adjust_weights(learning_rate)
+       (1..@output_layer).each do |layer|
+         incoming = @outputs[layer - 1]
+
+         (0...@sizes[layer]).each do |node|
+           delta = @deltas[layer][node]
+
+           (0...incoming.length).each do |k|
+             change = @changes[layer][node][k]
+
+             change = (learning_rate * delta * incoming[k]) + (@momentum * change)
+
+             @changes[layer][node][k] = change
+             @weights[layer][node][k] += change
+           end
+           @biases[layer][node] += learning_rate * delta
+         end
+       end
+     end
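Each weight update above is a gradient step plus a momentum term that re-applies a fraction of the previous change. The arithmetic in isolation (the numbers are illustrative, chosen to match the defaults of `learning_rate` 0.3 and `momentum` 0.1):

```ruby
# One weight change, as computed in adjust_weights: gradient step plus momentum.
def weight_change(learning_rate, delta, incoming, momentum, prev_change)
  (learning_rate * delta * incoming) + (momentum * prev_change)
end

change = weight_change(0.3, 0.05, 1.0, 0.1, 0.02)
puts change # ≈ 0.017 (0.3 * 0.05 * 1.0 + 0.1 * 0.02)
```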
+
+     def format_data(data)
+       unless data.is_a? Array
+         data = [data]
+       end
+
+       # turn sparse hash input into arrays with 0s as filler
+       unless data[0][:input].is_a? Array
+         @input_lookup = Lookup.build_lookup data.map {|d| d[:input]} unless @input_lookup
+         data.map! do |datum|
+           array = Lookup.to_array @input_lookup, datum[:input]
+           datum.merge({ input: array })
+         end
+       end
+
+       unless data[0][:output].is_a? Array
+         @output_lookup = Lookup.build_lookup data.map {|d| d[:output]} unless @output_lookup
+         data.map! do |datum|
+           array = Lookup.to_array @output_lookup, datum[:output]
+           datum.merge({ output: array })
+         end
+       end
+
+       data
+     end
+
+     private
+
+     def random_weight
+       # random float in the range (-0.2, 0.2)
+       Random.rand * 0.4 - 0.2
+     end
+
+     def randos(size)
+       Array.new(size) { random_weight }
+     end
+
+     def mse(errors)
+       # mean squared error
+       sum = 0
+       errors.each do |e|
+         sum += e ** 2
+       end
+       sum / errors.length
+     end
+   end
+ end
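The private `mse` helper is the mean of the squared per-node errors; `train` then averages this value across all patterns. A quick check with illustrative error values:

```ruby
# Mean squared error, matching the private mse helper above.
def mse(errors)
  errors.reduce(0.0) { |sum, e| sum + e**2 } / errors.length
end

puts mse([0.5, -0.5]) # => 0.25
```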
data/rb-brain.gemspec ADDED
@@ -0,0 +1,20 @@
+ # -*- encoding: utf-8 -*-
+ lib = File.expand_path('../lib', __FILE__)
+ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+
+ Gem::Specification.new do |gem|
+   gem.name          = "rb-brain"
+   gem.version       = "0.0.1"
+   gem.authors       = ["Eric Zhang"]
+   gem.email         = ["i@qinix.com"]
+   gem.description   = %q{rb-brain is an easy-to-use neural network written in ruby}
+   gem.summary       = %q{rb-brain is an easy-to-use neural network written in ruby}
+   gem.homepage      = "https://github.com/qinix/rb-brain"
+   gem.license       = "MIT"
+
+   gem.files         = `git ls-files`.split($/)
+   gem.executables   = gem.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
+   gem.test_files    = gem.files.grep(%r{^(test|spec|features)/})
+   gem.require_paths = ["lib"]
+ end
metadata ADDED
@@ -0,0 +1,51 @@
+ --- !ruby/object:Gem::Specification
+ name: rb-brain
+ version: !ruby/object:Gem::Version
+   version: 0.0.1
+ platform: ruby
+ authors:
+ - Eric Zhang
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2014-12-22 00:00:00.000000000 Z
+ dependencies: []
+ description: rb-brain is an easy-to-use neural network written in ruby
+ email:
+ - i@qinix.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - Gemfile
+ - LICENSE
+ - README.md
+ - lib/brain.rb
+ - lib/brain/lookup.rb
+ - lib/brain/neuralnetwork.rb
+ - rb-brain.gemspec
+ homepage: https://github.com/qinix/rb-brain
+ licenses:
+ - MIT
+ metadata: {}
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubyforge_project:
+ rubygems_version: 2.4.3
+ signing_key:
+ specification_version: 4
+ summary: rb-brain is an easy-to-use neural network written in ruby
+ test_files: []