neuronet 6.0.0 → 6.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. data/README.md +407 -564
  2. data/lib/neuronet.rb +157 -30
  3. metadata +2 -2
data/lib/neuronet.rb CHANGED
@@ -1,8 +1,20 @@
1
1
  # Neuronet module
2
2
  module Neuronet
3
- VERSION = '6.0.0'
4
-
5
- # The squash function for Neuronet is the sigmoid function.
3
+ VERSION = '6.0.1'
4
+
5
+ # An artificial neural network uses a squash function
6
+ # to determine the activation value of a neuron.
7
+ # The squash function for Neuronet is the
8
+ # [Sigmoid function](http://en.wikipedia.org/wiki/Sigmoid_function)
9
+ # which sets the neuron's activation value between 1.0 and 0.0.
10
+ # This activation value is often thought of as on/off or true/false.
11
+ # For classification problems, activation values near 1.0 are considered true
12
+ # while activation values near 0.0 are considered false.
13
+ # In Neuronet I make a distinction between the neuron's activation value and
14
+ # its representation to the problem.
15
+ # This attribute, activation, need never appear in an implementation of Neuronet, but
16
+ # it is mapped back to its unsquashed value every time
17
+ # the implementation asks for the neuron's value.
6
18
  # One should scale the problem with most data points between -1 and 1,
7
19
  # extremes under 2s, and no outbounds above 3s.
8
20
  # Standard deviations from the mean are probably a good way to figure the scale of the problem.
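The unsquash body shown just below is the inverse of that sigmoid. A minimal round-trip sketch, assuming squash is the standard sigmoid 1/(1 + e^-x) (its definition sits outside this hunk):

    # Assumed sigmoid squash; Neuronet's own definition is not shown in this hunk.
    def squash(unsquashed)
      1.0 / (1.0 + Math.exp(-unsquashed))
    end

    def unsquash(squashed)
      Math.log(squashed / (1.0 - squashed))
    end

    squash(0.0)            #=> 0.5
    unsquash(squash(2.0))  #=> 2.0 (within floating point error)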
@@ -14,13 +26,23 @@ module Neuronet
14
26
  Math.log(squashed / (1.0 - squashed))
15
27
  end
16
28
 
17
- # By default, Neuronet builds a zeroed network.
18
- # Noise adds random fluctuations to create a search for minima.
29
+ # Although the implementation is free to set all parameters for each neuron,
30
+ # Neuronet by default creates zeroed neurons.
31
+ # Associations between inputs and outputs are trained, and
32
+ # neurons differentiate from each other randomly.
33
+ # Differentiation among neurons is achieved by noise in the back-propagation of errors.
34
+ # This noise is provided by Neuronet.noise.
35
+ # I chose rand + rand to give the noise an average value of one and a bell-shaped (triangular) distribution.
19
36
  def self.noise
20
37
  rand + rand
21
38
  end
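A throwaway sketch to sanity-check the noise claim (rand + rand has a mean of 1.0 and a triangular, roughly bell-like shape over [0, 2)):

    samples = Array.new(100_000) { Neuronet.noise }
    mean = samples.inject(0.0) { |sum, x| sum + x } / samples.length
    # mean comes out near 1.0; samples cluster around 1.0 and taper off toward 0.0 and 2.0.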
22
39
 
23
- # A Node, used for the input layer.
40
+ # In Neuronet, there are two main types of objects: Nodes and Connections.
41
+ # A Node has a value which the implementation can set.
42
+ # A plain Node instance is used primarily as an input neuron, and
43
+ # its value is not changed by training.
44
+ # It is a terminal for backpropagation of errors.
45
+ # Nodes are used for the input layer.
24
46
  class Node
25
47
  attr_reader :activation
26
48
  # A Node is constant (Input)
@@ -47,7 +69,9 @@ module Neuronet
47
69
  end
48
70
  end
49
71
 
50
- # A weighted connection to a neuron (or node).
72
+ # Connections between neurons (and nodes) are their own separate objects.
73
+ # In Neuronet, a neuron contains its bias and a list of its connections.
74
+ # Each connection contains its weight (strength) and its connected node.
51
75
  class Connection
52
76
  attr_accessor :node, :weight
53
77
  def initialize(node, weight=0.0)
@@ -59,21 +83,32 @@ module Neuronet
59
83
  @node.activation * @weight
60
84
  end
61
85
 
62
- # Updates and returns the value of the connection.
63
- # Updates the connected node.
86
+ # Connection#update returns the updated value of a connection,
87
+ # which is the weighted updated activation of
88
+ # the node it's connected to (weight * node.update).
89
+ # This method is the one to use
90
+ # whenever the values of the inputs have changed (right after training).
91
+ # Otherwise, both update and value should give the same result.
92
+ # Use Connection#value instead when back calculations are not needed.
64
93
  def update
65
94
  @node.update * @weight
66
95
  end
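For instance, a small sketch of value versus update; passing an initial value to Node.new is an assumption here, since the constructor sits outside this hunk:

    node = Neuronet::Node.new(2.0)             # assumes Node.new accepts an initial value
    conn = Neuronet::Connection.new(node, 0.5)
    conn.value   # 0.5 * node.activation, using the activation as it currently stands
    conn.update  # the same product, but asks the node to update itself first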
67
96
 
68
- # Adjusts the connection weight according to error and
69
- # backpropagates the error to the connected node.
97
+ # Connection#backpropagate modifies the connection's weight
98
+ # in proportion to the error given and passes that error
99
+ # to its connected node via the node's backpropagate method.
70
100
  def backpropagate(error)
71
101
  @weight += @node.activation * error * Neuronet.noise
72
102
  @node.backpropagate(error)
73
103
  end
74
104
  end
75
105
 
76
- # A Neuron with bias and connections
106
+ # A Neuron is a Node with some extra features.
107
+ # It adds two attributes: connections and bias.
108
+ # The connections attribute is a list of
109
+ # the neuron's connections to other neurons (or nodes).
110
+ # A neuron's bias is its kicker (or deduction) to its activation value,
111
+ # added to the sum of its connections' values before squashing.
77
112
  class Neuron < Node
78
113
  attr_reader :connections
79
114
  attr_accessor :bias
@@ -84,33 +119,54 @@ module Neuronet
84
119
  end
85
120
 
86
121
  # Updates the activation with the current value of bias and updated values of connections.
122
+ # If you're not familiar with Ruby's Array#inject method,
123
+ # it is a Ruby way of doing summations. Check out:
124
+ # [Jay Field's Thoughts on Ruby: inject](http://blog.jayfields.com/2008/03/ruby-inject.html)
125
+ # [Induction ( for_all )](http://carlosjhr64.blogspot.com/2011/02/induction.html)
87
126
  def update
88
127
  self.value = @bias + @connections.inject(0.0){|sum,connection| sum + connection.update}
89
128
  end
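In case inject is unfamiliar, it folds a collection into one value; the summation above amounts to:

    [1.0, 2.0, 3.0].inject(0.0) { |sum, x| sum + x }  #=> 6.0
    # So update computes the bias plus the sum of every connection's updated value.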
90
129
 
91
- # Updates the activation with the current values of bias and connections
92
- # For when connections are already updated.
130
+ # For when connections are already updated,
131
+ # Neuron#partial updates the activation with the current values of bias and connections.
132
+ # It is not always necessary to burrow all the way down to the terminal input node
133
+ # to update the current neuron if its connected neurons have all been updated.
134
+ # The implementation should set its algorithm to use partial
135
+ # instead of update, as update will most likely needlessly re-update previously updated neurons.
93
136
  def partial
94
137
  self.value = @bias + @connections.inject(0.0){|sum,connection| sum + connection.value}
95
138
  end
96
139
 
97
- # Adjusts bias according to error and
98
- # backpropagates the error to the connections.
140
+ # The backpropagate method modifies
141
+ # the neuron's bias in proportion to the given error and
142
+ # passes on this error to each connection's backpropagate method.
143
+ # While updates flow from input to output,
144
+ # back-propagation of errors flows from output to input.
99
145
  def backpropagate(error)
146
+ # Adjusts bias according to error and...
100
147
  @bias += error * Neuronet.noise
148
+ # backpropagates the error to the connections.
101
149
  @connections.each{|connection| connection.backpropagate(error)}
102
150
  end
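The training loop itself sits outside this hunk, but the flow would look roughly like this sketch (output_neuron, target, and learning_constant are hypothetical names):

    error = (target - output_neuron.value) * learning_constant
    output_neuron.backpropagate(error)   # nudges biases and weights back toward the inputs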
103
151
 
104
152
  # Connects the neuron to another node.
105
153
  # Updates the activation with the new connection.
106
- # The default weight=0 means there is no initial association
154
+ # The default weight=0 means there is no initial association.
155
+ # The connect method is how the implementation adds a connection,
156
+ # the way to connect the neuron to another.
157
+ # For example, to connect neuron output to neuron input:
158
+ # input = Neuronet::Neuron.new
159
+ # output = Neuronet::Neuron.new
160
+ # output.connect(input)
161
+ # Think output connects to input.
107
162
  def connect(node, weight=0.0)
108
163
  @connections.push(Connection.new(node,weight))
109
164
  update
110
165
  end
111
166
  end
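A slightly fuller version of the comment's example, as a sketch (variable names are illustrative):

    input  = Neuronet::Neuron.new
    output = Neuronet::Neuron.new
    output.connect(input, 0.5)   # connect with an initial weight of 0.5
    input.value = 1.5            # the implementation sets the input side
    output.partial               # re-reads the connections' current values (update also works)
    output.value                 # output's unsquashed value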
112
167
 
113
- # This is the Input Layer
168
+ # Neuronet::InputLayer is an Array of Neuronet::Node objects.
169
+ # It can be used for the input layer of a feed forward network.
114
170
  class InputLayer < Array
115
171
  def initialize(length) # number of nodes
116
172
  super(length)
@@ -123,7 +179,8 @@ module Neuronet
123
179
  end
124
180
  end
125
181
 
126
- # Just a regular Layer
182
+ # Just a regular Layer.
183
+ # InputLayer is to Layer what Node is to Neuron.
127
184
  class Layer < Array
128
185
  def initialize(length)
129
186
  super(length)
@@ -161,6 +218,16 @@ module Neuronet
161
218
  # A Feed Forward Network
162
219
  class FeedForward < Array
163
220
  # Whatchamacallits?
221
+ # The learning constant is given different names...
222
+ # often some Greek letter.
223
+ # It's a small number less than one.
224
+ # Ideally, it divides the errors evenly among all contributors.
225
+ # Contributors are the neurons' biases and the connections' weights.
226
+ # Thus if one counts all the contributors as N,
227
+ # the learning constant should be at most 1/N.
228
+ # But there are other considerations, such as how noisy the data is.
229
+ # In any case, I'm calling this N value FeedForward#mu.
230
+ # 1/mu is used as the initial default value for the learning constant.
164
231
  def mu
165
232
  sum = 1.0
166
233
  1.upto(self.length-1) do |i|
@@ -169,16 +236,33 @@ module Neuronet
169
236
  end
170
237
  return sum
171
238
  end
239
+ # Given that the learning constant is initially set to 1/mu as defined above,
240
+ # muk gives a way to modify the learning constant by some factor, k.
241
+ # In theory, when there is no noise in the target data, k can be set to 1.0.
242
+ # If the data is noisy, k is set to some value less than 1.0.
172
243
  def muk(k=1.0)
173
244
  @learning = k/mu
174
245
  end
246
+ # Given that the learning constant can be modified by some factor k with #muk,
247
+ # #num gives an alternate way to express
248
+ # the k factor in terms of some number n greater than 1, setting k to 1/sqrt(n).
249
+ # I believe that the optimal value for the learning constant
250
+ # for a training set of size n is somewhere between #muk(1) and #num(n).
251
+ # Whereas a learning constant that is too high can make training diverge,
252
+ # a low learning value just increases the training time.
175
253
  def num(n)
176
- @learning = 1.0/(Math.sqrt(1.0+n) * mu)
254
+ muk(1.0/(Math.sqrt(n)))
177
255
  end
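A usage sketch (the layer sizes are arbitrary):

    net = Neuronet::FeedForward.new([3, 3, 1])
    net.muk          # default: learning constant set to 1/mu (k = 1.0)
    net.num(200)     # say, 200 noisy training pairs: k = 1/sqrt(200)
    net.learning     # the resulting learning constant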
178
256
 
179
257
  attr_reader :in, :out
180
258
  attr_reader :yin, :yang
181
259
  attr_accessor :learning
260
+
261
+ # I find it very useful to name certain layers:
262
+ # [0] @in Input Layer
263
+ # [1] @yin Typically the first middle layer
264
+ # [-2] @yang Typically the last middle layer
265
+ # [-1] @out Output Layer
182
266
  def initialize(layers)
183
267
  super(length = layers.length)
184
268
  @in = self[0] = Neuronet::InputLayer.new(layers[0])
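For a hypothetical four-layer network, those names work out as:

    net = Neuronet::FeedForward.new([2, 4, 4, 1])
    net.in    # the InputLayer of length 2
    net.yin   # the first middle layer
    net.yang  # the last middle layer
    net.out   # the output Layer of length 1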
@@ -222,11 +306,23 @@ module Neuronet
222
306
  end
223
307
  end
224
308
 
225
- # Scales the problem
309
+ # Neuronet::Scale is a class to
310
+ # help scale problems to fit within a network's "field of view".
311
+ # Given a list of values, it finds the minimum and maximum values and
312
+ # establishes a mapping to a scaled set of numbers between minus one and one (-1,1).
226
313
  class Scale
227
314
  attr_accessor :spread, :center
228
315
  attr_writer :init
229
316
 
317
+ # If the value of center is provided, then
318
+ # that value will be used instead of
319
+ # calculating it from the values passed to the set method.
320
+ # Likewise, if spread is provided, that value of spread will be used.
321
+ # The attribute @init flags whether
322
+ # there is an initiation phase to the calculation of @spread and @center.
323
+ # For Scale, @init is true and the initiation phase calculates
324
+ # the intermediate values @min and @max (the minimum and maximum values in the data set).
325
+ # It's possible for subclasses of Scale, such as Gaussian, to not have this initiation phase.
230
326
  def initialize(factor=1.0,center=nil,spread=nil)
231
327
  @factor,@center,@spread = factor,center,spread
232
328
  @centered, @spreaded = center.nil?, spread.nil?
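A minimal usage sketch; set and unmapped appear elsewhere in this diff, and mapped is assumed to be the forward counterpart of unmapped:

    scale  = Neuronet::Scale.new
    values = [1.0, 5.0, 3.0]
    scale.set(values)              # initiation phase: finds @min and @max, sets @center and @spread
    mapped = scale.mapped(values)  # values rescaled into roughly (-1, 1)
    scale.unmapped(mapped)         # maps back to the original range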
@@ -272,7 +368,10 @@ module Neuronet
272
368
  alias unmapped_output unmapped
273
369
  end
274
370
 
275
- # Normal Distribution
371
+ # "Normal Distribution"
372
+ # Gaussian subclasses Scale and is used exactly the same way.
373
+ # The only changes are that it calculates the arithmetic mean (average) for center and
374
+ # the standard deviation for spread.
276
375
  class Gaussian < Scale
277
376
  def initialize(factor=1.0,center=nil,spread=nil)
278
377
  super(factor, center, spread)
@@ -290,7 +389,8 @@ module Neuronet
290
389
  end
291
390
  end
292
391
 
293
- # Log-Normal Distribution
392
+ # "Log-Normal Distribution"
393
+ # LogNormal subclasses Gaussian to transform the values to a logarithmic scale.
294
394
  class LogNormal < Gaussian
295
395
  def initialize(factor=1.0,center=nil,spread=nil)
296
396
  super(factor, center, spread)
@@ -313,7 +413,11 @@ module Neuronet
313
413
  alias unmapped_output unmapped
314
414
  end
315
415
 
316
- # Series Network for similar input/output values
416
+ # ScaledNetwork is a subclass of FeedForward.
417
+ # It automatically scales the problem given to it
418
+ # by using a Scale type instance set in @distribution.
419
+ # The attribute @distribution is set to Neuronet::Gaussian.new by default,
420
+ # but one can change this to Scale, LogNormal, or one's own custom mapper.
317
421
  class ScaledNetwork < FeedForward
318
422
  attr_accessor :distribution
319
423
 
@@ -331,6 +435,12 @@ module Neuronet
331
435
  super(@distribution.mapped_input(inputs))
332
436
  end
333
437
 
438
+ # ScaledNetwork#reset works just like FeedForward's set method,
439
+ # but calls distribution.set(values) first.
440
+ # Sometimes you'll want to set the distribution
441
+ # with the entire data set and then use set,
442
+ # and at other times you'll want to
443
+ # set the distribution with each input and use reset.
334
444
  def reset(inputs)
335
445
  @distribution.set(inputs)
336
446
  set(inputs)
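For example (the data names here are hypothetical):

    net = Neuronet::ScaledNetwork.new([4, 4, 1])
    # Derive the distribution once from the whole data set, then plain set per example:
    net.distribution.set(all_values)
    net.set(inputs)
    # Or re-derive the distribution from each individual input:
    net.reset(inputs)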
@@ -345,13 +455,17 @@ module Neuronet
345
455
  end
346
456
  end
347
457
 
348
- # A Perceptron Hybrid
458
+ # A Perceptron Hybrid,
459
+ # Tao directly connects the output layer to the input layer.
349
460
  module Tao
461
+ # Tao's extra connections add to mu.
350
462
  def mu
351
463
  sum = super
352
464
  sum += self.first.length * self.last.length
353
465
  return sum
354
466
  end
467
+ # Tao.bless connects the network's output layer to the input layer,
468
+ # extends it with Tao, and modifies the learning constant if needed.
355
469
  def self.bless(myself)
356
470
  # @out directly connects to @in
357
471
  myself.out.connect(myself.in)
@@ -364,8 +478,13 @@ module Neuronet
364
478
  end
365
479
  end
366
480
 
367
- # sets @yin to initially mirror @in
481
+ # Yin is a network which has its @yin layer initially mirroring @in.
368
482
  module Yin
483
+ # Yin.bless sets the bias of each @yin[i] to 0.5, and
484
+ # the weight of each paired (@yin[i], @in[i]) connection to one.
485
+ # This makes @yin initially mirror @in.
486
+ # The pairing is done starting with (@yin[0], @in[0]).
487
+ # That is, starting with (@yin.first, @in.first).
369
488
  def self.bless(myself)
370
489
  yin = myself.yin
371
490
  if yin.length < (in_length = myself.in.length)
@@ -381,11 +500,18 @@ module Neuronet
381
500
  end
382
501
  end
383
502
 
384
- # sets @out to initially mirror @yang
503
+ # Yang is a network which has its @out layer initially mirroring @yang.
385
504
  module Yang
505
+ # Yang.bless sets the bias of each @out[i] to 0.5, and
506
+ # the weight of each paired (@out[i], @yang[i]) connection to one.
507
+ # This makes @out initially mirror @yang.
508
+ # The pairing is done starting with (@out[-1], @yang[-1]).
509
+ # That is, starting with (@out.last, @yang.last).
386
510
  def self.bless(myself)
387
511
  offset = myself.yang.length - (out_length = (out = myself.out).length)
388
512
  raise "Last hidden layer, yang, needs to have at least the same length as output" if offset < 0
513
+ # Although the algorithm here is not as described,
514
+ # the net effect is to pair @out.last with @yang.last, and so on down.
389
515
  0.upto(out_length-1) do |index|
390
516
  node = out[index]
391
517
  node.connections[offset+index].weight = 1.0
@@ -395,9 +521,7 @@ module Neuronet
395
521
  end
396
522
  end
397
523
 
398
- # And convenient composites...
399
-
400
- # Yin-Yang-ed :))
524
+ # A Yin Yang composite provided for convenience.
401
525
  module YinYang
402
526
  def self.bless(myself)
403
527
  Yin.bless(myself)
@@ -406,6 +530,7 @@ module Neuronet
406
530
  end
407
531
  end
408
532
 
533
+ # A Tao Yin Yang composite provided for convenience.
409
534
  module TaoYinYang
410
535
  def self.bless(myself)
411
536
  Tao.bless(myself)
@@ -415,6 +540,7 @@ module Neuronet
415
540
  end
416
541
  end
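A blessing sketch (the architecture is arbitrary):

    net = Neuronet::ScaledNetwork.new([3, 3, 3])
    Neuronet::TaoYinYang.bless(net)   # applies the Tao, Yin, and Yang blessings in one call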
417
542
 
543
+ # A Tao Yin composite provided for convenience.
418
544
  module TaoYin
419
545
  def self.bless(myself)
420
546
  Tao.bless(myself)
@@ -423,6 +549,7 @@ module Neuronet
423
549
  end
424
550
  end
425
551
 
552
+ # A Tao Yang composite provided for convenience.
426
553
  module TaoYang
427
554
  def self.bless(myself)
428
555
  Tao.bless(myself)
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: neuronet
3
3
  version: !ruby/object:Gem::Version
4
- version: 6.0.0
4
+ version: 6.0.1
5
5
  prerelease:
6
6
  platform: ruby
7
7
  authors:
@@ -9,7 +9,7 @@ authors:
9
9
  autorequire:
10
10
  bindir: bin
11
11
  cert_chain: []
12
- date: 2013-06-16 00:00:00.000000000 Z
12
+ date: 2013-06-18 00:00:00.000000000 Z
13
13
  dependencies: []
14
14
  description: Build custom neural networks. 100% 1.9 Ruby.
15
15
  email: carlosjhr64@gmail.com