neuronet 6.0.1 → 7.0.230416

data/lib/neuronet.rb CHANGED
@@ -1,561 +1,15 @@
- # Neuronet module
- module Neuronet
-   VERSION = '6.0.1'
-
-   # An artificial neural network uses a squash function
-   # to determine the activation value of a neuron.
-   # The squash function for Neuronet is the
-   # [Sigmoid function](http://en.wikipedia.org/wiki/Sigmoid_function)
-   # which sets the neuron's activation value between 1.0 and 0.0.
-   # This activation value is often thought of as on/off or true/false.
-   # For classification problems, activation values near 1.0 are considered true
-   # while activation values near 0.0 are considered false.
-   # In Neuronet I make a distinction between the neuron's activation value and
-   # its representation to the problem.
-   # This attribute, activation, need never appear in an implementation of Neuronet, but
-   # it is mapped back to its unsquashed value every time
-   # the implementation asks for the neuron's value.
-   # One should scale the problem so that most data points lie between -1 and 1,
-   # extremes stay within ±2, and no outliers fall beyond ±3.
-   # Standard deviations from the mean are probably a good way to figure the scale of the problem.
-   def self.squash(unsquashed)
-     1.0 / (1.0 + Math.exp(-unsquashed))
-   end
-
-   def self.unsquash(squashed)
-     Math.log(squashed / (1.0 - squashed))
-   end
-
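As a quick illustration of the pair above, squash and unsquash are inverses of each other (a sketch against the 6.0 module functions; output values are approximate):

    require 'neuronet'                        # the 6.0.1 gem
    Neuronet.squash(0.0)                      #=> 0.5
    Neuronet.squash(3.0)                      #=> ~0.9526
    Neuronet.unsquash(Neuronet.squash(1.5))   #=> ~1.5, the round trip recovers the value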
-   # Although the implementation is free to set all parameters for each neuron,
-   # Neuronet by default creates zeroed neurons.
-   # Associations between inputs and outputs are trained, and
-   # neurons differentiate from each other randomly.
-   # Differentiation among neurons is achieved by noise in the back-propagation of errors.
-   # This noise is provided by Neuronet.noise.
-   # I chose rand + rand to give the noise an average value of one and a bell-shaped distribution.
-   def self.noise
-     rand + rand
-   end
-
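A minimal sanity check of that claim (a hypothetical snippet, not part of the gem); the sum of two uniform samples has mean 1.0 and a triangular, roughly bell-shaped, distribution on (0, 2):

    samples = Array.new(100_000) { rand + rand }
    samples.sum / samples.length   #=> ~1.0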
-   # In Neuronet, there are two main types of objects: Nodes and Connections.
-   # A Node has a value which the implementation can set.
-   # A plain Node instance is used primarily as an input neuron, and
-   # its value is not changed by training.
-   # It is a terminal for backpropagation of errors.
-   # Nodes are used for the input layer.
-   class Node
-     attr_reader :activation
-     # A Node is constant (Input)
-     alias update activation
-
-     # The "real world" value of a node is the value of its activation unsquashed.
-     def value=(val)
-       @activation = Neuronet.squash(val)
-     end
-
-     def initialize(val=0.0)
-       self.value = val
-     end
-
-     # The "real world" value is stored as a squashed activation.
-     def value
-       Neuronet.unsquash(@activation)
-     end
-
-     # Node is a terminal where backpropagation ends.
-     def backpropagate(error)
-       # to be over-ridden
-       nil
-     end
-   end
-
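For example, a Node stores its value as a squashed activation and unsquashes it on read (a sketch using the 6.0 class above; numbers are approximate):

    node = Neuronet::Node.new(1.5)
    node.activation   #=> ~0.8176  (squash(1.5))
    node.value        #=> ~1.5     (unsquashed back to the "real world" value)
    node.value = -2.0
    node.value        #=> ~-2.0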
-   # Connections between neurons (and nodes) are their own separate objects.
-   # In Neuronet, a neuron contains its bias and a list of its connections.
-   # Each connection contains its weight (strength) and connected node.
-   class Connection
-     attr_accessor :node, :weight
-     def initialize(node, weight=0.0)
-       @node, @weight = node, weight
-     end
-
-     # The value of a connection is the weighted activation of the connected node.
-     def value
-       @node.activation * @weight
-     end
-
-     # Connection#update returns the updated value of a connection,
-     # which is the weighted updated activation of
-     # the node it's connected to ( weight * node.update ).
-     # This method is the one to use
-     # whenever the values of the inputs have changed (right after training).
-     # Otherwise, both update and value should give the same result;
-     # use Connection#value instead when back calculations are not needed.
-     def update
-       @node.update * @weight
-     end
-
-     # Connection#backpropagate modifies the connection's weight
-     # in proportion to the error given and passes that error
-     # to its connected node via the node's backpropagate method.
-     def backpropagate(error)
-       @weight += @node.activation * error * Neuronet.noise
-       @node.backpropagate(error)
-     end
-   end
-
-   # A Neuron is a Node with some extra features.
-   # It adds two attributes: connections, and bias.
-   # The connections attribute is a list of
-   # the neuron's connections to other neurons (or nodes).
-   # A neuron's bias is its kicker (or deduction) to its activation value,
-   # a sum of its connections' values.
-   class Neuron < Node
-     attr_reader :connections
-     attr_accessor :bias
-     def initialize(bias=0.0)
-       super(bias)
-       @connections = []
-       @bias = bias
-     end
-
-     # Updates the activation with the current value of bias and updated values of connections.
-     # If you're not familiar with Ruby's Array#inject method,
-     # it is a Ruby way of doing summations. Check out:
-     # [Jay Field's Thoughts on Ruby: inject](http://blog.jayfields.com/2008/03/ruby-inject.html)
-     # [Induction ( for_all )](http://carlosjhr64.blogspot.com/2011/02/induction.html)
-     def update
-       self.value = @bias + @connections.inject(0.0){|sum,connection| sum + connection.update}
-     end
-
-     # For when connections are already updated,
-     # Neuron#partial updates the activation with the current values of bias and connections.
-     # It is not always necessary to burrow all the way down to the terminal input node
-     # to update the current neuron if its connected neurons have all been updated.
-     # The implementation should set its algorithm to use partial
-     # instead of update, as update will most likely needlessly update previously updated neurons.
-     def partial
-       self.value = @bias + @connections.inject(0.0){|sum,connection| sum + connection.value}
-     end
-
-     # The backpropagate method modifies
-     # the neuron's bias in proportion to the given error and
-     # passes on this error to each of its connections via their backpropagate method.
-     # While updates flow from input to output,
-     # back-propagation of errors flows from output to input.
-     def backpropagate(error)
-       # Adjusts bias according to error and...
-       @bias += error * Neuronet.noise
-       # backpropagates the error to the connections.
-       @connections.each{|connection| connection.backpropagate(error)}
-     end
-
-     # Connects the neuron to another node.
-     # Updates the activation with the new connection.
-     # The default weight=0 means there is no initial association.
-     # The connect method is how the implementation adds a connection,
-     # the way to connect the neuron to another.
-     # To connect an output neuron to an input neuron, for example
-     # (spelled out here because `in` is a Ruby keyword):
-     #   input  = Neuronet::Neuron.new
-     #   output = Neuronet::Neuron.new
-     #   output.connect(input)
-     # Think output connects to input.
-     def connect(node, weight=0.0)
-       @connections.push(Connection.new(node,weight))
-       update
-     end
-   end
-
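Putting Node, Connection, and Neuron together (a sketch of the usage implied above; bias and weights start at zero, so the output only moves after backpropagation):

    input  = Neuronet::Node.new
    output = Neuronet::Neuron.new
    output.connect(input)      # weight defaults to 0.0: no association yet

    input.value = 2.0
    output.update              # bias + weight * input.activation
    output.value               #=> 0.0

    output.backpropagate(0.5)  # nudges the bias and the connection's weight
    output.update
    output.value               #=> now positive, moving toward the target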
-   # Neuronet::InputLayer is an Array of Neuronet::Node objects.
-   # It can be used for the input layer of a feed forward network.
-   class InputLayer < Array
-     def initialize(length) # number of nodes
-       super(length)
-       0.upto(length-1){|index| self[index] = Neuronet::Node.new }
-     end
-
-     # This is where one enters the "real world" inputs.
-     def set(inputs)
-       0.upto(self.length-1){|index| self[index].value = inputs[index]}
-     end
-   end
-
-   # Just a regular Layer.
-   # InputLayer is to Layer what Node is to Neuron.
-   class Layer < Array
-     def initialize(length)
-       super(length)
-       0.upto(length-1){|index| self[index] = Neuronet::Neuron.new }
-     end
-
-     # Allows one to fully connect layers.
-     def connect(layer, weight=0.0)
-       # creates the neuron matrix... note that node can be either a Neuron or a Node.
-       self.each{|neuron| layer.each{|node| neuron.connect(node,weight) }}
-     end
-
-     # Updates the layer with the current values of the previous layer.
-     def partial
-       self.each{|neuron| neuron.partial}
-     end
-
-     # Takes the real world targets for each node in this layer
-     # and backpropagates the error to each node.
-     # Note that the learning constant is really a value
-     # that needs to be determined for each network.
-     def train(targets, learning)
-       0.upto(self.length-1) do |index|
-         node = self[index]
-         node.backpropagate(learning*(targets[index] - node.value))
-       end
-     end
-
-     # Returns the real world values of this layer.
-     def values
-       self.map{|node| node.value}
-     end
-   end
-
-   # A Feed Forward Network
-   class FeedForward < Array
-     # Whatchamacallits?
-     # The learning constant is given different names...
-     # often some Greek letter.
-     # It's a small number less than one.
-     # Ideally, it divides the errors evenly among all contributors.
-     # Contributors are the neurons' biases and the connections' weights.
-     # Thus if one counts all the contributors as N,
-     # the learning constant should be at most 1/N.
-     # But there are other considerations, such as how noisy the data is.
-     # In any case, I'm calling this N value FeedForward#mu.
-     # 1/mu is used for the initial default value for the learning constant.
-     def mu
-       sum = 1.0
-       1.upto(self.length-1) do |i|
-         n, m = self[i-1].length, self[i].length
-         sum += n + n*m
-       end
-       return sum
-     end
-     # Given that the learning constant is initially set to 1/mu as defined above,
-     # muk gives a way to modify the learning constant by some factor, k.
-     # In theory, when there is no noise in the target data, k can be set to 1.0.
-     # If the data is noisy, k is set to some value less than 1.0.
-     def muk(k=1.0)
-       @learning = k/mu
-     end
-     # Given that the learning constant can be modified by some factor k with #muk,
-     # #num gives an alternate way to express
-     # the k factor in terms of some number n greater than 1, setting k to 1/sqrt(n).
-     # I believe that the optimal value for the learning constant
-     # for a training set of size n is somewhere between #muk(1) and #num(n).
-     # Whereas the learning constant can be too high,
-     # a low learning constant just increases the training time.
-     def num(n)
-       muk(1.0/(Math.sqrt(n)))
-     end
-
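As a worked instance of the mu arithmetic above, a hypothetical [3, 3, 3] network contributes 3 + 3*3 for each pair of adjacent layers, plus the initial 1.0:

    ff = Neuronet::FeedForward.new([3, 3, 3])
    ff.mu         #=> 25.0   (1.0 + (3 + 3*3) + (3 + 3*3))
    ff.learning   #=> 0.04   (the default, 1.0 / mu)
    ff.muk(0.5)   # scales the learning constant by k: 0.02
    ff.num(100)   # sets k to 1/sqrt(100), so learning becomes 0.004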
-     attr_reader :in, :out
-     attr_reader :yin, :yang
-     attr_accessor :learning
-
-     # I find it very useful to name certain layers:
-     #   [0]  @in    Input Layer
-     #   [1]  @yin   Typically the first middle layer
-     #   [-2] @yang  Typically the last middle layer
-     #   [-1] @out   Output Layer
-     def initialize(layers)
-       super(length = layers.length)
-       @in = self[0] = Neuronet::InputLayer.new(layers[0])
-       (1).upto(length-1){|index|
-         self[index] = Neuronet::Layer.new(layers[index])
-         self[index].connect(self[index-1])
-       }
-       @out = self.last
-       @yin = self[1] # first middle layer
-       @yang = self[-2] # last middle layer
-       @learning = 1.0/mu
-     end
-
-     def update
-       # update up the layers
-       (1).upto(self.length-1){|index| self[index].partial}
-     end
-
-     def set(inputs)
-       @in.set(inputs)
-       update
-     end
-
-     def train!(targets)
-       @out.train(targets, @learning)
-       update
-     end
-
-     # trains an input/output pair
-     def exemplar(inputs, targets)
-       set(inputs)
-       train!(targets)
-     end
-
-     def input
-       @in.values
-     end
-
-     def output
-       @out.values
-     end
-   end
-
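A minimal end-to-end sketch of the 6.0 workflow these methods describe (layer sizes and data are made up for illustration):

    net = Neuronet::FeedForward.new([2, 4, 1])

    # Repeatedly train a single input/target pair.
    100.times { net.exemplar([0.3, -0.7], [0.5]) }

    net.set([0.3, -0.7])   # enter inputs and update the network
    net.output             #=> approaches [0.5] with enough training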
-   # Neuronet::Scale is a class to
-   # help scale problems to fit within a network's "field of view".
-   # Given a list of values, it finds the minimum and maximum values and
-   # establishes a mapping to a scaled set of numbers between minus one and one (-1,1).
-   class Scale
-     attr_accessor :spread, :center
-     attr_writer :init
-
-     # If the value of center is provided, then
-     # that value will be used instead of
-     # calculating it from the values passed to method set.
-     # Likewise, if spread is provided, that value of spread will be used.
-     # The attribute @init flags whether
-     # there is an initiation phase to the calculation of @spread and @center.
-     # For Scale, @init is true and the initiation phase calculates
-     # the intermediate values @min and @max (the minimum and maximum values in the data set).
-     # It's possible for subclasses of Scale, such as Gaussian, to not have this initiation phase.
-     def initialize(factor=1.0,center=nil,spread=nil)
-       @factor,@center,@spread = factor,center,spread
-       @centered, @spreaded = center.nil?, spread.nil?
-       @init = true
-     end
-
-     def set_init(inputs)
-       @min, @max = inputs.minmax
-     end
-
-     # In this case, inputs is unused, but
-     # it's there for the general case.
-     def set_spread(inputs)
-       @spread = (@max - @min) / 2.0
-     end
-
-     # In this case, inputs is unused, but
-     # it's there for the general case.
-     def set_center(inputs)
-       @center = (@max + @min) / 2.0
-     end
-
-     def set(inputs)
-       set_init(inputs) if @init
-       set_center(inputs) if @centered
-       set_spread(inputs) if @spreaded
-     end
-
-     def mapped(inputs)
-       factor = 1.0 / (@factor*@spread)
-       inputs.map{|value| factor*(value - @center)}
-     end
-     alias mapped_input mapped
-     alias mapped_output mapped
-
-     # Note that it could also unmap inputs, but
-     # outputs are typically what's being transformed back.
-     def unmapped(outputs)
-       factor = @factor*@spread
-       outputs.map{|value| factor*value + @center}
-     end
-     alias unmapped_input unmapped
-     alias unmapped_output unmapped
-   end
-
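For example, Scale maps the data's observed [min, max] onto [-1, 1] and back (a sketch against the class above):

    scale = Neuronet::Scale.new
    data  = [2.0, 4.0, 6.0, 10.0]
    scale.set(data)               # center = 6.0, spread = 4.0
    scale.mapped(data)            #=> [-1.0, -0.5, 0.0, 1.0]
    scale.unmapped([-1.0, 1.0])   #=> [2.0, 10.0]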
-   # "Normal Distribution"
-   # Gaussian subclasses Scale and is used exactly the same way.
-   # The only changes are that it calculates the arithmetic mean (average) for center and
-   # the standard deviation for spread.
-   class Gaussian < Scale
-     def initialize(factor=1.0,center=nil,spread=nil)
-       super(factor, center, spread)
-       self.init = false
-     end
-
-     def set_center(inputs)
-       self.center = inputs.inject(0.0,:+) / inputs.length
-     end
-
-     def set_spread(inputs)
-       self.spread = Math.sqrt(inputs.map{|value|
-         self.center - value}.inject(0.0){|sum,value|
-         value*value + sum} / (inputs.length - 1.0))
-     end
-   end
-
-   # "Log-Normal Distribution"
-   # LogNormal subclasses Gaussian to transform the values to a logarithmic scale.
-   class LogNormal < Gaussian
-     def initialize(factor=1.0,center=nil,spread=nil)
-       super(factor, center, spread)
-     end
-
-     def set(inputs)
-       super( inputs.map{|value| Math::log(value)} )
-     end
-
-     def mapped(inputs)
-       super( inputs.map{|value| Math::log(value)} )
-     end
-     alias mapped_input mapped
-     alias mapped_output mapped
-
-     def unmapped(outputs)
-       super(outputs).map{|value| Math::exp(value)}
-     end
-     alias unmapped_input unmapped
-     alias unmapped_output unmapped
-   end
-
-   # ScaledNetwork is a subclass of FeedForward.
-   # It automatically scales the problem given to it
-   # by using a Scale type instance set in @distribution.
-   # The attribute, @distribution, is set to Neuronet::Gaussian.new by default,
-   # but one can change this to Scale, LogNormal, or one's own custom mapper.
-   class ScaledNetwork < FeedForward
-     attr_accessor :distribution
-
-     def initialize(layers)
-       super(layers)
-       @distribution = Gaussian.new
-     end
-
-     def train!(targets)
-       super(@distribution.mapped_output(targets))
-     end
-
-     # @param (List of Float) values
-     def set(inputs)
-       super(@distribution.mapped_input(inputs))
-     end
-
-     # ScaledNetwork#reset works just like FeedForward's set method,
-     # but calls distribution.set( values ) first.
-     # Sometimes you'll want to set the distribution
-     # with the entire data set and then use set,
-     # and there will be times you'll want to
-     # set the distribution with each input and use reset.
-     def reset(inputs)
-       @distribution.set(inputs)
-       set(inputs)
-     end
-
-     def output
-       @distribution.unmapped_output(super)
-     end
-
-     def input
-       @distribution.unmapped_input(super)
-     end
-   end
-
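For example, the set/reset distinction above plays out like this (a sketch; layer sizes and data are illustrative):

    net = Neuronet::ScaledNetwork.new([4, 4, 1])

    window = [3.2, 3.5, 3.1, 3.7]
    net.reset(window)              # re-centers @distribution on this window, then sets inputs
    net.output                     # unmapped back to the "real world" scale

    net.set([3.3, 3.6, 3.0, 3.8])  # reuses the previously set distribution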
-   # A Perceptron hybrid:
-   # Tao directly connects the output layer to the input layer.
-   module Tao
-     # Tao's extra connections add to mu.
-     def mu
-       sum = super
-       sum += self.first.length * self.last.length
-       return sum
-     end
-     # Tao.bless connects the network's output layer to the input layer,
-     # extends it with Tao, and modifies the learning constant if needed.
-     def self.bless(myself)
-       # @out directly connects to @in
-       myself.out.connect(myself.in)
-       myself.extend Tao
-       # Save current learning and set it to muk(1).
-       l, m = myself.learning, myself.muk
-       # If learning was lower before, revert.
-       myself.learning = l if l<m
-       return myself
-     end
-   end
-
-   # Yin is a network which has its @yin layer initially mirroring @in.
-   module Yin
-     # Yin.bless sets the bias of each @yin[i] to -0.5, and
-     # the weight of the pairing (@yin[i], @in[i]) connection to one.
-     # This makes @yin initially mirror @in.
-     # The pairing is done starting with (@yin[0], @in[0]).
-     # That is, starting with (@yin.first, @in.first).
-     def self.bless(myself)
-       yin = myself.yin
-       if yin.length < (in_length = myself.in.length)
-         raise "First hidden layer, yin, needs to have at least the same length as input"
-       end
-       # connections from yin[i] to in[i] are 1... mirroring to start.
-       0.upto(in_length-1) do |index|
-         node = yin[index]
-         node.connections[index].weight = 1.0
-         node.bias = -0.5
-       end
-       return myself
-     end
-   end
-
-   # Yang is a network which has its @out layer initially mirroring @yang.
-   module Yang
-     # Yang.bless sets the bias of each @out[i] to -0.5, and
-     # the weight of the pairing (@out[i], @yang[i]) connection to one.
-     # This makes @out initially mirror @yang.
-     # The pairing is done starting with (@out[-1], @yang[-1]).
-     # That is, starting with (@out.last, @yang.last).
-     def self.bless(myself)
-       offset = myself.yang.length - (out_length = (out = myself.out).length)
-       raise "Last hidden layer, yang, needs to have at least the same length as output" if offset < 0
-       # Although the algorithm here is not as described,
-       # the net effect is to pair @out.last with @yang.last, and so on down.
-       0.upto(out_length-1) do |index|
-         node = out[index]
-         node.connections[offset+index].weight = 1.0
-         node.bias = -0.5
-       end
-       return myself
-     end
-   end
-
-   # A Yin Yang composite provided for convenience.
-   module YinYang
-     def self.bless(myself)
-       Yin.bless(myself)
-       Yang.bless(myself)
-       return myself
-     end
-   end
-
-   # A Tao Yin Yang composite provided for convenience.
-   module TaoYinYang
-     def self.bless(myself)
-       Tao.bless(myself)
-       Yin.bless(myself)
-       Yang.bless(myself)
-       return myself
-     end
-   end
-
-   # A Tao Yin composite provided for convenience.
-   module TaoYin
-     def self.bless(myself)
-       Tao.bless(myself)
-       Yin.bless(myself)
-       return myself
-     end
-   end
-
-   # A Tao Yang composite provided for convenience.
-   module TaoYang
-     def self.bless(myself)
-       Tao.bless(myself)
-       Yang.bless(myself)
-       return myself
-     end
-   end
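These modules are applied by blessing an existing network, for example (a sketch; Yin and Yang require the mirroring layers to be at least as long as the layers they mirror):

    net = Neuronet::ScaledNetwork.new([3, 3, 3])
    Neuronet::TaoYinYang.bless(net)
    # @yin now mirrors @in, @out mirrors @yang, and @out also connects directly to @in.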
+ # frozen_string_literal: true

+ # Neuronet is a neural network library for Ruby.
+ module Neuronet
+   VERSION = '7.0.230416'
+   require_relative 'neuronet/constants'
+   autoload :Connection, 'neuronet/connection'
+   autoload :Neuron, 'neuronet/neuron'
+   autoload :Layer, 'neuronet/layer'
+   autoload :FeedForward, 'neuronet/feed_forward'
+   autoload :Scale, 'neuronet/scale'
+   autoload :Gaussian, 'neuronet/gaussian'
+   autoload :LogNormal, 'neuronet/log_normal'
+   autoload :ScaledNetwork, 'neuronet/scaled_network'
  end