neuronet 6.1.0 → 7.0.230416

@@ -0,0 +1,65 @@
+ # frozen_string_literal: true
+
+ # Neuronet module / Connection class
+ module Neuronet
+   # Connections between neurons are their own separate objects. In Neuronet,
+   # a neuron contains its bias and a list of its connections. Each connection
+   # contains its weight (strength) and connected neuron.
+   class Connection
+     attr_accessor :neuron, :weight
+
+     # Connection#initialize takes a neuron and a weight with a default of 0.0.
+     def initialize(neuron = Neuron.new, weight: 0.0)
+       @neuron = neuron
+       @weight = weight
+     end
+
+     # The connection's mu is the activation of the connected neuron.
+     def mu = @neuron.activation
+     alias activation mu
+
+     # The connection's mju is 𝑾𝓑𝒂'.
+     def mju = @weight * @neuron.derivative
+
+     # The connection's kappa is a component of the neuron's sum kappa:
+     #   𝜿 := 𝑾 𝝀'
+     def kappa = @weight * @neuron.lamda
+
+     # The weighted activation of the connected neuron.
+     def weighted_activation = @neuron.activation * @weight
+
+     # Consistent with #update
+     alias partial weighted_activation
+
+     # Connection#update returns the updated activation of a connection, which
+     # is the weighted updated activation of the neuron it's connected to:
+     #   weight * neuron.update
+     # This method is the one to use whenever the values of the inputs have
+     # changed (or right after training). Otherwise, both update and partial
+     # should give the same result. When back calculation is not needed, use
+     # Connection#weighted_activation instead.
+     def update = @neuron.update * @weight
+
+     # Connection#backpropagate modifies the connection's weight in proportion
+     # to the error given and passes that error to its connected neuron via
+     # the neuron's backpropagate method.
+     def backpropagate(error)
+       @weight += @neuron.activation * Neuronet.noise[error]
+       if @weight.abs > Neuronet.maxw
+         @weight = @weight.positive? ? Neuronet.maxw : -Neuronet.maxw
+       end
+       @neuron.backpropagate(error)
+       self
+     end
+     # As for how to reduce the error, the above makes it obvious how to
+     # interpret the equipartition of errors among the connections.
+     # Backpropagation is symmetric to forward propagation of errors. The
+     # error variable is the reduced error, 𝛆 (see the wiki notes).
+
+     # A connection inspects itself as "weight*label:...".
+     def inspect = "#{Neuronet.format % @weight}*#{@neuron.inspect}"
+
+     # A connection puts itself as "weight*label".
+     def to_s = "#{Neuronet.format % @weight}*#{@neuron}"
+   end
+ end
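
A minimal usage sketch for the Connection class above, assuming the gem loads as require 'neuronet' and using the Neuron class shown in a later hunk: connect one neuron to another and compare the cached and recomputed weighted activations.

    require 'neuronet' # assumes the gem is loaded under this name

    input  = Neuronet::Neuron.new(1.0) # input neuron with value 1.0
    output = Neuronet::Neuron.new
    output.connect(input, weight: 0.5)

    connection = output.connections.first
    connection.weighted_activation # => 0.5 * input.activation, cached
    connection.update              # => same value, but recomputes the neuron first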
@@ -0,0 +1,110 @@
+ # frozen_string_literal: true
+
+ # Neuronet module / Constants
+ module Neuronet
+   # Neuronet allows one to set the format to use for displaying float values,
+   # mostly used in the inspect methods.
+   # [Docs](https://docs.ruby-lang.org/en/master/format_specifications_rdoc.html)
+   FORMAT = '%.13g'
+
+   # An artificial neural network uses a squash function to determine the
+   # activation value of a neuron. The squash function for Neuronet is the
+   # [Sigmoid function](http://en.wikipedia.org/wiki/Sigmoid_function), which
+   # sets the neuron's activation value between 0.0 and 1.0. This activation
+   # value is often thought of as on/off or true/false. For classification
+   # problems, activation values near one are considered true while activation
+   # values near 0.0 are considered false. In Neuronet I make a distinction
+   # between the neuron's activation value and its representation to the
+   # problem. This attribute, activation, need never appear in an
+   # implementation of Neuronet, but it is mapped back to its unsquashed value
+   # every time the implementation asks for the neuron's value. One should
+   # scale the problem with most data points between -1 and 1, extremes within
+   # two standard deviations, and no outliers beyond three. Standard deviations
+   # from the mean are probably a good way to figure the scale of the problem.
+   SQUASH = ->(unsquashed) { 1.0 / (1.0 + Math.exp(-unsquashed)) }
+   UNSQUASH = ->(squashed) { Math.log(squashed / (1.0 - squashed)) }
+   DERIVATIVE = ->(squash) { squash * (1.0 - squash) }
+
+   # I'll want to have a neuron roughly mirror another later. Let [v] be the
+   # squash of v. Consider:
+   #   v = b + w*[v]
+   # There are no constants b and w that satisfy the above equation for all v,
+   # but one can satisfy the equation for v in {-1, 0, 1}. Find b and w such
+   # that:
+   #   A:  0 = b + w*[0]
+   #   B:  1 = b + w*[1]
+   #   C: -1 = b + w*[-1]
+   # Use A and B to solve for b and w:
+   #   A:  0 = b + w*[0]
+   #       b = -w*[0]
+   #   B:  1 = b + w*[1]
+   #       1 = -w*[0] + w*[1]
+   #       1 = w*(-[0] + [1])
+   #       w = 1/([1] - [0])
+   #       b = -[0]/([1] - [0])
+   # Verify A, B, and C:
+   #   A:  0 = b + w*[0]
+   #       0 = -[0]/([1] - [0]) + [0]/([1] - [0])
+   #       0 = 0 # OK
+   #   B:  1 = b + w*[1]
+   #       1 = -[0]/([1] - [0]) + [1]/([1] - [0])
+   #       1 = ([1] - [0])/([1] - [0])
+   #       1 = 1 # OK
+   # Using the squash function identity, [v] = 1 - [-v]:
+   #   C: -1 = b + w*[-1]
+   #      -1 = -[0]/([1] - [0]) + [-1]/([1] - [0])
+   #      -1 = ([-1] - [0])/([1] - [0])
+   #      [0] - [1] = [-1] - [0]
+   #      [0] - [1] = 1 - [1] - [0] # Identity substitution.
+   #      [0] = 1 - [0] # OK, since [0] = 0.5 (and 0 = -0).
+   # Evaluate given that [0] = 0.5:
+   #   b = -[0]/([1] - [0])
+   #   b = [0]/([0] - [1])
+   #   b = 0.5/(0.5 - [1])
+   #   w = 1/([1] - [0])
+   #   w = 1/([1] - 0.5)
+   #   w = -2 * 0.5/(0.5 - [1])
+   #   w = -2 * b
+   BZERO = 0.5 / (0.5 - SQUASH[1.0])
+   WONE = -2.0 * BZERO
+
+   # Although the implementation is free to set all parameters for each neuron,
+   # Neuronet by default creates zeroed neurons. Associations between inputs
+   # and outputs are trained, and neurons differentiate from each other randomly.
+   # Differentiation among neurons is achieved by noise in the back-propagation
+   # of errors. This noise is provided by rand + rand. I chose rand + rand to
+   # give the noise an average value of one and a bell shape distribution.
+   NOISE = ->(error) { error * (rand + rand) }
+
+   # One may choose not to have noise.
+   NO_NOISE = ->(error) { error }
+
+   # To keep components bounded, Neuronet limits the weights, biases, and
+   # values. Note that on a 64-bit machine SQUASH[37] rounds to 1.0, and
+   # SQUASH[9] is 0.99987...
+   MAXW = 9.0  # Maximum weight
+   MAXB = 18.0 # Maximum bias
+   MAXV = 36.0 # Maximum value
+
+   # Mu learning factor.
+   LEARNING = 1.0
+
+   # The above constants are the defaults for Neuronet. They are set below in
+   # accessible module attributes. The user may change these to suit their
+   # needs.
+   class << self
+     attr_accessor :format, :squash, :unsquash, :derivative, :bzero, :wone,
+                   :noise, :maxw, :maxb, :maxv, :learning
+   end
+   self.squash = SQUASH
+   self.unsquash = UNSQUASH
+   self.derivative = DERIVATIVE
+   self.bzero = BZERO
+   self.wone = WONE
+   self.noise = NOISE
+   self.format = FORMAT
+   self.maxw = MAXW
+   self.maxb = MAXB
+   self.maxv = MAXV
+   self.learning = LEARNING
+ end
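
Two small sketches for the hunk above, assuming the constants load under the Neuronet namespace: the first numerically checks the BZERO/WONE derivation, the second swaps defaults through the module attributes.

    # v = BZERO + WONE*[v] should hold exactly at v = -1, 0, and 1.
    [-1.0, 0.0, 1.0].each do |v|
      mirrored = Neuronet::BZERO + (Neuronet::WONE * Neuronet::SQUASH[v])
      puts format('%g ~ %g', v, mirrored) # => "-1 ~ -1", "0 ~ 0", "1 ~ 1"
    end

    # The defaults are plain module attributes, so they can be swapped out:
    Neuronet.noise = Neuronet::NO_NOISE     # train deterministically
    Neuronet.format = '%.3g'                # shorter inspect output
    Neuronet.unsquash[Neuronet.squash[2.0]] # => ~2.0, round trip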
@@ -0,0 +1,89 @@
+ # frozen_string_literal: true
+
+ # Neuronet module / FeedForward class
+ module Neuronet
+   # A Feed Forward Network
+   class FeedForward < Array
+     # Example:
+     #   ff = Neuronet::FeedForward.new([2, 3, 1])
+     def initialize(layers)
+       length = layers.length
+       raise 'Need at least 2 layers' if length < 2
+
+       super(length) { Layer.new(layers[_1]) }
+       1.upto(length - 1) { self[_1].connect(self[_1 - 1]) }
+     end
+
+     # Set the input layer.
+     def set(input)
+       first.set(input)
+       self
+     end
+
+     def input = first.values
+
+     # Update the network.
+     def update
+       # update up the layers
+       1.upto(length - 1) { self[_1].partial }
+       self
+     end
+
+     def output = last.values
+
+     # Consider:
+     #   m = Neuronet::FeedForward.new(layers)
+     # Want:
+     #   output = m * input
+     def *(other)
+       set(other)
+       update
+       last.values
+     end
+
+     # 𝝁 + 𝜧 𝝁' + 𝜧 𝜧'𝝁" + 𝜧 𝜧'𝜧"𝝁"' + ...
+     #   |𝜧| ~ |𝑾||𝓑𝒂|
+     #   |∑𝑾| ~ √𝑁
+     #   |𝓑𝒂| ~ ¼
+     #   |𝝁| ~ 1+∑|𝒂'| ~ 1+½𝑁
+     def expected_mju!
+       sum = 0.0
+       mju = 1.0
+       reverse[1..].each do |layer|
+         n = layer.length
+         sum += mju * (1.0 + (0.5 * n))
+         mju *= 0.25 * Math.sqrt(layer.length)
+       end
+       @expected_mju = Neuronet.learning * sum
+     end
+
+     def expected_mju
+       @expected_mju || expected_mju!
+     end
+
+     def average_mju
+       last.average_mju
+     end
+
+     def train(target, mju = expected_mju)
+       last.train(target, mju)
+       self
+     end
+
+     def pair(input, target, mju = expected_mju)
+       set(input).update.train(target, mju)
+     end
+
+     def pairs(pairs, mju = expected_mju)
+       pairs.shuffle.each { |input, target| pair(input, target, mju) }
+       return self unless block_given?
+
+       pairs.shuffle.each { |i, t| pair(i, t, mju) } while yield
+       self
+     end
+
+     def inspect = map(&:inspect).join("\n")
+
+     def to_s = map(&:to_s).join("\n")
+   end
+ end
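
A usage sketch for the class above, assuming the gem is loaded; the numbers in the comments are approximate.

    ff = Neuronet::FeedForward.new([2, 3, 1])
    ff.expected_mju                       # => ~3.37 for this 2-3-1 topology
    ff * [1.0, -1.0]                      # => [0.0] while weights and biases are zero
    ff.pair([1.0, -1.0], [0.5])           # set, update, train once
    ff.pairs([[[1.0, -1.0], [0.5]]] * 20) # shuffle-train a small batch
    ff * [1.0, -1.0]                      # => output nudged toward 0.5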
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ # Neuronet module
+ module Neuronet
+   # "Normal Distribution"
+   # Gaussian sub-classes Scale and is used exactly the same way. The only
+   # changes are that it calculates the arithmetic mean (average) for center
+   # and the standard deviation for spread.
+   class Gaussian < Scale
+     def set(inputs)
+       @center ||= inputs.sum.to_f / inputs.length
+       unless @spread
+         sum2 = inputs.map { @center - _1 }.sum { _1 * _1 }.to_f
+         @spread = Math.sqrt(sum2 / (inputs.length - 1.0))
+       end
+       self
+     end
+   end
+ end
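
A sketch of how the class above picks its parameters, assuming the gem is loaded; values in comments are approximate.

    gaussian = Neuronet::Gaussian.new
    gaussian.set([1.0, 2.0, 3.0, 4.0, 5.0])
    gaussian.center             # => 3.0, the mean
    gaussian.spread             # => ~1.58, the sample standard deviation
    gaussian.mapped([3.0, 5.0]) # => [0.0, ~1.26]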
@@ -0,0 +1,111 @@
+ # frozen_string_literal: true
+
+ # Neuronet module
+ module Neuronet
+   # Layer is an array of neurons.
+   class Layer < Array
+     # Length is the number of neurons in the layer.
+     def initialize(length)
+       super(length) { Neuron.new }
+     end
+
+     # This is where one enters the "real world" inputs.
+     def set(inputs)
+       0.upto(length - 1) { self[_1].value = inputs[_1] || 0.0 }
+       self
+     end
+
+     # Returns the real world values: [value, ...]
+     def values
+       map(&:value)
+     end
+
+     # Allows one to fully connect layers.
+     def connect(layer = Layer.new(length), weights: [])
+       # creates the neuron matrix...
+       each_with_index do |neuron, i|
+         weight = weights[i] || 0.0
+         layer.each { neuron.connect(_1, weight:) }
+       end
+       # The layer is returned for chaining.
+       layer
+     end
+
+     # Set the layer to mirror the input:
+     #   bias   = BZERO
+     #   weight = WONE
+     # The input should be the same size as the layer. One can set sign to -1
+     # to anti-mirror. One can set sign to other than |1| to scale.
+     def mirror(sign = 1)
+       each_with_index do |neuron, index|
+         neuron.bias = sign * Neuronet.bzero
+         neuron.connections[index].weight = sign * Neuronet.wone
+       end
+     end
+
+     # Doubles up the input, mirroring and anti-mirroring it. The layer should
+     # be twice the size of the input.
+     def antithesis
+       sign = 1
+       each_with_index do |n, i|
+         n.connections[i / 2].weight = sign * Neuronet.wone
+         n.bias = sign * Neuronet.bzero
+         sign = -sign
+       end
+     end
+
+     # Sums two corresponding input neurons above each neuron in the layer.
+     # The input should be twice the size of the layer.
+     def synthesis
+       semi = Neuronet.wone / 2
+       each_with_index do |n, i|
+         j = i * 2
+         c = n.connections
+         n.bias = Neuronet.bzero
+         c[j].weight = semi
+         c[j + 1].weight = semi
+       end
+     end
+
+     # Set the layer to average the input.
+     def average(sign = 1)
+       bias = sign * Neuronet.bzero
+       each do |n|
+         n.bias = bias
+         weight = sign * Neuronet.wone / n.connections.length
+         n.connections.each { _1.weight = weight }
+       end
+     end
+
+     # Updates the layer with the current values of the previous layer.
+     def partial
+       each(&:partial)
+     end
+
+     def average_mju
+       Neuronet.learning * sum { Neuron.mju(_1) } / length
+     end
+
+     # Takes the real world target for each neuron in this layer and
+     # backpropagates the error to each neuron.
+     def train(target, mju = nil)
+       0.upto(length - 1) do |index|
+         neuron = self[index]
+         error = (target[index] - neuron.value) /
+                 (mju || (Neuronet.learning * Neuron.mju(neuron)))
+         neuron.backpropagate(error)
+       end
+       self
+     end
+
+     # Layer inspects as "label:value,..."
+     def inspect
+       map(&:inspect).join(',')
+     end
+
+     # Layer puts as "label,..."
+     def to_s
+       map(&:to_s).join(',')
+     end
+   end
+ end
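
A sketch of connecting and mirroring layers as described above, assuming the gem is loaded; the mirroring is only exact at -1, 0, and 1, so the output values are approximate.

    input  = Neuronet::Layer.new(2)
    hidden = Neuronet::Layer.new(2)
    hidden.connect(input)  # fully connect hidden to input
    hidden.mirror          # bias = BZERO, matching weight = WONE
    input.set([0.5, -0.5])
    hidden.partial
    hidden.values          # => roughly [0.5, -0.5]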
@@ -0,0 +1,21 @@
+ # frozen_string_literal: true
+
+ # Neuronet module
+ module Neuronet
+   # "Log-Normal Distribution"
+   # LogNormal sub-classes Gaussian to transform the values to a logarithmic
+   # scale.
+   class LogNormal < Gaussian
+     def set(inputs)
+       super(inputs.map { |value| Math.log(value) })
+     end
+
+     def mapped(inputs)
+       super(inputs.map { |value| Math.log(value) })
+     end
+
+     def unmapped(outputs)
+       super(outputs).map { |value| Math.exp(value) }
+     end
+   end
+ end
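
A sketch of the log transform in the class above, assuming the gem is loaded; values in comments are approximate.

    lognormal = Neuronet::LogNormal.new
    lognormal.set([1.0, 10.0, 100.0])
    lognormal.center                             # => ~2.3, the mean of the logs
    lognormal.unmapped(lognormal.mapped([10.0])) # => [~10.0], round trip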
@@ -0,0 +1,146 @@
+ # frozen_string_literal: true
+
+ # Neuronet module / Neuron class
+ module Neuronet
+   # A Neuron is capable of creating connections to other neurons. The
+   # connections attribute is a list of the neuron's connections to other
+   # neurons. A neuron's bias is its kicker (or deduction) to its activation
+   # value, a sum of its connections' values.
+   class Neuron
+     # For bookkeeping, each Neuron is given a label, starting with 'a' by
+     # default.
+     class << self; attr_accessor :label; end
+     Neuron.label = 'a'
+
+     attr_reader :label, :activation, :connections
+     attr_accessor :bias
+
+     # The neuron's mu is the sum of the connections' mu (activation), plus
+     # one for the bias:
+     #   𝛍 := 1+∑𝐚'
+     def mu
+       return 0.0 if @connections.empty?
+
+       1 + @connections.sum(&:mu)
+     end
+
+     # Reference the library's wiki:
+     #   𝒆ₕ ~ 𝜀(𝝁ₕ + 𝜧ₕⁱ𝝁ᵢ + 𝜧ₕⁱ𝜧ᵢʲ𝝁ⱼ + 𝜧ₕⁱ𝜧ᵢʲ𝜧ⱼᵏ𝝁ₖ + ...)
+     # 𝜧ₕⁱ𝝁ᵢ is:
+     #   neuron.mju { |connected_neuron| connected_neuron.mu }
+     # 𝜧ₕⁱ𝜧ᵢʲ𝝁ⱼ is:
+     #   nh.mju { |ni| ni.mju { |nj| nj.mu } }
+     def mju(&block)
+       @connections.sum { _1.mju * block[_1.neuron] }
+     end
+
+     # Full recursive implementation of mju:
+     def self.mju(neuron)
+       return 0.0 if neuron.connections.empty?
+
+       neuron.mu + neuron.mju { |connected_neuron| Neuron.mju(connected_neuron) }
+     end
+
+     # 𝓓𝒗⌈𝒗⌉ = (1-⌈𝒗⌉)⌈𝒗⌉ = (1-𝒂)𝒂 = 𝓑𝒂
+     def derivative = Neuronet.derivative[@activation]
+
+     # 𝝀 = 𝓑𝒂𝛍
+     def lamda = derivative * mu
+
+     # 𝜿 := 𝜧 𝝁' = 𝑾 𝓑𝒂'𝝁' = 𝑾 𝝀'
+     # def kappa = mju(&:mu)
+     def kappa = @connections.sum(&:kappa)
+
+     # 𝜾 := 𝜧 𝜧' 𝝁" = 𝜧 𝜿'
+     def iota = mju(&:kappa)
+
+     # One can explicitly set the neuron's value, typically used to set the
+     # input neurons. The given "real world" value is squashed into the
+     # neuron's activation value.
+     def value=(value)
+       # If value is out of bounds, set it to the bound.
+       if value.abs > Neuronet.maxv
+         value = value.positive? ? Neuronet.maxv : -Neuronet.maxv
+       end
+       @activation = Neuronet.squash[value]
+     end
+
+     # The "real world" value of the neuron is the unsquashed activation value.
+     def value = Neuronet.unsquash[@activation]
+
+     # The initialize method sets the neuron's value, bias, and connections.
+     def initialize(value = 0.0, bias: 0.0, connections: [])
+       self.value = value
+       @connections = connections
+       @bias = bias
+       @label = Neuron.label
+       Neuron.label = Neuron.label.next
+     end
+
+     # Updates the activation with the current value of bias and updated
+     # values of connections.
+     def update
+       return @activation if @connections.empty?
+
+       self.value = @bias + @connections.sum(&:update)
+       @activation
+     end
+
+     # For when connections are already updated, Neuron#partial updates the
+     # activation with the current values of bias and connections. It is not
+     # always necessary to burrow all the way down to the terminal input
+     # neuron to update the current neuron if its connected neurons have all
+     # been updated. The implementation should set its algorithm to use
+     # partial instead of update, as update will most likely needlessly update
+     # previously updated neurons.
+     def partial
+       return @activation if @connections.empty?
+
+       self.value = @bias + @connections.sum(&:partial)
+       @activation
+     end
+
+     # The backpropagate method modifies the neuron's bias in proportion to
+     # the given error and passes on this error to each of its connections'
+     # backpropagate method. While updates flow from input to output,
+     # back-propagation of errors flows from output to input.
+     def backpropagate(error)
+       return self if @connections.empty?
+
+       @bias += Neuronet.noise[error]
+       if @bias.abs > Neuronet.maxb
+         @bias = @bias.positive? ? Neuronet.maxb : -Neuronet.maxb
+       end
+       @connections.each { |connection| connection.backpropagate(error) }
+       self
+     end
+
+     # Connects the neuron to another neuron. The default weight of 0.0 means
+     # there is no initial association. The connect method is how the
+     # implementation adds a connection, the way to connect one neuron to
+     # another. To connect "output" to "input", for example, it is:
+     #   input = Neuronet::Neuron.new
+     #   output = Neuronet::Neuron.new
+     #   output.connect(input)
+     # Think "output" connects to "input".
+     def connect(neuron = Neuron.new, weight: 0.0)
+       @connections.push(Connection.new(neuron, weight:))
+       # Note that we're returning the connected neuron:
+       neuron
+     end
+
+     # Tacks on to the neuron's inspect method to show the neuron's bias and
+     # connections.
+     def inspect
+       fmt = Neuronet.format
+       if @connections.empty?
+         "#{@label}:#{fmt % value}"
+       else
+         "#{@label}:#{fmt % value}|#{[(fmt % @bias), *@connections].join('+')}"
+       end
+     end
+
+     # A neuron plainly puts itself as its label.
+     def to_s = @label
+   end
+ end
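
A sketch of the pieces above, assuming the gem is loaded: mu and the recursive mju on a fresh pair of zeroed neurons (activation 0.5, derivative 0.25), then an update/backpropagate cycle on a three-neuron chain.

    a = Neuronet::Neuron.new
    b = Neuronet::Neuron.new
    b.connect(a, weight: 1.0)
    b.mu                    # => 1.5, i.e. 1 + a.activation
    Neuronet::Neuron.mju(b) # => 1.5, since Neuron.mju(a) is 0.0 (no connections)

    x = Neuronet::Neuron.new(1.0) # input neuron, value set explicitly
    y = Neuronet::Neuron.new
    z = Neuronet::Neuron.new
    y.connect(x)
    z.connect(y)
    z.update             # burrows down: y updates from x, then z from y
    z.value              # => 0.0 while all biases and weights are zero
    z.backpropagate(0.1) # nudges z's bias and weights, then y's, with noise
    z.update
    z.value              # => nudged away from 0.0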
@@ -0,0 +1,50 @@
+ # frozen_string_literal: true
+
+ # Neuronet module
+ module Neuronet
+   # Neuronet::Scale is a class to help scale problems to fit within a network's
+   # "field of view". Given a list of values, it finds the minimum and maximum
+   # values and establishes a mapping to a scaled set of numbers between minus
+   # one and one (-1,1).
+   class Scale
+     attr_accessor :spread, :center
+
+     # If the value of center is provided, then that value will be used
+     # instead of calculating it from the values passed to method #set.
+     # Likewise, if spread is provided,
+     # that value of spread will be used.
+     def initialize(factor: 1.0, center: nil, spread: nil)
+       @factor = factor
+       @center = center
+       @spread = spread
+     end
+
+     def set(inputs)
+       min, max = inputs.minmax
+       @center ||= (max + min) / 2.0
+       @spread ||= (max - min) / 2.0
+       self
+     end
+
+     def reset(inputs)
+       @center = @spread = nil
+       set(inputs)
+     end
+
+     def mapped(inputs)
+       factor = 1.0 / (@factor * @spread)
+       inputs.map { |value| factor * (value - @center) }
+     end
+     alias mapped_input mapped
+     alias mapped_output mapped
+
+     # Note that it could also unmap inputs, but
+     # outputs is typically what's being transformed back.
+     def unmapped(outputs)
+       factor = @factor * @spread
+       outputs.map { |value| (factor * value) + @center }
+     end
+     alias unmapped_input unmapped
+     alias unmapped_output unmapped
+   end
+ end
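
A sketch of the mapping above, assuming the gem is loaded.

    scale = Neuronet::Scale.new
    scale.set([2.0, 4.0, 6.0])
    scale.center                # => 4.0
    scale.spread                # => 2.0
    scale.mapped([2.0, 6.0])    # => [-1.0, 1.0]
    scale.unmapped([-1.0, 1.0]) # => [2.0, 6.0]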
@@ -0,0 +1,50 @@
+ # frozen_string_literal: true
+
+ # Neuronet module
+ module Neuronet
+   # ScaledNetwork is a subclass of FeedForward.
+   # It automatically scales the problem given to it
+   # by using a Scale type instance set in @distribution.
+   # The attribute, @distribution, is set to Neuronet::Gaussian.new by default,
+   # but one can change this to Scale, LogNormal, or one's own custom mapper.
+   class ScaledNetwork < FeedForward
+     attr_accessor :distribution, :reset
+
+     def initialize(layers, distribution: Gaussian.new, reset: false)
+       super(layers)
+       @distribution = distribution
+       @reset = reset
+     end
+
+     # ScaledNetwork#set works just like FeedForward's set method,
+     # but calls @distribution.reset(input) first if @reset is true.
+     # Sometimes you'll want to set the distribution with the entire data set,
+     # and then there will be times you'll want to reset the distribution
+     # with each input.
+     def set(input)
+       @distribution.reset(input) if @reset
+       super(@distribution.mapped_input(input))
+     end
+
+     def input
+       @distribution.unmapped_input(super)
+     end
+
+     def output
+       @distribution.unmapped_output(super)
+     end
+
+     def *(_other)
+       @distribution.unmapped_output(super)
+     end
+
+     def train(target, mju = expected_mju)
+       super(@distribution.mapped_output(target), mju)
+     end
+
+     def inspect
+       distribution = @distribution.class.to_s.split(':').last
+       "#distribution:#{distribution} #reset:#{@reset}\n" + super
+     end
+   end
+ end
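
A sketch of the automatic scaling described above, assuming the gem is loaded; with reset: true the distribution is refit to each input.

    sn = Neuronet::ScaledNetwork.new([3, 3], reset: true)
    sn * [10.0, 20.0, 30.0]      # input is mapped in, output is unmapped out
    sn.train([11.0, 21.0, 31.0]) # target is mapped with the same distribution
    sn.inspect                   # => "#distribution:Gaussian #reset:true\n..."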