neuronet 6.0.1 → 6.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/README.md +91 -19
- data/lib/neuronet.rb +75 -15
- metadata +2 -2
data/README.md
CHANGED
@@ -315,7 +315,7 @@ For the example above, we can check their lengths.
 puts neuronet.yang.length #=> 6
 puts neuronet.out.length #=> 3
 
-## Tao, Yin, and Yang
+## Tao, Yin, Yang, and Brahma
 
 Tao
 : The absolute principle underlying the universe,
@@ -346,6 +346,8 @@ Training begins the process that sets the weights to associate the two.
 But you can also manually set the initial weights.
 One useful way to initially set the weights is to have one layer mirror another.
 The [Yin](http://rubydoc.info/gems/neuronet/Neuronet/Yin) bless makes yin mirror the input.
+The length of yin must be at least that of in.
+The pairing starts with in.first and yin.first on up.
 
 Yin.bless(neuronet)
 
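What "yin mirrors in" means can be sketched outside the gem. The snippet below is not the gem's code: it assumes Neuronet's squash is the standard logistic sigmoid, and uses the mirror constants (BZERO, WONE) derived in the README's Mirroring section to show that a blessed yin neuron's unsquashed value reproduces the input exactly at -1, 0, and 1.

```ruby
# A minimal sketch of "yin mirrors in" (assumes the standard logistic sigmoid;
# BZERO and WONE are the mirror constants derived in the Mirroring section).
def squash(x)
  1.0 / (1.0 + Math.exp(-x))
end

BZERO = 1.0 / (1.0 - 2.0 * squash(1.0))
WONE  = -2.0 * BZERO

# Unsquashed value of a blessed yin neuron, given an input neuron holding v:
mirror = ->(v) { WONE * squash(v) + BZERO }

[-1.0, 0.0, 1.0].each { |v| puts "#{v} -> #{mirror.call(v).round(6)}" }
#=> -1.0 -> -1.0
#=> 0.0 -> 0.0
#=> 1.0 -> 1.0
```

Values strictly between the fixed points come out slightly warped, which is why the README calls mirroring approximate.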
@@ -353,11 +355,24 @@ Yang
 : The active male principle of the universe, characterized as male and
 creative and associated with heaven, heat, and light.
 
-
+On the other hand, the [Yang](http://rubydoc.info/gems/neuronet/Neuronet/Yang)
 bless makes the output mirror yang.
+The length of yang must be at least that of out.
+The pairing starts from yang.last and out.last on down.
 
 Yang.bless(neuronet)
 
+Brahma
+: The creator god in later Hinduism, who forms a triad with Vishnu the preserver and Shiva the destroyer.
+
+[Brahma](http://rubydoc.info/gems/neuronet/Neuronet/Brahma)
+pairs each input node with two yin neurons, sending them, respectively, the positive and negative values of its activation.
+I'd say then that yin both mirrors and shadows input.
+The length of yin must be at least twice that of in.
+The pairing starts with in.first and yin.first on up.
+
+Brahma.bless(neuronet)
+
 Bless
 : Pronounce words in a religious rite, to confer or invoke divine favor upon.
 
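The mirror-and-shadow pairing can be illustrated with a short standalone sketch (not the gem's code; it assumes the standard logistic sigmoid and the BZERO/WONE constants from the Mirroring section). Each input value v drives its even yin partner toward +v and its odd partner toward -v:

```ruby
def squash(x)
  1.0 / (1.0 + Math.exp(-x))
end

BZERO = 1.0 / (1.0 - 2.0 * squash(1.0))
WONE  = -2.0 * BZERO

# Unsquashed values of the even (mirror) and odd (shadow) yin neurons
# paired with an input neuron holding v:
even = ->(v) {  WONE * squash(v) + BZERO }
odd  = ->(v) { -WONE * squash(v) - BZERO }

[-1.0, 0.0, 1.0].each do |v|
  printf("%4.1f -> mirror %4.1f, shadow %4.1f\n", v, even.call(v), odd.call(v))
end
```

At the fixed points the shadow is exactly the negated mirror, which is the effect Brahma.bless sets up.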
@@ -493,26 +508,10 @@ would be constructed this way:
 Here yinyang's hidden layer (which is both yin and yang)
 initially would have the first n neurons mirror the input and
 the last m neurons be mirrored by the output.
-Another interesting YinYang would be:
+Another interesting YinYang would be an input to output mirror:
 
 yinyang = YinYang.bless FeedForward.new( [n, n, n] )
 
-The following code demonstrates what is meant by "mirroring":
-
-yinyang = YinYang.bless FeedForward.new( [3, 3, 3] )
-yinyang.set( [-1,0,1] )
-puts yinyang.in.map{|x| x.activation}.join(', ')
-puts yinyang.yin.map{|x| x.activation}.join(', ')
-puts yinyang.out.map{|x| x.activation}.join(', ')
-puts yinyang.output.join(', ')
-
-Here's the output:
-
-0.268941421369995, 0.5, 0.731058578630005
-0.442490985892539, 0.5, 0.557509014107461
-0.485626707638021, 0.5, 0.514373292361979
--0.0575090141074614, 0.0, 0.057509014107461
-
 # Theory
 
 ## The Biological Description of a Neuron
@@ -709,6 +708,79 @@ So I provide a way to set the learning constant based on the size of the data wi
 
 The value of #num(n) is #muk(1.0)/Math.sqrt(n).
 
+## Mirroring
+
+Because the squash function is not linear, mirroring is going to be warped.
+Nonetheless, I'd like to map zeroes to zeroes and ones to ones.
+That gives us the following two equations:
+
+weight*sigmoid(1.0) + bias = 1.0
+weight*sigmoid(0.0) + bias = 0.0
+
+We can solve that! Consider the zeroes to zeroes map:
+
+weight*sigmoid(0.0) + bias = 0.0
+weight*sigmoid(0.0) = -bias
+weight*0.5 = -bias
+weight = -2*bias
+
+Now the ones to ones:
+
+weight*sigmoid(1.0) + bias = 1.0
+-2.0*bias*sigmoid(1.0) + bias = 1.0
+bias*(-2.0*sigmoid(1.0) + 1.0) = 1.0
+bias = 1.0 / (1.0 - 2.0*sigmoid(1.0))
+
+We get the numerical values:
+
+bias = -2.163953413738653 # BZERO
+weight = 4.327906827477306 # WONE
+
+In the code I call this bias and weight BZERO and WONE respectively.
+What about "shadowing"?
+
+weight*sigmoid(1.0) + bias = -1.0
+weight*sigmoid(0.0) + bias = 0.0
+
+weight = -2.0*bias # <== same as before
+
+weight*sigmoid(1.0) + bias = -1.0
+-2.0*bias*sigmoid(1.0) + bias = -1.0
+bias*(-2.0*sigmoid(1.0) + 1.0) = -1.0
+bias = -1.0 / (-2.0*sigmoid(1.0) + 1.0)
+bias = 1.0 / (2.0*sigmoid(1.0) - 1.0)
+# ^== this is just the negative of what we got before.
+
+Shadowing is just the negative of mirroring.
+There's a test, [tests/mirror.rb](https://github.com/carlosjhr64/neuronet/blob/master/tests/mirror.rb),
+which demonstrates mirroring. Here's the output:
+
+### YinYang ###
+Input:
+-1.0, 0.0, 1.0
+In:
+0.2689414213699951, 0.5, 0.7310585786300049
+Yin/Yang:
+0.2689414213699951, 0.5, 0.7310585786300049
+0.2689414213699951, 0.5, 0.7310585786300049
+Out:
+0.2689414213699951, 0.5, 0.7310585786300049
+Output:
+-1.0000000000000002, 0.0, 1.0
+
+### BrahmaYang ###
+Input:
+-1.0, 0.0, 1.0
+In:
+0.2689414213699951, 0.5, 0.7310585786300049
+Yin/Yang:
+0.2689414213699951, 0.7310585786300049, 0.5, 0.5, 0.7310585786300049, 0.2689414213699951
+0.2689414213699951, 0.7310585786300049, 0.5, 0.5, 0.7310585786300049, 0.2689414213699951
+Out:
+0.2689414213699951, 0.7310585786300049, 0.5, 0.5, 0.7310585786300049, 0.2689414213699951
+Output:
+-1.0000000000000002, 1.0, 0.0, 0.0, 1.0, -1.0000000000000002
+
 # Questions?
 
 Email me!
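The algebra in the Mirroring section is easy to spot-check. A standalone sketch (assuming the standard logistic sigmoid) recovers the documented values of BZERO and WONE and verifies both defining equations:

```ruby
def sigmoid(x)
  1.0 / (1.0 + Math.exp(-x))
end

# Solve weight*sigmoid(1) + bias = 1 and weight*sigmoid(0) + bias = 0:
bias   = 1.0 / (1.0 - 2.0 * sigmoid(1.0))  # BZERO
weight = -2.0 * bias                       # WONE

puts bias    #=> -2.163953413738653
puts weight  #=> 4.327906827477306

# Both defining equations are satisfied (up to float rounding):
puts((weight * sigmoid(1.0) + bias).round(12))  #=> 1.0
puts((weight * sigmoid(0.0) + bias).round(12))  #=> 0.0
```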
data/lib/neuronet.rb
CHANGED
@@ -1,6 +1,6 @@
 # Neuronet module
 module Neuronet
-  VERSION = '6.0.1'
+  VERSION = '6.1.0'
 
   # An artificial neural network uses a squash function
   # to determine the activation value of a neuron.
@@ -26,6 +26,9 @@ module Neuronet
     Math.log(squashed / (1.0 - squashed))
   end
 
+  BZERO = 1.0/(1.0-2.0*squash(1.0))
+  WONE  = -2.0*BZERO
+
   # Although the implementation is free to set all parameters for each neuron,
   # Neuronet by default creates zeroed neurons.
   # Association between inputs and outputs are trained, and
@@ -478,10 +481,10 @@ module Neuronet
     end
   end
 
-  # Yin is a network
+  # Yin is a network which has its @yin layer initially mirroring @in.
   module Yin
-    # Yin.bless
-    # the weight of pairing (@yin[i], @in[i]) connections
+    # Yin.bless increments the bias of each @yin[i] by BZERO, and
+    # the weight of pairing (@yin[i], @in[i]) connections by WONE.
     # This makes @yin initially mirror @in.
     # The pairing is done starting with (@yin[0], @in[0]).
     # That is, starting with (@yin.first, @in.first).
@@ -490,11 +493,11 @@ module Neuronet
       if yin.length < (in_length = myself.in.length)
         raise "First hidden layer, yin, needs to have at least the same length as input"
       end
-      # connections from yin[i] to in[i] are
+      # connections from yin[i] to in[i] are WONE... mirroring to start.
       0.upto(in_length-1) do |index|
         node = yin[index]
-        node.connections[index].weight
-        node.bias
+        node.connections[index].weight += WONE
+        node.bias += BZERO
       end
       return myself
     end
@@ -502,8 +505,8 @@ module Neuronet
 
   # Yang is a network which has its @out layer initially mirroring @yang.
   module Yang
-    # Yang.bless
-    # the weight of pairing (@out[i], @yang[i]) connections
+    # Yang.bless increments the bias of each @out[i] by BZERO, and
+    # the weight of pairing (@out[i], @yang[i]) connections by WONE.
     # This makes @out initially mirror @yang.
     # The pairing is done starting with (@out[-1], @yang[-1]).
     # That is, starting with (@out.last, @yang.last).
@@ -514,8 +517,8 @@ module Neuronet
       # the net effect is to pair @out.last with @yang.last, and so on down.
       0.upto(out_length-1) do |index|
         node = out[index]
-        node.connections[offset+index].weight
-        node.bias
+        node.connections[offset+index].weight += WONE
+        node.bias += BZERO
       end
       return myself
     end
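The net effect of stacking the Yin and Yang blesses on a [n, n, n] network can be simulated without the gem. This is a sketch, not the gem's code: it assumes the standard logistic sigmoid and chains two mirrored steps, reproducing the input at the fixed points -1, 0, and 1 (the same behavior tests/mirror.rb shows for YinYang):

```ruby
def squash(x)
  1.0 / (1.0 + Math.exp(-x))
end

def unsquash(a)
  Math.log(a / (1.0 - a))
end

BZERO = 1.0 / (1.0 - 2.0 * squash(1.0))
WONE  = -2.0 * BZERO

# Activation of a blessed neuron whose paired upstream neuron has activation a:
def mirror_step(a)
  squash(WONE * a + BZERO)
end

inputs  = [-1.0, 0.0, 1.0]
outputs = inputs.map do |v|
  a = squash(v)       # in layer
  a = mirror_step(a)  # yin mirrors in (the middle layer is both yin and yang)
  a = mirror_step(a)  # out mirrors yang
  unsquash(a)
end
puts outputs.map { |x| x.round(6) }.join(', ')  #=> -1.0, 0.0, 1.0
```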
@@ -524,8 +527,8 @@ module Neuronet
   # A Yin Yang composite provided for convenience.
   module YinYang
     def self.bless(myself)
-      Yin.bless(myself)
       Yang.bless(myself)
+      Yin.bless(myself)
       return myself
     end
   end
@@ -533,9 +536,9 @@ module Neuronet
   # A Tao Yin Yang composite provided for convenience.
   module TaoYinYang
     def self.bless(myself)
-      Tao.bless(myself)
-      Yin.bless(myself)
       Yang.bless(myself)
+      Yin.bless(myself)
+      Tao.bless(myself)
       return myself
     end
   end
@@ -543,8 +546,8 @@ module Neuronet
   # A Tao Yin composite provided for convenience.
   module TaoYin
     def self.bless(myself)
-      Tao.bless(myself)
       Yin.bless(myself)
+      Tao.bless(myself)
       return myself
     end
   end
@@ -552,10 +555,67 @@ module Neuronet
   # A Tao Yang composite provided for convenience.
   module TaoYang
     def self.bless(myself)
+      Yang.bless(myself)
       Tao.bless(myself)
+      return myself
+    end
+  end
+
+  # Brahma is a network which has its @yin layer initially mirror and "shadow" @in.
+  # I'm calling it shadow until I can think of a better name.
+  # Note that Brahma and Yin blesses overwrite each other, so the combination is probably useless.
+  module Brahma
+    # Brahma.bless increments the weight of each even pairing (@yin[2*i], @in[i]) connection by WONE,
+    # and of each odd pairing (@yin[2*i+1], @in[i]) connection by negative WONE.
+    # Likewise the bias with BZERO.
+    # This makes @yin initially mirror and shadow @in.
+    # The pairing is done starting with (@yin[0], @in[0]).
+    # That is, starting with (@yin.first, @in.first).
+    def self.bless(myself)
+      yin = myself.yin
+      if yin.length < 2*(in_length = myself.in.length)
+        raise "First hidden layer, yin, needs to be at least twice the length of input"
+      end
+      # connections from yin[2*i] to in[i] are WONE... mirroring to start.
+      # connections from yin[2*i+1] to in[i] are -WONE... shadowing to start.
+      0.upto(in_length-1) do |index|
+        even = yin[2*index]
+        odd  = yin[(2*index)+1]
+        even.connections[index].weight += WONE
+        even.bias += BZERO
+        odd.connections[index].weight -= WONE
+        odd.bias -= BZERO
+      end
+      return myself
+    end
+  end
+
+  # A Brahma Yang composite provided for convenience.
+  module BrahmaYang
+    def self.bless(myself)
+      Brahma.bless(myself)
       Yang.bless(myself)
       return myself
     end
   end
+
+  # A Tao Brahma composite provided for convenience.
+  module TaoBrahma
+    def self.bless(myself)
+      Brahma.bless(myself)
+      Tao.bless(myself)
+      return myself
+    end
+  end
+
+  # A Tao Brahma Yang composite provided for convenience.
+  module TaoBrahmaYang
+    def self.bless(myself)
+      Yang.bless(myself)
+      Brahma.bless(myself)
+      Tao.bless(myself)
+      return myself
+    end
+  end
+
 end
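A BrahmaYang network's initial behavior can likewise be simulated without the gem. This is a sketch assuming the standard logistic sigmoid: each input v yields the pair (v, -v) at the output, the mirror-and-shadow effect the Brahma comments describe, and it matches the BrahmaYang output printed by tests/mirror.rb.

```ruby
def squash(x)
  1.0 / (1.0 + Math.exp(-x))
end

BZERO = 1.0 / (1.0 - 2.0 * squash(1.0))
WONE  = -2.0 * BZERO

inputs = [-1.0, 0.0, 1.0]

# Brahma: each input's activation feeds an even (mirror) and odd (shadow) yin neuron.
yin = inputs.flat_map do |v|
  a = squash(v)
  [squash(WONE * a + BZERO), squash(-WONE * a - BZERO)]
end

# Yang: the out layer mirrors yin; unsquashing each out activation gives the
# network output, which collapses to WONE*a + BZERO per yin activation a.
output = yin.map { |a| WONE * a + BZERO }
puts output.map { |x| x.round(6) }.join(', ')  #=> -1.0, 1.0, 0.0, 0.0, 1.0, -1.0
```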
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: neuronet
 version: !ruby/object:Gem::Version
-  version: 6.0.1
+  version: 6.1.0
 prerelease:
 platform: ruby
 authors:
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2013-06-
+date: 2013-06-20 00:00:00.000000000 Z
 dependencies: []
 description: Build custom neural networks. 100% 1.9 Ruby.
 email: carlosjhr64@gmail.com