micrograd-rb 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 5f4ecdb2b1d0efa6a960a3a5006c4d26658bb2fbabc3f491eb4e23a4cdcc77b8
+   data.tar.gz: 90b64a0ccf554b5b28c1935230b5c7aa92248b898c81566cad5a59effaf75532
+ SHA512:
+   metadata.gz: 2eb06299d644f414fc295a330314814cdd3a38f50a95ea3d3c2a53789dfe3672e676b46653c49389d691d04ab971d6167c0ecc486eaf0717ecfba77f8306e442
+   data.tar.gz: d5250318f2381e09f24cab75b25b00526cabc3e419f9b31199dc3275d0731327f44d3d0ad7e2b8244a47790042d22fa4bd537e172fb2f6aef2c8c449abd50bb0
data/.rspec ADDED
@@ -0,0 +1,3 @@
+ --format documentation
+ --color
+ --require spec_helper
data/.rubocop.yml ADDED
@@ -0,0 +1,11 @@
+ inherit_from:
+   - "https://raw.githubusercontent.com/joeldrapper/goodcop/refs/heads/main/base.yml"
+
+ AllCops:
+   TargetRubyVersion: 3.4
+
+ Lint/UnderscorePrefixedVariableName:
+   Enabled: false
+
+ Style/RedundantSelf:
+   Enabled: false
data/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,132 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our
+ community a harassment-free experience for everyone, regardless of age, body
+ size, visible or invisible disability, ethnicity, sex characteristics, gender
+ identity and expression, level of experience, education, socio-economic status,
+ nationality, personal appearance, race, caste, color, religion, or sexual
+ identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming,
+ diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our
+ community include:
+
+ * Demonstrating empathy and kindness toward other people
+ * Being respectful of differing opinions, viewpoints, and experiences
+ * Giving and gracefully accepting constructive feedback
+ * Accepting responsibility and apologizing to those affected by our mistakes,
+   and learning from the experience
+ * Focusing on what is best not just for us as individuals, but for the overall
+   community
+
+ Examples of unacceptable behavior include:
+
+ * The use of sexualized language or imagery, and sexual attention or advances of
+   any kind
+ * Trolling, insulting or derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or email address,
+   without their explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of
+ acceptable behavior and will take appropriate and fair corrective action in
+ response to any behavior that they deem inappropriate, threatening, offensive,
+ or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject
+ comments, commits, code, wiki edits, issues, and other contributions that are
+ not aligned to this Code of Conduct, and will communicate reasons for moderation
+ decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when
+ an individual is officially representing the community in public spaces.
+ Examples of representing our community include using an official email address,
+ posting via an official social media account, or acting as an appointed
+ representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported to the community leaders responsible for enforcement at
+ [INSERT CONTACT METHOD].
+ All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of the
+ reporter of any incident.
+
+ ## Enforcement Guidelines
+
+ Community leaders will follow these Community Impact Guidelines in determining
+ the consequences for any action they deem in violation of this Code of Conduct:
+
+ ### 1. Correction
+
+ **Community Impact**: Use of inappropriate language or other behavior deemed
+ unprofessional or unwelcome in the community.
+
+ **Consequence**: A private, written warning from community leaders, providing
+ clarity around the nature of the violation and an explanation of why the
+ behavior was inappropriate. A public apology may be requested.
+
+ ### 2. Warning
+
+ **Community Impact**: A violation through a single incident or series of
+ actions.
+
+ **Consequence**: A warning with consequences for continued behavior. No
+ interaction with the people involved, including unsolicited interaction with
+ those enforcing the Code of Conduct, for a specified period of time. This
+ includes avoiding interactions in community spaces as well as external channels
+ like social media. Violating these terms may lead to a temporary or permanent
+ ban.
+
+ ### 3. Temporary Ban
+
+ **Community Impact**: A serious violation of community standards, including
+ sustained inappropriate behavior.
+
+ **Consequence**: A temporary ban from any sort of interaction or public
+ communication with the community for a specified period of time. No public or
+ private interaction with the people involved, including unsolicited interaction
+ with those enforcing the Code of Conduct, is allowed during this period.
+ Violating these terms may lead to a permanent ban.
+
+ ### 4. Permanent Ban
+
+ **Community Impact**: Demonstrating a pattern of violation of community
+ standards, including sustained inappropriate behavior, harassment of an
+ individual, or aggression toward or disparagement of classes of individuals.
+
+ **Consequence**: A permanent ban from any sort of public interaction within the
+ community.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+ version 2.1, available at
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
+
+ Community Impact Guidelines were inspired by
+ [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+
+ For answers to common questions about this code of conduct, see the FAQ at
+ [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
+ [https://www.contributor-covenant.org/translations][translations].
+
+ [homepage]: https://www.contributor-covenant.org
+ [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
+ [Mozilla CoC]: https://github.com/mozilla/diversity
+ [FAQ]: https://www.contributor-covenant.org/faq
+ [translations]: https://www.contributor-covenant.org/translations
data/Guardfile ADDED
@@ -0,0 +1,8 @@
+ # frozen_string_literal: true
+
+ guard :shell do
+   watch(%r{^lib/micrograd/.*\.rb$}) do |m|
+     puts "File #{m[0]} changed, regenerating d2 graph..."
+     system("bundle exec ruby lib/micrograd/examples.rb")
+   end
+ end
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2025 Sean Collins
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,152 @@
+ # Micrograd-rb 🧮💎📉
+
+ This is an example implementation of a **small neural network library** in Ruby, with [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) and [backpropagation](https://en.wikipedia.org/wiki/Backpropagation). If you have no clue what that means, check out [this video series](https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi), which explains Neural Networks and Deep Learning visually.
+
+ I implemented this library by going through the YouTube lecture
+ [“The spelled-out intro to neural networks and backpropagation: building micrograd”](https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ)
+ by Andrej Karpathy.
+ It's the first in a series called "Neural Networks: From Zero to Hero",
+ which builds up from the basic building blocks all the way to implementing GPT-2.
+ As I watched the video, I translated the Python code into Ruby.
+
+ There's a canonical Python implementation available at [karpathy/micrograd](https://github.com/karpathy/micrograd).
+ I didn't reference the Python micrograd codebase at all,
+ nor any of the other micrograd implementations [in Ruby](https://github.com/search?utf8=%E2%9C%93&q=micrograd+language%3ARuby+&type=repositories),
+ nor in any other language.
+
+ This codebase could be helpful if you know how Neural Networks work in general,
+ and you're interested in seeing how to implement a simple one in Ruby.
+
+ If you're building anything real, you probably want to use [torch.rb](https://github.com/ankane/torch.rb), which is based on libtorch, the high-performance C++ library that powers PyTorch.
+
+ ### Motivation
+ Why? Because I am learning neural networks & deep learning.
+ I know enough Python that I could have written it in Python,
+ but (1) I didn't want to just write all the same code he wrote and (2) I learn best by adapting principles to a different language and taking a different approach.
+ I know Ruby best (and enjoy writing it the most), so it was the obvious choice.
+
+ ## Approach
+ I implemented it in **idiomatic** Ruby: I didn't just copy the Python and adapt the syntax directly:
+ * I used bang methods, e.g. `Value#backward!` (instead of just `#backward`), since in Ruby we use that to signify that we're mutating the object in-place.
+ * I used methods on Enumerable instead of loops (especially since we don't have list comprehensions in Ruby).
+ * I expanded abbreviated variable/parameter names in most cases.
+ * I used symbol keys.
+ * I used keyword args in most cases.
+ * I extracted `Visualizer` and `TopoSort` classes, instead of encapsulating that logic within `Value`.
+
+ And, since it's Ruby, I also implemented it in an **idiosyncratic** way, in a style I personally prefer (there's a short example after this list):
+ * I added a bracket constructor (factory) syntax (e.g. `Micrograd::Value[scalar_value]`), since I was jealous of Python's terseness with no `.new` when creating Value objects.
+ * I also added a shorthand to pair labels with scalar data values via this syntax: `Micrograd::Value[label: scalar_value]`.
+   * I think using parens, e.g. `Micrograd::Value(...)`, to construct the value would work too (since that's the convention for conversion methods).
+ * Prefer immutability by default, when realistic. Mutating state is hard to reason about, so avoiding it as much as possible is preferable.
+ * Prefer using `attr_reader` internally for accessing instance variables (so any direct reference to an instance variable is a mutation, which makes mutations easier to find).
+ * Prefer injecting dependencies rather than relying on global state. You can see I did this with `random:` being passed into `Neuron`, `Layer`, `MLP`, and `Training`.
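+
+ As a minimal sketch of the constructor styles above (assuming the gem's `lib/` directory is on the load path; the output comments reflect `Value#inspect`):
+
+ ```ruby
+ require "micrograd/value"
+
+ # Positional: just the scalar data
+ v = Micrograd::Value[3]            # => Value[data: 3]
+
+ # Keyword: pair a label with the data
+ a = Micrograd::Value[a: 2]         # => Value[data: 2, label: :a]
+
+ # Labels can also be attached to the result of an operation
+ b = (a * v).with_label(:b)         # => Value[data: 6, label: :b]
+
+ # Numeric coercion is defined, so plain numbers can appear on either side
+ doubled = 2 * a                    # => Value[data: 4]
+ ```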
+
+ I balanced that with being **pragmatic**:
+ * In `lib/micrograd/value.rb`, I do use mutation of instance variables to update `@grad` and `@data`. In many (most?) applications, immutability can provide better performance since it's easier on the Garbage Collector. However, these kinds of neural nets are meant to be computed repeatedly and at massive scale, on GPUs. In that case, creating millions or billions of objects on each iteration would obviously be much slower than mutating in place, since object creation is relatively slow and memory-intensive.
+ * In `lib/micrograd/value.rb`, I used `self.` when it's not necessary (and disabled the `Style/RedundantSelf` cop to allow this). Why? Because many of the methods are operations that reference `other`, so I find it more readable to have symmetry between `self` and `other`. And, for the rest of the class, I wanted to be consistent with that choice. It also mirrors Python, where `self.` is required for accessing instance attributes.
+ * I used `Enumerable#reduce`, which is an alias for `Enumerable#inject`. I typically default to using `#inject` but figured non-Rubyists might read this codebase, and `#reduce` is the name that's more common in other languages, so I think it makes more sense here.
+ * I kept the leading underscore for the `_backward` lambda, to signify it's different from the externally facing `backward!`. I could have named it `backward` (without the bang), but I feel like the underscore reveals the intent that it's an implementation detail and shouldn't be used directly. This is a pattern used occasionally in Ruby, and I think it's worth using here.
+
+ I **extended** the work from the video slightly. At the end, he builds out the training process using the MLP (multi-layer perceptron), ad-hoc in the Jupyter notebook. I did that as well at first, in the `MLP` class's spec file. After that, though, I extracted a `Micrograd::Training` class to encapsulate and generalize that work (and tested it separately).
+
+
+ ## Overview of library
+
+ #### Value
+ The basic object is the `Micrograd::Value`. This has a `data` attribute (which is the underlying scalar value), a `grad` attribute, and a `backward!` method. I wanted to follow the convention established by micrograd (and PyTorch), but I think I would have named this class `Node` and have the `data` attribute be named `value` or `scalar` instead. While we're at it, I'd also name `grad` as `gradient`, but that's an even starker break from convention.
+
+ This class handles:
+ - unary operations
+ - binary operations with `other` Values
+ - computing `grad`
+ - starting the `backward!` pass
+ - a `gradient_step!` method, which is used in the `Training` class, in order to keep all mutation within its own class
+
+ The `backward!` method uses a helper class called `TopoSort` to order the graph for the backward pass.
+
+ It also has a convenience method `generate_image`, which uses the `Visualizer` class to generate a visual representation of the network (with [d2](https://github.com/terrastruct/d2)).
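+
+ Here's a small sketch of the forward/backward flow (numbers chosen so the gradients are easy to check by hand):
+
+ ```ruby
+ a = Micrograd::Value[a: 2.0]
+ b = Micrograd::Value[b: -3.0]
+ c = (a * b).with_label(:c)   # forward pass: c.data == -6.0
+
+ c.backward!                  # seeds c.grad = 1, then walks the graph in reverse topological order
+
+ a.grad                       # => -3.0 (dc/da = b.data)
+ b.grad                       # => 2.0  (dc/db = a.data)
+
+ c.generate_image             # writes graph.svg via d2 (requires d2 to be installed)
+ ```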
+
+ #### Building up a neural net (using `Neuron`, `Layer`, `MLP`, and `Training`)
+
+ The rest of the classes all stack on top of each other to build a small-scale [feedforward neural net](https://en.wikipedia.org/wiki/Feedforward_neural_network).
+
+ The basic building block here is the `Neuron`. This uses `Value` for its *weights* and *bias*.
+
+ Those are combined into a `Layer`, and `Layer`s are in turn combined into an `MLP` ([multi-layer perceptron](https://en.wikipedia.org/wiki/Multilayer_perceptron)). Again, I'd usually spell out the full name, but MLP is a ubiquitous acronym in Deep Learning, and I didn't want to buck the conventions too much.
+
+ These three classes (`Neuron`, `Layer`, and `MLP`) are all quite small.
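+
+ For example (a sketch; the layer sizes and inputs here are arbitrary):
+
+ ```ruby
+ random = Random.new(42)                # injected for reproducibility
+ mlp = Micrograd::MLP.new(3, [4, 4, 1], random:)
+
+ output = mlp.call([2.0, 3.0, -1.0])    # a single Value, since the last layer has one neuron
+ output.data                            # a tanh-activated scalar in (-1, 1)
+
+ mlp.parameters.size                    # => 41 ((3+1)*4 + (4+1)*4 + (4+1)*1 weights and biases)
+ ```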
+
+ Finally, there is the `Training` class. This is used to train the neural net! This is the real guts of the library, the fullest expression of what we're trying to accomplish.
+
+ The objective is to take many training inputs and a set of targets (desired outputs), one for each input, and get back an `MLP` object that's trained to map *any* values (even ones we didn't train with) to a set of outputs, based on what it learned from the example inputs and targets. We also have to tell it how large we want the neural net to be.
+
+ So, this takes:
+ 1. how many layers and what size you want, as an array, e.g. `[3, 2, 2, 2, 2, 1]` signifies: 3 input scalars, 4 'hidden' internal layers of 2 neurons each, and 1 output value.
+ 2. an array of arrays of `inputs` values
+ 3. an array of `target` values (one for each of the inputs)
+ 4. an optional `Random` instance (else it just defaults to `Random.new`); injecting one is helpful for reproducing results and testing
+
+ Then once it's created, the training occurs when `call` is invoked.
+ This takes:
+ 1. the number of `epochs` (how many times the gradient descent occurs)
+ 2. the `learning_rate` (how much we tweak the parameters for each epoch)
+ 3. an optional `verbose` flag if you want the loss function results to be printed as it goes. This is helpful for manually adjusting the number of `epochs` and the `learning_rate`
+
+ What this does is (a runnable sketch follows this list):
+ 1. First, `iterate!` on the MLP, by:
+    1. calculating the forward pass,
+    2. calculating the loss (as the sum of squared differences between outputs and targets),
+    3. running `backward!` on the loss
+
+ 2. Then, `epochs` times, do the following:
+    1. Descend!! That is: go through all the parameters in the MLP and step them downward a small amount (the `learning_rate`)
+    2. Recalculate the loss, by `iterate!`ing the same way as above
+
+ 3. Once that is done, return a `Training::Result` object, which holds the last run's `outputs` and the `mlp`. This `MLP` is now the trained neural net.
+
+ 4. [????](https://youtu.be/2B3slX6-_20?feature=shared&t=6)
+
+ 5. Profit!
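+
+ Putting it together, an end-to-end sketch (the tiny dataset and hyperparameters are illustrative, not prescriptive):
+
+ ```ruby
+ training = Micrograd::Training.new(
+   layer_sizes: [3, 4, 4, 1],
+   inputs: [
+     [2.0, 3.0, -1.0],
+     [3.0, -1.0, 0.5],
+     [0.5, 1.0, 1.0],
+     [1.0, 1.0, -1.0],
+   ],
+   targets: [1.0, -1.0, -1.0, 1.0],
+   random: Random.new(42),
+ )
+
+ result = training.call(epochs: 100, learning_rate: 0.05, verbose: true)
+
+ result.outputs   # four scalars that should now be close to the targets
+ result.mlp       # the trained MLP, callable on new inputs
+ ```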
+
+ ## Coding modalities
+ In the lecture, Andrej uses a Jupyter notebook: these are ubiquitous in the Python data/AI/ML world.
+ There's a library called `iruby` that lets you use Ruby in Jupyter notebooks, but I didn't do that.
+
+ > As an aside, I found it extremely hard to reason about state in Jupyter when watching the videos.
+ > I guess it may be easier when you're coding in a notebook yourself, since your memory is clearer,
+ > but having blocks of code that redefine variables executed ad-hoc in different sequences... Ah!! GOTO considered harmful, indeed!
+
+ I preferred to write code in classes, then execute it in a file when necessary (e.g. `ruby lib/micrograd/examples.rb`).
+ Sometimes I used `bin/console` to load the files and work with them like that, in an `irb` session.
+
+ Finally, I transitioned to using RSpec to ensure behavior stayed consistent as I refactored.
+ That made it much easier to work with.
+
+ ## TODOs
+ - [ ] Push to Rubygems.org
+ - [ ] Fix CI
+ - [ ] Explain Value#backward! and updating grad
+ - [ ] Convert examples.rb to runnable script in `bin/`
+ - [ ] Update README with actual usages instead of telling people to look at the specs
+ - [ ] Adapt/complete the [exercises](https://colab.research.google.com/drive/1FPTx1RXtBfc4MaTkf7viZZD4U2F9gtKN?usp=sharing) from the video description
+
+
+ ## COULD-DOs (but probably will not)
+ - [ ] Add alternate `Training` runner that gets the loss function within the threshold, and reports the number of steps
+ - [ ] Adapt Torch examples from video with torch.rb
+ - [ ] Add a `Micrograd::Torch::` namespace that implements the same API as Micrograd (`Value`, `Neuron`, `Layer`, `MLP`, `Training`), using torch.rb
+ - [ ] Fix `# TODO` in `Training` class (validate and/or compute sizes)
+
+
+ ## Installation
+ You're probably just curious and may read the code and specs here on GitHub.
+ But if you want to mess around with the code, you can clone this repo.
+
+ I don't see why you'd want to install this as a dependency, but you could do `gem "micrograd", github: "cllns/micrograd"` if you want.
+
+ ## Usage
+ Take a look at the specs, particularly `specs/training_spec.rb`, for the highest level. Or start at `specs/value_spec.rb` and work your way up the stack. There's also a `lib/micrograd/examples.rb` which can be run directly with `ruby lib/micrograd/examples.rb`.
+
+ ## Contributing
+ No need. This was an educational exercise. :)
data/Rakefile ADDED
@@ -0,0 +1,12 @@
+ # frozen_string_literal: true
+
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ require "rubocop/rake_task"
+
+ RuboCop::RakeTask.new
+
+ task default: %i[spec rubocop]
data/lib/micrograd/examples.rb ADDED
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ require_relative "value"
+ require_relative "visualizer"
+
+ module Micrograd
+   class Examples
+     def initialize
+       x1 = Value[x1: 2]
+       x2 = Value[x2: 0]
+       w1 = Value[w1: -3]
+       w2 = Value[w2: 1]
+       b = Value[b: 6.8813735870195432]
+
+       x1w1 = (x1 * w1).with_label(:x1w1)
+       x2w2 = (x2 * w2).with_label(:x2w2)
+
+       x1w1x2w2 = (x1w1 + x2w2).with_label(:x1w1x2w2)
+       n = (x1w1x2w2 + b).with_label(:n)
+
+       # o = n.tanh.with_label(:o)
+       # Or, the equivalent implemented from our operations:
+       e = (2 * n).exp
+       o = (e - 1) / (e + 1)
+
+       o.backward!
+
+       @node = o
+     end
+
+     def call
+       puts "generating image of graph"
+       Visualizer.new(@node).generate_image
+     end
+
+     # First version had initializer with:
+     # a = Value[a: 2]
+     # b = Value[b: -3]
+     # c = Value[c: 10]
+     # e = (a * b).with_label(:e)
+     # d = (e + c).with_label(:d)
+     # f = Value[f: -2]
+     # loss = (d * f).with_label(:L)
+     # @node = loss
+   end
+ end
+
+ Micrograd::Examples.new.call
data/lib/micrograd/layer.rb ADDED
@@ -0,0 +1,26 @@
+ # frozen_string_literal: true
+
+ require_relative "neuron"
+
+ module Micrograd
+   class Layer
+     attr_reader :neurons
+
+     def initialize(n_in, n_out, random: Random.new)
+       @neurons = n_out.times.map { Neuron.new(n_in, random:) }
+     end
+
+     def call(x)
+       outs = neurons.map { |neuron| neuron.call(x) }
+       if outs.length == 1
+         outs.first
+       else
+         outs
+       end
+     end
+
+     def parameters
+       neurons.flat_map(&:parameters)
+     end
+   end
+ end
data/lib/micrograd/mlp.rb ADDED
@@ -0,0 +1,28 @@
+ # frozen_string_literal: true
+
+ require_relative "layer"
+
+ module Micrograd
+   class MLP
+     attr_reader :layers
+
+     def initialize(n_in, n_outs, random: Random.new)
+       size = [n_in] + n_outs
+       @layers = n_outs.length.times.map do |i|
+         Layer.new(size[i], size[i + 1], random:)
+       end
+     end
+
+     def call(input)
+       # Thread the input through each layer in turn
+       layers.reduce(input) { |acc, layer| layer.call(acc) }
+     end
+
+     def parameters
+       layers.flat_map(&:parameters)
+     end
+
+     def zero_grad!
+       parameters.each(&:zero_grad!)
+     end
+   end
+ end
data/lib/micrograd/neuron.rb ADDED
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ require_relative "value"
+
+ module Micrograd
+   class Neuron
+     attr_reader :weights, :bias
+
+     def initialize(n_in, random: Random.new)
+       @weights = n_in.times.map { Value[random.rand(-1.0..1)] }
+       @bias = Value[random.rand(-1.0..1)]
+     end
+
+     def call(inputs)
+       act = weights.zip(inputs).map { |weight, input| weight * input }.sum + bias
+       act.tanh
+     end
+
+     def parameters
+       weights + [bias]
+     end
+   end
+ end
data/lib/micrograd/topo_sort.rb ADDED
@@ -0,0 +1,27 @@
+ # frozen_string_literal: true
+
+ module Micrograd
+   class TopoSort
+     def initialize(start_node)
+       @start_node = start_node
+     end
+
+     def call
+       build(node: @start_node)
+     end
+
+     private
+
+     def build(node:, topo: [], visited: [])
+       unless visited.include?(node)
+         visited << node
+
+         node.previous.each do |child|
+           build(node: child, topo:, visited:)
+         end
+
+         topo << node
+       end
+
+       # Always return the accumulated ordering (the `unless` body
+       # would otherwise return nil for an already-visited node).
+       topo
+     end
+   end
+ end
data/lib/micrograd/training.rb ADDED
@@ -0,0 +1,67 @@
+ # frozen_string_literal: true
+
+ require_relative "mlp"
+
+ module Micrograd
+   class Training
+     attr_reader :mlp, :inputs, :targets
+
+     Result = Data.define(:mlp, :outputs)
+
+     def initialize(layer_sizes:, inputs:, targets:, random: Random.new)
+       n_in, *n_outs = layer_sizes
+       # TODO: Ensure n_in == inputs.length, n_outs == targets.length
+       @mlp = MLP.new(n_in, n_outs, random:)
+       @inputs = inputs
+       @targets = targets
+     end
+
+     def call(epochs:, learning_rate:, verbose: false)
+       # Assign the last iterate! result to outputs
+       outputs = epochs.times.reduce(nil) do |_, i|
+         # Skip the first pass because it's the initialization
+         gradient_descent!(learning_rate) unless i == 0
+
+         iterate!((i if verbose))
+       end
+
+       print_summary(outputs) if verbose
+
+       Result.new(mlp:, outputs: outputs.map(&:data))
+     end
+
+     private
+
+     def iterate!(i)
+       outputs = forward_pass
+       loss = calculate_loss(outputs)
+       loss.backward!
+       puts "#{i}: #{loss.data}" if i
+       outputs
+     end
+
+     def gradient_descent!(learning_rate)
+       mlp.parameters.each do |parameter|
+         parameter.gradient_step!(learning_rate)
+       end
+     end
+
+     def forward_pass
+       mlp.zero_grad!
+       inputs.map { |input| mlp.call(input) }
+     end
+
+     def calculate_loss(outputs)
+       # Sum of squared differences between each output and its target
+       targets.zip(outputs).map do |target, output|
+         (output - target) ** 2
+       end.sum
+     end
+
+     def print_summary(outputs)
+       puts "Final loss: #{calculate_loss(outputs).data}"
+       puts "Targets: #{targets}"
+       puts "Outputs: #{outputs.map(&:data)}"
+     end
+   end
+ end
data/lib/micrograd/value.rb ADDED
@@ -0,0 +1,171 @@
+ # frozen_string_literal: true
+
+ require_relative "topo_sort"
+ require_relative "visualizer"
+
+ module Micrograd
+   class Value
+     attr_reader :data, :label, :grad, :operation, :previous
+
+     protected attr_reader :_backward
+
+     def initialize(data:, label: nil, operation: nil, previous: [], _backward: ->(_) {})
+       @data = data
+       @label = label
+       @operation = operation
+       @previous = previous.to_set
+       @_backward = _backward
+
+       @grad = nil
+     end
+
+     def self.[](*args, **kwargs)
+       if args.size == 1
+         label = nil
+         data = args.first
+       elsif kwargs.size == 1
+         label, data = kwargs.first
+       else
+         raise ArgumentError.new("Provide data or label: data as arg")
+       end
+       new(label:, data:)
+     end
+
+     def id
+       self.object_id
+     end
+
+     def inspect
+       label = self.label.nil? ? "" : ", label: #{self.label.inspect}"
+       "Value[data: #{data}#{label}]"
+     end
+
+     def +(other)
+       unless other.is_a?(Value)
+         other = Value[scalar: other]
+       end
+
+       Value.new(
+         data: self.data + other.data,
+         operation: :+,
+         previous: [self, other],
+         _backward: lambda do |value|
+           self.add_grad!(value.grad)
+           other.add_grad!(value.grad)
+         end
+       )
+     end
+
+     def -(other)
+       self + (-other)
+     end
+
+     def -@
+       self * -1
+     end
+
+     def *(other)
+       unless other.is_a?(Value)
+         other = Value[scalar: other]
+       end
+
+       Value.new(
+         data: self.data * other.data,
+         operation: :*,
+         previous: [self, other],
+         _backward: lambda do |value|
+           self.add_grad!(other.data * value.grad)
+           other.add_grad!(self.data * value.grad)
+         end
+       )
+     end
+
+     def /(other)
+       self * (other ** -1)
+     end
+
+     def **(pow)
+       unless pow.is_a?(Numeric)
+         raise TypeError.new("Cannot raise #{pow.class} to a power")
+       end
+
+       Value.new(
+         data: self.data ** pow,
+         label: :"**#{pow}",
+         operation: :**,
+         previous: [self],
+         _backward: lambda do |value|
+           self.add_grad!(value.grad * pow * (self.data ** (pow - 1)))
+         end
+       )
+     end
+
+     def tanh
+       t = (Math.exp(2 * self.data) - 1) / (Math.exp(2 * self.data) + 1)
+       # Other valid options:
+       # Math.tanh(self.data)
+       # (Math.exp(self.data) - Math.exp(-self.data)) / (Math.exp(self.data) + Math.exp(-self.data))
+       Value.new(
+         data: t,
+         operation: :tanh,
+         previous: [self],
+         _backward: lambda do |value|
+           self.add_grad!(value.grad * (1 - (t ** 2)))
+         end
+       )
+     end
+
+     def exp
+       Value.new(
+         data: Math.exp(self.data),
+         operation: :exp,
+         previous: [self],
+         _backward: lambda do |value|
+           self.add_grad!(value.grad * value.data)
+         end
+       )
+     end
+
+     def with_label(new_label)
+       self.class.new(
+         data: self.data,
+         label: new_label,
+         operation: self.operation,
+         previous: self.previous,
+         _backward: self._backward
+       )
+     end
+
+     def add_grad!(grad)
+       @grad ||= 0.0
+       @grad += grad
+       self
+     end
+
+     def zero_grad!
+       @grad = 0.0
+     end
+
+     def gradient_step!(learning_rate)
+       # This could be written as -learning_rate, but the explicit -1 * is harder to miss
+       @data += -1 * learning_rate * self.grad
+     end
+
+     def backward!
+       add_grad!(1)
+       Micrograd::TopoSort.new(self).call.reverse.each { |node| node._backward.call(node) }
+     end
+
+     def generate_image
+       Visualizer.new(self).generate_image
+     end
+
+     def coerce(other)
+       if other.is_a?(Numeric)
+         [Value[scalar: other], self]
+       else
+         raise TypeError.new("Cannot coerce #{other.class} into Value")
+       end
+     end
+   end
+ end
data/lib/micrograd/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module Micrograd
+   VERSION = "0.1.0"
+ end
data/lib/micrograd/visualizer.rb ADDED
@@ -0,0 +1,56 @@
+ # frozen_string_literal: true
+
+ require "open3"
+
+ module Micrograd
+   class Visualizer
+     attr_reader :node, :output_file
+
+     def initialize(node, output_file: "graph.svg")
+       @node = node
+       @output_file = output_file
+     end
+
+     def generate_image
+       Open3.popen3("d2", "-", "--layout=elk") do |stdin, stdout, stderr, wait_thr|
+         stdin.puts(to_d2)
+         stdin.close
+
+         if wait_thr.value.success?
+           File.write(output_file, stdout.read)
+         else
+           raise "Error: #{stderr.read}"
+         end
+       end
+     end
+
+     private
+
+     def to_d2
+       nodes, edges = build_graph(node)
+       d2_representation = "direction: right\n"
+       d2_representation += nodes.map do |node_id, node|
+         label_or_operation = node.label || node.operation
+         rounded_data = round(node.data)
+         grad_line = "grad: #{round(node.grad)}" if node.grad
+         data_line = [label_or_operation, rounded_data].compact.join(": ")
+         contents = [data_line, grad_line].compact.join("\\n")
+         %("#{node_id}": #{contents}\n)
+       end.join
+       d2_representation += edges.map { |from, to, op| %(#{from} -> "#{to}": #{op}\n) }.join
+       d2_representation
+     end
+
+     def build_graph(node, nodes = {}, edges = [])
+       return [nodes, edges] if nodes.key?(node.id)
+
+       new_nodes = nodes.merge(node.id => node)
+       new_edges = edges + node.previous.map { |prev| [prev.id, node.id, node.operation] }
+       node.previous.reduce([new_nodes, new_edges]) { |acc, prev| build_graph(prev, *acc) }
+     end
+
+     def round(float)
+       format("%.4f", float)
+     end
+   end
+ end
data/lib/micrograd.rb ADDED
@@ -0,0 +1,7 @@
+ # frozen_string_literal: true
+
+ require_relative "micrograd/training"
+
+ module Micrograd
+   class Error < StandardError; end
+ end
metadata ADDED
@@ -0,0 +1,58 @@
+ --- !ruby/object:Gem::Specification
+ name: micrograd-rb
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Sean Collins
+ bindir: bin
+ cert_chain: []
+ date: 2025-03-28 00:00:00.000000000 Z
+ dependencies: []
+ email:
+ - sean@cllns.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".rspec"
+ - ".rubocop.yml"
+ - CODE_OF_CONDUCT.md
+ - Guardfile
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - lib/micrograd.rb
+ - lib/micrograd/examples.rb
+ - lib/micrograd/layer.rb
+ - lib/micrograd/mlp.rb
+ - lib/micrograd/neuron.rb
+ - lib/micrograd/topo_sort.rb
+ - lib/micrograd/training.rb
+ - lib/micrograd/value.rb
+ - lib/micrograd/version.rb
+ - lib/micrograd/visualizer.rb
+ homepage: https://github.com/cllns/micrograd-rb
+ licenses:
+ - MIT
+ metadata:
+   allowed_push_host: https://rubygems.org
+   homepage_uri: https://github.com/cllns/micrograd-rb
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.4.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.6.2
+ specification_version: 4
+ summary: Ruby implementation of micrograd
+ test_files: []