daimond 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 5f8e7eb4a7668542baca1504c843eb802996481641c22eda1633c24395591f30
+   data.tar.gz: 812a7410ef056cdb8b9a47cab26fa6786191b4cf9715f59b53f1be6f9e6a2c77
+ SHA512:
+   metadata.gz: b116749b121c4d779e0884fa512cd8f7d1933c51102ec88f631a3c0422a01c5818abca9093e4c3acea28e9f8ba1f8693609bcde07afddc2b28e58c256e88ba39
+   data.tar.gz: 679a9249cf1171ed4798d5aef7413d595c9910c835dc0ea5eaad715a8307caa4193cdf9fcdb574e2fbb86bb20120c5ecf20aa18582de66ea5b755435ac59bf7e
data/CONTRIBUTIONG.md ADDED
@@ -0,0 +1,160 @@
+ # Contributing to dAImond 💎
+
+ First off, thank you for considering contributing to dAImond! This is a passion project to bring deep learning to Ruby, and every contribution helps.
+
+ ## Where to Start?
+
+ - **⭐ Star the repo** - It helps others discover the project
+ - **🐛 Report bugs** - Found an issue? Open an issue with details
+ - **📖 Improve docs** - Fix typos, improve examples, add translations
+ - **✨ Add features** - New layers, optimizers, or examples
+ - **🧪 Add tests** - We need them desperately!
+
+ ## Development Setup
+
+ ```bash
+ git clone https://github.com/yourusername/daimond.git
+ cd daimond
+ bundle install
+ ruby examples/mnist.rb # Verify it works
+ ```
+
+ ## How to Contribute
+
+ **Reporting Bugs**
+
+ Use GitHub Issues and include:
+ - Ruby version (`ruby -v`)
+ - Error message with full backtrace
+ - Minimal code to reproduce
+ - Expected vs actual behavior
+
+ **Suggesting Features**
+
+ Open an issue with [Feature Request] in the title. Explain:
+ - What and why?
+ - API design (how should it look?)
+ - Are you willing to implement it?
+
+ **Pull Request Process**
+
+ 1. Fork the repo and create your branch: `git checkout -b feature/amazing-feature`
+ 2. Make changes following our style guide below
+ 3. Test manually with MNIST at minimum: `ruby examples/mnist.rb` should still achieve 97%+
+ 4. Update docs (this README, code comments, examples)
+ 5. Commit with clear messages:
+    - `feat: Add BatchNorm layer`
+    - `fix: Correct ReLU backward pass`
+    - `docs: Update Japanese README`
+ 6. Push to your fork and open a Pull Request
+ 7. Wait for review (usually within 24-48 hours)
+
+ ## Code Style Guide
+
+ **Ruby Style**
+
+ - Follow the Standard Ruby style guide
+ - Max 100 characters per line
+ - Prefer do...end for multiline blocks
+ - Always use `frozen_string_literal: true` (see the snippet below)
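+
+ A file following those conventions might open like this (purely illustrative):
+
+ ```ruby
+ # frozen_string_literal: true
+
+ words = %w[ruby loves blocks]
+ words.each do |word|
+   # do...end for multiline blocks; lines stay well under 100 characters
+   puts word.upcase
+ end
+ ```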
+
+ **For New Layers (NN)**
+
+ If adding new layers (e.g., Conv2D, LSTM):
+
+ ```ruby
+ module Daimond
+   module NN
+     class Conv2d < Module
+       def initialize(in_channels, out_channels, kernel_size)
+         super()
+         # Initialize parameters as Tensors
+         # Add to @parameters array
+       end
+
+       def forward(x)
+         # Return Tensor
+         # Store _backward lambda for autograd
+       end
+     end
+   end
+ end
+ ```
+
+ **Must include:**
+ - super() call in initialize
+ - Add parameters to @parameters
+ - Return Tensor from forward
+ - Implement _backward lambda with correct gradients
+ - Update lib/daimond.rb exports (a toy end-to-end example follows this list)
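+
+ For a concrete (if toy) version of that checklist, here is a minimal sketch of a parameter-free layer. The Tensor internals it touches (.data, .grad, the _backward hook) are assumptions extrapolated from the template above, so adapt them to the real autograd API:
+
+ ```ruby
+ module Daimond
+   module NN
+     class Double < Module # Hypothetical toy layer computing y = 2x
+       def initialize
+         super() # Required by the checklist above
+         # No trainable parameters, so nothing is appended to @parameters
+       end
+
+       def forward(x)
+         out = Daimond::Tensor.new(x.data * 2) # Assumes Tensor wraps a Numo array in .data
+         out._backward = lambda do
+           # Chain rule: dL/dx = dL/dy * 2, accumulated into the input's gradient
+           x.grad += out.grad * 2
+         end
+         out
+       end
+     end
+   end
+ end
+ ```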
+
+ **For Optimizers:**
+
+ ```ruby
+ module Daimond
+   module Optim
+     class Adam # Standalone class; alternatively inherit from SGD and reuse its plumbing
+       def initialize(parameters, lr: 0.001, beta1: 0.9, beta2: 0.999)
+         @parameters = parameters
+         @lr = lr
+         # Initialize moments
+       end
+
+       def step
+         # Update parameters using gradients
+       end
+
+       def zero_grad
+         @parameters.each { |p| p.grad = Numo::DFloat.zeros(*p.shape) }
+       end
+     end
+   end
+ end
+ ```
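+
+ For reference, `step` would implement the standard Adam update. A sketch, assuming initialize also stored @beta1/@beta2, zero-filled moment arrays @m/@v (one per parameter), and a timestep counter @t, and that parameters expose .data and .grad as Numo arrays:
+
+ ```ruby
+ def step
+   @t += 1
+   @parameters.each_with_index do |p, i|
+     g = p.grad
+     @m[i] = @beta1 * @m[i] + (1 - @beta1) * g       # First moment: running mean of gradients
+     @v[i] = @beta2 * @v[i] + (1 - @beta2) * (g * g) # Second moment: running mean of squared gradients
+     m_hat = @m[i] / (1 - @beta1**@t)                # Bias correction for the zero initialization
+     v_hat = @v[i] / (1 - @beta2**@t)
+     p.data -= @lr * m_hat / (Numo::NMath.sqrt(v_hat) + 1e-8)
+   end
+ end
+ ```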
+
+ ## Testing (Important!)
+
+ **1. Verify backward pass manually for new operations:**
+
+ ```ruby
+ # Create a simple test case
+ x = Daimond::Tensor.new([[1.0, 2.0]])
+ y = x.relu
+ y.backward!
+
+ # Check the gradient is correct manually
+ puts x.grad.inspect
+ ```
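+
+ A numerical check is a useful complement to eyeballing: compare the analytic gradient against a central finite difference. A plain-Ruby illustration for scalar ReLU (no framework API assumed):
+
+ ```ruby
+ eps = 1e-4
+ x0  = 2.0
+ analytic = x0 > 0 ? 1.0 : 0.0                                 # d relu(x)/dx at x0
+ numeric  = ([x0 + eps, 0].max - [x0 - eps, 0].max) / (2 * eps)
+ puts format('analytic=%.6f numeric=%.6f', analytic, numeric)  # Should agree closely
+ ```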
+
+ **2. Verify MNIST still trains to 95%+ accuracy**
+
+ **3. Add an example script in examples/ if adding major features:**
+ - examples/conv2d_demo.rb
+ - examples/adam_comparison.rb
+
+ **Documentation**
+ - Code comments: Explain complex math in backward passes
+ - README updates: If adding public API
+ - Type hints: Optional, but helpful (e.g., # @param [Tensor] input)
+
+ ## Performance Guidelines
+
+ dAImond is for education and prototyping, but we shouldn't be painfully slow:
+ - Prefer Numo operations over Ruby loops for math
+ - Minimize object allocation in hot loops (forward/backward)
+ - Profile before optimizing: use ruby -rprofile if needed
+
+ Example:
+ ```ruby
+ # ❌ Bad: Ruby loop
+ result = []
+ 1000.times { |i| result << data[i] * 2 }
+
+ # ✅ Good: Numo vectorized
+ result = data * 2
+ ```
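+
+ To see the gap for yourself, a quick comparison with the stdlib Benchmark module (timings are illustrative and vary by machine):
+
+ ```ruby
+ require 'benchmark'
+ require 'numo/narray'
+
+ data = Numo::DFloat.new(1_000_000).rand
+
+ Benchmark.bm(10) do |bm|
+   bm.report('ruby loop') { data.size.times { |i| data[i] * 2 } }
+   bm.report('numo')      { data * 2 }
+ end
+ ```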
+
+ ## Recognition
+
+ Contributors will be:
+ - Added to the CONTRIBUTORS.md file
+ - Mentioned in release notes
+ - Forever appreciated! 🙏
+
+ ## Questions
+ - Open a GitHub Discussion
+ - Or email: fyodor@hudzone.com
+
+ ## Code of Conduct
+ Be nice.
+
+ **Thank you for helping make dAImond shine! Glory to Ruby.** 💎✨
data/README.ja.md ADDED
@@ -0,0 +1,115 @@
+ # dAImond 💎
+
+ A deep learning framework for Ruby, inspired by PyTorch. Written from scratch with love for the Ruby community.
+
+ [![Ruby](https://img.shields.io/badge/ruby-%23CC342D.svg?style=for-the-badge&logo=ruby&logoColor=white)](https://www.ruby-lang.org/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+
+ > **Why Ruby?** I can't say why; I just love this language. dAImond brings the happiness back to ML.
+
+ ## Key Features
+
+ - 🔥 **Automatic Differentiation** - Full autograd engine with computational graphs
+ - 🧠 **Neural Networks** - Linear layers, activations (ReLU, Sigmoid, Softmax, Tanh)
+ - 📊 **Optimizers** - SGD with momentum, learning rate scheduling
+ - 🎯 **Loss Functions** - MSE, CrossEntropy
+ - 💾 **Model Serialization** - Save/load with Marshal
+ - 📈 **Data Loaders** - Batch processing, shuffling, MNIST support
+ - ⚡ **Fast Backend** - Vectorized operations via Numo::NArray (C speed)
+ - 🎨 **Beautiful API** - Idiomatic Ruby DSL, chainable methods
+
+ ## Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem 'daimond'
+ ```
+
+ Or install manually:
+ ```bash
+ gem install daimond
+ ```
+
+ **Dependencies:** Ruby 2.7+, numo-narray
+
+ ## Quick Start
+ ```ruby
+ require 'daimond'
+
+ # Define the model
+ class NeuralNet < Daimond::NN::Module
+   attr_reader :fc1, :fc2
+
+   def initialize
+     super()
+     @fc1 = Daimond::NN::Linear.new(784, 128)
+     @fc2 = Daimond::NN::Linear.new(128, 10)
+     @parameters = @fc1.parameters + @fc2.parameters
+   end
+
+   def forward(x)
+     x = @fc1.forward(x).relu
+     @fc2.forward(x).softmax
+   end
+ end
+
+ # Training loop
+ model = NeuralNet.new
+ optimizer = Daimond::Optim::SGD.new(model.parameters, lr: 0.1, momentum: 0.9)
+ criterion = Daimond::Loss::CrossEntropyLoss.new
+
+ # Forward → Backward → Update
+ loss = criterion.call(model.forward(input), target)
+ optimizer.zero_grad
+ loss.backward!
+ optimizer.step
+ ```
+
+ ## MNIST Example (97% Accuracy!)
+
+ Train a classifier on 60,000 handwritten digits:
+ ```bash
+ ruby examples/mnist.rb
+ ```
+
+ Results:
+ ```text
+ Epoch 1/5: Loss = 0.2898, Accuracy = 91.35%
+ Epoch 2/5: Loss = 0.1638, Accuracy = 95.31%
+ Epoch 3/5: Loss = 0.1389, Accuracy = 96.2%
+ Epoch 4/5: Loss = 0.1195, Accuracy = 96.68%
+ Epoch 5/5: Loss = 0.1083, Accuracy = 97.12%
+ ```
+
+ Save the model:
+ ```ruby
+ model.save('models/mnist_model.bin')
+ ```
+ Load and predict:
+ ```ruby
+ model = NeuralNet.new
+ model.load('models/mnist_model.bin')
+ prediction = model.forward(test_image)
+ ```
+
+ ## Performance
+ While pure Ruby is slower than PyTorch/CUDA, dAImond achieves reasonable speed for prototyping and small-to-medium datasets:
+ - MNIST (60k images): roughly 2-3 minutes per epoch on a modern CPU
+ - Ideal for education, research, and models under 1M parameters
+
+ ## Roadmap
+ - [x] Core autograd engine
+ - [x] Linear layers and activations
+ - [x] MNIST 97% accuracy
+ - [x] Model serialization
+ - [ ] Convolutional layers (Conv2D)
+ - [ ] Batch Normalization and Dropout
+ - [ ] Adam/RMSprop optimizers
+ - [ ] GPU support (OpenCL/CUDA via FFI)
+ - [ ] ONNX export/import
+
+ ## Contributing
+ Contributors of all kinds are welcome! See CONTRIBUTING.md for details.
+
+ ## License
+ MIT License - see the LICENSE file.
data/README.md ADDED
@@ -0,0 +1,115 @@
+ # dAImond 💎
+
+ Deep learning framework for Ruby, inspired by PyTorch. Written from scratch with love for the Ruby community.
+
+ [![Ruby](https://img.shields.io/badge/ruby-%23CC342D.svg?style=for-the-badge&logo=ruby&logoColor=white)](https://www.ruby-lang.org/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+
+ > **Why Ruby?** IDK, I just love this language. dAImond brings the happiness back to ML.
+
+ ## Features
+
+ - 🔥 **Automatic Differentiation** - Full autograd engine with computational graphs
+ - 🧠 **Neural Networks** - Linear layers, activations (ReLU, Sigmoid, Softmax, Tanh)
+ - 📊 **Optimizers** - SGD with momentum, learning rate scheduling
+ - 🎯 **Loss Functions** - MSE, CrossEntropy
+ - 💾 **Model Serialization** - Save/load trained models with Marshal
+ - 📈 **Data Loaders** - Batch processing, shuffling, MNIST support
+ - ⚡ **Fast Backend** - Numo::NArray for vectorized operations (C-speed)
+ - 🎨 **Beautiful API** - Idiomatic Ruby DSL, chainable methods
+
+ ## Installation
+
+ Add this line to your Gemfile:
+
+ ```ruby
+ gem 'daimond'
+ ```
+
+ Or install manually:
+ ```bash
+ gem install daimond
+ ```
+
+ **Dependencies:** Ruby 2.7+, numo-narray
+
+ ## Quick Start
+ ```ruby
+ require 'daimond'
+
+ # Define your model
+ class NeuralNet < Daimond::NN::Module
+   attr_reader :fc1, :fc2
+
+   def initialize
+     super()
+     @fc1 = Daimond::NN::Linear.new(784, 128)
+     @fc2 = Daimond::NN::Linear.new(128, 10)
+     @parameters = @fc1.parameters + @fc2.parameters
+   end
+
+   def forward(x)
+     x = @fc1.forward(x).relu
+     @fc2.forward(x).softmax
+   end
+ end
+
+ # Training loop
+ model = NeuralNet.new
+ optimizer = Daimond::Optim::SGD.new(model.parameters, lr: 0.1, momentum: 0.9)
+ criterion = Daimond::Loss::CrossEntropyLoss.new
+
+ # Forward → Backward → Update
+ loss = criterion.call(model.forward(input), target)
+ optimizer.zero_grad
+ loss.backward!
+ optimizer.step
+ ```
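+
+ Putting it together, a full loop over mini-batches might look like the sketch below. The `batches` collection and the scalar read from `loss.data` are illustrative assumptions; slice your dataset however the data loaders expose it:
+
+ ```ruby
+ epochs = 5
+ epochs.times do |epoch|
+   total_loss = 0.0
+   batches.each do |input, target| # batches: your own array of [input, target] pairs
+     optimizer.zero_grad
+     loss = criterion.call(model.forward(input), target)
+     loss.backward!
+     optimizer.step
+     total_loss += loss.data.to_f # Assumes the loss Tensor exposes its scalar via .data
+   end
+   puts "Epoch #{epoch + 1}/#{epochs}: Loss = #{(total_loss / batches.size).round(4)}"
+ end
+ ```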
+
+ ## MNIST Example (97% Accuracy!)
+ **Train a classifier on 60,000 handwritten digits:**
+ ```bash
+ ruby examples/mnist.rb
+ ```
+ **Results:**
+ ```text
+ Epoch 1/5: Loss = 0.2898, Accuracy = 91.35%
+ Epoch 2/5: Loss = 0.1638, Accuracy = 95.31%
+ Epoch 3/5: Loss = 0.1389, Accuracy = 96.2%
+ Epoch 4/5: Loss = 0.1195, Accuracy = 96.68%
+ Epoch 5/5: Loss = 0.1083, Accuracy = 97.12%
+ ```
+
+ **Save your model:**
+ ```ruby
+ model.save('models/mnist_model.bin')
+ ```
+
+ **Load and predict:**
+ ```ruby
+ model = NeuralNet.new
+ model.load('models/mnist_model.bin')
+ prediction = model.forward(test_image)
+ ```
+
+ ## Performance
+ While pure Ruby is slower than PyTorch/CUDA, dAImond achieves reasonable speeds for prototyping and small-to-medium datasets:
+ - MNIST (60k images): ~2-3 minutes per epoch on a modern CPU
+ - Perfect for education, research, and production models under 1M parameters
+
+ ## Roadmap
+ - [x] Core autograd engine
+ - [x] Linear layers & activations
+ - [x] MNIST 97% accuracy
+ - [x] Model serialization
+ - [ ] Convolutional layers (Conv2D)
+ - [ ] Batch Normalization & Dropout
+ - [ ] Adam/RMSprop optimizers
+ - [ ] GPU support (OpenCL/CUDA via FFI)
+ - [ ] ONNX export/import
+
+ ## Contributing
+ I'll be happy to see any contributors! Please read CONTRIBUTING.md for details.
+
+ ## License
+ MIT License - see the LICENSE file.
data/README.ru.md ADDED
@@ -0,0 +1,116 @@
+ # dAImond 💎
+
+ A deep learning framework for Ruby, inspired by PyTorch.
+
+ [![Ruby](https://img.shields.io/badge/ruby-%23CC342D.svg?style=for-the-badge&logo=ruby&logoColor=white)](https://www.ruby-lang.org/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+
+ > **Why Ruby?** No idea, I just felt like it. dAImond brings the joy back to tinkering with ML, because it's Ruby.
+
+ ## Features
+
+ - 🔥 **Automatic Differentiation** - Full-fledged autograd with computational graphs
+ - 🧠 **Neural Networks** - Linear layers, activations (ReLU, Sigmoid, Softmax, Tanh)
+ - 📊 **Optimizers** - SGD with momentum, learning rate scheduling
+ - 🎯 **Loss Functions** - MSE, CrossEntropy
+ - 💾 **Model Serialization** - Save/load via Marshal
+ - 📈 **Data Loaders** - Batch processing, shuffling, MNIST support
+ - ⚡ **Fast Backend** - Numo::NArray for vectorized operations (C speed)
+ - 🎨 **Beautiful API** - Idiomatic Ruby DSL, chainable methods
+
+ ## Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem 'daimond'
+ ```
+
+ Or install it by hand:
+ ```bash
+ gem install daimond
+ ```
+
+ **Dependencies:** Ruby 2.7+, numo-narray
+
+ ## Quick Start
+ ```ruby
+ require 'daimond'
+
+ # Define your model
+ class NeuralNet < Daimond::NN::Module
+   attr_reader :fc1, :fc2
+
+   def initialize
+     super()
+     @fc1 = Daimond::NN::Linear.new(784, 128)
+     @fc2 = Daimond::NN::Linear.new(128, 10)
+     @parameters = @fc1.parameters + @fc2.parameters
+   end
+
+   def forward(x)
+     x = @fc1.forward(x).relu
+     @fc2.forward(x).softmax
+   end
+ end
+
+ # Training loop
+ model = NeuralNet.new
+ optimizer = Daimond::Optim::SGD.new(model.parameters, lr: 0.1, momentum: 0.9)
+ criterion = Daimond::Loss::CrossEntropyLoss.new
+
+ # Forward → Backward → Update
+ loss = criterion.call(model.forward(input), target)
+ optimizer.zero_grad
+ loss.backward!
+ optimizer.step
+ ```
+
+ ## MNIST Example (97% Accuracy!)
+ **Train a classifier on 60k handwritten digits:**
+ ```bash
+ ruby examples/mnist.rb
+ ```
+ **Results:**
+ ```text
+ Epoch 1/5: Loss = 0.2898, Accuracy = 91.35%
+ Epoch 2/5: Loss = 0.1638, Accuracy = 95.31%
+ Epoch 3/5: Loss = 0.1389, Accuracy = 96.2%
+ Epoch 4/5: Loss = 0.1195, Accuracy = 96.68%
+ Epoch 5/5: Loss = 0.1083, Accuracy = 97.12%
+ ```
+
+ **Save the model:**
+ ```ruby
+ model.save('models/mnist_model.bin')
+ ```
+
+ **Load and predict:**
+ ```ruby
+ model = NeuralNet.new
+ model.load('models/mnist_model.bin')
+ prediction = model.forward(test_image)
+ ```
+
+ ## Performance
+ While pure Ruby is slower than PyTorch/CUDA, dAImond delivers reasonable speed for prototyping and small-to-medium datasets:
+ - MNIST (60k images): ~2-3 minutes per epoch on a modern CPU
+ - Ideal for education, research, and models under 1M parameters
+
+ ## Roadmap
+ - [x] Core autograd engine
+ - [x] Linear layers and activations
+ - [x] MNIST 97% accuracy
+ - [x] Model serialization
+ - [ ] Convolutional layers (Conv2D)
+ - [ ] Batch Normalization and Dropout
+ - [ ] Adam/RMSprop optimizers
+ - [ ] GPU support (OpenCL/CUDA via FFI)
+ - [ ] ONNX export/import
+
+ ## Contributing
+ I'd be glad for any help! Details in CONTRIBUTING.md.
+
+ ## License
+ MIT License