torch-rb 0.1.3

Files changed (44)
  1. checksums.yaml +7 -0
  2. data/CHANGELOG.md +28 -0
  3. data/LICENSE.txt +46 -0
  4. data/README.md +426 -0
  5. data/ext/torch/ext.cpp +839 -0
  6. data/ext/torch/extconf.rb +25 -0
  7. data/lib/torch-rb.rb +1 -0
  8. data/lib/torch.rb +422 -0
  9. data/lib/torch/ext.bundle +0 -0
  10. data/lib/torch/inspector.rb +85 -0
  11. data/lib/torch/nn/alpha_dropout.rb +9 -0
  12. data/lib/torch/nn/conv2d.rb +37 -0
  13. data/lib/torch/nn/convnd.rb +41 -0
  14. data/lib/torch/nn/dropout.rb +9 -0
  15. data/lib/torch/nn/dropout2d.rb +9 -0
  16. data/lib/torch/nn/dropout3d.rb +9 -0
  17. data/lib/torch/nn/dropoutnd.rb +15 -0
  18. data/lib/torch/nn/embedding.rb +52 -0
  19. data/lib/torch/nn/feature_alpha_dropout.rb +9 -0
  20. data/lib/torch/nn/functional.rb +100 -0
  21. data/lib/torch/nn/init.rb +30 -0
  22. data/lib/torch/nn/linear.rb +36 -0
  23. data/lib/torch/nn/module.rb +85 -0
  24. data/lib/torch/nn/mse_loss.rb +13 -0
  25. data/lib/torch/nn/parameter.rb +14 -0
  26. data/lib/torch/nn/relu.rb +13 -0
  27. data/lib/torch/nn/sequential.rb +29 -0
  28. data/lib/torch/optim/adadelta.rb +57 -0
  29. data/lib/torch/optim/adagrad.rb +71 -0
  30. data/lib/torch/optim/adam.rb +81 -0
  31. data/lib/torch/optim/adamax.rb +68 -0
  32. data/lib/torch/optim/adamw.rb +82 -0
  33. data/lib/torch/optim/asgd.rb +65 -0
  34. data/lib/torch/optim/lr_scheduler/lr_scheduler.rb +33 -0
  35. data/lib/torch/optim/lr_scheduler/step_lr.rb +17 -0
  36. data/lib/torch/optim/optimizer.rb +62 -0
  37. data/lib/torch/optim/rmsprop.rb +76 -0
  38. data/lib/torch/optim/rprop.rb +68 -0
  39. data/lib/torch/optim/sgd.rb +60 -0
  40. data/lib/torch/tensor.rb +196 -0
  41. data/lib/torch/utils/data/data_loader.rb +27 -0
  42. data/lib/torch/utils/data/tensor_dataset.rb +22 -0
  43. data/lib/torch/version.rb +3 -0
  44. metadata +169 -0
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: e7f715179c9a84dc7399b80d93fd61f2bbb58a0156e6084dc4abb23e1d4a1b52
+   data.tar.gz: 6928379ae7c92a77ad9dde4f4224ec33c6f8575a9b77585c0147e4f5361021de
+ SHA512:
+   metadata.gz: 9911a9e86d93f1e410776c44fdb3cd9aa06c83d1f0e42fdab8530970bea6520aed7906e96fb8243efd6b957453ebc13678b2b92e4c85b54407030a32c6196e08
+   data.tar.gz: 0d080f5458a5dcf8fee19ce5e2e342bf6269432de6e78d923036232963ebb80daeea993c0bbf4af2d6da46593ac28a72a8232020a9fcb48acc3276c9e1ebebf3
@@ -0,0 +1,28 @@
+ ## 0.1.3 (2019-11-30)
+
+ - Changed to BSD 3-Clause license to match PyTorch
+ - Added many optimizers
+ - Added `StepLR` learning rate scheduler
+ - Added dropout
+ - Added embedding
+ - Added support for `bool` type
+ - Improved performance of `from_numo`
+
+ ## 0.1.2 (2019-11-27)
+
+ - Added SGD optimizer
+ - Added support for gradient to `backward` method
+ - Added `argmax`, `eq`, `leaky_relu`, `prelu`, and `reshape` methods
+ - Improved indexing
+ - Fixed `zero_grad`
+ - Fixed error with infinite values
+
+ ## 0.1.1 (2019-11-26)
+
+ - Added support for `uint8` and `int8` types
+ - Fixed `undefined symbol` error on Linux
+ - Fixed C++ error messages
+
+ ## 0.1.0 (2019-11-26)
+
+ - First release
@@ -0,0 +1,46 @@
+ BSD 3-Clause License
+
+ From Torch-rb:
+
+ Copyright (c) 2019- Andrew Kane
+
+ From PyTorch (for ported code):
+
+ Copyright (c) 2016- Facebook, Inc (Adam Paszke)
+ Copyright (c) 2014- Facebook, Inc (Soumith Chintala)
+ Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
+ Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
+ Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
+ Copyright (c) 2011-2013 NYU (Clement Farabet)
+ Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
+ Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
+ Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
+
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the names of Facebook, Deepmind Technologies, NYU, NEC Laboratories America
+    and IDIAP Research Institute nor the names of its contributors may be
+    used to endorse or promote products derived from this software without
+    specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,426 @@
+ # Torch-rb
+
+ :fire: Deep learning for Ruby, powered by [LibTorch](https://pytorch.org)
+
+ This gem is currently experimental. There may be breaking changes between each release. Please report any issues you experience.
+
+ [![Build Status](https://travis-ci.org/ankane/torch-rb.svg?branch=master)](https://travis-ci.org/ankane/torch-rb)
+
+ ## Installation
+
+ First, [install LibTorch](#libtorch-installation). For Homebrew, use:
+
+ ```sh
+ brew install libtorch
+ ```
+
+ Add this line to your application’s Gemfile:
+
+ ```ruby
+ gem 'torch-rb'
+ ```
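+
+ Then run `bundle install`. A quick smoke test (a minimal sketch; it assumes the library is required as `torch`, per the `lib/torch.rb` entrypoint in the file list above):
+
+ ```ruby
+ require "torch"
+
+ p Torch.tensor([1, 2, 3])
+ ```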
+
+ ## Getting Started
+
+ This library follows the [PyTorch API](https://pytorch.org/docs/stable/torch.html). There are a few changes to make it more Ruby-like:
+
+ - Methods that perform in-place modifications end with `!` instead of `_` (`add!` instead of `add_`)
+ - Methods that return booleans use `?` instead of `is_` (`tensor?` instead of `is_tensor`)
+ - Numo is used instead of NumPy (`x.numo` instead of `x.numpy()`)
+
+ Many methods and options are missing at the moment. PRs welcome!
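+
+ The conventions side by side (a sketch; `Torch.tensor?` is assumed from the `is_tensor` rename above):
+
+ ```ruby
+ x = Torch.tensor([1, 2, 3])
+
+ x.add!(1)        # in-place, like x.add_(1) in PyTorch
+ Torch.tensor?(x) # boolean, like torch.is_tensor(x)
+ x.numo           # Numo array, like x.numpy()
+ ```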
+
+ ## Tutorial
+
+ Some examples below are from [Deep Learning with PyTorch: A 60 Minute Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)
+
+ ### Tensors
+
+ Create a tensor from a Ruby array
+
+ ```ruby
+ x = Torch.tensor([[1, 2, 3], [4, 5, 6]])
+ ```
+
+ Get the shape of a tensor
+
+ ```ruby
+ x.shape # [2, 3]
+ ```
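+
+ And index into it (a sketch; assumes PyTorch-style `[]` indexing, which the 0.1.2 changelog lists as improved)
+
+ ```ruby
+ x[0] # tensor([1, 2, 3])
+ ```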
+
+ There are [many functions](#tensor-creation) to create tensors, like
+
+ ```ruby
+ a = Torch.rand(3)
+ b = Torch.zeros(2, 3)
+ ```
+
+ Each tensor has four properties
+
+ - `dtype` - the data type - `:uint8`, `:int8`, `:int16`, `:int32`, `:int64`, `:float32`, `:float64`, or `:bool`
+ - `layout` - `:strided` (dense) or `:sparse`
+ - `device` - the compute device, like CPU or GPU
+ - `requires_grad` - whether or not to record gradients
+
+ You can specify properties when creating a tensor
+
+ ```ruby
+ Torch.rand(2, 3, dtype: :double, layout: :strided, device: "cpu", requires_grad: true)
+ ```
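+
+ And read them back (a sketch; the `dtype` and `requires_grad` readers are assumed here, mirroring PyTorch):
+
+ ```ruby
+ x = Torch.rand(2, 3, dtype: :float64, requires_grad: true)
+ x.dtype         # :float64
+ x.requires_grad # true
+ ```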
+
+ ### Operations
+
+ Create a tensor
+
+ ```ruby
+ x = Torch.tensor([10, 20, 30])
+ ```
+
+ Add
+
+ ```ruby
+ x + 5 # tensor([15, 25, 35])
+ ```
+
+ Subtract
+
+ ```ruby
+ x - 5 # tensor([5, 15, 25])
+ ```
+
+ Multiply
+
+ ```ruby
+ x * 5 # tensor([50, 100, 150])
+ ```
+
+ Divide
+
+ ```ruby
+ x / 5 # tensor([2, 4, 6])
+ ```
+
+ Get the remainder
+
+ ```ruby
+ x % 3 # tensor([1, 2, 0])
+ ```
+
+ Raise to a power
+
+ ```ruby
+ x**2 # tensor([100, 400, 900])
+ ```
+
+ Perform operations with other tensors
+
+ ```ruby
+ y = Torch.tensor([1, 2, 3])
+ x + y # tensor([11, 22, 33])
+ ```
+
+ Perform operations in-place
+
+ ```ruby
+ x.add!(5)
+ x # tensor([15, 25, 35])
+ ```
+
+ You can also specify an output tensor
+
+ ```ruby
+ result = Torch.empty(3)
+ Torch.add(x, y, out: result)
+ result # tensor([16, 27, 38])
+ ```
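+
+ Since `out:` writes into an existing tensor, this is handy for avoiding allocations in a loop (a sketch reusing one buffer):
+
+ ```ruby
+ buffer = Torch.empty(3)
+ 3.times do
+   Torch.add(x, y, out: buffer) # reuses buffer's memory each iteration
+ end
+ ```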
+
+ ### Numo
+
+ Convert a tensor to a Numo array
+
+ ```ruby
+ a = Torch.ones(5)
+ a.numo
+ ```
+
+ Convert a Numo array to a tensor
+
+ ```ruby
+ b = Numo::NArray.cast([1, 2, 3])
+ Torch.from_numo(b)
+ ```
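+
+ The two conversions compose, so you can round-trip data through Numo (performance of `from_numo` was improved in 0.1.3):
+
+ ```ruby
+ t = Torch.tensor([1, 2, 3])
+ Torch.from_numo(t.numo) # tensor([1, 2, 3])
+ ```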
+
+ ### Autograd
+
+ Create a tensor with `requires_grad: true`
+
+ ```ruby
+ x = Torch.ones(2, 2, requires_grad: true)
+ ```
+
+ Perform operations
+
+ ```ruby
+ y = x + 2
+ z = y * y * 3
+ out = z.mean
+ ```
+
+ Backprop
+
+ ```ruby
+ out.backward
+ ```
+
+ Get gradients
+
+ ```ruby
+ x.grad # tensor([[4.5, 4.5], [4.5, 4.5]])
+ ```
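+
+ To see where 4.5 comes from: `out = (1/4) * Σ 3(xᵢ + 2)²`, so `∂out/∂xᵢ = 3(xᵢ + 2)/2`, which is `3(1 + 2)/2 = 4.5` with every `xᵢ = 1`.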
+
+ Stop autograd from tracking history
+
+ ```ruby
+ x.requires_grad # true
+ (x**2).requires_grad # true
+
+ Torch.no_grad do
+   (x**2).requires_grad # false
+ end
+ ```
+
+ ### Neural Networks
+
+ Define a neural network
+
+ ```ruby
+ class Net < Torch::NN::Module
+   def initialize
+     super
+     # 1 input channel, 6 output channels, 3x3 convolution kernel
+     @conv1 = Torch::NN::Conv2d.new(1, 6, 3)
+     @conv2 = Torch::NN::Conv2d.new(6, 16, 3)
+     # affine operations: y = Wx + b
+     @fc1 = Torch::NN::Linear.new(16 * 6 * 6, 120)
+     @fc2 = Torch::NN::Linear.new(120, 84)
+     @fc3 = Torch::NN::Linear.new(84, 10)
+   end
+
+   def forward(x)
+     # max pooling over a 2x2 window (a single number means a square window)
+     x = Torch::NN::F.max_pool2d(Torch::NN::F.relu(@conv1.call(x)), [2, 2])
+     x = Torch::NN::F.max_pool2d(Torch::NN::F.relu(@conv2.call(x)), 2)
+     x = x.view(-1, num_flat_features(x))
+     x = Torch::NN::F.relu(@fc1.call(x))
+     x = Torch::NN::F.relu(@fc2.call(x))
+     x = @fc3.call(x)
+     x
+   end
+
+   def num_flat_features(x)
+     size = x.size[1..-1] # all dimensions except the batch dimension
+     num_features = 1
+     size.each do |s|
+       num_features *= s
+     end
+     num_features
+   end
+ end
+ ```
+
+ Create an instance of it
+
+ ```ruby
+ net = Net.new
+ input = Torch.randn(1, 1, 32, 32)
+ out = net.call(input)
+ ```
+
+ Get trainable parameters
+
+ ```ruby
+ net.parameters
+ ```
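+
+ There are ten parameter tensors here - a weight and a bias for each of the five layers (a sketch; `size` on a parameter returns its shape as an array):
+
+ ```ruby
+ params = net.parameters
+ params.length  # 10
+ params[0].size # [6, 1, 3, 3] (conv1's weight)
+ ```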
+
+ Zero the gradient buffers and backprop with random gradients
+
+ ```ruby
+ net.zero_grad
+ out.backward(Torch.randn(1, 10))
+ ```
+
+ Define a loss function
+
+ ```ruby
+ output = net.call(input)
+ target = Torch.randn(10)
+ target = target.view(1, -1) # make it the same shape as output
+ criterion = Torch::NN::MSELoss.new
+ loss = criterion.call(output, target)
+ ```
+
+ Backprop
+
+ ```ruby
+ net.zero_grad
+ p net.conv1.bias.grad # conv1.bias.grad before backward
+ loss.backward
+ p net.conv1.bias.grad # conv1.bias.grad after backward
+ ```
+
+ Update the weights using the simplest update rule, `weight = weight - learning_rate * gradient`
+
+ ```ruby
+ learning_rate = 0.01
+ net.parameters.each do |f|
+   f.data.sub!(f.grad.data * learning_rate)
+ end
+ ```
+
+ Use an optimizer
+
+ ```ruby
+ optimizer = Torch::Optim::SGD.new(net.parameters, lr: 0.01)
+ optimizer.zero_grad
+ output = net.call(input)
+ loss = criterion.call(output, target)
+ loss.backward
+ optimizer.step
+ ```
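+
+ Tying the pieces together into a training loop (a sketch with random stand-in data; it assumes `TensorDataset` and `DataLoader`, which ship in this release, mirror the PyTorch API):
+
+ ```ruby
+ # random data stands in for a real dataset
+ inputs  = Torch.randn(8, 1, 32, 32)
+ targets = Torch.randn(8, 10)
+ dataset = Torch::Utils::Data::TensorDataset.new(inputs, targets)
+ loader  = Torch::Utils::Data::DataLoader.new(dataset, batch_size: 4)
+
+ optimizer = Torch::Optim::SGD.new(net.parameters, lr: 0.01)
+ criterion = Torch::NN::MSELoss.new
+
+ 10.times do
+   loader.each do |xb, yb|
+     optimizer.zero_grad                     # clear old gradients
+     loss = criterion.call(net.call(xb), yb) # forward pass + loss
+     loss.backward                           # backprop
+     optimizer.step                          # update the weights
+   end
+ end
+ ```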
+
+ ### Tensor Creation
+
+ Here’s a list of functions to create tensors (descriptions from the [C++ docs](https://pytorch.org/cppdocs/notes/tensor_creation.html)):
+
+ - `arange` returns a tensor with a sequence of integers
+
+   ```ruby
+   Torch.arange(3) # tensor([0, 1, 2])
+   ```
+
+ - `empty` returns a tensor with uninitialized values
+
+   ```ruby
+   Torch.empty(3) # tensor([7.0054e-45, 0.0000e+00, 0.0000e+00])
+   ```
+
+ - `eye` returns an identity matrix
+
+   ```ruby
+   Torch.eye(2) # tensor([[1, 0], [0, 1]])
+   ```
+
+ - `full` returns a tensor filled with a single value
+
+   ```ruby
+   Torch.full([3], 5) # tensor([5, 5, 5])
+   ```
+
+ - `linspace` returns a tensor with values linearly spaced in some interval
+
+   ```ruby
+   Torch.linspace(0, 10, 3) # tensor([0, 5, 10])
+   ```
+
+ - `logspace` returns a tensor with values logarithmically spaced in some interval
+
+   ```ruby
+   Torch.logspace(0, 10, 3) # tensor([1, 1e5, 1e10])
+   ```
+
+ - `ones` returns a tensor filled with all ones
+
+   ```ruby
+   Torch.ones(3) # tensor([1, 1, 1])
+   ```
+
+ - `rand` returns a tensor filled with values drawn from a uniform distribution on [0, 1)
+
+   ```ruby
+   Torch.rand(3) # tensor([0.5444, 0.8799, 0.5571])
+   ```
+
+ - `randint` returns a tensor with integers randomly drawn from an interval
+
+   ```ruby
+   Torch.randint(1, 10, [3]) # tensor([7, 6, 4])
+   ```
+
+ - `randn` returns a tensor filled with values drawn from a unit normal distribution
+
+   ```ruby
+   Torch.randn(3) # tensor([-0.7147, 0.6614, 1.1453])
+   ```
+
+ - `randperm` returns a tensor filled with a random permutation of integers in some interval
+
+   ```ruby
+   Torch.randperm(3) # tensor([2, 0, 1])
+   ```
+
+ - `zeros` returns a tensor filled with all zeros
+
+   ```ruby
+   Torch.zeros(3) # tensor([0, 0, 0])
+   ```
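+
+ These presumably accept the keyword options shown earlier for `rand`, so you can pick a dtype at creation time (a sketch):
+
+ ```ruby
+ Torch.zeros(3, dtype: :int64) # tensor([0, 0, 0])
+ ```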
+
+ ## Examples
+
+ Here are a few full examples:
+
+ - [Image classification with MNIST](examples/mnist)
+ - [Collaborative filtering with MovieLens](examples/movielens)
+
+ ## LibTorch Installation
+
+ [Download LibTorch](https://pytorch.org/). For Linux, use the `cxx11 ABI` version. Then run:
+
+ ```sh
+ bundle config build.torch-rb --with-torch-dir=/path/to/libtorch
+ ```
+
+ ### Homebrew
+
+ For Mac, you can use Homebrew.
+
+ ```sh
+ brew install libtorch
+ ```
+
+ Then install the gem (no need for `bundle config`).
+
+ ## rbenv
+
+ This library uses [Rice](https://github.com/jasonroelofs/rice) to interface with LibTorch. Rice and earlier versions of rbenv don’t play nicely together. If you encounter an error during installation, upgrade ruby-build and reinstall your Ruby version.
+
+ ```sh
+ brew upgrade ruby-build
+ rbenv install [version]
+ ```
+
+ ## History
+
+ View the [changelog](https://github.com/ankane/torch-rb/blob/master/CHANGELOG.md)
+
+ ## Contributing
+
+ Everyone is encouraged to help improve this project. Here are a few ways you can help:
+
+ - [Report bugs](https://github.com/ankane/torch-rb/issues)
+ - Fix bugs and [submit pull requests](https://github.com/ankane/torch-rb/pulls)
+ - Write, clarify, or fix documentation
+ - Suggest or add new features
+
+ To get started with development:
+
+ ```sh
+ git clone https://github.com/ankane/torch-rb.git
+ cd torch-rb
+ bundle install
+ bundle exec rake compile -- --with-torch-dir=/path/to/libtorch
+ bundle exec rake test
+ ```
+
+ Here are some good resources for contributors:
+
+ - [PyTorch API](https://pytorch.org/docs/stable/torch.html)
+ - [PyTorch C++ API](https://pytorch.org/cppdocs/)
+ - [Tensor Creation API](https://pytorch.org/cppdocs/notes/tensor_creation.html)
+ - [Using the PyTorch C++ Frontend](https://pytorch.org/tutorials/advanced/cpp_frontend.html)