torch-rb 0.1.8 → 0.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: fca87cb9b6d255287e9fafadf786c113798abbe76b36c82b8271b79cfbf3c2b9
- data.tar.gz: 4813c71f5ad6d078e78da03cf59f8036e9e76258ffb67f538899bba146dcba2a
+ metadata.gz: 9179470135a00453dcae9efbc6cd143112c5fec925bb5675a686e80e70b71b28
+ data.tar.gz: de365f50021d75338a78bcb6e0733bb430759fe7c4fcaea96b1ff0ed2a4b8d5d
  SHA512:
- metadata.gz: 22c7150e6a7d9132c40c67819beecc6b8c69b268bd227a8e4aa324ef5e2707004691d5b65dcd4ba1ac537bfaf783947da7e5a323417cffcbf7d348768c40b7c6
- data.tar.gz: 8a86c6b68efe6ad85a261d7033b87f040c22b2c670a0238accd6246274caed17b86d7b424441bba80c5ea67ec1bf53b05444dfb0c45ea5b8a52806d0ce19ec1e
+ metadata.gz: 46d3d49aa63c0764d20178f450aa0b88c88938c194e70040650e6c1a29899e5f4d896671571730dc23eb1fe039ede53d2714db0f6fe7506ad4382653a5e6ec18
+ data.tar.gz: 3fe47be264030fc2d84de85bb7d006337df37fb5c41b147332fe37dd21de7ba61bdc53a0f7ae9085e01c992a643b22d8216a0248b7a4d24487d88fd7f88a9ecf
data/CHANGELOG.md CHANGED
@@ -1,7 +1,16 @@
+ ## 0.2.0 (2020-04-22)
+
+ - No longer experimental
+ - Updated libtorch to 1.5.0
+ - Added support for GPUs and OpenMP
+ - Added adaptive pooling layers
+ - Tensor `dtype` is now based on Numo type for `Torch.tensor`
+ - Improved support for boolean tensors
+ - Fixed error with unbiased linear model
+
  ## 0.1.8 (2020-01-17)

- - Added support for libtorch 1.4.0
- - Dropped support for libtorch 1.3.1
+ - Updated libtorch to 1.4.0

  ## 0.1.7 (2020-01-10)

data/README.md CHANGED
@@ -1,10 +1,8 @@
- # Torch-rb
+ # Torch.rb

  :fire: Deep learning for Ruby, powered by [LibTorch](https://pytorch.org)

- This gem is currently experimental. There may be breaking changes between each release. Please report any issues you experience.
-
- [![Build Status](https://travis-ci.org/ankane/torch-rb.svg?branch=master)](https://travis-ci.org/ankane/torch-rb)
+ [![Build Status](https://travis-ci.org/ankane/torch.rb.svg?branch=master)](https://travis-ci.org/ankane/torch.rb)

  ## Installation

@@ -30,8 +28,6 @@ This library follows the [PyTorch API](https://pytorch.org/docs/stable/torch.htm
  - Methods that return booleans use `?` instead of `is_` (`tensor?` instead of `is_tensor`)
  - Numo is used instead of NumPy (`x.numo` instead of `x.numpy()`)

- Some methods and options are missing at the moment. PRs welcome!
-
  ## Tutorial

  Some examples below are from [Deep Learning with PyTorch: A 60 Minutes Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)
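For context, here is how the two naming conventions kept in this hunk look in practice — a sketch with method names taken from the README text above and illustrative return values:

```ruby
require "torch"

x = Torch.tensor([[1, 2], [3, 4]])

# booleans use ? instead of is_
Torch.tensor?(x) # => true

# Numo stands in for NumPy
x.numo # => Numo array with the same shape and data
```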
@@ -379,6 +375,14 @@ Here are a few full examples:
  bundle config build.torch-rb --with-torch-dir=/path/to/libtorch
  ```

+ Here’s the list of compatible versions.
+
+ Torch.rb | LibTorch
+ --- | ---
+ 0.2.0 | 1.5.0
+ 0.1.8 | 1.4.0
+ 0.1.0-0.1.7 | 1.3.1
+
  ### Homebrew

  For Mac, you can use Homebrew.
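Since gem and LibTorch versions now have to be matched per the table added above, a Gemfile can pin the pair explicitly — a sketch, where the pessimistic constraint is just one reasonable choice:

```ruby
# Gemfile — pairs with LibTorch 1.5.0 per the compatibility table above
gem "torch-rb", "~> 0.2.0"
```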
@@ -389,6 +393,26 @@ brew install libtorch

  Then install the gem (no need for `bundle config`).

+ ## Performance
+
+ ### Linux
+
+ Deep learning is significantly faster on GPUs.
+
+ Install [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn) and reinstall the gem.
+
+ Check if CUDA is available
+
+ ```ruby
+ Torch::CUDA.available?
+ ```
+
+ Move a neural network to a GPU
+
+ ```ruby
+ net.to("cuda")
+ ```
+
  ## rbenv

  This library uses [Rice](https://github.com/jasonroelofs/rice) to interface with LibTorch. Rice and earlier versions of rbenv don’t play nicely together. If you encounter an error during installation, upgrade ruby-build and reinstall your Ruby version.
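The new Performance section introduces two primitives: `Torch::CUDA.available?` and `net.to("cuda")`. A typical way to combine them into a device fallback, sketched under the assumption that tensors also accept a device string via `to` (the `Net` class is a placeholder for any `Torch::NN::Module` subclass):

```ruby
require "torch"

# fall back to CPU when no GPU is present
device = Torch::CUDA.available? ? "cuda" : "cpu"

net = Net.new # placeholder model (a Torch::NN::Module subclass)
net.to(device)

# inputs must live on the same device as the model (Tensor#to assumed)
x = Torch.rand(1, 3, 32, 32).to(device)
output = net.call(x)
```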
@@ -400,22 +424,22 @@ rbenv install [version]

  ## History

- View the [changelog](https://github.com/ankane/torch-rb/blob/master/CHANGELOG.md)
+ View the [changelog](https://github.com/ankane/torch.rb/blob/master/CHANGELOG.md)

  ## Contributing

  Everyone is encouraged to help improve this project. Here are a few ways you can help:

- - [Report bugs](https://github.com/ankane/torch-rb/issues)
- - Fix bugs and [submit pull requests](https://github.com/ankane/torch-rb/pulls)
+ - [Report bugs](https://github.com/ankane/torch.rb/issues)
+ - Fix bugs and [submit pull requests](https://github.com/ankane/torch.rb/pulls)
  - Write, clarify, or fix documentation
  - Suggest or add new features

  To get started with development:

  ```sh
- git clone https://github.com/ankane/torch-rb.git
- cd torch-rb
+ git clone https://github.com/ankane/torch.rb.git
+ cd torch.rb
  bundle install
  bundle exec rake compile -- --with-torch-dir=/path/to/libtorch
  bundle exec rake test
data/ext/torch/ext.cpp CHANGED
@@ -131,12 +131,15 @@ void Init_ext()
  })
  .define_singleton_method(
  "_tensor",
- *[](Object o, IntArrayRef size, const torch::TensorOptions &options) {
- Array a = Array(o);
+ *[](Array a, IntArrayRef size, const torch::TensorOptions &options) {
  auto dtype = options.dtype();
  torch::Tensor t;
  if (dtype == torch::kBool) {
- throw std::runtime_error("Cannot create bool from tensor method yet");
+ std::vector<uint8_t> vec;
+ for (size_t i = 0; i < a.size(); i++) {
+ vec.push_back(from_ruby<bool>(a[i]));
+ }
+ t = torch::tensor(vec, options);
  } else {
  std::vector<float> vec;
  for (size_t i = 0; i < a.size(); i++) {
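This hunk is what backs the "boolean tensors" changelog entry: instead of raising, the extension now copies the Ruby array through a `std::vector<uint8_t>` (presumably sidestepping `std::vector<bool>`'s bit-packed specialization) and builds the tensor from that. On the Ruby side, something like the following should now work — a sketch; whether `:bool` is inferred from the data or must be passed via `dtype:` isn't shown by the diff:

```ruby
# raised "Cannot create bool from tensor method yet" in 0.1.x
mask = Torch.tensor([true, false, true], dtype: :bool)
```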
@@ -213,48 +216,56 @@ void Init_ext()
  .define_method(
  "_flat_data",
  *[](Tensor& self) {
+ Tensor tensor = self;
+
+ // move to CPU to get data
+ if (tensor.device().type() != torch::kCPU) {
+ torch::Device device("cpu");
+ tensor = tensor.to(device);
+ }
+
  Array a;
- auto dtype = self.dtype();
+ auto dtype = tensor.dtype();

  // TODO DRY if someone knows C++
  if (dtype == torch::kByte) {
- uint8_t* data = self.data_ptr<uint8_t>();
- for (int i = 0; i < self.numel(); i++) {
+ uint8_t* data = tensor.data_ptr<uint8_t>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(data[i]);
  }
  } else if (dtype == torch::kChar) {
- int8_t* data = self.data_ptr<int8_t>();
- for (int i = 0; i < self.numel(); i++) {
+ int8_t* data = tensor.data_ptr<int8_t>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(to_ruby<int>(data[i]));
  }
  } else if (dtype == torch::kShort) {
- int16_t* data = self.data_ptr<int16_t>();
- for (int i = 0; i < self.numel(); i++) {
+ int16_t* data = tensor.data_ptr<int16_t>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(data[i]);
  }
  } else if (dtype == torch::kInt) {
- int32_t* data = self.data_ptr<int32_t>();
- for (int i = 0; i < self.numel(); i++) {
+ int32_t* data = tensor.data_ptr<int32_t>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(data[i]);
  }
  } else if (dtype == torch::kLong) {
- int64_t* data = self.data_ptr<int64_t>();
- for (int i = 0; i < self.numel(); i++) {
+ int64_t* data = tensor.data_ptr<int64_t>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(data[i]);
  }
  } else if (dtype == torch::kFloat) {
- float* data = self.data_ptr<float>();
- for (int i = 0; i < self.numel(); i++) {
+ float* data = tensor.data_ptr<float>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(data[i]);
  }
  } else if (dtype == torch::kDouble) {
- double* data = self.data_ptr<double>();
- for (int i = 0; i < self.numel(); i++) {
+ double* data = tensor.data_ptr<double>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(data[i]);
  }
  } else if (dtype == torch::kBool) {
- bool* data = self.data_ptr<bool>();
- for (int i = 0; i < self.numel(); i++) {
+ bool* data = tensor.data_ptr<bool>();
+ for (int i = 0; i < tensor.numel(); i++) {
  a.push(data[i] ? True : False);
  }
  } else {
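The `_flat_data` change above is what lets data be read back from tensors that live on a GPU: the tensor is copied to the CPU first, then each supported dtype is walked element by element. From Ruby, a round trip would look roughly like this — assuming `Tensor#to_a` is the public wrapper over `_flat_data` and a CUDA-enabled build:

```ruby
x = Torch.tensor([1.0, 2.0, 3.0]).to("cuda")
x.to_a # copied back to the CPU internally before being read
```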
@@ -300,15 +311,13 @@ void Init_ext()
  .define_method(
  "device",
  *[](torch::TensorOptions& self, std::string device) {
- torch::DeviceType d;
- if (device == "cpu") {
- d = torch::kCPU;
- } else if (device == "cuda") {
- d = torch::kCUDA;
- } else {
- throw std::runtime_error("Unsupported device: " + device);
+ try {
+ // needed to catch exception
+ torch::Device d(device);
+ return self.device(d);
+ } catch (const c10::Error& error) {
+ throw std::runtime_error(error.what_without_backtrace());
  }
- return self.device(d);
  })
  .define_method(
  "requires_grad",
data/ext/torch/extconf.rb CHANGED
@@ -2,28 +2,55 @@ require "mkmf-rice"

  abort "Missing stdc++" unless have_library("stdc++")

- $CXXFLAGS << " -std=c++11"
+ $CXXFLAGS << " -std=c++14"

- # needed for Linux pre-cxx11 ABI version
- # $CXXFLAGS << " -D_GLIBCXX_USE_CXX11_ABI=0"
+ # change to 0 for Linux pre-cxx11 ABI version
+ $CXXFLAGS << " -D_GLIBCXX_USE_CXX11_ABI=1"
+
+ # TODO check compiler name
+ clang = RbConfig::CONFIG["host_os"] =~ /darwin/i
+
+ if have_library("omp") || have_library("gomp")
+ $CXXFLAGS << " -DAT_PARALLEL_OPENMP=1"
+ $CXXFLAGS << " -Xclang" if clang
+ $CXXFLAGS << " -fopenmp"
+ end

  # silence ruby/intern.h warning
  $CXXFLAGS << " -Wno-deprecated-register"

  # silence torch warnings
- $CXXFLAGS << " -Wno-shorten-64-to-32 -Wno-missing-noreturn"
+ if clang
+ $CXXFLAGS << " -Wno-shorten-64-to-32 -Wno-missing-noreturn"
+ else
+ $CXXFLAGS << " -Wno-duplicated-cond -Wno-suggest-attribute=noreturn"
+ end

  inc, lib = dir_config("torch")
-
  inc ||= "/usr/local/include"
  lib ||= "/usr/local/lib"

+ cuda_inc, cuda_lib = dir_config("cuda")
+ cuda_inc ||= "/usr/local/cuda/include"
+ cuda_lib ||= "/usr/local/cuda/lib64"
+
+ with_cuda = Dir["#{lib}/*torch_cuda*"].any? && have_library("cuda") && have_library("cudnn")
+
  $INCFLAGS << " -I#{inc}"
  $INCFLAGS << " -I#{inc}/torch/csrc/api/include"

  $LDFLAGS << " -Wl,-rpath,#{lib}"
+ $LDFLAGS << ":#{cuda_lib}/stubs:#{cuda_lib}" if with_cuda
  $LDFLAGS << " -L#{lib}"
- $LDFLAGS << " -ltorch -lc10"
+ $LDFLAGS << " -L#{cuda_lib}" if with_cuda
+
+ # https://github.com/pytorch/pytorch/blob/v1.5.0/torch/utils/cpp_extension.py#L1232-L1238
+ $LDFLAGS << " -lc10 -ltorch_cpu -ltorch"
+ if with_cuda
+ $LDFLAGS << " -lcuda -lnvrtc -lnvToolsExt -lcudart -lc10_cuda -ltorch_cuda -lcufft -lcurand -lcublas -lcudnn"
+ # TODO figure out why this is needed
+ $LDFLAGS << " -Wl,--no-as-needed,#{lib}/libtorch.so"
+ end

  # generate C++ functions
  puts "Generating C++ functions..."