torch-rb 0.12.2 → 0.13.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 8160298f6299869f699201dd9217785da5ac7d2562f05b12a2df2d4537143886
- data.tar.gz: cb902f45cb72adaaa2008e181af2af0844e926ef62f1d2ccaa67251cb70c5799
+ metadata.gz: 311e86910351cc5050fb92146e1f6c9ad018a9609d0f5ca0ef51253f1b813b22
+ data.tar.gz: afc18a5142abecba2fdd17f669ecdf9de5b90f35e8c79a637c4e960427a08439
  SHA512:
- metadata.gz: c5077c40cb32414ac7d230430fdcf1cd93f199a81100179a1aa26185d6216c049316bbf35d79968442fd4db5e18414991946a366e6e7e6260a904f001774faa2
- data.tar.gz: b4f1ff33b47596953c0654b07f1916a88ba247af72c1d4693d9b737802a790256d3232c0419c3a603fdebd51f47360833265326a1f27f8242604ff05766ea690
+ metadata.gz: 29df29a3dd0f752f4da731b84f0ca40832d49510a76784364d607977c8e90e32074f7e17ea07167e79330a97ef5f3378b19331d326301a3cc5a0e76180384ac6
+ data.tar.gz: 50e96247102c02cb6f5b0ddc470947f06dfb4efcc3a675ee174a334c4263c1dfe9f3be9c5bfb249239b8b4abce79f6e6dc625833081ec101180ba9a757f3421c
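The entries above are ordinary SHA256/SHA512 hex digests of the gem's `metadata.gz` and `data.tar.gz`. As a minimal sketch, Ruby's standard `Digest` library produces digests in the same format (the gem filename in the comment is illustrative only):

```ruby
require "digest"

# SHA256 of the empty string -- a well-known constant, shown here only to
# illustrate the 64-hex-character format the checksums.yaml entries use
puts Digest::SHA256.hexdigest("")

# For a downloaded gem file you would use something like:
#   Digest::SHA256.file("torch-rb-0.13.1.gem").hexdigest
```

Comparing such a locally computed digest against the values recorded here is a quick integrity check.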
data/CHANGELOG.md CHANGED
@@ -1,3 +1,12 @@
+ ## 0.13.1 (2023-05-03)
+
+ - Fixed error with Rice 4.1
+
+ ## 0.13.0 (2023-04-13)
+
+ - Updated LibTorch to 2.0.0
+ - Dropped support for Ruby < 3
+
  ## 0.12.2 (2023-01-30)
 
  - Added experimental support for DataPipes
data/README.md CHANGED
@@ -8,6 +8,7 @@ Check out:
  - [TorchText](https://github.com/ankane/torchtext) for text and NLP tasks
  - [TorchAudio](https://github.com/ankane/torchaudio) for audio tasks
  - [TorchRec](https://github.com/ankane/torchrec-ruby) for recommendation systems
+ - [TorchData](https://github.com/ankane/torchdata-ruby) for data loading
 
  [![Build Status](https://github.com/ankane/torch.rb/workflows/build/badge.svg?branch=master)](https://github.com/ankane/torch.rb/actions)
 
@@ -409,17 +410,12 @@ Here’s the list of compatible versions.
 
  Torch.rb | LibTorch
  --- | ---
- 0.12.0+ | 1.13.0+
- 0.11.0-0.11.2 | 1.12.0-1.12.1
- 0.10.0-0.10.2 | 1.11.0
- 0.9.0-0.9.2 | 1.10.0-1.10.2
- 0.8.0-0.8.3 | 1.9.0-1.9.1
- 0.6.0-0.7.0 | 1.8.0-1.8.1
- 0.5.0-0.5.3 | 1.7.0-1.7.1
- 0.3.0-0.4.2 | 1.6.0
- 0.2.0-0.2.7 | 1.5.0-1.5.1
- 0.1.8 | 1.4.0
- 0.1.0-0.1.7 | 1.3.1
+ 0.13.x | 2.0.x
+ 0.12.x | 1.13.x
+ 0.11.x | 1.12.x
+ 0.10.x | 1.11.x
+ 0.9.x | 1.10.x
+ 0.8.x | 1.9.x
 
  ### Homebrew
 
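The reworked table pairs each Torch.rb minor series with a LibTorch series. As a hedged sketch of that lookup (the `libtorch_for` helper and `COMPAT` hash are hypothetical, not part of the gem):

```ruby
# hypothetical lookup mirroring the compatibility table in the README
COMPAT = {
  "0.13" => "2.0",
  "0.12" => "1.13",
  "0.11" => "1.12",
  "0.10" => "1.11",
  "0.9"  => "1.10",
  "0.8"  => "1.9"
}

def libtorch_for(torch_rb_version)
  # reduce e.g. "0.13.1" to its minor series "0.13" before the lookup
  series = torch_rb_version.split(".").first(2).join(".")
  COMPAT[series]
end

puts libtorch_for("0.13.1") # => "2.0"
```

Collapsing the old patch-level ranges into `x.y.x` rows makes exactly this kind of series-level lookup possible.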
@@ -431,7 +427,11 @@ brew install pytorch
 
  ## Performance
 
- Deep learning is significantly faster on a GPU. With Linux, install [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn) and reinstall the gem.
+ Deep learning is significantly faster on a GPU.
+
+ ### Linux
+
+ With Linux, install [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn) and reinstall the gem.
 
  Check if CUDA is available
 
@@ -453,6 +453,21 @@ ankane/ml-stack:torch-gpu
 
  And leave the other fields in that section blank. Once the notebook is running, you can run the [MNIST example](https://github.com/ankane/ml-stack/blob/master/torch-gpu/MNIST.ipynb).
 
+ ### Mac
+
+ With Apple silicon, check if Metal Performance Shaders (MPS) is available
+
+ ```ruby
+ Torch::Backends::MPS.available?
+ ```
+
+ Move a neural network to a GPU
+
+ ```ruby
+ device = Torch.device("mps")
+ net.to(device)
+ ```
+
  ## History
 
  View the [changelog](https://github.com/ankane/torch.rb/blob/master/CHANGELOG.md)
@@ -156,10 +156,7 @@ def generate_attach_def(name, type, def_method)
  ruby_name = ruby_name.sub(/\Asparse_/, "") if type == "sparse"
  ruby_name = name if name.start_with?("__")
 
- # cast for Ruby < 3.0 https://github.com/thisMagpie/fftw/issues/22#issuecomment-49508900
- cast = RUBY_VERSION.to_f > 2.7 ? "" : "(VALUE (*)(...)) "
-
- "rb_#{def_method}(m, \"#{ruby_name}\", #{cast}#{full_name(name, type)}, -1);"
+ "rb_#{def_method}(m, \"#{ruby_name}\", #{full_name(name, type)}, -1);"
  end
 
  def generate_method_def(name, functions, type, def_method)
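With Ruby < 3 dropped, the function-pointer cast is gone and the generator emits the C binding line directly. A standalone sketch of that string interpolation, using made-up inputs in place of the generator's real arguments (`tensor_add` stands in for `full_name(name, type)`):

```ruby
# hypothetical inputs; in the generator these come from the parsed
# native function definitions
def_method = "define_method"
ruby_name  = "add"
full_name  = "tensor_add" # placeholder for full_name(name, type)

# the C source line the generator now emits, with no Ruby < 3 cast prefix
line = "rb_#{def_method}(m, \"#{ruby_name}\", #{full_name}, -1);"
puts line
```

This prints `rb_define_method(m, "add", tensor_add, -1);`, the same shape as the simplified return value in the hunk above.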
@@ -415,6 +412,8 @@ def generate_function_params(function, params, remove_self)
  "memoryformat"
  when "Storage"
  "storage"
+ when "Layout"
+ "layout"
  else
  raise "Unknown type: #{param[:type]} (#{function.name})"
  end
@@ -445,7 +444,7 @@ def generate_dispatch_code(function, def_method, params, opt_index, remove_self)
  # torch::empty sets requires_grad by at::empty doesn't
  # https://github.com/pytorch/pytorch/issues/36455
  prefix = remove_self ? "self." : (opt_index ? "torch::" : "at::")
- dispatch = function.dispatch_name
+ dispatch = nil # function.dispatch_name
  unless dispatch
  dispatch = function.base_name
  dispatch += "_symint" if function.func.include?("SymInt")
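Forcing `dispatch` to `nil` means the fallback branch always runs: the dispatch name is built from the base name, with a `_symint` suffix whenever the function schema mentions `SymInt`. A hedged sketch with made-up inputs (the real `func` string and base name come from the parsed function definitions):

```ruby
# made-up schema string and base name standing in for the generator's data
func      = "add.Tensor(Tensor self, SymInt alpha) -> Tensor"
base_name = "add"

dispatch = nil # dispatch_name is no longer consulted
unless dispatch
  dispatch = base_name
  # schemas using symbolic integers get the _symint dispatch variant
  dispatch += "_symint" if func.include?("SymInt")
end

puts dispatch # => "add_symint"
```

A schema without `SymInt` would fall through to the plain base name instead.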
@@ -640,7 +639,7 @@ def signature_type(param)
  when "int"
  "int64_t"
  when "SymInt"
- "c10::SymInt"
+ "SymInt"
  when "float"
  "double"
  when "str"
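The change drops the `c10::` namespace qualifier from the SymInt mapping. A simplified standalone sketch of this `case`/`when` type mapping (the real `signature_type` handles many more types than shown here):

```ruby
# hedged, cut-down version of the schema-type -> C++-type mapping
def signature_type(type)
  case type
  when "int"    then "int64_t"
  when "SymInt" then "SymInt" # previously "c10::SymInt"
  when "float"  then "double"
  else raise "Unknown type: #{type}"
  end
end

puts signature_type("SymInt") # => "SymInt"
```

Keeping an `else` that raises on unknown types, as the surrounding hunks also do, makes unsupported schema types fail loudly at generation time rather than producing bad C++.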