froog 0.3.0__tar.gz → 0.3.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: froog
- Version: 0.3.0
+ Version: 0.3.1
  Summary: a beautifully simplistic tensor library
  Author: Kevin Buhler
  License: MIT
@@ -27,7 +27,7 @@ Requires-Dist: urllib
  <br/>
  </div>
 
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">11k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">16k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
 
  ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.
 
@@ -108,13 +108,13 @@ Tensors are the fundamental datatype in froog, and one of the two main classes.
  - ```shape(self)```: this returns the tensor shape
 
  *Methods*
- - ``def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
+ - ```def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
 
  - ```def ones(*shape)```: this returns a tensor full of ones with any shape that you pass in. Defaults to np.float32
 
  - ```def randn(*shape):```: this returns a randomly initialized Tensor of *shape
 
- *Froog gradient calculations*
+ *Gradient calculations*
 
  - ```froog``` computes gradients automatically through a process called automatic differentiation. it has a variable ```_ctx```, which stores the chain of operations. it will take the current operation, lets say a dot product, and go to the dot product definition in ```froog/ops.py```, which contains a backward pass specfically for dot products. all methods, from add to 2x2 maxpools, have this backward pass implemented.
 
@@ -276,7 +276,7 @@ blocks_args = [
 
  ## Linear regression
 
- Doing linear regression in froog is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
+ Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
 
  ```bash
  VIZ=1 python3 linear_regression.py
@@ -11,7 +11,7 @@
  <br/>
  </div>
 
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">11k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">16k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
 
  ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.
 
@@ -92,13 +92,13 @@ Tensors are the fundamental datatype in froog, and one of the two main classes.
  - ```shape(self)```: this returns the tensor shape
 
  *Methods*
- - ``def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
+ - ```def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
 
  - ```def ones(*shape)```: this returns a tensor full of ones with any shape that you pass in. Defaults to np.float32
 
  - ```def randn(*shape):```: this returns a randomly initialized Tensor of *shape
 
- *Froog gradient calculations*
+ *Gradient calculations*
 
  - ```froog``` computes gradients automatically through a process called automatic differentiation. it has a variable ```_ctx```, which stores the chain of operations. it will take the current operation, lets say a dot product, and go to the dot product definition in ```froog/ops.py```, which contains a backward pass specfically for dot products. all methods, from add to 2x2 maxpools, have this backward pass implemented.
 
@@ -260,7 +260,7 @@ blocks_args = [
 
  ## Linear regression
 
- Doing linear regression in froog is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
+ Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
 
  ```bash
  VIZ=1 python3 linear_regression.py
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: froog
- Version: 0.3.0
+ Version: 0.3.1
  Summary: a beautifully simplistic tensor library
  Author: Kevin Buhler
  License: MIT
@@ -27,7 +27,7 @@ Requires-Dist: urllib
  <br/>
  </div>
 
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">11k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">16k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
 
  ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.
 
@@ -108,13 +108,13 @@ Tensors are the fundamental datatype in froog, and one of the two main classes.
  - ```shape(self)```: this returns the tensor shape
 
  *Methods*
- - ``def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
+ - ```def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
 
  - ```def ones(*shape)```: this returns a tensor full of ones with any shape that you pass in. Defaults to np.float32
 
  - ```def randn(*shape):```: this returns a randomly initialized Tensor of *shape
 
- *Froog gradient calculations*
+ *Gradient calculations*
 
  - ```froog``` computes gradients automatically through a process called automatic differentiation. it has a variable ```_ctx```, which stores the chain of operations. it will take the current operation, lets say a dot product, and go to the dot product definition in ```froog/ops.py```, which contains a backward pass specfically for dot products. all methods, from add to 2x2 maxpools, have this backward pass implemented.
 
@@ -276,7 +276,7 @@ blocks_args = [
 
  ## Linear regression
 
- Doing linear regression in froog is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
+ Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
 
  ```bash
  VIZ=1 python3 linear_regression.py
@@ -10,7 +10,7 @@ with open(os.path.join(directory, 'README.md'), encoding='utf-8') as f:
  long_description = f.read()
 
  setup(name='froog',
- version='0.3.0',
+ version='0.3.1',
  description='a beautifully simplistic tensor library',
  author='Kevin Buhler',
  license='MIT',
17 files without changes
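The README text diffed above describes how ```froog``` computes gradients: each tensor carries a ```_ctx``` recording the chain of operations, and every op (such as a dot product in ```froog/ops.py```) has its own backward pass. As a rough illustration of that pattern only — a minimal hypothetical sketch in NumPy, not froog's actual code — with a dot product as the example op:

```python
import numpy as np

class Tensor:
    """Toy tensor: stores data, a gradient, and a _ctx recording the
    op and parent tensors that produced it (as the README describes)."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)
        self.grad = None
        self._ctx = None  # (op_name, parent_tensors) or None for leaves

    def dot(self, other):
        out = Tensor(self.data.dot(other.data))
        out._ctx = ("dot", (self, other))  # remember how `out` was made
        return out

    def backward(self, grad=None):
        # Walk the _ctx chain, applying each op's backward pass.
        if grad is None:
            grad = np.ones_like(self.data)
        self.grad = grad
        if self._ctx is None:
            return
        op, (a, b) = self._ctx
        if op == "dot":
            # Backward pass specifically for dot products:
            # d(a@b)/da = grad @ b.T, d(a@b)/db = a.T @ grad
            a.backward(grad.dot(b.data.T))
            b.backward(a.data.T.dot(grad))

x = Tensor([[1.0, 2.0]])
w = Tensor([[3.0], [4.0]])
y = x.dot(w)    # forward: records ("dot", (x, w)) in y._ctx
y.backward()    # fills x.grad and w.grad
```

In real froog, every op from add to 2x2 maxpool implements such a backward pass; this sketch only shows the one-op shape of the mechanism.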