froog 0.3.0__tar.gz → 0.3.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: froog
- Version: 0.3.0
+ Version: 0.3.2
  Summary: a beautifully simplistic tensor library
  Author: Kevin Buhler
  License: MIT
@@ -27,7 +27,7 @@ Requires-Dist: urllib
  <br/>
  </div>

- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">11k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">16k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).

  ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.

@@ -108,13 +108,13 @@ Tensors are the fundamental datatype in froog, and one of the two main classes.
  - ```shape(self)```: this returns the tensor shape

  *Methods*
- - ``def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
+ - ```def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32

  - ```def ones(*shape)```: this returns a tensor full of ones with any shape that you pass in. Defaults to np.float32

  - ```def randn(*shape):```: this returns a randomly initialized Tensor of *shape

- *Froog gradient calculations*
+ *Gradient calculations*

  - ```froog``` computes gradients automatically through a process called automatic differentiation. it has a variable ```_ctx```, which stores the chain of operations. it will take the current operation, lets say a dot product, and go to the dot product definition in ```froog/ops.py```, which contains a backward pass specfically for dot products. all methods, from add to 2x2 maxpools, have this backward pass implemented.

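The gradient-calculation bullet in the hunk above is easier to see in a tiny script. A minimal sketch, assuming froog's `Tensor`, a registered `dot` op, and the `.mean()`/`.backward()`/`.grad` behavior the README and the training hunk below describe (the exact call shapes are an assumption, not taken from this diff):

```python
# minimal autodiff sketch -- assumes froog's Tensor.dot / .mean / .backward
# and the _ctx chain described in the README above; not copied from the package
from froog.tensor import Tensor
import numpy as np

x = Tensor(np.random.randn(3, 4).astype(np.float32))
w = Tensor(np.random.randn(4, 2).astype(np.float32))

out = x.dot(w).mean()   # each op records itself (and its inputs) in out._ctx
out.backward()          # walks the _ctx chain, calling each op's backward pass
print(w.grad)           # gradient of the mean output with respect to w
```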
@@ -276,7 +276,7 @@ blocks_args = [

  ## Linear regression

- Doing linear regression in froog is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
+ Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.

  ```bash
  VIZ=1 python3 linear_regression.py
@@ -11,7 +11,7 @@
  <br/>
  </div>

- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">11k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">16k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).

  ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.

@@ -92,13 +92,13 @@ Tensors are the fundamental datatype in froog, and one of the two main classes.
  - ```shape(self)```: this returns the tensor shape

  *Methods*
- - ``def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
+ - ```def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32

  - ```def ones(*shape)```: this returns a tensor full of ones with any shape that you pass in. Defaults to np.float32

  - ```def randn(*shape):```: this returns a randomly initialized Tensor of *shape

- *Froog gradient calculations*
+ *Gradient calculations*

  - ```froog``` computes gradients automatically through a process called automatic differentiation. it has a variable ```_ctx```, which stores the chain of operations. it will take the current operation, lets say a dot product, and go to the dot product definition in ```froog/ops.py```, which contains a backward pass specfically for dot products. all methods, from add to 2x2 maxpools, have this backward pass implemented.

@@ -260,7 +260,7 @@ blocks_args = [

  ## Linear regression

- Doing linear regression in froog is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
+ Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.

  ```bash
  VIZ=1 python3 linear_regression.py
@@ -10,8 +10,8 @@ from froog.tensor import Tensor
  import numpy as np

  def Linear(*x):
- # TODO: why dividing by sqrt?
- ret = np.random.uniform(-1., 1., size=x)/np.sqrt(np.prod(x)) # random init weights
+ # random Glorot initialization
+ ret = np.random.uniform(-1., 1., size=x)/np.sqrt(np.prod(x))
  return ret.astype(np.float32)

  def swish(x):
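The comment change in `Linear` above ("random Glorot initialization") describes a fan-scaled uniform draw: sample from U(-1, 1), then shrink by the square root of the weight count. A standalone NumPy sketch of that scaling (the `linear_init` name is illustrative, not from the package):

```python
# scaled-uniform init as used by Linear() above: U(-1, 1) divided by
# sqrt(prod(shape)) keeps the starting weights small and well-scaled
import numpy as np

def linear_init(*shape):
    w = np.random.uniform(-1., 1., size=shape) / np.sqrt(np.prod(shape))
    return w.astype(np.float32)

w = linear_init(784, 128)
print(w.shape, w.std())  # (784, 128), std ~ 1/(sqrt(3) * sqrt(784*128)) ~ 0.0018
```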
@@ -55,6 +55,6 @@ class BatchNorm2D:
  def __call__(self, x):
  x = x.sub(self.running_mean.reshape(shape=[1, -1, 1, 1]))
  x = x.mul(self.weight.reshape(shape=[1, -1, 1, 1]))
- x = x.div(self.running_var.add(Tensor([self.eps], gpu=x.gpu)).reshape(shape=[1, -1, 1, 1]).sqrt()) # TODO: shouldn't div go first?
+ x = x.div(self.running_var.add(Tensor([self.eps], gpu=x.gpu)).reshape(shape=[1, -1, 1, 1]).sqrt())
  x = x.add(self.bias.reshape(shape=[1, -1, 1, 1]))
  return x
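The `BatchNorm2D.__call__` lines above normalize with running statistics and then apply the learned affine transform; multiplying by `weight` before dividing by `sqrt(var + eps)` gives the same result as the textbook order, which is why the old TODO could be dropped. A plain-NumPy sketch of the same computation, with illustrative shapes and `eps`:

```python
# inference-time batchnorm in NumPy, mirroring the order of ops above:
# subtract running mean, scale by weight, divide by sqrt(running_var + eps), add bias
import numpy as np

x = np.random.randn(2, 3, 4, 4).astype(np.float32)      # (batch, channels, H, W)
running_mean = np.zeros((1, 3, 1, 1), dtype=np.float32)
running_var = np.ones((1, 3, 1, 1), dtype=np.float32)
weight = np.ones((1, 3, 1, 1), dtype=np.float32)
bias = np.zeros((1, 3, 1, 1), dtype=np.float32)
eps = 1e-5

out = (x - running_mean) * weight
out = out / np.sqrt(running_var + eps)
out = out + bias
print(out.shape)  # (2, 3, 4, 4), same as the input
```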
@@ -340,7 +340,6 @@ class MaxPool2D(Function):
  *ctx.kernel_size)
  register('max_pool2d', MaxPool2D)

-
  class AvgPool2D(Function):
  @staticmethod
  def forward(ctx, x, kernel_size=(2,2)):
@@ -351,7 +350,7 @@ class AvgPool2D(Function):
  @staticmethod
  def backward(ctx, grad_output):
  s, = ctx.saved_tensors
- py, px = ctx.kernel_size # TODO: where does kernel_size come from?
+ py, px = ctx.kernel_size # kernel_size passed from forward context
  my, mx = (s[2]//py)*py, (s[3]//px)*px
  ret = np.zeros(s, dtype=grad_output.dtype)
  for Y in range(py):
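The `AvgPool2D.backward` hunk above scatters `grad_output` back over each pooling window. A hedged NumPy sketch of that idea for a 2x2 window (an illustration of the technique, not froog's exact loop):

```python
# average-pool backward, illustrated: each pooled output was the mean of its
# 2x2 window, so its gradient is spread evenly (1/4 each) over that window
import numpy as np

grad_output = np.ones((1, 1, 2, 2), dtype=np.float32)   # grad w.r.t. the pooled output
py, px = 2, 2                                           # kernel_size, as saved in ctx
grad_input = np.zeros((1, 1, 4, 4), dtype=np.float32)
for Y in range(py):
    for X in range(px):
        grad_input[:, :, Y::py, X::px] += grad_output / (py * px)
print(grad_input[0, 0])  # every input cell receives 0.25
```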
@@ -5,6 +5,8 @@
  # | ___|| __ || |_| || |_| || || |
  # | | | | | || || || |_| |
  # |___| |___| |_||_______||_______||_______|
+ #
+ # OpenCL kernels

  import numpy as np
  from .tensor import Function, register
@@ -71,7 +73,6 @@ def unary_op(ctx, code, x):
  prg.unop(ctx.cl_queue, [np.prod(ret.shape)], None, x, ret)
  return ret

- # ???
  @functools.lru_cache
  def cl_pooling_krnl_build(cl_ctx, iter_op, result_op, init_val=0):
  prg = """
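The deleted `# ???` sat above `cl_pooling_krnl_build`, which is wrapped in `functools.lru_cache` so a pooling kernel is only built once per distinct `(iter_op, result_op, init_val)` combination. A tiny sketch of that caching pattern (the `build_program` name and body are hypothetical, for illustration only):

```python
# why lru_cache around a kernel builder: compiling is slow, so repeat requests
# for the same op combination should return the already-built program
import functools

@functools.lru_cache
def build_program(iter_op, result_op, init_val=0):
    print(f"compiling pooling kernel: {iter_op} / {result_op}")
    return f"...kernel source using {iter_op}, {result_op}, init={init_val}..."

build_program("group_res += input", "group_res / window_size")  # compiles once
build_program("group_res += input", "group_res / window_size")  # cache hit, no print
```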
@@ -62,7 +62,7 @@ class Tensor:
  self.gpu = False

  self.data = data
- self.grad = None # TODO: why self.grad.data instead of self.grad?
+ self.grad = None

  if gpu:
  self.gpu_()
@@ -67,7 +67,8 @@ def im2col(x, H, W):
  tx = x.reshape(bs, -1)[:, idx]

  # all the time is spent here
- tx = tx.ravel() # TODO: whats the purpose of ravel ???
+ # np.ravel() flattens the array into a 1-dimensional shape
+ tx = tx.ravel()
  return tx.reshape(-1, cin*W*H)

  def col2im(tx, H, W, OY, OX):
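The new comment in `im2col` above notes that `ravel()` flattens to one dimension; the flattened buffer is then reshaped into `(-1, cin*W*H)` patch rows. A two-line NumPy illustration:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
flat = a.ravel()             # array([0, 1, 2, 3, 4, 5]) -- 1-D view of the data
print(flat.reshape(-1, 3))   # packed back into rows, as im2col does with cin*W*H columns
```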
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: froog
- Version: 0.3.0
+ Version: 0.3.2
  Summary: a beautifully simplistic tensor library
  Author: Kevin Buhler
  License: MIT
@@ -27,7 +27,7 @@ Requires-Dist: urllib
  <br/>
  </div>

- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">11k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">16k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).

  ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.

@@ -108,13 +108,13 @@ Tensors are the fundamental datatype in froog, and one of the two main classes.
  - ```shape(self)```: this returns the tensor shape

  *Methods*
- - ``def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
+ - ```def zeros(*shape)```: this returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32

  - ```def ones(*shape)```: this returns a tensor full of ones with any shape that you pass in. Defaults to np.float32

  - ```def randn(*shape):```: this returns a randomly initialized Tensor of *shape

- *Froog gradient calculations*
+ *Gradient calculations*

  - ```froog``` computes gradients automatically through a process called automatic differentiation. it has a variable ```_ctx```, which stores the chain of operations. it will take the current operation, lets say a dot product, and go to the dot product definition in ```froog/ops.py```, which contains a backward pass specfically for dot products. all methods, from add to 2x2 maxpools, have this backward pass implemented.

@@ -276,7 +276,7 @@ blocks_args = [

  ## Linear regression

- Doing linear regression in froog is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
+ Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.

  ```bash
  VIZ=1 python3 linear_regression.py
@@ -10,7 +10,7 @@ with open(os.path.join(directory, 'README.md'), encoding='utf-8') as f:
  long_description = f.read()

  setup(name='froog',
- version='0.3.0',
+ version='0.3.2',
  description='a beautifully simplistic tensor library',
  author='Kevin Buhler',
  license='MIT',
@@ -16,7 +16,6 @@ X_train, Y_train, X_test, Y_test = fetch_mnist()
  class SimpleMLP:
  def __init__(self):
  # 784 pixel inputs -> 128 -> 10 output
- # TODO: why down to 128?
  self.l1 = Tensor(Linear(784, 128))
  self.l2 = Tensor(Linear(128, 10))

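For context on the `SimpleMLP` hunk above (784 inputs -> 128 hidden -> 10 classes), a hedged sketch of how such a two-layer froog model is typically run forward; the `.relu()` and `.logsoftmax()` method names are assumptions based on froog's registered ops, not lines from this diff:

```python
# hypothetical forward pass for the 784 -> 128 -> 10 MLP above
from froog.tensor import Tensor
import numpy as np

def linear_init(*shape):
    # same scaled-uniform init as the Linear() helper shown earlier in this diff
    return (np.random.uniform(-1., 1., size=shape) / np.sqrt(np.prod(shape))).astype(np.float32)

l1 = Tensor(linear_init(784, 128))
l2 = Tensor(linear_init(128, 10))

x = Tensor(np.random.randn(32, 784).astype(np.float32))   # a batch of 32 flattened images
out = x.dot(l1).relu().dot(l2).logsoftmax()                # log-probabilities over 10 classes
```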
@@ -73,7 +72,7 @@ def train(model, optimizer, steps, BS=128, gpu=False):
  model_outputs = model.forward(x)

  # ********* backward pass *********
- loss = model_outputs.mul(y).mean() # TODO: what exactly is NLL loss function?
+ loss = model_outputs.mul(y).mean()
  loss.backward()
  optimizer.step()

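The old TODO on the loss line asked what the NLL loss is. With log-probabilities coming out of the model and `y` encoded as a scaled, negative one-hot target, `model_outputs.mul(y).mean()` reduces to the negative log-likelihood of the true classes. A NumPy check of that identity (this target encoding is one common choice, shown for illustration rather than copied from froog's training script):

```python
# mul(y).mean() as NLL: put -num_classes at each true class so the mean over
# all entries equals the mean of -log p(true class)
import numpy as np

log_probs = np.log(np.array([[0.7, 0.2, 0.1],
                             [0.1, 0.8, 0.1]]))      # model outputs (log-probabilities)
labels = np.array([0, 1])
num_classes = log_probs.shape[1]

y = np.zeros_like(log_probs)
y[np.arange(len(labels)), labels] = -num_classes     # scaled negative one-hot

loss = (log_probs * y).mean()
nll = -log_probs[np.arange(len(labels)), labels].mean()
print(loss, nll)   # both ~0.29 -- the two expressions agree
```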