froog 0.4.0__tar.gz → 0.4.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: froog
- Version: 0.4.0
+ Version: 0.4.2
  Summary: a toy tensor library with opencl support
  Author: Kevin Buhler
  License: MIT
@@ -23,9 +23,10 @@ License-File: LICENSE
  <br/>
  </div>
 
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) with OpenCL support for GPU acceleration. Inspired by pytorch, tinygrad, and micrograd.
 
- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.
+
+ <!-- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 2000 lines. -->
 
  # Installation
  ```bash
@@ -188,9 +189,9 @@ So there are two quick examples to get you up and running. You might have notice
  - ```.max_pool2d()```
  - ```.avg_pool2d()```
 
- ## GPU Support
+ # GPU Support
 
- Have a GPU and need a speedup? You're in good luck because we have GPU support from for our operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>. In order to do this we have a backend built on <a href="https://en.wikipedia.org/wiki/OpenCL">OpenCL</a> that invokes kernel functions that work on the GPU.
+ Have a GPU and need a speedup? You're in good luck because we have GPU support via OpenCL for our operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>.
 
  Here's how you can send data to the GPU during a forward pass and bring it back to the CPU.
 
@@ -201,75 +202,19 @@ if GPU:
  out = model.forward(Tensor(img).to_gpu()).cpu()
  ```
 
- ## EfficientNet in froog!
+ # EfficientNet in froog!
+
+ <img src="assets/efficientnet_pug.png" alt="pug" height="300">
 
  We have a really cool finished implementation of EfficientNet built entirely in ```froog```!
 
  In order to run EfficientNet inference:
 
  ```bash
- VIZ=1 python models/efficientnet.py <https://put_your_image_url_here>
- ```
-
- I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>, it's highly documented and pretty cool. Here's some of the documentation
- ```
- Paper : https://arxiv.org/abs/1905.11946
- PyTorch version : https://github.com/lukemelas/EfficientNet-PyTorch/blob/master/efficientnet_pytorch/model.py
-
- ConvNets are commonly developed at a fixed resource cost, and then scaled up in order to achieve better accuracy when more resources are made available
- The scaling method was found by performing a grid search to find the relationship between different scaling dimensions of the baseline network under a fixed resource constraint
- "SE" stands for "Squeeze-and-Excitation." Introduced by the "Squeeze-and-Excitation Networks" paper by Jie Hu, Li Shen, and Gang Sun (CVPR 2018).
-
- Environment Variables:
- VIZ=1 --> plots processed image and output probabilities
-
- How to Run:
- 'VIZ=1 python models/efficientnet.py https://your_image_url'
-
- EfficientNet Hyper-Parameters and Weights:
- url_map = {
- 'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth',
- 'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b1-f1951068.pth',
- 'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b2-8bb594d6.pth',
- 'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b3-5fb5a3c3.pth',
- 'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b4-6ed6700e.pth',
- 'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b5-b6417697.pth',
- 'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth',
- 'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b7-dcc49843.pth',
- }
-
- params_dict = {
- # Coefficients: width,depth,res,dropout
- 'efficientnet-b0': (1.0, 1.0, 224, 0.2),
- 'efficientnet-b1': (1.0, 1.1, 240, 0.2),
- 'efficientnet-b2': (1.1, 1.2, 260, 0.3),
- 'efficientnet-b3': (1.2, 1.4, 300, 0.3),
- 'efficientnet-b4': (1.4, 1.8, 380, 0.4),
- 'efficientnet-b5': (1.6, 2.2, 456, 0.4),
- 'efficientnet-b6': (1.8, 2.6, 528, 0.5),
- 'efficientnet-b7': (2.0, 3.1, 600, 0.5),
- 'efficientnet-b8': (2.2, 3.6, 672, 0.5),
- 'efficientnet-l2': (4.3, 5.3, 800, 0.5),
- }
-
- blocks_args = [
- 'r1_k3_s11_e1_i32_o16_se0.25',
- 'r2_k3_s22_e6_i16_o24_se0.25',
- 'r2_k5_s22_e6_i24_o40_se0.25',
- 'r3_k3_s22_e6_i40_o80_se0.25',
- 'r3_k5_s11_e6_i80_o112_se0.25',
- 'r4_k5_s22_e6_i112_o192_se0.25',
- 'r1_k3_s11_e6_i192_o320_se0.25',
- ]
+ VIZ=1 python3 models/efficientnet.py <https://put_your_image_url_here>
  ```
 
- ## Linear regression
-
- Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
-
- ```bash
- VIZ=1 python3 linear_regression.py
- ```
+ I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>, it's highly documented and pretty cool.
 
  # Contributing
  <!-- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a> -->
@@ -279,12 +224,7 @@ Pull requests will be merged if they:
  * increase functionality
  * increase efficiency
 
- More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>.
-
- # Documentation
-
- Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>.
-
- # Interested in more?
+ More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>. Make sure to run ```python -m pytest``` before creating a PR.
 
- If you thought ```froog``` was cool, check out the inspirations for this project: pytorch, tinygrad, and https://github.com/karpathy/micrograd/blob/master/micrograd/engine.py
+ <!-- # Documentation
+ Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>. -->
@@ -11,9 +11,10 @@
  <br/>
  </div>
 
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) with OpenCL support for GPU acceleration. Inspired by pytorch, tinygrad, and micrograd.
 
- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.
+
+ <!-- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 2000 lines. -->
 
  # Installation
  ```bash
@@ -176,9 +177,9 @@ So there are two quick examples to get you up and running. You might have notice
  - ```.max_pool2d()```
  - ```.avg_pool2d()```
 
- ## GPU Support
+ # GPU Support
 
- Have a GPU and need a speedup? You're in good luck because we have GPU support from for our operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>. In order to do this we have a backend built on <a href="https://en.wikipedia.org/wiki/OpenCL">OpenCL</a> that invokes kernel functions that work on the GPU.
+ Have a GPU and need a speedup? You're in good luck because we have GPU support via OpenCL for our operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>.
 
  Here's how you can send data to the GPU during a forward pass and bring it back to the CPU.
 
@@ -189,75 +190,19 @@ if GPU:
  out = model.forward(Tensor(img).to_gpu()).cpu()
  ```
 
- ## EfficientNet in froog!
+ # EfficientNet in froog!
+
+ <img src="assets/efficientnet_pug.png" alt="pug" height="300">
 
  We have a really cool finished implementation of EfficientNet built entirely in ```froog```!
 
  In order to run EfficientNet inference:
 
  ```bash
- VIZ=1 python models/efficientnet.py <https://put_your_image_url_here>
- ```
-
- I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>, it's highly documented and pretty cool. Here's some of the documentation
- ```
- Paper : https://arxiv.org/abs/1905.11946
- PyTorch version : https://github.com/lukemelas/EfficientNet-PyTorch/blob/master/efficientnet_pytorch/model.py
-
- ConvNets are commonly developed at a fixed resource cost, and then scaled up in order to achieve better accuracy when more resources are made available
- The scaling method was found by performing a grid search to find the relationship between different scaling dimensions of the baseline network under a fixed resource constraint
- "SE" stands for "Squeeze-and-Excitation." Introduced by the "Squeeze-and-Excitation Networks" paper by Jie Hu, Li Shen, and Gang Sun (CVPR 2018).
-
- Environment Variables:
- VIZ=1 --> plots processed image and output probabilities
-
- How to Run:
- 'VIZ=1 python models/efficientnet.py https://your_image_url'
-
- EfficientNet Hyper-Parameters and Weights:
- url_map = {
- 'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth',
- 'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b1-f1951068.pth',
- 'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b2-8bb594d6.pth',
- 'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b3-5fb5a3c3.pth',
- 'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b4-6ed6700e.pth',
- 'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b5-b6417697.pth',
- 'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth',
- 'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b7-dcc49843.pth',
- }
-
- params_dict = {
- # Coefficients: width,depth,res,dropout
- 'efficientnet-b0': (1.0, 1.0, 224, 0.2),
- 'efficientnet-b1': (1.0, 1.1, 240, 0.2),
- 'efficientnet-b2': (1.1, 1.2, 260, 0.3),
- 'efficientnet-b3': (1.2, 1.4, 300, 0.3),
- 'efficientnet-b4': (1.4, 1.8, 380, 0.4),
- 'efficientnet-b5': (1.6, 2.2, 456, 0.4),
- 'efficientnet-b6': (1.8, 2.6, 528, 0.5),
- 'efficientnet-b7': (2.0, 3.1, 600, 0.5),
- 'efficientnet-b8': (2.2, 3.6, 672, 0.5),
- 'efficientnet-l2': (4.3, 5.3, 800, 0.5),
- }
-
- blocks_args = [
- 'r1_k3_s11_e1_i32_o16_se0.25',
- 'r2_k3_s22_e6_i16_o24_se0.25',
- 'r2_k5_s22_e6_i24_o40_se0.25',
- 'r3_k3_s22_e6_i40_o80_se0.25',
- 'r3_k5_s11_e6_i80_o112_se0.25',
- 'r4_k5_s22_e6_i112_o192_se0.25',
- 'r1_k3_s11_e6_i192_o320_se0.25',
- ]
+ VIZ=1 python3 models/efficientnet.py <https://put_your_image_url_here>
  ```
 
- ## Linear regression
-
- Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
-
- ```bash
- VIZ=1 python3 linear_regression.py
- ```
+ I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>, it's highly documented and pretty cool.
 
  # Contributing
  <!-- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a> -->
@@ -267,12 +212,7 @@ Pull requests will be merged if they:
  * increase functionality
  * increase efficiency
 
- More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>.
-
- # Documentation
-
- Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>.
-
- # Interested in more?
+ More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>. Make sure to run ```python -m pytest``` before creating a PR.
 
- If you thought ```froog``` was cool, check out the inspirations for this project: pytorch, tinygrad, and https://github.com/karpathy/micrograd/blob/master/micrograd/engine.py
+ <!-- # Documentation
+ Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>. -->
@@ -9,6 +9,7 @@
  import numpy as np
  from froog.tensor import Function, register
  from froog.utils import im2col, col2im
+ from froog.tensor import Tensor
 
  # *****************************************************
  # ____ ___ _____ __________ ____ ____ _____
@@ -142,6 +143,29 @@ class Sigmoid(Function):
  return grad_input
  register("sigmoid", Sigmoid)
 
+ # class Dropout(Function):
+ # """
+ # Randomly zeroes some of the elements of the input tensor with probability p during training.
+ # The elements to zero are randomized on every forward call.
+ # During inference, dropout is disabled and the input is scaled by (1-p) to maintain the expected value.
+ # """
+ # @staticmethod
+ # def forward(ctx, input, p=0.5, training=True):
+ # if training:
+ # # Create a binary mask with probability (1-p) of being 1
+ # mask = (np.random.random(input.shape) > p).astype(np.float32)
+ # ctx.save_for_backward(mask)
+ # return input * mask
+ # else:
+ # # during inference, scale the input by (1-p)
+ # return input * (1-p)
+
+ # @staticmethod
+ # def backward(ctx, grad_output):
+ # mask, = ctx.saved_tensors
+ # return grad_output * mask
+ # register("dropout", Dropout)
+
  class Reshape(Function):
  @staticmethod
  def forward(ctx, x, shape):
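
For reference, the commented-out ```Dropout``` above follows the classic (non-inverted) scheme: mask activations during training, then scale by the keep probability ```1-p``` at inference so expected values match. A minimal standalone NumPy sketch of that forward/backward logic (illustrative names, not froog's ```Function```/```ctx``` API):

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True):
    if training:
        # zero each element with probability p; keep probability is (1 - p)
        mask = (np.random.random(x.shape) > p).astype(np.float32)
        return x * mask, mask
    # inference: no masking, scale by (1 - p) so E[output] matches training
    return x * (1 - p), None

def dropout_backward(grad_output, mask):
    # gradients flow only through the elements that were kept
    return grad_output * mask

x = np.ones((2, 3), dtype=np.float32)
out, mask = dropout_forward(x, p=0.5)
grads = dropout_backward(np.ones_like(out), mask)
```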
@@ -358,3 +382,53 @@ class AvgPool2D(Function):
  ret[:, :, Y:my:py, X:mx:px] = grad_output / py / px # divide by avg of pool, e.g. for 2x2 pool /= 4
  return ret
  register('avg_pool2d', AvgPool2D)
+
+ # *************************************
+ # _ ___ __ ____ ____ _____
+ # / | / / | / / / __ \/ __ \/ ___/
+ # / |/ / |/ / / / / / /_/ /\__ \
+ # / /| / /| / / /_/ / ____/___/ /
+ # /_/ |_/_/ |_/ \____/_/ /____/
+ #
+ # ************* nn ops ************
+
+ def Linear(*x):
+ # random Glorot initialization
+ ret = np.random.uniform(-1., 1., size=x)/np.sqrt(np.prod(x))
+ return ret.astype(np.float32)
+
+ def swish(x):
+ return x.mul(x.sigmoid())
+
+ class BatchNorm2D:
+ """
+ __call__ follows the formula from the link below
+ pytorch version: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html
+
+ self.weight = γ
+ self.bias = β
+ self.running_mean = E[x]
+ self.running_var = Var[x]
+
+ the reshaping step ensures that each channel of the input has its
+ own separate set of parameters (mean, variance, weight, and bias)
+
+ self.running_mean has shape [num_channels].
+ self.running_mean.reshape(shape=[1, -1, 1, 1]) reshapes it to [1, num_channels, 1, 1]
+ """
+ def __init__(self, sz, eps=0.001):
+ self.eps = eps
+ self.weight = Tensor.zeros(sz)
+ self.bias = Tensor.zeros(sz)
+
+ # TODO: need running_mean and running_var
+ self.running_mean = Tensor.zeros(sz)
+ self.running_var = Tensor.zeros(sz)
+ self.num_batches_tracked = Tensor.zeros(1)
+
+ def __call__(self, x):
+ x = x.sub(self.running_mean.reshape(shape=[1, -1, 1, 1]))
+ x = x.mul(self.weight.reshape(shape=[1, -1, 1, 1]))
+ x = x.div(self.running_var.add(Tensor([self.eps], gpu=x.gpu)).reshape(shape=[1, -1, 1, 1]).sqrt())
+ x = x.add(self.bias.reshape(shape=[1, -1, 1, 1]))
+ return x
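
The ```BatchNorm2D``` moved into ```ops.py``` normalizes with its stored running statistics only (inference mode): per channel, y = γ·(x − E[x]) / √(Var[x] + ε) + β, broadcast over an (N, C, H, W) input. A hedged NumPy sketch of the same computation (illustrative helper, not froog API; froog applies γ before the division, which is algebraically equivalent):

```python
import numpy as np

def batchnorm2d_infer(x, gamma, beta, running_mean, running_var, eps=0.001):
    # x: (N, C, H, W); reshape per-channel stats to (1, C, 1, 1) so they
    # broadcast across the batch and spatial dimensions
    c = x.shape[1]
    shape = (1, c, 1, 1)
    mean, var = running_mean.reshape(shape), running_var.reshape(shape)
    return gamma.reshape(shape) * (x - mean) / np.sqrt(var + eps) + beta.reshape(shape)

x = np.random.randn(2, 3, 4, 4).astype(np.float32)
y = batchnorm2d_infer(x, np.ones(3), np.zeros(3), np.zeros(3), np.ones(3))
```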
@@ -57,7 +57,7 @@ class RMSprop(Optimizer):
  RMSprop divides the learning rate by an exponentially decaying average of squared gradients.
 
  Notes:
- The reason RPROP doesnt work is that it violates the central idea behind stochastic gradient descent,
+ The reason RPROP doesn't work is that it violates the central idea behind stochastic gradient descent,
  which is when we have small enough learning rate, it averages the gradients over successive mini-batches.
  """
  def __init__(self, params, decay=0.9, lr=0.001, eps=1e-8):
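
For context, the update this docstring describes keeps an exponentially decaying average v of squared gradients and divides each step by its square root. A small NumPy sketch using the hyper-parameter names from the ```__init__``` above (an illustration of standard RMSprop, not froog's implementation):

```python
import numpy as np

def rmsprop_step(param, grad, v, decay=0.9, lr=0.001, eps=1e-8):
    # v tracks a decaying average of squared gradients; dividing by sqrt(v)
    # shrinks steps along directions with large recent gradients
    v = decay * v + (1 - decay) * grad * grad
    param = param - lr * grad / (np.sqrt(v) + eps)
    return param, v

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(3):
    grad = 2 * w  # gradient of the toy objective sum(w**2)
    w, v = rmsprop_step(w, grad, v)
```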
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: froog
- Version: 0.4.0
+ Version: 0.4.2
  Summary: a toy tensor library with opencl support
  Author: Kevin Buhler
  License: MIT
@@ -23,9 +23,10 @@ License-File: LICENSE
  <br/>
  </div>
 
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) meant for those looking to get into machine learning and who want to understand how the underlying machine learning framework's code works before they are ultra-optimized (which all modern ml libraries are).
+ ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) with OpenCL support for GPU acceleration. Inspired by pytorch, tinygrad, and micrograd.
 
- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 1000 lines.
+
+ <!-- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 2000 lines. -->
 
  # Installation
  ```bash
@@ -188,9 +189,9 @@ So there are two quick examples to get you up and running. You might have notice
  - ```.max_pool2d()```
  - ```.avg_pool2d()```
 
- ## GPU Support
+ # GPU Support
 
- Have a GPU and need a speedup? You're in good luck because we have GPU support from for our operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>. In order to do this we have a backend built on <a href="https://en.wikipedia.org/wiki/OpenCL">OpenCL</a> that invokes kernel functions that work on the GPU.
+ Have a GPU and need a speedup? You're in good luck because we have GPU support via OpenCL for our operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>.
 
  Here's how you can send data to the GPU during a forward pass and bring it back to the CPU.
 
@@ -201,75 +202,19 @@ if GPU:
  out = model.forward(Tensor(img).to_gpu()).cpu()
  ```
 
- ## EfficientNet in froog!
+ # EfficientNet in froog!
+
+ <img src="assets/efficientnet_pug.png" alt="pug" height="300">
 
  We have a really cool finished implementation of EfficientNet built entirely in ```froog```!
 
  In order to run EfficientNet inference:
 
  ```bash
- VIZ=1 python models/efficientnet.py <https://put_your_image_url_here>
- ```
-
- I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>, it's highly documented and pretty cool. Here's some of the documentation
- ```
- Paper : https://arxiv.org/abs/1905.11946
- PyTorch version : https://github.com/lukemelas/EfficientNet-PyTorch/blob/master/efficientnet_pytorch/model.py
-
- ConvNets are commonly developed at a fixed resource cost, and then scaled up in order to achieve better accuracy when more resources are made available
- The scaling method was found by performing a grid search to find the relationship between different scaling dimensions of the baseline network under a fixed resource constraint
- "SE" stands for "Squeeze-and-Excitation." Introduced by the "Squeeze-and-Excitation Networks" paper by Jie Hu, Li Shen, and Gang Sun (CVPR 2018).
-
- Environment Variables:
- VIZ=1 --> plots processed image and output probabilities
-
- How to Run:
- 'VIZ=1 python models/efficientnet.py https://your_image_url'
-
- EfficientNet Hyper-Parameters and Weights:
- url_map = {
- 'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth',
- 'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b1-f1951068.pth',
- 'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b2-8bb594d6.pth',
- 'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b3-5fb5a3c3.pth',
- 'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b4-6ed6700e.pth',
- 'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b5-b6417697.pth',
- 'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth',
- 'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b7-dcc49843.pth',
- }
-
- params_dict = {
- # Coefficients: width,depth,res,dropout
- 'efficientnet-b0': (1.0, 1.0, 224, 0.2),
- 'efficientnet-b1': (1.0, 1.1, 240, 0.2),
- 'efficientnet-b2': (1.1, 1.2, 260, 0.3),
- 'efficientnet-b3': (1.2, 1.4, 300, 0.3),
- 'efficientnet-b4': (1.4, 1.8, 380, 0.4),
- 'efficientnet-b5': (1.6, 2.2, 456, 0.4),
- 'efficientnet-b6': (1.8, 2.6, 528, 0.5),
- 'efficientnet-b7': (2.0, 3.1, 600, 0.5),
- 'efficientnet-b8': (2.2, 3.6, 672, 0.5),
- 'efficientnet-l2': (4.3, 5.3, 800, 0.5),
- }
-
- blocks_args = [
- 'r1_k3_s11_e1_i32_o16_se0.25',
- 'r2_k3_s22_e6_i16_o24_se0.25',
- 'r2_k5_s22_e6_i24_o40_se0.25',
- 'r3_k3_s22_e6_i40_o80_se0.25',
- 'r3_k5_s11_e6_i80_o112_se0.25',
- 'r4_k5_s22_e6_i112_o192_se0.25',
- 'r1_k3_s11_e6_i192_o320_se0.25',
- ]
+ VIZ=1 python3 models/efficientnet.py <https://put_your_image_url_here>
  ```
 
- ## Linear regression
-
- Doing linear regression in ```froog``` is pretty easy, check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
-
- ```bash
- VIZ=1 python3 linear_regression.py
- ```
+ I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>, it's highly documented and pretty cool.
 
  # Contributing
  <!-- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a> -->
@@ -279,12 +224,7 @@ Pull requests will be merged if they:
  * increase functionality
  * increase efficiency
 
- More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>.
-
- # Documentation
-
- Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>.
-
- # Interested in more?
+ More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>. Make sure to run ```python -m pytest``` before creating a PR.
 
- If you thought ```froog``` was cool, check out the inspirations for this project: pytorch, tinygrad, and https://github.com/karpathy/micrograd/blob/master/micrograd/engine.py
+ <!-- # Documentation
+ Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>. -->
@@ -3,7 +3,6 @@ README.md
  setup.py
  froog/__init__.py
  froog/gradcheck.py
- froog/nn.py
  froog/ops.py
  froog/ops_gpu.py
  froog/optim.py
@@ -9,7 +9,7 @@ with open(os.path.join(directory, 'README.md'), encoding='utf-8') as f:
  long_description = f.read()
 
  setup(name='froog',
- version='0.4.0',
+ version='0.4.2',
  description='a toy tensor library with opencl support',
  author='Kevin Buhler',
  license='MIT',
@@ -2,7 +2,7 @@ import numpy as np
  from tqdm import trange
  from froog.tensor import Tensor, GPU
  from froog.utils import fetch_mnist
- from froog.nn import Linear
+ from froog.ops import Linear
  import froog.optim as optim
  import unittest
  import os
froog-0.4.0/froog/nn.py DELETED
@@ -1,60 +0,0 @@
- # _______ ______ _______ _______ _______
- # | || _ | | || || |
- # | ___|| | || | _ || _ || ___|
- # | |___ | |_||_ | | | || | | || | __
- # | ___|| __ || |_| || |_| || || |
- # | | | | | || || || |_| |
- # |___| |___| |_||_______||_______||_______|
-
- from froog.tensor import Tensor
- import numpy as np
-
- def Linear(*x):
- # random Glorot initialization
- ret = np.random.uniform(-1., 1., size=x)/np.sqrt(np.prod(x))
- return ret.astype(np.float32)
-
- def swish(x):
- return x.mul(x.sigmoid())
-
- # *************************************
- # _ ___ __ ____ ____ _____
- # / | / / | / / / __ \/ __ \/ ___/
- # / |/ / |/ / / / / / /_/ /\__ \
- # / /| / /| / / /_/ / ____/___/ /
- # /_/ |_/_/ |_/ \____/_/ /____/
- #
- # ************* nn ops ************
-
- class BatchNorm2D:
- """
- __call__ follows the formula from the link below
- pytorch version: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html
-
- self.weight = γ
- self.bias = β
- self.running_mean = E[x]
- self.running_var = Var[x]
-
- the reshaping step ensures that each channel of the input has its
- own separate set of parameters (mean, variance, weight, and bias)
-
- self.running_mean has shape [num_channels].
- self.running_mean.reshape(shape=[1, -1, 1, 1]) reshapes it to [1, num_channels, 1, 1]
- """
- def __init__(self, sz, eps=0.001):
- self.eps = eps
- self.weight = Tensor.zeros(sz)
- self.bias = Tensor.zeros(sz)
-
- # TODO: need running_mean and running_var
- self.running_mean = Tensor.zeros(sz)
- self.running_var = Tensor.zeros(sz)
- self.num_batches_tracked = Tensor.zeros(1)
-
- def __call__(self, x):
- x = x.sub(self.running_mean.reshape(shape=[1, -1, 1, 1]))
- x = x.mul(self.weight.reshape(shape=[1, -1, 1, 1]))
- x = x.div(self.running_var.add(Tensor([self.eps], gpu=x.gpu)).reshape(shape=[1, -1, 1, 1]).sqrt())
- x = x.add(self.bias.reshape(shape=[1, -1, 1, 1]))
- return x