froog 0.4.0__py3-none-any.whl → 0.5.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,293 +0,0 @@
- Metadata-Version: 2.1
- Name: froog
- Version: 0.4.0
- Summary: a toy tensor library with opencl support
- Author: Kevin Buhler
- License: MIT
- Classifier: Programming Language :: Python :: 3
- Classifier: License :: OSI Approved :: MIT License
- Requires-Python: >=3.8
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: numpy
- Requires-Dist: requests
- Requires-Dist: matplotlib
-
- # froog <img src="https://github.com/kevbuh/froog/actions/workflows/test.yml/badge.svg" alt="unit test badge" > <img src="https://static.pepy.tech/badge/froog" alt="num downloads badge">
- <div align="center" >
- <img src="https://raw.githubusercontent.com/kevbuh/froog/main/assets/froog.png" alt="froog the frog" height="200">
- <br/>
- froog: fast real-time optimization of gradients
- <br/>
- a beautifully compact tensor library
- <br/>
- <a href="https://github.com/kevbuh/froog">homepage</a> | <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a> | <a href="https://pypi.org/project/froog/">pip</a>
- <br/>
- <br/>
- </div>
-
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) meant for those looking to get into machine learning who want to understand how a framework's code works under the hood before it gets ultra-optimized (as all modern ML libraries are).
-
- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks</a> in under 1000 lines.
-
- # Installation
- ```bash
- pip install froog
- ```
-
- More information on downloading ```froog``` is in the <a href="https://github.com/kevbuh/froog/blob/main/docs/install.md">installation</a> docs.
-
- # Features
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/tensor.py">Custom Tensors</a>
-   - Backpropagation
-   - Automatic Differentiation (autograd)
-   - Forward and backward passes
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops.py">ML Operations</a>
-   - 2D Convolutions (im2col)
-   - Numerical gradient checking
-   - Acceleration methods (Adam)
-   - Avg & Max pooling
- - <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">EfficientNet</a> inference
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">GPU Support</a>
- - and a bunch <a href="https://github.com/kevbuh/froog/tree/main/froog">more</a>
-
- # Sneak Peek
-
- Here's how you set up a simple multilayer perceptron for classification on MNIST. Looks pretty similar to pytorch, right?
-
- ```python
- from froog.tensor import Tensor
- from froog.nn import Linear
- import froog.optim as optim
-
- class mnistMLP:
-     def __init__(self):
-         self.l1 = Tensor(Linear(784, 128)) # layer 1
-         self.l2 = Tensor(Linear(128, 10))  # layer 2
-
-     def forward(self, x):
-         # forward pass through both layers and softmax for output probabilities
-         return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
-
- model = mnistMLP() # create model
- optim = optim.SGD([model.l1, model.l2], lr=0.001) # stochastic gradient descent optimizer
- ```
-
- # Overview
-
- The most fundamental concept in ```froog```, as in all machine learning frameworks, is the <a href="https://github.com/kevbuh/froog/blob/977b09caf32f21904768b08b2772139596604603/froog/tensor.py#L47">Tensor</a>. A <a href="https://en.wikipedia.org/wiki/Tensor_(machine_learning)">tensor</a> is, loosely, a matrix of matrices (more accurately, a multi-dimensional array).
-
- You can create a Tensor in ```froog``` with:
- ```python
- import numpy as np
- from froog.tensor import Tensor
- my_tensor = Tensor([1,2,3])
- ```
-
- Notice how we had to import NumPy. If you want to create a Tensor manually, make sure that the data you pass in is a NumPy array!
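-
- For instance, here is a minimal sketch of creating a Tensor from a NumPy array and peeking at it (this assumes the ```shape``` property and ```data``` attribute described in the Tensors section below):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- x = np.array([[1., 2.], [3., 4.]], dtype=np.float32)
- t = Tensor(x)
- print(t.shape)  # (2, 2)
- print(t.data)   # the underlying NumPy array
- ```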
-
- <!-- Learn more about ```froog``` Tensors <a href="https://github.com/kevbuh/froog/blob/main/docs/tensors.md">here</a>. -->
-
- # Tensors
-
- Tensors are the fundamental datatype in ```froog```, and one of its two main classes.
-
- - ```def __init__(self, data)``` (a short example follows this list):
-
-   - Tensor takes in one param, which is the data. Since ```froog``` has a NumPy backend, the input data has to be a NumPy array.
-   - Tensor holds a ```self.data``` state. This contains the data inside of the tensor.
-   - In addition, it has ```self.grad```. This holds the gradient of the tensor.
-   - Lastly, it has ```self._ctx```. These are the internal variables used for autograd graph construction; this is where the backward gradient computations are saved.
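-
- As a quick illustration of those three attributes (a sketch; it assumes ```grad``` and ```_ctx``` start out unset on a freshly created tensor, as the descriptions above suggest):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- t = Tensor(np.array([1., 2., 3.], dtype=np.float32))
- print(t.data)   # the NumPy array you passed in
- print(t.grad)   # no gradient yet -- nothing has been backpropagated
- print(t._ctx)   # no autograd context yet -- t was not produced by an op
- ```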
-
- *Properties*
-
- - ```shape(self)```: returns the tensor's shape
-
- *Methods*
- - ```def zeros(*shape)```: returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
- - ```def ones(*shape)```: returns a tensor full of ones with any shape that you pass in. Defaults to np.float32
- - ```def randn(*shape)```: returns a randomly initialized Tensor of *shape (see the sketch below)
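-
- A minimal sketch of these constructors in use (assuming they are exposed directly on the ```Tensor``` class, which is how the signatures above read):
-
- ```python
- from froog.tensor import Tensor
-
- a = Tensor.zeros(2, 3)   # 2x3 tensor of zeros (float32)
- b = Tensor.ones(5)       # vector of five ones
- c = Tensor.randn(3, 3)   # randomly initialized 3x3 tensor
- print(a.shape, b.shape, c.shape)
- ```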
-
- *Gradient calculations*
-
- - ```froog``` computes gradients automatically through a process called automatic differentiation. Each tensor has a variable ```_ctx```, which stores the chain of operations. ```froog``` will take the current operation, let's say a dot product, and go to the dot product definition in ```froog/ops.py```, which contains a backward pass specifically for dot products. All operations, from add to 2x2 maxpools, have this backward pass implemented.
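-
- Here is a sketch of what that looks like end to end (illustrative only; it assumes the backward pass is kicked off with a ```backward()``` call on the final tensor, in the style of other small autograd libraries -- this README doesn't pin down the exact entry point):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- x = Tensor(np.random.randn(4, 3).astype(np.float32))
- w = Tensor(np.random.randn(3, 2).astype(np.float32))
-
- out = x.dot(w).relu().sum()  # _ctx is populated at each step of the chain
- out.backward()               # walk the chain in reverse, filling in .grad
- print(w.grad)                # gradient of the summed output with respect to w
- ```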
-
- *Functions*
-
- The other base class in ```froog``` is ```Function```. It keeps track of input tensors and of the tensors that need to be saved for the backward pass.
-
- - ```def __init__(self, *tensors)```: takes in an argument of tensors, which are then saved.
- - ```def save_for_backward(self, *x)```: saves tensors that are needed for the computation of gradients in the backward pass.
- - ```def apply(self, arg, *x)```: takes care of the forward pass, applying the operation to the inputs.
-
- *Register*
-
- - ```def register(name, fxn)```: adds a method to Tensor, so that operations can be chained, e.g. ```x.dot(w).relu()```, where ```w``` is a tensor. A sketch follows.
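-
- Putting ```Function``` and ```register``` together, a purely hypothetical op might be wired up roughly like this (the exact ```forward```/```backward``` signatures live in ```froog/ops.py``` and may differ from this sketch, and the import path is assumed):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor, Function, register  # assumed import path
-
- class Double(Function):                # hypothetical op, for illustration only
-     def forward(self, x):              # assumed signature
-         self.save_for_backward(x)      # stash what backward will need
-         return x * 2
-     def backward(self, grad_output):   # assumed signature
-         return grad_output * 2
-
- register('double', Double)             # every Tensor now has a .double() method
-
- y = Tensor(np.array([1., 2., 3.], dtype=np.float32)).double().relu()
- ```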
-
- # Creating a model
-
- Okay cool, so now you know that ```froog```'s main datatype is a Tensor and that it uses NumPy in the background. How do you actually build a model?
-
- Here's an example of how to create an MNIST multi-layer perceptron (MLP). We wanted to make this as simple as possible, so it leans on very basic Python concepts like classes. There are really only two methods you need to define:
- 1. ```__init__```, which defines the layers of the model (here we use ```Linear```)
- 2. ```forward```, which defines how the input should flow through your model. We use a simple dot product with a ```Linear``` layer followed by a <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">```ReLU```</a> activation.
-
- To create an instance of the ```mnistMLP``` model, do the same as you would in Python: ```model = mnistMLP()```.
-
- We support a few different <a href="https://github.com/kevbuh/froog/blob/main/froog/optim.py">optimizers</a>, which include:
- - <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent">Stochastic Gradient Descent (SGD)</a>
- - <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam">Adaptive Moment Estimation (Adam)</a>
- - <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp">Root Mean Square Propagation (RMSProp)</a>
-
- ```python
- from froog.tensor import Tensor
- import froog.optim as optim
- from froog.nn import Linear
-
- class mnistMLP:
-     def __init__(self):
-         self.l1 = Tensor(Linear(784, 128))
-         self.l2 = Tensor(Linear(128, 10))
-
-     def forward(self, x):
-         return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
-
- model = mnistMLP()
- optim = optim.SGD([model.l1, model.l2], lr=0.001)
- ```
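-
- From there, a training step might look roughly like the following. This is a sketch, not froog's documented API: it assumes a ```backward()``` method on the loss tensor and a ```step()``` method on the optimizer (in the style of other small autograd libraries), and ```x_batch```/```targets``` are hypothetical input arrays.
-
- ```python
- import numpy as np
-
- # x_batch: (batch_size, 784) float32 images, targets: (batch_size, 10) one-hot labels
- out = model.forward(Tensor(x_batch))
-
- # negative log-likelihood: out is already log-probabilities thanks to logsoftmax()
- nll_mask = Tensor((-targets / targets.shape[0]).astype(np.float32))
- loss = out.mul(nll_mask).sum()
-
- loss.backward()   # assumed entry point for backprop
- optim.step()      # assumed optimizer update method
- ```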
-
- You can also create a convolutional neural net like so:
-
- ```python
- class SimpleConvNet:
-     def __init__(self):
-         conv_size = 5
-         channels = 17
-         self.c1 = Tensor(Linear(channels,1,conv_size,conv_size))     # (num_filters, color_channels, kernel_h, kernel_w)
-         self.l1 = Tensor(Linear((28-conv_size+1)**2*channels, 128))  # (28-conv+1)(28-conv+1) since kernel isn't padded
-         self.l2 = Tensor(Linear(128, 10))                            # MNIST output is 10 classes
-
-     def forward(self, x):
-         x.data = x.data.reshape((-1, 1, 28, 28))                     # reshape to (batch_size, channels, height, width)
-         x = x.conv2d(self.c1).relu()                                 # pass through conv first
-         x = x.reshape(shape=(x.shape[0], -1))                        # flatten for the fully connected layers
-         return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
- ```
-
- So there are two quick examples to get you up and running. You might have noticed some operations like ```reshape``` and wondered what else you can do with ```froog```. We have many more operations that you can apply on tensors (a short example follows the list):
- - ```.add()```
- - ```.sub()```
- - ```.mul()```
- - ```.sum()```
- - ```.pow()```
- - ```.dot()```
- - ```.relu()```
- - ```.sigmoid()```
- - ```.reshape()```
- - ```.pad2d()```
- - ```.logsoftmax()```
- - ```.conv2d()```
- - ```.im2col2dconv()```
- - ```.max_pool2d()```
- - ```.avg_pool2d()```
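-
- For instance, elementwise ops chain just like the layer ops above (a minimal sketch using only operations from this list):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- a = Tensor(np.array([1., 2., 3.], dtype=np.float32))
- b = Tensor(np.array([4., 5., 6.], dtype=np.float32))
-
- c = a.add(b).mul(a).sum()   # (a + b) * a, summed to a single value
- print(c.data)
- ```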
-
- ## GPU Support
-
- Have a GPU and need a speedup? You're in luck, because we have GPU support for our operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>. To make this work, we have a backend built on <a href="https://en.wikipedia.org/wiki/OpenCL">OpenCL</a> that invokes kernel functions that run on the GPU.
-
- Here's how you can send data to the GPU during a forward pass and bring it back to the CPU.
-
- ```python
- import os
- # ...
- GPU = os.getenv("GPU", None) is not None
- if GPU:
-     out = model.forward(Tensor(img).to_gpu()).cpu()
- ```
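-
- You can also move an individual tensor back and forth explicitly (a small sketch using the same ```to_gpu()``` / ```cpu()``` calls shown above; it assumes your machine has a working OpenCL device):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- t = Tensor(np.ones((4, 4), dtype=np.float32)).to_gpu()  # upload to the GPU
- result = t.relu().cpu()                                  # compute on the GPU, pull the result back
- print(result.data)
- ```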
-
- ## EfficientNet in froog!
-
- We have a really cool finished implementation of EfficientNet built entirely in ```froog```!
-
- In order to run EfficientNet inference:
-
- ```bash
- VIZ=1 python models/efficientnet.py <https://put_your_image_url_here>
- ```
-
- I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>; it's highly documented and pretty cool. Here's some of that documentation:
- ```
- Paper : https://arxiv.org/abs/1905.11946
- PyTorch version : https://github.com/lukemelas/EfficientNet-PyTorch/blob/master/efficientnet_pytorch/model.py
-
- ConvNets are commonly developed at a fixed resource cost, and then scaled up in order to achieve better accuracy when more resources are made available
- The scaling method was found by performing a grid search to find the relationship between different scaling dimensions of the baseline network under a fixed resource constraint
- "SE" stands for "Squeeze-and-Excitation." Introduced by the "Squeeze-and-Excitation Networks" paper by Jie Hu, Li Shen, and Gang Sun (CVPR 2018).
-
- Environment Variables:
-   VIZ=1 --> plots processed image and output probabilities
-
- How to Run:
-   'VIZ=1 python models/efficientnet.py https://your_image_url'
-
- EfficientNet Hyper-Parameters and Weights:
- url_map = {
-   'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth',
-   'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b1-f1951068.pth',
-   'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b2-8bb594d6.pth',
-   'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b3-5fb5a3c3.pth',
-   'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b4-6ed6700e.pth',
-   'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b5-b6417697.pth',
-   'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth',
-   'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b7-dcc49843.pth',
- }
-
- params_dict = {
-   # Coefficients: width,depth,res,dropout
-   'efficientnet-b0': (1.0, 1.0, 224, 0.2),
-   'efficientnet-b1': (1.0, 1.1, 240, 0.2),
-   'efficientnet-b2': (1.1, 1.2, 260, 0.3),
-   'efficientnet-b3': (1.2, 1.4, 300, 0.3),
-   'efficientnet-b4': (1.4, 1.8, 380, 0.4),
-   'efficientnet-b5': (1.6, 2.2, 456, 0.4),
-   'efficientnet-b6': (1.8, 2.6, 528, 0.5),
-   'efficientnet-b7': (2.0, 3.1, 600, 0.5),
-   'efficientnet-b8': (2.2, 3.6, 672, 0.5),
-   'efficientnet-l2': (4.3, 5.3, 800, 0.5),
- }
-
- blocks_args = [
-   'r1_k3_s11_e1_i32_o16_se0.25',
-   'r2_k3_s22_e6_i16_o24_se0.25',
-   'r2_k5_s22_e6_i24_o40_se0.25',
-   'r3_k3_s22_e6_i40_o80_se0.25',
-   'r3_k5_s11_e6_i80_o112_se0.25',
-   'r4_k5_s22_e6_i112_o192_se0.25',
-   'r1_k3_s11_e6_i192_o320_se0.25',
- ]
- ```
-
- ## Linear regression
-
- Doing linear regression in ```froog``` is pretty easy; check out the entire <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">code</a>.
-
- ```bash
- VIZ=1 python3 linear_regression.py
- ```
-
- # Contributing
- <!-- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a> -->
-
- Pull requests will be merged if they:
- * increase simplicity
- * increase functionality
- * increase efficiency
-
- More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>.
-
- # Documentation
-
- Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>.
-
- # Interested in more?
-
- If you thought ```froog``` was cool, check out the inspirations for this project: PyTorch, tinygrad, and Karpathy's micrograd (https://github.com/karpathy/micrograd/blob/master/micrograd/engine.py).
@@ -1,13 +0,0 @@
- froog/__init__.py,sha256=Mzxgj9bA2G4kcmbmY8fY0KCKgimPucn3hTVRWBJ-5_Q,57
- froog/gradcheck.py,sha256=HlA0VDKE-c44o0E73QsUTIVoNs-w_C9FyKFlHfoagIQ,2415
- froog/nn.py,sha256=_5dzIoxz1L4yEnYfONVc8xIs8vqRpUBBwZwHLvBu9yY,2023
- froog/ops.py,sha256=t0P0OzzlhYBgAhM3urLsXLl9LJNff_7Yiyc_pYgP5B4,14388
- froog/ops_gpu.py,sha256=ANDJiWS0e1ehcGCSDo_ZOOowaEPZrz2__FkX5z5uYf4,19367
- froog/optim.py,sha256=m8Q1xe3WwU41obGSMVjRMIs3rWqfqRWfhlbhF9oJyWA,2450
- froog/tensor.py,sha256=Wix4pE5-OIY8Pvv3bqNCSU_-c_wZV2HrmAtBwMPmAfE,7636
- froog/utils.py,sha256=vs9bmBOyfy0_NR8jPl2DMWBCAqIacJ6a75Lbso2MAKs,3347
- froog-0.4.0.dist-info/LICENSE,sha256=k_856uNmcNUoLC_HkI18c1WomqvQ1Ioqk6gwYfWQiaM,31
- froog-0.4.0.dist-info/METADATA,sha256=R87af4vXl_1TInaB-6XXD6y0b_OQPbZHBmgREJSc_RA,13782
- froog-0.4.0.dist-info/WHEEL,sha256=AtBG6SXL3KF_v0NxLf0ehyVOh0cold-JbJYXNGorC6Q,92
- froog-0.4.0.dist-info/top_level.txt,sha256=XPz35C_JWu20LlsVxIMdMZn8DD58Ak78LwgWFBGYZwY,6
- froog-0.4.0.dist-info/RECORD,,