froog 0.4.2__py3-none-any.whl → 0.5.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,233 +0,0 @@
- Metadata-Version: 2.1
- Name: froog
- Version: 0.4.2
- Summary: a toy tensor library with opencl support
- Author: Kevin Buhler
- License: MIT
- Classifier: Programming Language :: Python :: 3
- Classifier: License :: OSI Approved :: MIT License
- Requires-Python: >=3.8
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: numpy
- Requires-Dist: requests
- Requires-Dist: matplotlib
-
- # froog <img src="https://github.com/kevbuh/froog/actions/workflows/test.yml/badge.svg" alt="unit test badge" > <img src="https://static.pepy.tech/badge/froog" alt="num downloads badge">
- <div align="center" >
- <img src="https://raw.githubusercontent.com/kevbuh/froog/main/assets/froog.png" alt="froog the frog" height="200">
- <br/>
- froog: fast real-time optimization of gradients
- <br/>
- a beautifully compact tensor library
- <br/>
- <a href="https://github.com/kevbuh/froog">homepage</a> | <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a> | <a href="https://pypi.org/project/froog/">pip</a>
- <br/>
- <br/>
- </div>
-
- ```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">25k pip installs!</a>) with OpenCL support for GPU acceleration. Inspired by pytorch, tinygrad, and micrograd.
-
-
- <!-- ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a> in under 2000 lines. -->
-
- # Installation
- ```bash
- pip install froog
- ```
-
- More information on installing ```froog``` can be found in the <a href="https://github.com/kevbuh/froog/blob/main/docs/install.md">installation</a> docs.
-
- # Features
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/tensor.py">Custom Tensors</a>
-   - Backpropagation
-   - Automatic Differentiation (autograd)
-   - Forward and backward passes
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops.py">ML Operations</a>
-   - 2D Convolutions (im2col)
-   - Numerical gradient checking
-   - Acceleration methods (Adam)
-   - Avg & Max pooling
- - <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">EfficientNet</a> inference
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">GPU Support</a>
- - and a bunch <a href="https://github.com/kevbuh/froog/tree/main/froog">more</a>
-
- # Sneak Peek
-
- Here's how you set up a simple multilayer perceptron for classification on MNIST. Looks pretty similar to pytorch, right?
-
- ```python
- from froog.tensor import Tensor
- from froog.nn import Linear
- import froog.optim as optim
-
- class mnistMLP:
-   def __init__(self):
-     self.l1 = Tensor(Linear(784, 128)) # layer 1
-     self.l2 = Tensor(Linear(128, 10))  # layer 2
-
-   def forward(self, x):
-     # forward pass through both layers and softmax for output probabilities
-     return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
-
- model = mnistMLP() # create model
- optim = optim.SGD([model.l1, model.l2], lr=0.001) # stochastic gradient descent optimizer
- ```
-
- # Overview
-
- The most fundamental concept in ```froog```, as in all machine learning frameworks, is the <a href="https://github.com/kevbuh/froog/blob/977b09caf32f21904768b08b2772139596604603/froog/tensor.py#L47">Tensor</a>. A <a href="https://en.wikipedia.org/wiki/Tensor_(machine_learning)">tensor</a> is simply a matrix of matrices (more accurately, a multi-dimensional array).
-
- You can create a Tensor in ```froog``` with:
- ```python
- import numpy as np
- from froog.tensor import Tensor
- my_tensor = Tensor([1,2,3])
- ```
-
- Notice how we had to import NumPy. If you want to create a Tensor manually, make sure it is a NumPy array!
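-
- For instance, a minimal sketch of building a tensor from an explicit NumPy array (the np.float32 dtype mirrors the default used by ```zeros``` and ```ones``` below):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- # construct the backing array yourself, then wrap it in a Tensor
- arr = np.array([[1, 2], [3, 4]], dtype=np.float32)
- my_tensor = Tensor(arr)
- ```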
-
- <!-- Learn more about ```froog``` Tensors <a href="https://github.com/kevbuh/froog/blob/main/docs/tensors.md">here</a>. -->
-
- # Tensors
-
- Tensors are the fundamental datatype in froog, and one of the two main classes.
-
- - ```def __init__(self, data)```:
-
-   - Tensor takes in one param, which is the data. Since ```froog``` has a NumPy backend, the data passed into a tensor has to be a NumPy array.
-   - Tensor has a ```self.data``` state, which holds the data inside of the tensor.
-   - It also has ```self.grad```, which holds the tensor's gradients.
-   - Lastly, it has ```self._ctx```. These are the internal variables used for autograd graph construction; this is where the backward gradient computations are saved.
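-
- A rough sketch of those three pieces of state (assuming, as in similar autograd libraries, that ```grad``` and ```_ctx``` start out as ```None```):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- t = Tensor(np.array([1.0, 2.0, 3.0], dtype=np.float32))
- print(t.data)  # the underlying NumPy array
- print(t.grad)  # None until a backward pass has run
- print(t._ctx)  # None until t is produced by an operation
- ```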
-
- *Properties*
-
- - ```shape(self)```: returns the tensor's shape
-
- *Methods*
- - ```def zeros(*shape)```: returns a tensor full of zeros with any shape that you pass in. Defaults to np.float32
- - ```def ones(*shape)```: returns a tensor full of ones with any shape that you pass in. Defaults to np.float32
- - ```def randn(*shape)```: returns a randomly initialized Tensor of *shape
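-
- For example (treating each as a factory called on the ```Tensor``` class, as the signatures above suggest):
-
- ```python
- from froog.tensor import Tensor
-
- a = Tensor.zeros(2, 3)  # 2x3 tensor of zeros, np.float32
- b = Tensor.ones(4)      # length-4 tensor of ones
- c = Tensor.randn(3, 3)  # 3x3 randomly initialized tensor
- ```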
-
- *Gradient calculations*
-
- - ```froog``` computes gradients automatically through a process called automatic differentiation. Each tensor has a variable ```_ctx```, which stores the chain of operations. Take the current operation, say a dot product: ```froog``` looks up the dot product's definition in ```froog/ops.py```, which contains a backward pass written specifically for dot products. All operations, from add to 2x2 maxpools, have this backward pass implemented.
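-
- A minimal sketch of that flow, assuming a tinygrad-style ```backward()``` on the output tensor (forward and backward passes are listed as core features above, but check ```froog/tensor.py``` for the exact entry point):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- x = Tensor(np.random.randn(3, 3).astype(np.float32))
- w = Tensor(np.random.randn(3, 3).astype(np.float32))
-
- out = x.dot(w).sum()  # _ctx now records the dot and sum ops
- out.backward()        # walks _ctx in reverse, filling in gradients
- print(x.grad)         # gradient of out with respect to x
- ```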
-
- *Functions*
-
- The other base class in froog is the class ```Function```. It keeps track of input tensors and tensors that need to be saved for backward passes.
-
- - ```def __init__(self, *tensors)```: takes in an argument of tensors, which are then saved.
- - ```def save_for_backward(self, *x)```: saves tensors that are needed to compute gradients in the backward pass.
- - ```def apply(self, arg, *x)```: takes care of the forward pass, applying the operation to the inputs.
-
- *Register*
-
- - ```def register(name, fxn)```: adds a method to the Tensor class, which lets you chain any operations, e.g. ```x.dot(w).relu()```, where ```w``` is a tensor.
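-
- Putting ```Function``` and ```register``` together, a new op might look roughly like the sketch below. This is a hypothetical ```Mul``` written in the micrograd/tinygrad style the text describes; the import path and exact signatures are assumptions, and the real implementations live in ```froog/ops.py```:
-
- ```python
- from froog.tensor import Function, register  # assumed import path
-
- class Mul(Function):
-   @staticmethod
-   def forward(ctx, x, y):
-     ctx.save_for_backward(x, y)  # stash the inputs needed for gradients
-     return x * y
-
-   @staticmethod
-   def backward(ctx, grad_output):
-     x, y = ctx.saved_tensors
-     return y * grad_output, x * grad_output  # d(xy)/dx and d(xy)/dy
-
- register('mul', Mul)  # every Tensor now has a .mul() method
- ```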
-
- # Creating a model
-
- Okay cool, so now you know that ```froog```'s main datatype is a Tensor and that it uses NumPy in the background. How do you actually build a model?
-
- Here's an example of how to create an MNIST multi-layer perceptron (MLP). We wanted to make it as simple as possible, so it resembles very basic Python classes. There are really only two methods you need to define:
- 1. ```__init__``` that defines the layers of the model (here we use ```Linear```)
- 2. ```forward``` which defines how the input should flow through your model. We use a simple dot product with a ```Linear``` layer and a <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">```ReLU```</a> activation.
-
- To create an instance of the ```mnistMLP``` model, do the same as you would in Python: ```model = mnistMLP()```.
-
- We support a few different optimizers in <a href="https://github.com/kevbuh/froog/blob/main/froog/optim.py">optim.py</a>, which include:
- - <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent">Stochastic Gradient Descent (SGD)</a>
- - <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam">Adaptive Moment Estimation (Adam)</a>
- - <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp">Root Mean Square Propagation (RMSProp)</a>
-
- ```python
- from froog.tensor import Tensor
- import froog.optim as optim
- from froog.nn import Linear
-
- class mnistMLP:
-   def __init__(self):
-     self.l1 = Tensor(Linear(784, 128))
-     self.l2 = Tensor(Linear(128, 10))
-
-   def forward(self, x):
-     return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
-
- model = mnistMLP()
- optim = optim.SGD([model.l1, model.l2], lr=0.001)
- ```
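-
- From there, a single training step looks roughly like the sketch below. It assumes a tinygrad-style ```backward()``` and the optimizer's ```step()``` method, and uses negated one-hot targets so that ```mul``` plus ```sum``` yields a negative log-likelihood loss; see ```froog/optim.py``` for the real optimizer API:
-
- ```python
- import numpy as np
-
- x = Tensor(np.random.randn(32, 784).astype(np.float32))  # a batch of 32 flattened images
- labels = np.zeros((32, 10), dtype=np.float32)
- labels[range(32), 3] = -1.0  # negated one-hot targets (all class 3, just for illustration)
- y = Tensor(labels)
-
- out = model.forward(x)   # log-probabilities from the MLP above
- loss = out.mul(y).sum()  # NLL: -sum(one_hot * log_probs)
- loss.backward()          # fill in .grad on l1 and l2
- optim.step()             # SGD update: data -= lr * grad
- ```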
-
- You can also create a convolutional neural net:
-
- ```python
- class SimpleConvNet:
-   def __init__(self):
-     conv_size = 5
-     channels = 17
-     self.c1 = Tensor(Linear(channels, 1, conv_size, conv_size)) # (num_filters, color_channels, kernel_h, kernel_w)
-     self.l1 = Tensor(Linear((28-conv_size+1)**2*channels, 128)) # (28-conv+1)x(28-conv+1) since the kernel isn't padded
-     self.l2 = Tensor(Linear(128, 10))                           # MNIST output is 10 classes
-
-   def forward(self, x):
-     x.data = x.data.reshape((-1, 1, 28, 28))                    # reshape to however many images are in the batch
-     x = x.conv2d(self.c1).relu()                                # pass through the conv layer first
-     x = x.reshape(shape=(x.shape[0], -1))                       # flatten for the fully connected layers
-     return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
- ```
-
- So there are two quick examples to get you up and running. You might have noticed operations like ```reshape``` and wondered what else you can do with ```froog```. There are many more operations that you can apply to tensors (a short example follows the list):
- - ```.add()```
- - ```.sub()```
- - ```.mul()```
- - ```.sum()```
- - ```.pow()```
- - ```.dot()```
- - ```.relu()```
- - ```.sigmoid()```
- - ```.reshape()```
- - ```.pad2d()```
- - ```.logsoftmax()```
- - ```.conv2d()```
- - ```.im2col2dconv()```
- - ```.max_pool2d()```
- - ```.avg_pool2d()```
-
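- Each of these chains like any other tensor method. A couple of illustrative combinations (shapes are arbitrary; note that ```reshape``` takes a ```shape``` keyword, as in the conv example above):
-
- ```python
- import numpy as np
- from froog.tensor import Tensor
-
- a = Tensor(np.random.randn(4, 4).astype(np.float32))
- b = Tensor(np.random.randn(4, 4).astype(np.float32))
-
- c = a.add(b).relu()          # elementwise add, then ReLU
- d = a.dot(b).logsoftmax()    # matrix multiply, then log-softmax
- e = a.reshape(shape=(2, 8))  # reshape to 2x8
- ```
-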
- # GPU Support
-
- Have a GPU and need a speedup? You're in luck, because we have GPU support via OpenCL for the operations defined in <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">```ops_gpu.py```</a>.
-
- Here's how you can send data to the GPU during a forward pass and bring it back to the CPU:
-
- ```python
- import os
-
- # ...
- GPU = os.getenv("GPU", None) is not None
- if GPU:
-   out = model.forward(Tensor(img).to_gpu()).cpu()
- ```
-
- # EfficientNet in froog!
-
- <img src="assets/efficientnet_pug.png" alt="pug" height="300">
-
- We have a really cool finished implementation of EfficientNet built entirely in ```froog```!
-
- To run EfficientNet inference:
-
- ```bash
- VIZ=1 python3 models/efficientnet.py <https://put_your_image_url_here>
- ```
-
- I recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>; it's highly documented and pretty cool.
-
- # Contributing
- <!-- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a> -->
-
- Pull requests will be merged if they:
- * increase simplicity
- * increase functionality
- * increase efficiency
-
- More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>. Make sure to run ```python -m pytest``` before creating a PR.
-
- <!-- # Documentation
- Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>. -->
@@ -1,13 +0,0 @@
- froog/__init__.py,sha256=Mzxgj9bA2G4kcmbmY8fY0KCKgimPucn3hTVRWBJ-5_Q,57
- froog/gradcheck.py,sha256=HlA0VDKE-c44o0E73QsUTIVoNs-w_C9FyKFlHfoagIQ,2415
- froog/nn.py,sha256=_5dzIoxz1L4yEnYfONVc8xIs8vqRpUBBwZwHLvBu9yY,2023
- froog/ops.py,sha256=1JtzHJf9fMy9ccmVhNIHIbanvoxMYPyZ5WCUliyj8tU,16890
- froog/ops_gpu.py,sha256=ANDJiWS0e1ehcGCSDo_ZOOowaEPZrz2__FkX5z5uYf4,19367
- froog/optim.py,sha256=BucVi-j-kphiG4ao7aCMbtxgF6PGcCHITWkgr7Ao0QU,2448
- froog/tensor.py,sha256=Wix4pE5-OIY8Pvv3bqNCSU_-c_wZV2HrmAtBwMPmAfE,7636
- froog/utils.py,sha256=vs9bmBOyfy0_NR8jPl2DMWBCAqIacJ6a75Lbso2MAKs,3347
- froog-0.4.2.dist-info/LICENSE,sha256=k_856uNmcNUoLC_HkI18c1WomqvQ1Ioqk6gwYfWQiaM,31
- froog-0.4.2.dist-info/METADATA,sha256=Z0U4MY_eWhxH2VXnR876fySyTJRRTjp4wKHWSwSVoRY,10442
- froog-0.4.2.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
- froog-0.4.2.dist-info/top_level.txt,sha256=XPz35C_JWu20LlsVxIMdMZn8DD58Ak78LwgWFBGYZwY,6
- froog-0.4.2.dist-info/RECORD,,