froog 0.5.0__tar.gz → 0.5.2__tar.gz
This diff compares the contents of two publicly released versions of the package as they appear in their public registry. It is provided for informational purposes only.
- {froog-0.5.0 → froog-0.5.2}/PKG-INFO +39 -50
- {froog-0.5.0 → froog-0.5.2}/README.md +38 -49
- {froog-0.5.0 → froog-0.5.2}/froog.egg-info/PKG-INFO +39 -50
- {froog-0.5.0 → froog-0.5.2}/setup.py +1 -1
- {froog-0.5.0 → froog-0.5.2}/froog/__init__.py +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog/gradient.py +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog/ops.py +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog/optim.py +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog/tensor.py +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog/utils.py +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog.egg-info/SOURCES.txt +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog.egg-info/dependency_links.txt +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog.egg-info/requires.txt +0 -0
- {froog-0.5.0 → froog-0.5.2}/froog.egg-info/top_level.txt +0 -0
- {froog-0.5.0 → froog-0.5.2}/setup.cfg +0 -0
{froog-0.5.0 → froog-0.5.2}/PKG-INFO

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: froog
-Version: 0.5.0
+Version: 0.5.2
 Summary: tensor library with opencl and metal support
 Author: Kevin Buhler
 License: MIT
@@ -13,23 +13,21 @@ Description-Content-Type: text/markdown
 <div align="center" >
 <img src="https://raw.githubusercontent.com/kevbuh/froog/main/assets/froog.png" alt="froog the frog" height="200">
 <br/>
-froog:
-<br/>
-a beautifully compact tensor library
+froog: a gpu accelerated tensor library
 <br/>
 <a href="https://github.com/kevbuh/froog">homepage</a> | <a href="https://github.com/kevbuh/froog/tree/main/DOCS.md">documentation</a> | <a href="https://pypi.org/project/froog/">pip</a>
 <br/>
 <br/>
 </div>
 
-```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">
+```froog``` is an easy-to-read tensor library (<a href="https://www.pepy.tech/projects/froog">27k pip installs!</a>) with support for GPU acceleration with [OpenCL](https://www.khronos.org/opencl/) and [Apple Metal](https://developer.apple.com/metal/). Inspired by [tinygrad](https://github.com/tinygrad/tinygrad), and [micrograd](https://github.com/karpathy/micrograd).
 
-
+## Installation
 ```bash
 pip install froog
 ```
 
-
+## Features
 - <a href="https://github.com/kevbuh/froog/blob/main/froog/tensor.py">Custom Tensors</a>
 - Backpropagation
 - Automatic Differentiation (autograd)
@@ -40,11 +38,9 @@ pip install froog
 - Acceleration methods (Adam)
 - Avg & Max pooling
 - <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">EfficientNet</a> inference
-- <a href="https://github.com/kevbuh/froog/blob/main/froog/gpu
-- <a href="https://github.com/kevbuh/froog/blob/main/docs/env.md">Configuration via Environment Variables</a>
-- and a bunch <a href="https://github.com/kevbuh/froog/tree/main/froog">more</a>
+- <a href="https://github.com/kevbuh/froog/blob/main/froog/gpu">GPU Support</a>
 
-
+## Quick Example
 
 Here's how you set up a simple multilayer perceptron for classification on MNIST. Looks pretty similar to pytorch, right?
 
@@ -66,7 +62,7 @@ model = mnistMLP() # create model
 optim = optim.SGD([model.l1, model.l2], lr=0.001) # stochastic gradient descent optimizer
 ```
 
-
+## GPU Support
 
 Device management is handled transparently and will automatically select one of ```[METAL, OPENCL, CPU]```. To use the GPU:
 
@@ -102,11 +98,12 @@ from froog import set_device
 set_device("METAL") # or "OPENCL"
 ```
 
-
+## EfficientNet in froog!
+
+<img src="https://github.com/kevbuh/froog/blob/main/assets/efficientnet_pug.png" alt="pug" height="200">
 
-
+We have an implementation of [EfficientNet v2](https://arxiv.org/abs/2104.00298) built entirely in ```froog``` using the official PyTorch weights! Running inference on this pug...
 
-We have an implementation of [EfficientNet v2](https://arxiv.org/abs/2104.00298) built entirely in ```froog``` using the official PyTorch weights! Run inference with:
 
 ```bash
 python3 models/efficientnet.py <https://optional_image_url>
@@ -116,25 +113,25 @@ inference 4.34 s
 
 imagenet class: 254
 prediction : pug, pug-dog
-probability : 0.
+probability : 0.9402361
 ******************************
 ```
 
 I would recommend checking out the <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">code</a>, it's highly documented and pretty cool.
 
-
+<!-- ## Contributing -->
 <!-- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a> -->
 
-Pull requests will be merged if they:
+<!-- Pull requests will be merged if they:
 * increase simplicity
 * increase functionality
 * increase efficiency
 
-More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>. Make sure to run ```python -m pytest``` before creating a PR.
+More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>. Make sure to run ```python -m pytest``` before creating a PR. -->
 
-
+## API
 
-
+MATH
 - ```.add(y)``` - Addition with y
 - ```.sub(y)``` - Subtraction with y
 - ```.mul(y)``` - Multiplication with y
@@ -143,12 +140,10 @@ More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributin
 - ```.sum()``` - Sum all elements
 - ```.mean()``` - Mean of all elements
 - ```.sqrt()``` - Square root
-
-## Linear Algebra Operations
 - ```.dot(y)``` - Matrix multiplication with y
 - ```.matmul(y)``` - Alias for dot
 
-
+MACHINE LEARNING
 - ```.relu()``` - Rectified Linear Unit activation
 - ```.sigmoid()``` - Sigmoid activation
 - ```.dropout(p=0.5, training=True)``` - Dropout regularization
@@ -156,22 +151,17 @@ More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributin
 - ```.swish()``` - Swish activation function (x * sigmoid(x))
 - ```.conv2d(w, stride=1, groups=1)``` - 2D convolution
 - ```.im2col2dconv(w)``` - Image to column for convolution
-
-## Pooling Operations
 - ```.max_pool2d(kernel_size=(2,2))``` - 2D max pooling
 - ```.avg_pool2d(kernel_size=(2,2))``` - 2D average pooling
 
-
--
--
--
--
--
-- ```.squeeze(dim=None)``` - Remove dimensions of size 1
-- ```.detach()``` - Returns a tensor detached from computation graph
-- ```.assign(x)``` - Assign values from tensor x to this tensor
+TENSOR
+- ```Tensor.zeros(*shape)``` - Create tensor of zeros
+- ```Tensor.ones(*shape)``` - Create tensor of ones
+- ```Tensor.randn(*shape)``` - Create tensor with random normal values
+- ```Tensor.eye(dim)``` - Create identity matrix
+- ```Tensor.arange(start, stop=None, step=1)``` - Create tensor with evenly spaced values
 
-
+TENSOR PROPERTIES
 - ```.shape``` - The shape of the tensor as a tuple
 - ```.size``` - Total number of elements in the tensor
 - ```.ndim``` - Number of dimensions (rank) of the tensor
@@ -180,23 +170,22 @@ More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributin
 - ```.is_gpu``` - Whether tensor is on GPU
 - ```.grad``` - Gradient of tensor with respect to some scalar value
 - ```.data``` - Underlying NumPy array (or GPU buffer)
+- ```.to_float()``` - Converts tensor to float32 data type
+- ```.to_int()``` - Converts tensor to int32 data type
+- ```.to_bool()``` - Converts tensor to boolean data type
+- ```.reshape(*shape)``` - Change tensor shape
+- ```.view(*shape)``` - Alternative to reshape
+- ```.pad2d(padding=None)``` - Pad 2D tensors
+- ```.flatten()``` - Returns a flattened 1D copy of the tensor
+- ```.unsqueeze(dim)``` - Add dimension of size 1 at specified position
+- ```.squeeze(dim=None)``` - Remove dimensions of size 1
+- ```.detach()``` - Returns a tensor detached from computation graph
+- ```.assign(x)``` - Assign values from tensor x to this tensor
 
-
+GPU
 - ```.to_cpu()``` - Moves tensor to CPU
 - ```.to_gpu()``` - Moves tensor to GPU
 - ```.gpu_()``` - In-place GPU conversion (modifies tensor)
 
-
-- ```.to_float()``` - Converts tensor to float32 data type
-- ```.to_int()``` - Converts tensor to int32 data type
-- ```.to_bool()``` - Converts tensor to boolean data type
-
-## Autograd Operations
+AUTOGRAD
 - ```.backward(allow_fill=True)``` - Performs backpropagation
-
-## Tensor Creation Methods
-- ```Tensor.zeros(*shape)``` - Create tensor of zeros
-- ```Tensor.ones(*shape)``` - Create tensor of ones
-- ```Tensor.randn(*shape)``` - Create tensor with random normal values
-- ```Tensor.eye(dim)``` - Create identity matrix
-- ```Tensor.arange(start, stop=None, step=1)``` - Create tensor with evenly spaced values
````
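Only the `model = mnistMLP()` and `optim.SGD(...)` lines of the new README's Quick Example appear as hunk context in this diff. A minimal, non-authoritative sketch of what the full example might look like, assuming the `froog.tensor` and `froog.optim` modules shipped in this package and only the operations listed in the API section; the class body and forward pass are illustrative guesses, not the package's actual code:

```python
# Hypothetical sketch of the README's MNIST MLP quick example.
# The last two lines are quoted from the diff context above; everything else
# is assembled from the documented API (Tensor.randn, .dot, .relu).
from froog.tensor import Tensor   # import path assumed from froog/tensor.py
import froog.optim as optim       # import path assumed from froog/optim.py

class mnistMLP:
  def __init__(self):
    self.l1 = Tensor.randn(784, 128)  # 28x28 input pixels -> 128 hidden units
    self.l2 = Tensor.randn(128, 10)   # 128 hidden units -> 10 digit classes

  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2)

model = mnistMLP() # create model
optim = optim.SGD([model.l1, model.l2], lr=0.001) # stochastic gradient descent optimizer
```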
{froog-0.5.0 → froog-0.5.2}/README.md

The README.md hunks repeat, line for line, the README portion of the PKG-INFO diff above; only the line numbers shift, since PKG-INFO prepends the metadata header (the hunks run from `@@ -2,23 +2,21 @@` through `@@ -169,23 +159,22 @@` instead of `@@ -13,23 +13,21 @@` through `@@ -180,23 +170,22 @@`).
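The new README's GPU Support section quotes `from froog import set_device` and `set_device("METAL")` directly in the diff above; the tensor-movement calls come from the API list (`.to_gpu()`, `.to_cpu()`, `.is_gpu`, `.data`). A rough sketch of how those pieces might fit together; the `froog.tensor` import path and the CPU fallback behaviour are assumptions based on the README's description:

```python
# Hedged sketch of the GPU workflow described in the new README.
from froog import set_device      # quoted verbatim in the README diff
from froog.tensor import Tensor   # import path assumed

set_device("METAL")               # or "OPENCL"; per the README, device
                                  # selection otherwise falls back to CPU

x = Tensor.randn(4, 4)
x = x.to_gpu()                    # move the tensor to the selected device
print(x.is_gpu)                   # True once a GPU backend is active

y = x.relu().to_cpu()             # compute on the device, copy the result back
print(y.data)                     # .data is the underlying NumPy array
```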
{froog-0.5.0 → froog-0.5.2}/froog.egg-info/PKG-INFO

Identical to the PKG-INFO diff above: the egg-info copy carries the same version bump (0.5.0 → 0.5.2) and the same regenerated long description.
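The reorganized API section above groups the surface into MATH, MACHINE LEARNING, TENSOR, TENSOR PROPERTIES, GPU, and AUTOGRAD. A small illustrative example exercising a few of those entries; it assumes `.backward()` is called on a scalar-valued result, as the `allow_fill` default suggests, and is not taken from the package itself:

```python
# Illustrative autograd round trip using only methods named in the API list.
from froog.tensor import Tensor   # import path assumed

w = Tensor.randn(3, 3)            # TENSOR: random weights
x = Tensor.ones(1, 3)             # TENSOR: all-ones input

out = x.dot(w).relu().mean()      # MATH + MACHINE LEARNING ops from the list
out.backward()                    # AUTOGRAD: .backward(allow_fill=True)
print(w.grad)                     # TENSOR PROPERTIES: gradient w.r.t. weights
```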
{froog-0.5.0 → froog-0.5.2}/setup.py

````diff
@@ -9,7 +9,7 @@ with open(os.path.join(directory, 'README.md'), encoding='utf-8') as f:
   long_description = f.read()
 
 setup(name='froog',
-      version='0.5.0',
+      version='0.5.2',
       description='tensor library with opencl and metal support',
       author='Kevin Buhler',
       license='MIT',
````
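The setup.py change above is the version bump itself. One way to confirm which release is actually installed, using standard packaging metadata rather than anything froog-specific (froog may or may not expose a `__version__` attribute):

```python
# Report the installed froog distribution version via importlib.metadata.
from importlib.metadata import version

print(version("froog"))   # expected after upgrading: 0.5.2
```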
All other files listed above are unchanged between 0.5.0 and 0.5.2.