froog 0.2.6__py3-none-any.whl → 0.2.8__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,154 @@
1
+ Metadata-Version: 2.1
2
+ Name: froog
3
+ Version: 0.2.8
4
+ Summary: a beautifully simplistic ml framework
5
+ Author: Kevin Buhler
6
+ License: MIT
7
+ Classifier: Programming Language :: Python :: 3
8
+ Classifier: License :: OSI Approved :: MIT License
9
+ Requires-Python: >=3.8
10
+ Description-Content-Type: text/markdown
11
+ License-File: LICENSE
12
+ Requires-Dist: numpy
13
+ Requires-Dist: requests
14
+ Requires-Dist: matplotlib
15
+
16
+ # froog <img src="https://github.com/kevbuh/froog/actions/workflows/test.yml/badge.svg" alt="unit test badge" > <img src="https://static.pepy.tech/badge/froog" alt="num downloads badge">
17
+ <div align="center" >
18
+ <img src="https://raw.githubusercontent.com/kevbuh/froog/main/assets/froog.png" alt="froog the frog" height="200">
19
+ <br/>
20
+ froog: fast real-time optimization of gradients
21
+ <br/>
22
+ a beautifully compact machine-learning library
23
+ <br/>
24
+ <a href="https://github.com/kevbuh/froog">homepage</a> | <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a> | <a href="https://pypi.org/project/froog/">pip</a>
25
+ <br/>
26
+ <br/>
27
+ </div>
28
+
29
+ <!-- froog is a SUPER SIMPLE machine learning framework with the goal of creating tools with AI, easily and efficiently. -->
30
+
31
+ ```froog``` is an easy-to-read machine-learning library.
32
+
33
+ <!-- froog's driving philosophy is demanding simplicity in a world of complexity. -->
34
+
35
+ <!-- Tensorflow and PyTorch are insanely complex with enormous codebases and are meant for expert development. -->
36
+
37
+ ```froog``` is meant for people getting into machine learning who want to understand how a machine-learning framework works under the hood, before everything is ultra-optimized (as all modern ML libraries are).
38
+
39
+ ```froog``` encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a>
40
+
41
+ all of this in under 1000 lines.
42
+
43
+ # Installation
44
+ ```bash
45
+ pip install froog
46
+ ```
47
+
48
+ More information on installing ```froog``` is in the <a href="https://github.com/kevbuh/froog/blob/main/docs/install.md">installation</a> docs.
49
+
50
+
51
+ # Features
52
+ - <a href="https://github.com/kevbuh/froog/blob/main/froog/tensor.py">Custom Tensors</a>
53
+ - Backpropagation
54
+ - Automatic Differentiation (autograd)
55
+ - Forward and backward passes
56
+ - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops.py">ML Operations</a>
57
+ - 2D Convolutions (im2col)
58
+   - Numerical gradient checking (see the sketch after this list)
59
+ - Acceleration methods (Adam)
60
+ - Avg & Max pooling
61
+ - <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">EfficientNet</a> inference
62
+ - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">GPU Support</a>
63
+ - and a bunch <a href="https://github.com/kevbuh/froog/tree/main/froog">more</a>
64
+
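+ For intuition, here is a tiny, self-contained NumPy sketch of the idea behind the numerical gradient checking feature listed above: estimate each partial derivative with a central difference and compare it against the analytic gradient. The helper name `numerical_grad` and the toy function are only illustrations of the technique, not part of ```froog```'s API.
+
+ ```python
+ import numpy as np
+
+ def numerical_grad(f, x, eps=1e-5):
+     # central-difference estimate of df/dx, one coordinate at a time
+     grad = np.zeros_like(x)
+     for i in range(x.size):
+         bump = np.zeros_like(x)
+         bump.flat[i] = eps
+         grad.flat[i] = (f(x + bump) - f(x - bump)) / (2 * eps)
+     return grad
+
+ # check the analytic gradient of f(x) = sum(x**2), which is 2*x
+ x = np.random.randn(5)
+ numeric = numerical_grad(lambda v: np.sum(v ** 2), x)
+ print(np.allclose(numeric, 2 * x))  # expect True
+ ```
+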
65
+ # Sneak Peek
66
+ ```python
67
+ from froog.tensor import Tensor
68
+ from froog.nn import Linear
69
+ import froog.optim as optim
70
+
71
+ class mnistMLP:
72
+ def __init__(self):
73
+ self.l1 = Tensor(Linear(784, 128))
74
+ self.l2 = Tensor(Linear(128, 10))
75
+
76
+ def forward(self, x):
77
+ return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
78
+
79
+ model = mnistMLP()
80
+ optimizer = optim.SGD([model.l1, model.l2], lr=0.001)  # plain SGD over both weight tensors
81
+ ```
82
+
83
+ # Overview
84
+
85
+ The most fundamental concept in all of ```froog``` and machine learning is the Tensor. A <a href="https://en.wikipedia.org/wiki/Tensor_(machine_learning)">tensor</a> is, loosely, a matrix of matrices; more precisely, it is a multi-dimensional array of numbers.
86
+
87
+ You can create a Tensor in ```froog``` by:
88
+ ```python
89
+ import numpy as np
90
+ from froog.tensor import Tensor
91
+
92
+ my_tensor = Tensor(np.array([1, 2, 3]))
93
+ ```
94
+
95
+ Notice that we imported NumPy. If you create a Tensor manually, make sure the data you pass in is a NumPy array!
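+
+ Because a Tensor wraps a NumPy array, the raw array stays accessible through the `.data` attribute (the convolutional example further down uses it the same way). Here is a minimal sketch, assuming only what this README already shows; check the tensor docs for any dtype requirements:
+
+ ```python
+ import numpy as np
+ from froog.tensor import Tensor
+
+ t = Tensor(np.array([[1., 2.], [3., 4.]], dtype=np.float32))
+ print(t.data)        # the underlying NumPy array
+ print(t.data.shape)  # (2, 2)
+ ```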
96
+
97
+ Learn more about ```froog``` Tensors <a href="https://github.com/kevbuh/froog/blob/main/docs/tensors.md">here</a>.
98
+
99
+ ### Actually creating something
100
+
101
+ Okay cool, so now you know that ```froog```'s main datatype is the Tensor and that it uses NumPy under the hood. How do you actually build a model?
102
+
103
+ We wanted to make it as simple as possible for you to do so.
104
+
105
+ Here's an example of how to create an MNIST multi-layer perceptron (MLP):
106
+
107
+ ```python
108
+ from froog.tensor import Tensor
109
+ import froog.optim as optim
110
+ from froog.nn import Linear
111
+
112
+ class mnistMLP:
113
+ def __init__(self):
114
+         self.l1 = Tensor(Linear(784, 128))  # 784 input pixels -> 128 hidden units
115
+         self.l2 = Tensor(Linear(128, 10))  # 128 hidden units -> 10 digit classes
116
+
117
+ def forward(self, x):
118
+         return x.dot(self.l1).relu().dot(self.l2).logsoftmax()  # linear -> ReLU -> linear -> log-softmax
119
+
120
+ model = mnistMLP()
121
+ optimizer = optim.SGD([model.l1, model.l2], lr=0.001)  # plain SGD over both weight tensors
122
+ ```
123
+
124
+ You can also create a convolutional neural net:
125
+
126
+ ```python
127
+ class SimpleConvNet:
128
+ def __init__(self):
129
+ conv_size = 5
130
+ channels = 17
131
+         self.c1 = Tensor(Linear(channels, 1, conv_size, conv_size))  # (num_filters, color_channels, kernel_h, kernel_w)
132
+ self.l1 = Tensor(Linear((28-conv_size+1)**2*channels, 128)) # (28-conv+1)(28-conv+1) since kernel isn't padded
133
+ self.l2 = Tensor(Linear(128, 10)) # MNIST output is 10 classes
134
+
135
+ def forward(self, x):
136
+         x.data = x.data.reshape((-1, 1, 28, 28))  # infer the batch size; each image becomes 1x28x28
137
+ x = x.conv2d(self.c1).relu() # pass through conv first
138
+         x = x.reshape(shape=(x.shape[0], -1))  # flatten the conv output for the fully connected layers
139
+ return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
140
+ ```
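+
+ Training either model follows the usual autograd loop: forward pass, loss, backward pass, optimizer step. The sketch below is only an outline under stated assumptions: it reuses the ```mnistMLP``` class and the `optim` import from the example above, fabricates a random mini-batch, and assumes `mul`/`mean` Tensor ops, a `.backward()` method, and an `optimizer.step()` method in the style of similar autograd libraries. See the <a href="https://github.com/kevbuh/froog/tree/main/models">models</a> folder for froog's actual training code.
+
+ ```python
+ import numpy as np
+ import froog.optim as optim
+ from froog.tensor import Tensor
+
+ # hypothetical random mini-batch: 32 flattened 28x28 images
+ x_batch = Tensor(np.random.randn(32, 784).astype(np.float32))
+
+ # targets encoded so that mul + mean yields a negative log-likelihood
+ labels = np.random.randint(0, 10, size=32)
+ y = np.zeros((32, 10), dtype=np.float32)
+ y[np.arange(32), labels] = -1.0
+ y_batch = Tensor(y)
+
+ model = mnistMLP()                                     # class defined in the example above
+ optimizer = optim.SGD([model.l1, model.l2], lr=0.001)
+
+ out = model.forward(x_batch)                           # log-probabilities from logsoftmax
+ loss = out.mul(y_batch).mean()                         # assumed mul/mean ops -> mean NLL (up to a constant)
+ loss.backward()                                        # assumed autograd backward pass
+ optimizer.step()                                       # assumed SGD parameter update
+ ```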
141
+
142
+ # Contributing
143
+ <!-- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a> -->
144
+
145
+ Pull requests will be merged if they:
146
+ * increase simplicity
147
+ * increase functionality
148
+ * increase efficiency
149
+
150
+ More info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>.
151
+
152
+ # Documentation
153
+
154
+ Need more information about how ```froog``` works? Visit the <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a>.
@@ -6,8 +6,8 @@ froog/ops_gpu.py,sha256=bQ2bzMqmvY2xLEaTjOIXN3KOb-7yZEDjmliV8tWlC6g,15469
6
6
  froog/optim.py,sha256=m8Q1xe3WwU41obGSMVjRMIs3rWqfqRWfhlbhF9oJyWA,2450
7
7
  froog/tensor.py,sha256=Fr2Zx0bWfS7EzcLpG3MwuV6GEdMVClMNHQCNpO45Iws,7721
8
8
  froog/utils.py,sha256=LXc-cXOhjqfvRdUXBkysqwGPKwgyWrjB0SdmXVeCIwk,2945
9
- froog-0.2.6.dist-info/LICENSE,sha256=k_856uNmcNUoLC_HkI18c1WomqvQ1Ioqk6gwYfWQiaM,31
10
- froog-0.2.6.dist-info/METADATA,sha256=56ZvgMkO9ts-PqtpkmD9s0XshSy_HT9OXfreKfxPegA,2935
11
- froog-0.2.6.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92
12
- froog-0.2.6.dist-info/top_level.txt,sha256=XPz35C_JWu20LlsVxIMdMZn8DD58Ak78LwgWFBGYZwY,6
13
- froog-0.2.6.dist-info/RECORD,,
9
+ froog-0.2.8.dist-info/LICENSE,sha256=k_856uNmcNUoLC_HkI18c1WomqvQ1Ioqk6gwYfWQiaM,31
10
+ froog-0.2.8.dist-info/METADATA,sha256=Wv-UkujmjpIvKnDyvPb_my89cfhOH7Sp0VxWoHi9W84,5831
11
+ froog-0.2.8.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92
12
+ froog-0.2.8.dist-info/top_level.txt,sha256=XPz35C_JWu20LlsVxIMdMZn8DD58Ak78LwgWFBGYZwY,6
13
+ froog-0.2.8.dist-info/RECORD,,
@@ -1,80 +0,0 @@
1
- Metadata-Version: 2.1
2
- Name: froog
3
- Version: 0.2.6
4
- Summary: a beautifully simplistic ml framework
5
- Author: Kevin Buhler
6
- License: MIT
7
- Classifier: Programming Language :: Python :: 3
8
- Classifier: License :: OSI Approved :: MIT License
9
- Requires-Python: >=3.8
10
- Description-Content-Type: text/markdown
11
- License-File: LICENSE
12
- Requires-Dist: numpy
13
- Requires-Dist: requests
14
- Requires-Dist: matplotlib
15
-
16
- # froog <img src="https://github.com/kevbuh/froog/actions/workflows/test.yml/badge.svg" alt="unit test badge" >
17
- <div align="center" >
18
- <img src="https://raw.githubusercontent.com/kevbuh/froog/main/assets/froog.png" alt="froog the frog" height="200">
19
- <br/>
20
- froog: fast real-time optimization of gradients
21
- <br/>
22
- a beautifully compact machine-learning library
23
- <br/>
24
- <a href="https://github.com/kevbuh/froog">homepage</a> | <a href="https://github.com/kevbuh/froog/tree/main/docs">documentation</a> | <a href="https://pypi.org/project/froog/">pip</a>
25
- <br/>
26
- <br/>
27
- </div>
28
-
29
- froog is a SUPER SIMPLE machine learning framework with the goal of creating tools with AI --> easily and efficiently.
30
-
31
- froog encapsulates everything from <a href="https://github.com/kevbuh/froog/blob/main/models/linear_regression.py">linear regression</a> to <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">convolutional neural networks </a>
32
-
33
- all of this in under 1000 lines. in the <a href="https://github.com/kevbuh/froog/tree/main/tadpole">tadpole folder</a>.
34
-
35
- # Installation
36
- ```bash
37
- pip install froog
38
- ```
39
-
40
- ### Overview of Features
41
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/tensor.py">Custom Tensors</a>
42
- - Backpropagation
43
- - Automatic Differentiation (autograd)
44
- - Forward and backward passes
45
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops.py">ML Operations</a>
46
- - 2D Convolutions (im2col)
47
- - Numerical gradient checking
48
- - Acceleration methods (Adam)
49
- - Avg & Max pooling
50
- - <a href="https://github.com/kevbuh/froog/blob/main/models/efficientnet.py">EfficientNet</a> inference
51
- - <a href="https://github.com/kevbuh/froog/blob/main/froog/ops_gpu.py">GPU Support</a>
52
- - and a bunch <a href="https://github.com/kevbuh/froog/tree/main/froog">more</a>
53
-
54
- ### Sneak Peek
55
- ```python
56
- from froog.tensor import Tensor
57
- from froog.nn import Linear
58
- import froog.optim as optim
59
-
60
- class mnistMLP:
61
- def __init__(self):
62
- self.l1 = Tensor(Linear(784, 128))
63
- self.l2 = Tensor(Linear(128, 10))
64
-
65
- def forward(self, x):
66
- return x.dot(self.l1).relu().dot(self.l2).logsoftmax()
67
-
68
- model = mnistMLP()
69
- optim = optim.SGD([model.l1, model.l2], lr=0.001)
70
- ```
71
-
72
- # Bounties
73
- THERES LOT OF STUFF TO WORK ON! VISIT THE <a href="https://github.com/kevbuh/froog/blob/main/docs/bounties.md">BOUNTY SHOP</a>
74
-
75
- Pull requests will be merged if they:
76
- * increase simplicity
77
- * increase functionality
78
- * increase efficiency
79
-
80
- more info on <a href="https://github.com/kevbuh/froog/blob/main/docs/contributing.md">contributing</a>