neural-scratch-0.1.0.tar.gz

@@ -0,0 +1,153 @@
+ Metadata-Version: 2.4
+ Name: neural_scratch
+ Version: 0.1.0
+ Summary: A tiny neural network library built from scratch with NumPy.
+ Author: Your Name
+ Author-email: your.email@example.com
+ Classifier: Programming Language :: Python :: 3
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Requires-Python: >=3.6
+ Description-Content-Type: text/markdown
+ Requires-Dist: numpy
+ Requires-Dist: tqdm
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary
@@ -0,0 +1,131 @@
+ # neural_scratch
+
+ A high-performance, educational neural network library built from scratch using NumPy.
+
+ `neural_scratch` provides a Keras-like object-oriented API for building and training neural networks. It is designed to be lightweight, avoiding heavy dependencies like TensorFlow or PyTorch, while leveraging Cython-compiled C extensions for speed and to make reverse engineering harder.
+
+ ---
+
+ ## Table of Contents
+ - [Features](#features)
+ - [Installation](#installation)
+ - [Quick Start](#quick-start)
+ - [API Reference](#api-reference)
+   - [Models](#models)
+   - [Layers](#layers)
+   - [Activations](#activations)
+   - [Losses](#losses)
+   - [Optimizers](#optimizers)
+ - [Anti-Reverse Engineering Security](#anti-reverse-engineering-security)
+
+ ---
+
+ ## Features
+
+ - **Pure NumPy Math**: Built entirely on standard matrix operations without heavy machine learning frameworks.
+ - **Keras-like API**: Intuitive `Sequential` model structure that makes building networks easy.
+ - **C-Extension Compilation**: Python code is compiled via Cython into native `.so` extension modules, making it substantially harder to decompile or reverse engineer than ordinary Python bytecode.
+ - **Customizable**: Control the exact size and shape of every layer and activation function.
+
+ ---
+
+ ## Installation
+
+ You can install `neural_scratch` directly from PyPI via `pip`:
+
+ ```bash
+ pip install neural_scratch
+ ```
+
+ *(Note: if building from source, make sure a C compiler is installed, then run `pip install .` to compile the Cython extensions.)*
+
+ ---
+
+ ## Quick Start
+
+ Here is a simple example demonstrating how to build a model that solves the XOR problem:
+
+ ```python
+ import numpy as np
+ from neural_scratch import Sequential, Dense, ReLU, SoftmaxCrossEntropy
+ from neural_scratch.activations import Softmax
+
+ # 1. Create data (XOR problem)
+ X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
+ # One-hot targets: class 0 (XOR = 0) -> [1, 0], class 1 (XOR = 1) -> [0, 1]
+ Y_train = np.array([[1, 0], [0, 1], [0, 1], [1, 0]])
+
+ # 2. Build the model
+ model = Sequential()
+
+ # First layer: 2 input neurons (one per XOR input), 3 output neurons
+ model.add(Dense(input_size=2, output_size=3))
+ model.add(ReLU())
+
+ # Second layer: 3 input neurons (must match the previous layer), 2 output neurons
+ model.add(Dense(input_size=3, output_size=2))
+
+ # Note: we feed raw logits directly to the loss function for numerical stability.
+
+ # 3. Compile and train
+ loss = SoftmaxCrossEntropy()
+ model.use(loss, loss.prime)
+ model.fit(X_train, Y_train, epochs=1000, learning_rate=0.1, batch_size=4)
+
+ # 4. Predict
+ predictions = model.predict_batch(X_train)
+
+ # Apply softmax to the raw logits to get final probabilities
+ probs = Softmax().forward(predictions)
+ print(probs)
+ ```
+
+ ---
+
+ ## API Reference
+
+ ### Models
+
+ #### `Sequential()`
+ The core container for stacking layers.
+ - `add(layer)`: Appends a layer (Dense or Activation) to the network.
+ - `use(loss, loss_prime)`: Sets the loss function and its derivative.
+ - `fit(x_train, y_train, epochs, learning_rate, batch_size, verbose)`: Trains the model.
+ - `predict(input_data)`: Runs a forward pass on individual samples.
+ - `predict_batch(input_data)`: Runs a vectorized forward pass on a batch of samples.
+
+ ### Layers
+
+ #### `Dense(input_size, output_size, seed=None)`
+ A standard fully-connected neural network layer.
+ - **`input_size`**: The number of input neurons. This must match the `output_size` of the previous layer, or the feature dimension of your dataset for the first layer.
+ - **`output_size`**: The number of output neurons.
+ - Uses **He initialization** automatically, which helps prevent vanishing and exploding gradients with ReLU-style activations (a sketch of the idea follows this list).
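+
+ The compiled distribution does not expose the initializer's source, but He initialization in NumPy typically looks like the following sketch (the `he_dense_params` helper is hypothetical, not part of the library's API):
+
+ ```python
+ import numpy as np
+
+ def he_dense_params(input_size, output_size, seed=None):
+     """Hypothetical helper: He-initialized parameters for a Dense layer."""
+     rng = np.random.default_rng(seed)
+     # He initialization draws weights from a zero-mean Gaussian with
+     # std = sqrt(2 / fan_in), preserving activation variance under ReLU.
+     weights = rng.normal(0.0, np.sqrt(2.0 / input_size), size=(input_size, output_size))
+     biases = np.zeros((1, output_size))  # biases conventionally start at zero
+     return weights, biases
+ ```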
+
+ ### Activations
+
+ You can append activation functions directly to your `Sequential` model:
+ - `ReLU()`: Rectified Linear Unit. The standard choice for hidden layers.
+ - `Sigmoid()`: Squashes outputs to the `[0, 1]` range.
+ - `Tanh()`: Squashes outputs to the `[-1, 1]` range.
+ - `Softmax()`: Converts a vector of logits into a probability distribution (see the sketch after this list).
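+
+ Softmax implementations usually subtract the row maximum before exponentiating so that large logits cannot overflow. A minimal NumPy sketch of that standard trick (the textbook formulation, not necessarily the library's compiled code):
+
+ ```python
+ import numpy as np
+
+ def stable_softmax(logits):
+     """Numerically stable softmax over the last axis."""
+     # Subtracting the max leaves the result unchanged (softmax is
+     # shift-invariant) but keeps np.exp from overflowing.
+     shifted = logits - np.max(logits, axis=-1, keepdims=True)
+     exps = np.exp(shifted)
+     return exps / np.sum(exps, axis=-1, keepdims=True)
+
+ print(stable_softmax(np.array([[1000.0, 1001.0]])))  # ~[[0.269, 0.731]], no overflow
+ ```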
+
+ ### Losses
+
+ Loss functions measure the network's error and supply the gradient that drives backpropagation.
+ - `mse(y_true, y_pred)` & `mse_prime(y_true, y_pred)`: Mean Squared Error.
+ - `categorical_crossentropy(y_true, y_pred)` & `categorical_crossentropy_prime(y_true, y_pred)`: Cross-entropy loss.
+ - `SoftmaxCrossEntropy()`: Recommended for classification. Combining Softmax and cross-entropy in a single step keeps backpropagation numerically stable and yields the simple gradient sketched after this list.
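+
+ The reason the fused loss is preferred: when softmax and cross-entropy are combined, the gradient with respect to the logits collapses to `probs - y_true`, avoiding division by near-zero probabilities in the backward pass. A standalone sketch of the math (not the library's compiled internals):
+
+ ```python
+ import numpy as np
+
+ def softmax_cross_entropy(logits, y_true):
+     """Fused softmax + cross-entropy: returns (mean loss, gradient w.r.t. logits)."""
+     # Stable log-softmax via the max-subtraction trick.
+     shifted = logits - np.max(logits, axis=-1, keepdims=True)
+     log_probs = shifted - np.log(np.sum(np.exp(shifted), axis=-1, keepdims=True))
+     loss = -np.mean(np.sum(y_true * log_probs, axis=-1))
+     # Fused backward pass: d(loss)/d(logits) = (probs - y_true) / batch_size.
+     grad = (np.exp(log_probs) - y_true) / logits.shape[0]
+     return loss, grad
+ ```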
+
+ ### Optimizers
+
+ - `SGD(learning_rate)`: Stochastic Gradient Descent (used automatically by `Sequential.fit()`); its update rule is sketched below.
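+
+ For reference, vanilla SGD steps each parameter against its gradient, scaled by the learning rate; a two-line sketch of the standard rule (illustrative only, since the library's `SGD` lives in compiled code):
+
+ ```python
+ # Vanilla SGD update: works elementwise on NumPy parameter arrays.
+ def sgd_step(param, grad, learning_rate=0.1):
+     return param - learning_rate * grad
+ ```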
+
+ ---
+
+ ## Anti-Reverse Engineering Security
+
+ This library distributes native C extensions instead of Python bytecode. All core logic (layers, backpropagation, and loss functions) is compiled via `Cython` into `.so` shared objects during the build process (a sketch of such a build configuration follows this section).
+
+ This design choice means that:
+ 1. **Performance** improves, because compiled code avoids Python interpreter overhead.
+ 2. **Reverse engineering** with standard Python decompilers (like `uncompyle6` or `pycdc`) is a dead end: those tools target CPython bytecode, while the distributed artifacts contain native machine code rather than `.pyc` files. As with any native library, determined analysis with binary disassemblers remains possible.
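+
+ The build script itself is not shown in this diff, so the following is a minimal sketch of how a Cython-based build is typically wired up with setuptools (module names are taken from the package's imports; the rest is an assumption):
+
+ ```python
+ # setup.py -- illustrative sketch, not the package's actual build script
+ from setuptools import setup
+ from Cython.Build import cythonize
+
+ setup(
+     name="neural_scratch",
+     version="0.1.0",
+     # cythonize() translates each module to C; setuptools then compiles
+     # the C into a native extension (.so) with the platform compiler.
+     ext_modules=cythonize(
+         [
+             "neural_scratch/network.py",
+             "neural_scratch/layers.py",
+             "neural_scratch/activations.py",
+             "neural_scratch/losses.py",
+             "neural_scratch/optimizers.py",
+         ],
+         language_level=3,
+     ),
+     install_requires=["numpy", "tqdm"],
+ )
+ ```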
@@ -0,0 +1,5 @@
+ from .network import Sequential
+ from .layers import Dense
+ from .activations import Tanh, ReLU, Sigmoid, Softmax
+ from .losses import mse, mse_prime, SoftmaxCrossEntropy
+ from .optimizers import SGD