NeuralEngine-0.5.1.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,66 @@
1
+ # Contribution Guide
2
+
3
+ NeuralEngine is an open-source project, and I warmly welcome all kinds of contributions, whether it's code, documentation, bug reports, feature ideas, or cool examples to share. If you want to help make NeuralEngine better, you're in the right place!
4
+
5
+ ## How to Contribute
6
+ 1. **Fork the repository** and create a new branch for your feature, fix, or documentation update (a typical git workflow is sketched after this list).
7
+ 2. **Keep it clean and consistent**: Try to follow the existing code style, naming conventions and documentation patterns. Well-commented, readable code is always appreciated!
8
+ 3. **Add tests** for new features or bug fixes if you can.
9
+ 4. **Document your changes**: Update or add docstrings and README sections so others can easily understand your work.
10
+ 5. **Open a pull request** describing what you've changed and why it's awesome.
11
+
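+ For steps 1 and 5, a typical workflow looks like this (the branch name and commit message below are just placeholders):
+ ```bash
+ # create a feature branch for your change
+ git checkout -b feature/my-improvement
+ # commit your work and push the branch to your fork
+ git add .
+ git commit -m "Describe your change here"
+ git push origin feature/my-improvement
+ # then open a pull request on GitHub
+ ```
+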
12
+ ## Development Setup
13
+ To start coding, you'll need **Python 3.10+**.
14
+
15
+ 1. **Fork & Clone**:
16
+ ```bash
17
+ git clone https://github.com/YOUR-USERNAME/NeuralEngine.git
18
+ cd NeuralEngine
19
+ ```
20
+
21
+ 2. **Create a Virtual Environment**:
22
+ ```bash
23
+ python -m venv venv
24
+ # Windows
25
+ venv\Scripts\activate
26
+ # macOS/Linux
27
+ source venv/bin/activate
28
+ ```
29
+
30
+ 3. **Install Dependencies**:
31
+ ```bash
32
+ pip install -r requirements.txt
33
+ ```
34
+
35
+ 4. **Install in Editable Mode**: This is crucial! It allows you to modify the code and see changes immediately without reinstalling.
36
+ ```bash
37
+ pip install -e .
38
+ ```
39
+
40
+ ## Coding Standards
41
+ To maintain the quality and performance of NeuralEngine, please follow these technical guidelines:
42
+
43
+ - **Type Hinting**: This project uses a custom `Typed` metaclass (`neuralengine.config.Typed`) for runtime validation. All new functions and methods **must** include Python type hints.
44
+ ```python
45
+ # Good
46
+ def my_function(x: Tensor, alpha: float = 0.1) -> Tensor: ...
47
+ ```
48
+
49
+ - **Device Agnostic Code**: Do not import `numpy` or `cupy` directly for tensor operations. Use the backend provider `xp` defined in `config.py`.
50
+ ```python
51
+ import neuralengine.config as cf
52
+ # ...
53
+ data = cf.xp.zeros((10, 10)) # Automatically handles CPU/GPU
54
+ ```
55
+
56
+ ## What Can You Contribute?
57
+ - New layers, loss functions, optimizers, metrics, or utility functions
58
+ - Improvements to existing components
59
+ - Bug fixes and performance tweaks
60
+ - Documentation updates and tutorials
61
+ - Example scripts and notebooks
62
+ - Feature requests, feedback and ideas
63
+
64
+ Every contribution is reviewed for quality and consistency, but don't worry—if you have questions or need help, just open an issue or start a discussion. I'm happy to help and love seeing new faces in the community!
65
+
66
+ Thanks for making NeuralEngine better, together! 🚀
@@ -0,0 +1,24 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2025 Prajjwal Pratap Shah
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice, this permission notice, and the following attribution clause shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ Attribution Clause:
16
+ Any public use, distribution, or derivative work of this software must give appropriate credit to the original developer, Prajjwal Pratap Shah, by including a visible acknowledgment in documentation, websites, or other materials accompanying the software.
17
+
18
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
19
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
20
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
21
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
22
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
23
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
24
+ SOFTWARE.
@@ -0,0 +1,4 @@
1
+ include requirements.txt
2
+ include CONTRIBUTING.md
3
+ exclude NeuralEngine.webp
4
+ exclude test.py
@@ -0,0 +1,295 @@
1
+ Metadata-Version: 2.4
2
+ Name: NeuralEngine
3
+ Version: 0.5.1
4
+ Summary: A framework/library for building and training neural networks.
5
+ Home-page: https://github.com/Prajjwal2404/NeuralEngine
6
+ Author: Prajjwal Pratap Shah
7
+ Author-email: Prajjwal Pratap Shah <prajjwalpratapshah@outlook.com>
8
+ Maintainer: Prajjwal Pratap Shah
9
+ License: MIT License
10
+
11
+ Copyright (c) 2025 Prajjwal Pratap Shah
12
+
13
+ Permission is hereby granted, free of charge, to any person obtaining a copy
14
+ of this software and associated documentation files (the "Software"), to deal
15
+ in the Software without restriction, including without limitation the rights
16
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
17
+ copies of the Software, and to permit persons to whom the Software is
18
+ furnished to do so, subject to the following conditions:
19
+
20
+ The above copyright notice, this permission notice, and the following attribution clause shall be included in all
21
+ copies or substantial portions of the Software.
22
+
23
+ Attribution Clause:
24
+ Any public use, distribution, or derivative work of this software must give appropriate credit to the original developer, Prajjwal Pratap Shah, by including a visible acknowledgment in documentation, websites, or other materials accompanying the software.
25
+
26
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
27
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
28
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
29
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
30
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
31
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
32
+ SOFTWARE.
33
+
34
+ Project-URL: Homepage, https://github.com/Prajjwal2404/NeuralEngine
35
+ Keywords: deep-learning,neural-networks,machine-learning,numpy,cupy,autograd
36
+ Classifier: Programming Language :: Python :: 3
37
+ Classifier: Operating System :: OS Independent
38
+ Requires-Python: >=3.10
39
+ Description-Content-Type: text/markdown
40
+ License-File: LICENSE
41
+ Requires-Dist: numpy>=1.26.4
42
+ Provides-Extra: cuda
43
+ Requires-Dist: cupy-cuda12x>=12.0.0; extra == "cuda"
44
+ Dynamic: author
45
+ Dynamic: home-page
46
+ Dynamic: license-file
47
+ Dynamic: requires-python
48
+
49
+ <p align="center">
50
+ <img src="https://raw.githubusercontent.com/Prajjwal2404/NeuralEngine/refs/heads/main/NeuralEngine.webp" alt="NeuralEngine Cover" width="600" />
51
+ </p>
52
+
53
+ <p align="center">
54
+ <a href="https://github.com/Prajjwal2404/NeuralEngine/pulse" alt="Activity">
55
+ <img src="https://img.shields.io/github/commit-activity/m/Prajjwal2404/NeuralEngine" /></a>
56
+ <a href="https://github.com/Prajjwal2404/NeuralEngine/graphs/contributors" alt="Contributors">
57
+ <img src="https://img.shields.io/github/contributors/Prajjwal2404/NeuralEngine" /></a>
58
+ <a href="https://pypi.org/project/NeuralEngine" alt="PyPI">
59
+ <img src="https://img.shields.io/pypi/v/NeuralEngine?color=brightgreen&label=PyPI" /></a>
60
+ <a href="https://www.python.org" alt="Language">
61
+ <img src="https://img.shields.io/badge/language-Python-blue"></a>
62
+ <a href="mailto:prajjwalpratapshah@outlook.com" alt="Email">
63
+ <img src="https://img.shields.io/badge/-Email-red?style=flat&logo=gmail&logoColor=white"></a>
64
+ <a href="https://www.linkedin.com/in/prajjwal2404" alt="LinkedIn">
65
+ <img src="https://img.shields.io/badge/LinkedIn-blue?style=flat"></a>
66
+ </p>
67
+
68
+
69
+ # NeuralEngine
70
+
71
+ A framework/library for building and training neural networks in Python. NeuralEngine provides core components for constructing, training and evaluating neural networks, with support for both CPU and GPU (CUDA) acceleration. Designed for extensibility, performance and ease of use, it is suitable for research, prototyping and production.
72
+
73
+ ## Table of Contents
74
+ - [Features](#features)
75
+ - [Installation](#installation)
76
+ - [Example Usage](#example-usage)
77
+ - [Project Structure](#project-structure)
78
+ - [Capabilities & Documentation](#capabilities--documentation)
79
+ - [Contribution](#contribution)
80
+ - [License](#license)
81
+ - [Attribution](#attribution)
82
+ - [Disclaimer](#disclaimer)
83
+
84
+ ## Features
85
+ - Custom tensor operations (CPU/GPU support via NumPy and optional CuPy)
86
+ - Configurable neural network layers (Linear, Flatten, etc.)
87
+ - Built-in dataloaders, loss functions, metrics and optimizers
88
+ - Model class for easy training and evaluation
89
+ - Device management (CPU/CUDA)
90
+ - Utilities for deep learning workflows
91
+ - Autograd capabilities using dynamic computational graphs
92
+ - Extensible design for custom layers, losses, metrics and optimizers
93
+ - Flexible data type configuration and runtime type validation
94
+
95
+ ## Installation
96
+ Install via pip:
97
+ ```bash
98
+ pip install NeuralEngine
99
+ ```
100
+ Or clone and install locally:
101
+ ```bash
102
+ pip install .
103
+ ```
104
+
105
+ ### Optional CUDA Support
106
+ To enable GPU acceleration, install via pip:
107
+ ```bash
108
+ pip install NeuralEngine[cuda]
109
+ ```
110
+ Or install the optional dependency directly:
111
+ ```bash
112
+ pip install cupy-cuda12x
113
+ ```
114
+
115
+ ## Example Usage
116
+ ```python
117
+ import neuralengine as ne
118
+
119
+ # Set device ('cpu' or 'cuda')
120
+ ne.set_device('cuda')
121
+
122
+ # Load your dataset (example: MNIST)
123
+ (x_train, y_train), (x_test, y_test) = load_mnist_data()
124
+
125
+ # Preprocess data
126
+ x_train, x_test = ne.tensor(x_train), ne.tensor(x_test)
127
+ x_train, x_test = ne.normalize(x_train), ne.normalize(x_test)
128
+ y_train, y_test = ne.one_hot(y_train), ne.one_hot(y_test)
129
+
130
+ train_data = ne.DataLoader(x_train, y_train, batch_size=10000, val_split=0.2)
131
+ test_data = ne.DataLoader(x_test, y_test, batch_size=10000, shuffle=False)
132
+
133
+ # Build your model
134
+ model = ne.Model(
135
+ input_size=(28, 28),
136
+ optimizer=ne.Adam(),
137
+ loss=ne.CrossEntropy(),
138
+ metrics=ne.ClassificationMetrics(),
139
+ dtype=ne.DType.FLOAT16
140
+ )
141
+ model(
142
+ ne.Flatten(),
143
+ ne.Linear(64, activation=ne.ReLU()),
144
+ ne.Linear(10, activation=ne.Softmax()),
145
+ )
146
+
147
+ # Train and evaluate
148
+ model.train(train_data, epochs=30, patience=3)
149
+ result = model.eval(test_data)
150
+ ```
151
+
152
+ ## Project Structure
153
+ ```
154
+ neuralengine/
155
+ __init__.py
156
+ config.py
157
+ tensor.py
158
+ utils.py
159
+ nn/
160
+ __init__.py
161
+ dataload.py
162
+ layers.py
163
+ loss.py
164
+ metrics.py
165
+ model.py
166
+ optim.py
167
+ setup.py
168
+ requirements.txt
169
+ pyproject.toml
170
+ MANIFEST.in
171
+ LICENSE
172
+ README.md
173
+ ```
174
+
175
+ ## Capabilities & Documentation
176
+ NeuralEngine offers the following core capabilities:
177
+
178
+ ### Device Management
179
+ - `ne.set_device('cpu'|'cuda')`: Switch between CPU and GPU (CUDA) for computation.
180
+ - `Tensor.to(device)`, `Layer.to(device)`: Move tensors and layers to the specified device.
181
+ - `ne.get_device()`, `ne.has_cuda()`: Get the current device and check CUDA availability (see the sketch below).
182
+
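+ A minimal sketch of device handling, assuming `ne.has_cuda()` returns a boolean as described above:
+ ```python
+ import neuralengine as ne
+ 
+ # use the GPU when CUDA/CuPy is available, otherwise fall back to the CPU
+ ne.set_device('cuda' if ne.has_cuda() else 'cpu')
+ print(ne.get_device())
+ 
+ x = ne.zeros(32, 10)   # tensors are created on the current device
+ x = x.to('cpu')        # or moved explicitly when needed
+ ```
+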
183
+ ### Tensors & Autograd
184
+ - Custom tensor implementation supporting NumPy and CuPy backends.
185
+ - Automatic differentiation (autograd) using dynamic computational graphs for backpropagation (a short example follows the operation list below).
186
+ - Supports gradients, data types, iteration, parameter updates and custom operations.
187
+ - Supported tensor operations:
188
+ - Arithmetic: `+`, `-`, `*`, `/`, `**` (power)
189
+ - Matrix multiplication: `@`
190
+ - Mathematical: `log`, `sqrt`, `exp`, `abs`
191
+ - Reductions: `sum`, `max`, `min`, `mean`, `var`
192
+ - Shape: `transpose`, `reshape`, `concatenate`, `stack`, `slice`, `set_slice`
193
+ - Elementwise: `masked_fill`
194
+ - Comparison: `==`, `!=`, `>`, `>=`, `<`, `<=`
195
+ - Type conversion: `dtype` (get / set)
196
+ - Utility: `zero_grad()` (reset gradients)
197
+ - Autograd: `backward()` (compute gradients for the computation graph)
198
+
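+ A small autograd sketch using the operations listed above; the values are arbitrary, and the `grad` attribute name is an assumption (only `backward()` and `zero_grad()` are documented here):
+ ```python
+ import neuralengine as ne
+ 
+ w = ne.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
+ x = ne.tensor([[0.5, -1.0], [2.0, 0.0]])
+ 
+ y = ne.sum(w * x + w ** 2)   # arithmetic, power and reduction ops build the graph
+ y.backward()                 # backpropagate through the dynamic graph
+ 
+ print(w.grad)                # assumed attribute holding the accumulated gradient
+ w.zero_grad()                # reset gradients before the next pass
+ ```
+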
199
+ ### Layers
200
+ - `ne.Linear(out_size, *in_size, bias=True, activation=None)`: Fully connected layer with optional activation.
201
+ - `ne.LSTM(...)`: Long Short-Term Memory layer with options for attention, bidirectionality, sequence/state output. You can build deep LSTM networks by stacking multiple LSTM layers. When building encoder-decoder models, ensure that the hidden units for the decoder's first layer are set correctly:
202
+ - For a standard LSTM, the hidden state shape for the last timestep is `(batch, hidden_units)`.
203
+ - For a bidirectional LSTM, the hidden and cell state shape becomes `(batch, hidden_units * 2)`.
204
+ - If attention is enabled, the hidden state shape is `(batch, 2 * hidden_units)` (self-attention); if `enc_size` is provided, the hidden state shape is `(batch, hidden_units + enc_size)` (cross-attention).
205
+ - If an LSTM layer takes its initial state from a prior layer, set its hidden units to match the output shape of that previous LSTM (accounting for bidirectionality and attention).
206
+ - `ne.MultiplicativeAttention(units, *in_size)`: Soft attention mechanism for sequence models.
207
+ - `ne.MultiHeadAttention(*in_size, num_heads=1)`: Multi-head attention layer for transformer and sequence models.
208
+ - `ne.Embedding(embed_size, vocab_size, timesteps=None)`: Embedding layer for mapping indices to dense vectors, with optional positional encoding.
209
+ - `ne.LayerNorm(*num_feat, eps=1e-7)`: Layer normalization for stabilizing training.
210
+ - `ne.Dropout(prob=0.5)`: Dropout regularization for reducing overfitting.
211
+ - `ne.Flatten()`: Flattens input tensors to 2D (batch, features).
212
+ - `ne.Layer.dtype = ne.DType`: Get or set the data type of a layer's parameters.
213
+ - `ne.Layer.freezed = True|False`: Freeze or unfreeze layer parameters during training.
214
+ - All layers inherit from a common base and support extensibility for custom architectures (a small layer stack is sketched below).
215
+
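+ As a small illustration of stacking these layers, following the constructor signatures above (the layer sizes and dropout probability are arbitrary):
+ ```python
+ import neuralengine as ne
+ 
+ # a small fully connected stack with normalization and dropout
+ model = ne.Model(
+     input_size=(28, 28),
+     optimizer=ne.Adam(),
+     loss=ne.CrossEntropy(),
+     metrics=ne.ClassificationMetrics(),
+     dtype=ne.DType.FLOAT32
+ )
+ model(
+     ne.Flatten(),                           # (batch, 28, 28) -> (batch, 784)
+     ne.Linear(128, activation=ne.ReLU()),
+     ne.LayerNorm(128),                      # normalize the 128 features
+     ne.Dropout(prob=0.3),
+     ne.Linear(10, activation=ne.Softmax()),
+ )
+ ```
+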
216
+ ### Activations
217
+ - `ne.Sigmoid()`: Sigmoid activation function.
218
+ - `ne.Tanh()`: Tanh activation function.
219
+ - `ne.ReLU(alpha=0, parametric=False)`: ReLU, Leaky ReLU, or Parametric ReLU activation.
220
+ - `ne.SiLU(beta=False)`: SiLU (Swish) activation function.
221
+ - `ne.Softmax(axis=-1)`: Softmax activation for classification tasks.
222
+ - All activations inherit from a common base and support extensibility for custom architectures.
223
+
224
+ ### Loss Functions
225
+ - `ne.CrossEntropy(binary=False, eps=1e-7)`: Categorical and binary cross-entropy loss for classification tasks.
226
+ - `ne.MSE()`: Mean Squared Error loss for regression.
227
+ - `ne.MAE()`: Mean Absolute Error loss for regression.
228
+ - `ne.Huber(delta=1.0)`: Huber loss, robust to outliers.
229
+ - `ne.GaussianNLL(eps=1e-7)`: Gaussian Negative Log Likelihood loss for probabilistic regression.
230
+ - `ne.KLDivergence(eps=1e-7)`: Kullback-Leibler Divergence loss for measuring distribution differences.
231
+ - All loss functions inherit from a common base and support autograd and loss accumulation.
232
+
233
+ ### Optimizers
234
+ - `ne.Adam(lr=1e-3, betas=(0.9, 0.99), eps=1e-7, reg=0)`: Adam optimizer (switches to RMSProp if only one beta is provided).
235
+ - `ne.SGD(lr=1e-2, reg=0, momentum=0, nesterov=False)`: Stochastic Gradient Descent with optional momentum and Nesterov acceleration.
236
+ - All optimizers support L2 regularization and gradient reset.
237
+
238
+ ### Metrics
239
+ - `ne.ClassificationMetrics(num_classes=None, acc=True, prec=False, rec=False, f1=False, eps=1e-7)`: Computes accuracy, precision, recall and F1 score for classification tasks.
240
+ - `ne.RMSE()`: Root Mean Squared Error for regression.
241
+ - `ne.R2(eps=1e-7)`: R2 Score for regression.
242
+ - `ne.Perplexity(eps=1e-7)`: Perplexity metric for generative models.
243
+ - All metrics store results as dictionaries and support batch evaluation and metric accumulation.
244
+
245
+ ### Model API
246
+ - `ne.Model(input_size, optimizer, loss, metrics, dtype)`: Create a model specifying input size, optimizer, loss function, metrics and data type for model layers.
247
+ - Add layers by calling the model instance: `model(layer1, layer2, ...)` or using `model.build(layer1, layer2, ...)`.
248
+ - `model.train(dataloader, epochs=10, patience=0, ckpt_interval=0)`: Train the model on a dataset, with support for metric/loss reporting, early stopping, and per-epoch checkpointing.
249
+ - `model.eval(dataloader, validate=False)`: Evaluate the model on a dataset or the validation split; disables gradient tracking via `with ne.NoGrad():`, prints loss and metrics, and returns the output tensor.
250
+ - Layers are set to training or evaluation mode automatically during `train` and `eval`.
251
+ - `model.save(filename, weights_only=False)`: Save the model architecture or model parameters to a file (see the sketch below).
252
+ - `model.load_params(filepath)`: Load model parameters from a saved file.
253
+ - `ne.Model.load_model(filepath)`: Load a model from a saved file.
254
+ - `print(model)`: Print a summary of the model architecture and configuration.
255
+
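+ Continuing the Example Usage section above, saving and restoring might look like this (the file names are illustrative):
+ ```python
+ # save the trained model, or only its parameters
+ model.save('mnist_model.ne')
+ model.save('mnist_weights.ne', weights_only=True)
+ 
+ # restore later
+ restored = ne.Model.load_model('mnist_model.ne')
+ restored.load_params('mnist_weights.ne')
+ ```
+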
256
+ ### DataLoader
257
+ - `ne.DataLoader(x, y, dtype=(None, None), batch_size=32, val_split=0, shuffle=True, random_seed=None, bar_size=30, bar_info='')`: Create a data loader for batching, shuffling and splitting datasets during training and evaluation.
258
+ - Supports lists, tuples, numpy arrays, pandas dataframes and tensors as input data.
259
+ - Provides batching, shuffling, splitting (train/validation) and progress bar display during iteration.
260
+ - Extensible for custom data loading strategies.
261
+
262
+ ### Utilities
263
+ - Tensor creation: `tensor(data, requires_grad=False, dtype=None)`, `zeros(*shape)`, `ones(*shape)`, `rand(*shape)`, `randn(*shape, xavier=False)`, `randint(low, high, *shape)` and their `_like` variants for matching shapes.
264
+ - Tensor operations: `log`, `sqrt`, `exp`, `abs`, `sum`, `max`, `min`, `mean`, `var`, `concat`, `stack`, `where`, `clip`, `array(data, dtype=None)` for elementwise, reduction and conversion operations.
265
+ - Preprocessing: `standardize(tensor)`, `normalize(tensor)`, `one_hot(labels)` for preparing input data.
266
+ - Autograd management: the `with NoGrad():` context manager disables gradient tracking within a block; the `@no_grad` decorator disables gradients for specific functions (see the example below).
267
+
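+ A few of these helpers in action (shapes and values are arbitrary):
+ ```python
+ import neuralengine as ne
+ 
+ x = ne.randn(100, 20, xavier=True)   # Xavier-initialized random tensor
+ labels = ne.randint(0, 10, 100)      # integer class labels in [0, 10)
+ y = ne.one_hot(labels)               # one-hot encode for classification
+ x = ne.normalize(x)                  # scale the input features
+ 
+ with ne.NoGrad():                    # no gradient tracking inside this block
+     stats = ne.mean(x), ne.var(x)
+ ```
+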
268
+ ### Type Validation
269
+ - `metaclass=ne.Typed`: Metaclass for enforcing type hints on class methods, properties, and subclasses. Add `STRICT = True` to the class definition to enforce strict type checking (see the sketch below).
270
+ - `@ne.Typed.validate(strict=True|False)`: Decorator for validating function arguments and return values based on type hints. The decorator takes precedence over the metaclass.
271
+ - `ne.Typed.validation(True|False)`: Enable or disable type validation globally.
272
+ - Data type enum: `ne.DType.FLOAT32`, `ne.DType.INT8`, `ne.DType.UINT16`, etc. Supports iteration and key access.
273
+
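+ A rough sketch of how these pieces fit together; `Scaler` and `describe` are made-up names, and the exact error raised on a mismatch is not specified here:
+ ```python
+ import neuralengine as ne
+ 
+ class Scaler(metaclass=ne.Typed):
+     STRICT = True                        # opt this class into strict checking
+ 
+     def scale(self, x: float, factor: float = 2.0) -> float:
+         return x * factor
+ 
+ @ne.Typed.validate(strict=True)          # decorator takes precedence over the metaclass
+ def describe(dtype: ne.DType) -> str:
+     return f"dtype in use: {dtype}"
+ 
+ # calls with mismatched argument types are flagged at runtime;
+ # validation can also be switched off globally:
+ ne.Typed.validation(False)
+ ```
+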
274
+ ### Extensibility
275
+ NeuralEngine is designed for easy extension and customization:
276
+ - **Custom Layers**: Create new layers by inheriting from the `Layer` base class and implementing the `forward(self, x)` method. You can add parameters, initialization logic, and custom computations as needed. All built-in layers follow this pattern, making it simple to add your own (see the sketch after this list).
277
+ - **Custom Losses**: Define new loss functions by inheriting from the `Loss` base class and implementing the `compute(self, z, y)` method. This allows you to integrate any custom loss logic with autograd support.
278
+ - **Custom Optimizers**: Implement new optimization algorithms by inheriting from the `Optimizer` base class and providing your own `step(self)` method. You can manage optimizer state and parameter updates as required.
279
+ - **Custom Metrics**: Add new metrics by inheriting from the `Metric` base class and implementing the `compute(self, z, y)` method. This allows you to track any performance measure with metric accumulation.
280
+ - **Custom DataLoaders**: Extend the `DataLoader` class to create specialized data loading strategies. Override the `__getitem__` method to define how batches are constructed.
281
+ - All core components are modular and can be replaced or extended for research or production use.
282
+
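+ A minimal sketch of the first two points. It assumes `Layer`, `Loss`, and `Tensor` are importable from the package root like the built-in components, that the base `__init__` takes no required arguments, and that `clip` accepts `(tensor, low, high)`:
+ ```python
+ import neuralengine as ne
+ 
+ class Clamp(ne.Layer):                   # hypothetical parameter-free layer
+     def __init__(self, low: float = -1.0, high: float = 1.0):
+         super().__init__()
+         self.low, self.high = low, high
+ 
+     def forward(self, x: ne.Tensor) -> ne.Tensor:
+         return ne.clip(x, self.low, self.high)
+ 
+ class MeanAbsolute(ne.Loss):             # hypothetical custom loss
+     def compute(self, z: ne.Tensor, y: ne.Tensor) -> ne.Tensor:
+         return ne.mean(ne.abs(z - y))
+ ```
+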
283
+ ## Contribution
284
+ Contributions are welcome! Please see the [CONTRIBUTING.md](CONTRIBUTING.md) file for details on how to set up your development environment and submit pull requests.
285
+
286
+ ## License
287
+ MIT License with an attribution clause. See the [LICENSE](LICENSE) file for details.
288
+
289
+ ## Attribution
290
+ If you use this project, please credit the original developer: Prajjwal Pratap Shah.
291
+
292
+ Special thanks to the Autograd Framework From Scratch project by Eduardo Leitão da Cunha Opice Leão, which served as a reference for tensor operations and autograd implementations.
293
+
294
+ ## Disclaimer
295
+ *NeuralEngine is an independent, open-source project developed for educational and research purposes. All product names, logos and brands are property of their respective owners. Use of these names does not imply any affiliation with or endorsement by them.*
@@ -0,0 +1,23 @@
1
+ CONTRIBUTING.md
2
+ LICENSE
3
+ MANIFEST.in
4
+ README.md
5
+ pyproject.toml
6
+ requirements.txt
7
+ setup.py
8
+ NeuralEngine.egg-info/PKG-INFO
9
+ NeuralEngine.egg-info/SOURCES.txt
10
+ NeuralEngine.egg-info/dependency_links.txt
11
+ NeuralEngine.egg-info/requires.txt
12
+ NeuralEngine.egg-info/top_level.txt
13
+ neuralengine/__init__.py
14
+ neuralengine/config.py
15
+ neuralengine/tensor.py
16
+ neuralengine/utils.py
17
+ neuralengine/nn/__init__.py
18
+ neuralengine/nn/dataload.py
19
+ neuralengine/nn/layers.py
20
+ neuralengine/nn/loss.py
21
+ neuralengine/nn/metrics.py
22
+ neuralengine/nn/model.py
23
+ neuralengine/nn/optim.py
@@ -0,0 +1,4 @@
1
+ numpy>=1.26.4
2
+
3
+ [cuda]
4
+ cupy-cuda12x>=12.0.0
@@ -0,0 +1 @@
1
+ neuralengine