welvet-0.0.1.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
welvet-0.0.1/MANIFEST.in ADDED
@@ -0,0 +1,5 @@
1
+ include README.md
2
+ include LICENSE
3
+ recursive-include src/loom_py *.so *.dylib *.dll
4
+ exclude src/loom_py/**/*.h
5
+ exclude src/loom_py/**/simple_bench*
welvet-0.0.1/PKG-INFO ADDED
@@ -0,0 +1,374 @@
1
+ Metadata-Version: 2.4
2
+ Name: welvet
3
+ Version: 0.0.1
4
+ Summary: Wrapper for Embedding Loom Via External (C-ABI) Toolchain — GPU-accelerated neural networks with WebGPU binding/bridge
5
+ Author: OpenFluke / Samuel Watson
6
+ License: Apache-2.0
7
+ Project-URL: Homepage, https://github.com/openfluke/loom
8
+ Project-URL: Source, https://github.com/openfluke/loom
9
+ Project-URL: Issues, https://github.com/openfluke/loom/issues
10
+ Project-URL: Documentation, https://github.com/openfluke/loom/tree/main/python
11
+ Keywords: neural-network,machine-learning,webgpu,gpu,deep-learning
12
+ Classifier: Programming Language :: Python :: 3
13
+ Classifier: Programming Language :: Python :: 3.8
14
+ Classifier: Programming Language :: Python :: 3.9
15
+ Classifier: Programming Language :: Python :: 3.10
16
+ Classifier: Programming Language :: Python :: 3.11
17
+ Classifier: Programming Language :: Python :: 3.12
18
+ Classifier: Programming Language :: Python :: Implementation :: CPython
19
+ Classifier: Operating System :: OS Independent
20
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
21
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
22
+ Requires-Python: >=3.8
23
+ Description-Content-Type: text/markdown
24
+
25
+ # LOOM Python Bindings
26
+
27
+ High-performance neural network library with WebGPU acceleration for Python.
28
+
29
+ ## Installation
30
+
31
+ ```bash
32
+ pip install welvet  # the package installs the loom_py Python module
33
+ ```
34
+
35
+ ## Quick Start
36
+
37
+ ```python
38
+ import loom_py
39
+
40
+ # Create a neural network with GPU
41
+ network = loom_py.create_network(
42
+     input_size=4,
43
+     grid_rows=1,
44
+     grid_cols=1,
45
+     layers_per_cell=2,  # 2 layers: hidden + output
46
+     use_gpu=True
47
+ )
48
+
49
+ # Configure network architecture: 4 -> 8 -> 2
50
+ loom_py.configure_sequential_network(
51
+     network,
52
+     layer_sizes=[4, 8, 2],
53
+     activations=[loom_py.Activation.RELU, loom_py.Activation.SIGMOID]
54
+ )
55
+
56
+ # Training data
57
+ inputs = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
58
+ targets = [[1.0, 0.0], [0.0, 1.0]]
59
+
60
+ # Train for 10 epochs
61
+ for epoch in range(10):
62
+     loss = loom_py.train_epoch(network, inputs, targets, learning_rate=0.1)
63
+     print(f"Epoch {epoch+1}: loss = {loss:.4f}")
64
+
65
+ # Test the network
66
+ output = loom_py.forward(network, [0.1, 0.2, 0.3, 0.4])
67
+ print(f"Output: {output}")
68
+
69
+ # Clean up
70
+ loom_py.cleanup_gpu(network)
71
+ loom_py.free_network(network)
72
+ ```
73
+
74
+ ## Features
75
+
76
+ - 🚀 **GPU Acceleration**: WebGPU-powered compute shaders for high performance
77
+ - 🎯 **Cross-Platform**: Pre-compiled binaries for Linux, macOS, Windows, Android
78
+ - 📦 **Easy Integration**: Simple Python API with high-level helpers
79
+ - ⚡ **Grid Architecture**: Flexible grid-based neural network topology
80
+ - 🔧 **Low-Level Access**: Direct control over layers and training loop
81
+ - 🎓 **Training Helpers**: Built-in functions for common training tasks
82
+
83
+ ## API Reference
84
+
85
+ ### Network Management
86
+
87
+ #### `create_network(input_size, grid_rows=2, grid_cols=2, layers_per_cell=3, use_gpu=False)`
88
+
89
+ Creates a new grid-based neural network.
90
+
91
+ **Parameters:**
92
+
93
+ - `input_size` (int): Number of input features
94
+ - `grid_rows` (int): Grid rows (default: 2)
95
+ - `grid_cols` (int): Grid columns (default: 2)
96
+ - `layers_per_cell` (int): Layers per grid cell (default: 3)
97
+ - `use_gpu` (bool): Enable GPU acceleration (default: False)
98
+
99
+ **Simplified API:**
100
+
101
+ - `create_network(input_size, hidden_size, output_size, use_gpu=False)` - Auto-calculates grid
102
+
103
+ **Returns:** Network handle (int)
104
+
105
+ #### `free_network(handle)`
106
+
107
+ Frees network resources.
108
+
109
+ **Parameters:**
110
+
111
+ - `handle` (int): Network handle
112
+
113
+ ### Layer Configuration
114
+
115
+ #### `Activation` (Class)
116
+
117
+ Activation function constants:
118
+
119
+ - `Activation.RELU` (0) - ReLU activation
120
+ - `Activation.SIGMOID` (1) - Sigmoid activation
121
+ - `Activation.TANH` (2) - Tanh activation
122
+ - `Activation.LINEAR` (3) - Linear activation
123
+
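As a point of reference, these constants select the standard activation functions. A minimal pure-Python sketch of the math behind each code (illustrative only; the actual kernels run inside the LOOM C ABI, on CPU or GPU):

```python
import math

# Integer codes mirror the documented Activation constants.
ACTIVATIONS = {
    0: lambda x: max(0.0, x),                 # RELU
    1: lambda x: 1.0 / (1.0 + math.exp(-x)),  # SIGMOID
    2: math.tanh,                             # TANH
    3: lambda x: x,                           # LINEAR
}

for code, fn in ACTIVATIONS.items():
    print(code, fn(0.5))
```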
124
+ #### `init_dense_layer(input_size, output_size, activation=0)`
125
+
126
+ Initialize a dense layer configuration.
127
+
128
+ **Parameters:**
129
+
130
+ - `input_size` (int): Input neurons
131
+ - `output_size` (int): Output neurons
132
+ - `activation` (int): Activation function (use `Activation` constants)
133
+
134
+ **Returns:** Layer configuration dict
135
+
136
+ #### `set_layer(handle, row, col, layer_index, layer_config)`
137
+
138
+ Set a layer in the network grid.
139
+
140
+ **Parameters:**
141
+
142
+ - `handle` (int): Network handle
143
+ - `row` (int): Grid row (0-indexed)
144
+ - `col` (int): Grid column (0-indexed)
145
+ - `layer_index` (int): Layer index in cell (0-indexed)
146
+ - `layer_config` (dict): Layer config from `init_dense_layer()`
147
+
148
+ #### `configure_sequential_network(handle, layer_sizes, activations=None)`
149
+
150
+ High-level helper to configure a simple feedforward network.
151
+
152
+ **Parameters:**
153
+
154
+ - `handle` (int): Network handle (must have 1x1 grid)
155
+ - `layer_sizes` (List[int]): Layer sizes `[input, hidden1, ..., output]`
156
+ - `activations` (List[int], optional): Activation for each layer. Defaults to ReLU for hidden, Sigmoid for output.
157
+
158
+ **Example:**
159
+
160
+ ```python
161
+ net = create_network(input_size=784, grid_rows=1, grid_cols=1, layers_per_cell=2)
162
+ configure_sequential_network(net, [784, 128, 10]) # MNIST classifier
163
+ ```
164
+
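Conceptually, the helper expands `layer_sizes` into one dense layer per consecutive pair of sizes, then wires them up with `init_dense_layer`/`set_layer`. A sketch of just that expansion step (`plan_layers` is a hypothetical illustration, not part of the loom_py API), using the documented defaults of ReLU for hidden layers and Sigmoid for the output:

```python
RELU, SIGMOID = 0, 1  # documented Activation codes

def plan_layers(layer_sizes, activations=None):
    """Map [in, h1, ..., out] to (input_size, output_size, activation) triples."""
    n_layers = len(layer_sizes) - 1
    if activations is None:
        # Documented default: ReLU for hidden layers, Sigmoid for the output.
        activations = [RELU] * (n_layers - 1) + [SIGMOID]
    return list(zip(layer_sizes[:-1], layer_sizes[1:], activations))

print(plan_layers([784, 128, 10]))  # [(784, 128, 0), (128, 10, 1)]
```

Each resulting triple would feed one `init_dense_layer(...)` call followed by `set_layer(net, 0, 0, i, ...)`.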
165
+ #### `get_network_info(handle)`
166
+
167
+ Get network information.
168
+
169
+ **Returns:** Dict with `type`, `gpu_enabled`, `grid_rows`, `grid_cols`, `layers_per_cell`, `total_layers`
170
+
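The `total_layers` field is presumably the product of the grid dimensions and the layers per cell:

```python
def total_layers(grid_rows, grid_cols, layers_per_cell):
    # e.g. a 2x2 grid with 3 layers per cell holds 12 layers in total
    return grid_rows * grid_cols * layers_per_cell

print(total_layers(2, 2, 3))  # 12
```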
171
+ ### Operations
172
+
173
+ #### `forward(handle, input_data)`
174
+
175
+ Performs forward pass through the network.
176
+
177
+ **Parameters:**
178
+
179
+ - `handle` (int): Network handle
180
+ - `input_data` (List[float]): Input vector
181
+
182
+ **Returns:** Output vector (List[float])
183
+
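For intuition, each dense layer inside the pass computes `y = activation(W·x + b)`. A pure-Python sketch of that per-layer step (the weight values here are made up; the real computation happens in the C ABI / WebGPU kernels):

```python
import math

def dense_forward(x, weights, biases, activation):
    """One dense layer: y[j] = activation(sum_i weights[j][i] * x[i] + biases[j])."""
    return [activation(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

W = [[1.0, 0.0], [0.0, 1.0]]  # hypothetical 2x2 weights
b = [0.0, 0.0]
print(dense_forward([0.5, -0.5], W, b, sigmoid))
```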
184
+ #### `backward(handle, target_data)`
185
+
186
+ Performs backward pass for training.
187
+
188
+ **Parameters:**
189
+
190
+ - `handle` (int): Network handle
191
+ - `target_data` (List[float]): Target/label vector
192
+
193
+ #### `update_weights(handle, learning_rate)`
194
+
195
+ Updates network weights using computed gradients.
196
+
197
+ **Parameters:**
198
+
199
+ - `handle` (int): Network handle
200
+ - `learning_rate` (float): Learning rate for gradient descent
201
+
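The update itself is plain gradient descent: every parameter moves a step of `learning_rate` against its stored gradient. A sketch of the elementwise rule (illustrative only, not the C ABI internals):

```python
def sgd_update(weights, grads, learning_rate):
    # w <- w - lr * dL/dw, applied elementwise
    return [w - learning_rate * g for w, g in zip(weights, grads)]

print(sgd_update([0.5, -0.3], [0.2, -0.1], 0.1))
```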
202
+ ### Training Helpers
203
+
204
+ #### `train_epoch(handle, inputs, targets, learning_rate=0.01)`
205
+
206
+ Train the network for one epoch.
207
+
208
+ **Parameters:**
209
+
210
+ - `handle` (int): Network handle
211
+ - `inputs` (List[List[float]]): List of input vectors
212
+ - `targets` (List[List[float]]): List of target vectors
213
+ - `learning_rate` (float): Learning rate (default: 0.01)
214
+
215
+ **Returns:** Average loss for the epoch (float)
216
+
217
+ **Example:**
218
+
219
+ ```python
220
+ loss = train_epoch(net, train_inputs, train_targets, learning_rate=0.1)
221
+ print(f"Epoch loss: {loss:.4f}")
222
+ ```
223
+
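`train_epoch` is equivalent to driving the low-level calls yourself: one `forward`/`backward`/`update_weights` round per sample, with the per-sample losses averaged. A runnable sketch of that loop shape against a toy one-weight stand-in (`ToyNet` is hypothetical scaffolding, not the loom_py API, and squared-error loss is an assumption):

```python
class ToyNet:
    """One-weight stand-in so the loop shape is runnable without the C ABI."""
    def __init__(self):
        self.w, self.grad, self.x = 0.5, 0.0, 0.0

    def forward(self, x):          # stands in for loom_py.forward(handle, x)
        self.x = x[0]
        return [self.w * self.x]

    def backward(self, target):    # stands in for loom_py.backward(handle, target)
        self.grad = 2.0 * (self.w * self.x - target[0]) * self.x

    def update_weights(self, lr):  # stands in for loom_py.update_weights(handle, lr)
        self.w -= lr * self.grad

def train_epoch(net, inputs, targets, learning_rate=0.01):
    total = 0.0
    for x, t in zip(inputs, targets):
        y = net.forward(x)
        total += (y[0] - t[0]) ** 2   # squared-error loss (assumed)
        net.backward(t)
        net.update_weights(learning_rate)
    return total / len(inputs)        # average loss over the epoch

net = ToyNet()
losses = [train_epoch(net, [[1.0], [2.0]], [[2.0], [4.0]], learning_rate=0.05)
          for _ in range(20)]
print(f"first: {losses[0]:.4f}  last: {losses[-1]:.6f}")  # loss shrinks
```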
224
+ ### GPU Management
225
+
226
+ #### `initialize_gpu(handle)`
227
+
228
+ Explicitly initialize GPU resources.
229
+
230
+ **Returns:** True if successful, False otherwise
231
+
232
+ #### `cleanup_gpu(handle)`
233
+
234
+ Release GPU resources.
235
+
236
+ **Parameters:**
237
+
238
+ - `handle` (int): Network handle
239
+
240
+ #### `get_version()`
241
+
242
+ Get LOOM library version string.
243
+
244
+ **Returns:** Version string (e.g., "LOOM C ABI v1.0")
245
+
246
+ ## Examples
247
+
248
+ ### Basic Training Example
249
+
250
+ ```python
251
+ import loom_py
252
+
253
+ # Create network with GPU
254
+ net = loom_py.create_network(
255
+     input_size=4,
256
+     grid_rows=1,
257
+     grid_cols=1,
258
+     layers_per_cell=2,
259
+     use_gpu=True
260
+ )
261
+
262
+ # Configure architecture: 4 -> 8 -> 2
263
+ loom_py.configure_sequential_network(net, [4, 8, 2])
264
+
265
+ # Training data
266
+ inputs = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
267
+ targets = [[1.0, 0.0], [0.0, 1.0]]
268
+
269
+ # Train for 50 epochs
270
+ for epoch in range(50):
271
+     loss = loom_py.train_epoch(net, inputs, targets, learning_rate=0.1)
272
+     if (epoch + 1) % 10 == 0:
273
+         print(f"Epoch {epoch+1}: loss = {loss:.6f}")
274
+
275
+ # Test
276
+ output = loom_py.forward(net, [0.1, 0.2, 0.3, 0.4])
277
+ print(f"Output: {output}")
278
+
279
+ # Cleanup
280
+ loom_py.cleanup_gpu(net)
281
+ loom_py.free_network(net)
282
+ ```
283
+
284
+ ### Custom Layer Configuration
285
+
286
+ ```python
287
+ import loom_py
288
+
289
+ # Create network
290
+ net = loom_py.create_network(
291
+     input_size=10,
292
+     grid_rows=2,
293
+     grid_cols=2,
294
+     layers_per_cell=3,
295
+     use_gpu=False
296
+ )
297
+
298
+ # Configure individual layers
299
+ for row in range(2):
300
+     for col in range(2):
301
+         # Layer 0: 10 -> 20 (ReLU)
302
+         layer0 = loom_py.init_dense_layer(10, 20, loom_py.Activation.RELU)
303
+         loom_py.set_layer(net, row, col, 0, layer0)
304
+
305
+         # Layer 1: 20 -> 15 (Tanh)
306
+         layer1 = loom_py.init_dense_layer(20, 15, loom_py.Activation.TANH)
307
+         loom_py.set_layer(net, row, col, 1, layer1)
308
+
309
+         # Layer 2: 15 -> 5 (Sigmoid)
310
+         layer2 = loom_py.init_dense_layer(15, 5, loom_py.Activation.SIGMOID)
311
+         loom_py.set_layer(net, row, col, 2, layer2)
312
+
313
+ # Network is now configured
314
+ info = loom_py.get_network_info(net)
315
+ print(f"Total layers: {info['total_layers']}")
316
+
317
+ loom_py.free_network(net)
318
+ ```
319
+
320
+ ## Testing
321
+
322
+ Run the included examples to verify installation:
323
+
324
+ ```bash
325
+ # Basic GPU training test
326
+ python examples/train_gpu.py
327
+ ```
328
+
329
+ Or test programmatically:
330
+
331
+ ```python
332
+ import loom_py
333
+
334
+ # Test basic functionality
335
+ net = loom_py.create_network(input_size=2, grid_rows=1, grid_cols=1,
336
+                              layers_per_cell=2, use_gpu=False)
337
+ loom_py.configure_sequential_network(net, [2, 4, 2])
338
+
339
+ # Verify forward pass works
340
+ output = loom_py.forward(net, [0.5, 0.5])
341
+ assert len(output) == 2, "Forward pass failed"
342
+
343
+ # Verify training works
344
+ inputs = [[0.0, 0.0], [1.0, 1.0]]
345
+ targets = [[1.0, 0.0], [0.0, 1.0]]
346
+ loss = loom_py.train_epoch(net, inputs, targets, learning_rate=0.1)
347
+ assert loss > 0, "Training failed"
348
+
349
+ loom_py.free_network(net)
350
+ print("✅ All tests passed!")
351
+ ```
352
+
353
+ ## Platform Support
354
+
355
+ Pre-compiled binaries included for:
356
+
357
+ - **Linux**: x86_64, ARM64
358
+ - **macOS**: ARM64 (Apple Silicon)
359
+ - **Windows**: x86_64
360
+ - **Android**: ARM64
361
+
362
+ ## Building from Source
363
+
364
+ See the main [LOOM repository](https://github.com/openfluke/loom) for building the C ABI from source.
365
+
366
+ ## License
367
+
368
+ Apache License 2.0
369
+
370
+ ## Links
371
+
372
+ - [GitHub Repository](https://github.com/openfluke/loom)
373
+ - [C ABI Documentation](https://github.com/openfluke/loom/tree/main/cabi)
374
+ - [Issue Tracker](https://github.com/openfluke/loom/issues)
welvet-0.0.1/README.md ADDED
@@ -0,0 +1,350 @@
(Content identical to the README text embedded in the PKG-INFO long description above.)