sparsepixels 0.1.0__tar.gz → 0.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,107 @@
Metadata-Version: 2.4
Name: sparsepixels
Version: 0.2.0
Summary: Efficient convolution for sparse data on FPGAs
Home-page: https://github.com/hftsoi/sparse-pixels
Author: Ho Fung Tsoi
Author-email: ho.fung.tsoi@cern.ch
License: MIT
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: tensorflow
Requires-Dist: keras>=3.0
Requires-Dist: HGQ2
Dynamic: license-file

19
+ <p align="center">
20
+ <img src="https://raw.githubusercontent.com/hftsoi/sparse-pixels/main/docs/figs/logo.png" width="300" />
21
+ </p>
22
+
23
+ <p align="center">
24
+ <img src="https://raw.githubusercontent.com/hftsoi/sparse-pixels/main/docs/figs/sparsepixels.png" width="900"/>
25
+ </p>
26
+
27
+ <p align="center">
28
+ <img src="https://raw.githubusercontent.com/hftsoi/sparse-pixels/main/docs/figs/cnn_standard.gif" width="400" />
29
+ <img src="https://raw.githubusercontent.com/hftsoi/sparse-pixels/main/docs/figs/cnn_sparse.gif" width="400" />
30
+ </p>
31
+
32
+ # SparsePixels: Efficient convolution for sparse data on FPGAs
33
+
34
+ [![arXiv](https://img.shields.io/badge/arXiv-2512.06208-b31b1b.svg?style=flat-square)](https://arxiv.org/abs/2512.06208)
35
+ [![PyPI - Version](https://img.shields.io/pypi/v/sparsepixels?color=orange&style=flat-square)](https://pypi.org/project/sparsepixels)
36
+
37
+ > **Note:** We are actively working on hls4ml integration to auto-convert sparse models to HLS, along with a major upgrade with partial parallelization and streaming for sparse layers in HLS. Stay tuned!
38
+
39
+ ## Installation
40
+
41
+ With Python >= 3.10:
42
+
43
+ ```
44
+ pip install sparsepixels
45
+ ```
46
+
47
+ ## Getting Started
48
+
49
+ Import sparse layers and quantization library (HGQ2):
50
+
51
+ ```python
52
+ import keras
53
+ from keras.layers import Flatten, Activation, ReLU
54
+ from hgq.layers import QConv2D, QDense
55
+ from hgq.config import QuantizerConfigScope, LayerConfigScope
56
+ from hgq.quantizer.config import QuantizerConfig
57
+ from sparsepixels.layers import InputReduce, QConv2DSparse, AveragePooling2DSparse
58
+ ```
59
+
60
+ Build an example sparse CNN within HGQ2 quantization scopes:
61
+
62
+ ```python
63
+ with (
64
+ QuantizerConfigScope(place='all', default_q_type='kbi', overflow_mode='SAT_SYM'),
65
+ QuantizerConfigScope(place='datalane', default_q_type='kif', overflow_mode='WRAP'),
66
+ LayerConfigScope(enable_ebops=False, enable_iq=False),
67
+ ):
68
+ x_in = keras.Input(shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3]), name='x_in')
69
+
70
+ # Sparse input reduction: retain up to n_max_pixels active pixels
71
+ x, keep_mask = InputReduce(n_max_pixels=20, threshold=0.1, name='input_reduce')(x_in)
72
+
73
+ # Sparse convolution
74
+ x = QConv2DSparse(filters=3, kernel_size=3, name='conv1', padding='same', strides=1,
75
+ bq_conf=QuantizerConfig('default', 'bias'))([x, keep_mask])
76
+ x = ReLU(name='relu1')(x)
77
+
78
+ # Sparse pooling
79
+ x, keep_mask = AveragePooling2DSparse(2, name='pool1')([x, keep_mask])
80
+
81
+ x = Flatten(name='flatten')(x)
82
+ x = QDense(10, name='dense1', activation='relu')(x)
83
+ x = Activation('softmax', name='softmax')(x)
84
+
85
+ model = keras.Model(x_in, x)
86
+ ```
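To build intuition for the input-reduction step, here is a plain-NumPy sketch of the idea: keep at most `n_max_pixels` of the strongest pixels above a threshold and zero out the rest, returning a keep mask. The helper `input_reduce_sketch` is hypothetical and illustrative only, not the library's `InputReduce` implementation.

```python
import numpy as np

def input_reduce_sketch(x, n_max_pixels, threshold):
    """Illustrative (non-library) sparse input reduction.

    x: (H, W, C) image. Pixels whose max channel value exceeds
    `threshold` are candidates; at most `n_max_pixels` of the
    strongest candidates are kept, the rest are zeroed.
    Returns the reduced image and a boolean (H, W) keep mask.
    """
    strength = x.max(axis=-1)            # per-pixel activation strength
    mask = strength > threshold          # candidate pixels
    if mask.sum() > n_max_pixels:
        # keep only the n_max_pixels strongest candidates
        flat = np.where(mask.ravel(), strength.ravel(), -np.inf)
        keep_idx = np.argsort(flat)[-n_max_pixels:]
        mask = np.zeros_like(mask).ravel()
        mask[keep_idx] = True
        mask = mask.reshape(strength.shape)
    return x * mask[..., None], mask
```

Downstream sparse layers then only need to touch the few unmasked pixels, which is what bounds the FPGA resource usage.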

We are working on hls4ml integration that automatically parses the sparse layers into HLS.
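The resource saving of a sparse convolution can be sketched in plain NumPy: compute outputs only at the positions flagged in the keep mask and leave everything else at zero. This `conv2d_sparse_sketch` helper is a hypothetical single-channel illustration of the concept, not the quantized `QConv2DSparse` layer itself.

```python
import numpy as np

def conv2d_sparse_sketch(x, mask, kernel):
    """Illustrative (non-library) sparse 2-D convolution, 'same' padding.

    Only output positions flagged in `mask` are computed; all other
    outputs stay zero, so work scales with the number of active pixels
    rather than the full grid. x: (H, W), mask: (H, W) bool, kernel: (3, 3).
    """
    xp = np.pad(x, 1)                       # zero-pad for 'same' output size
    out = np.zeros_like(x, dtype=float)
    for i, j in zip(*np.nonzero(mask)):     # loop over active pixels only
        out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * kernel)
    return out
```

With 20 active pixels on a large grid, this evaluates 20 dot products instead of one per output pixel, which is the trade-off the animations above visualize.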

## Documentation

## Citation

If you find this useful in your research, please consider citing:

```
@article{Tsoi:2025nvg,
    author = "Tsoi, Ho Fung and Rankin, Dylan and Loncar, Vladimir and Harris, Philip",
    title = "{SparsePixels: Efficient Convolution for Sparse Data on FPGAs}",
    eprint = "2512.06208",
    archivePrefix = "arXiv",
    primaryClass = "cs.AR",
    month = "12",
    year = "2025"
}
```
