spikezoo 0.2.3.5__py3-none-any.whl → 0.2.3.7__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. spikezoo/archs/bsf/models/bsf/__pycache__/align.cpython-39.pyc +0 -0
  2. spikezoo/archs/bsf/models/bsf/__pycache__/bsf.cpython-39.pyc +0 -0
  3. spikezoo/archs/bsf/models/bsf/__pycache__/rep.cpython-39.pyc +0 -0
  4. spikezoo/archs/spikeclip/__pycache__/nets.cpython-39.pyc +0 -0
  5. spikezoo/archs/ssir/models/__pycache__/layers.cpython-39.pyc +0 -0
  6. spikezoo/archs/ssir/models/__pycache__/networks.cpython-39.pyc +0 -0
  7. spikezoo/archs/ssml/__pycache__/cbam.cpython-39.pyc +0 -0
  8. spikezoo/archs/ssml/__pycache__/model.cpython-39.pyc +0 -0
  9. spikezoo/archs/stir/metrics/__pycache__/losses.cpython-39.pyc +0 -0
  10. spikezoo/archs/stir/models/__pycache__/Vgg19.cpython-39.pyc +0 -0
  11. spikezoo/archs/stir/models/__pycache__/networks_STIR.cpython-39.pyc +0 -0
  12. spikezoo/archs/stir/models/__pycache__/submodules.cpython-39.pyc +0 -0
  13. spikezoo/archs/stir/models/__pycache__/transformer_new.cpython-39.pyc +0 -0
  14. spikezoo/archs/stir/package_core/package_core/__pycache__/__init__.cpython-39.pyc +0 -0
  15. spikezoo/archs/stir/package_core/package_core/__pycache__/geometry.cpython-39.pyc +0 -0
  16. spikezoo/archs/stir/package_core/package_core/__pycache__/image_proc.cpython-39.pyc +0 -0
  17. spikezoo/archs/stir/package_core/package_core/__pycache__/losses.cpython-39.pyc +0 -0
  18. spikezoo/archs/stir/package_core/package_core/__pycache__/net_basics.cpython-39.pyc +0 -0
  19. spikezoo/archs/tfi/__pycache__/nets.cpython-39.pyc +0 -0
  20. spikezoo/archs/tfp/__pycache__/nets.cpython-39.pyc +0 -0
  21. spikezoo/archs/wgse/__pycache__/dwtnets.cpython-39.pyc +0 -0
  22. spikezoo/archs/wgse/__pycache__/submodules.cpython-39.pyc +0 -0
  23. spikezoo/archs/yourmodel/arch/__pycache__/net.cpython-39.pyc +0 -0
  24. spikezoo/archs/yourmodel/arch/net.py +35 -0
  25. spikezoo/datasets/__init__.py +20 -21
  26. spikezoo/datasets/base_dataset.py +25 -19
  27. spikezoo/datasets/{realworld_dataset.py → realdata_dataset.py} +5 -7
  28. spikezoo/datasets/reds_base_dataset.py +1 -1
  29. spikezoo/datasets/szdata_dataset.py +1 -1
  30. spikezoo/datasets/uhsr_dataset.py +1 -1
  31. spikezoo/datasets/yourdataset_dataset.py +23 -0
  32. spikezoo/models/__init__.py +11 -18
  33. spikezoo/models/base_model.py +10 -4
  34. spikezoo/models/yourmodel_model.py +22 -0
  35. spikezoo/pipeline/base_pipeline.py +17 -10
  36. spikezoo/pipeline/ensemble_pipeline.py +2 -1
  37. spikezoo/pipeline/train_cfgs.py +32 -29
  38. spikezoo/pipeline/train_pipeline.py +14 -14
  39. spikezoo/utils/spike_utils.py +1 -1
  40. spikezoo-0.2.3.7.dist-info/METADATA +151 -0
  41. {spikezoo-0.2.3.5.dist-info → spikezoo-0.2.3.7.dist-info}/RECORD +44 -41
  42. spikezoo/data/base/train/spike/203_part4_key_id151.dat +0 -0
  43. spikezoo-0.2.3.5.dist-info/METADATA +0 -258
  44. {spikezoo-0.2.3.5.dist-info → spikezoo-0.2.3.7.dist-info}/LICENSE.txt +0 -0
  45. {spikezoo-0.2.3.5.dist-info → spikezoo-0.2.3.7.dist-info}/WHEEL +0 -0
  46. {spikezoo-0.2.3.5.dist-info → spikezoo-0.2.3.7.dist-info}/top_level.txt +0 -0
@@ -1,258 +0,0 @@
- Metadata-Version: 2.2
- Name: spikezoo
- Version: 0.2.3.5
- Summary: A deep learning toolbox for spike-to-image models.
- Home-page: https://github.com/chenkang455/Spike-Zoo
- Author: Kang Chen
- Author-email: mrchenkang@stu.pku.edu.cn
- Requires-Python: >=3.7
- Description-Content-Type: text/markdown
- License-File: LICENSE.txt
- Requires-Dist: torch
- Requires-Dist: requests
- Requires-Dist: numpy
- Requires-Dist: tqdm
- Requires-Dist: scikit-image
- Requires-Dist: lpips
- Requires-Dist: pyiqa
- Requires-Dist: opencv-python
- Requires-Dist: thop
- Requires-Dist: pytorch-wavelets
- Requires-Dist: pytz
- Requires-Dist: PyWavelets
- Requires-Dist: pandas
- Requires-Dist: pillow
- Requires-Dist: scikit-learn
- Requires-Dist: scipy
- Requires-Dist: spikingjelly
- Requires-Dist: setuptools
- Dynamic: author
- Dynamic: author-email
- Dynamic: description
- Dynamic: description-content-type
- Dynamic: home-page
- Dynamic: requires-dist
- Dynamic: requires-python
- Dynamic: summary
-
- <p align="center">
- <img src="imgs/spike-zoo.png" width="350"/>
- </p>
- <h5 align="center">
-
- [![GitHub repo stars](https://img.shields.io/github/stars/chenkang455/Spike-Zoo?style=flat&logo=github&logoColor=whitesmoke&label=Stars)](https://github.com/chenkang455/Spike-Zoo/stargazers) [![GitHub Issues](https://img.shields.io/github/issues/chenkang455/Spike-Zoo?style=flat&logo=github&logoColor=whitesmoke&label=Issues)](https://github.com/chenkang455/Spike-Zoo/issues) <a href="https://badge.fury.io/py/spikezoo"><img src="https://badge.fury.io/py/spikezoo.svg" alt="PyPI version"></a> [![License](https://img.shields.io/badge/License-MIT-yellow)](https://github.com/chenkang455/Spike-Zoo)
- </h5>
-
- <!-- <h2 align="center">
- <a href="">⚡Spike-Zoo: A Toolbox for Spike-to-Image Reconstruction
- </a>
- </h2> -->
-
- ## 📖 About
- ⚡Spike-Zoo is the go-to library for state-of-the-art pretrained **spike-to-image** models designed to reconstruct images from spike streams. Whether you're looking for a simple inference solution or aiming to train your own spike-to-image models, ⚡Spike-Zoo is a modular toolbox that supports both, with key features including:
-
- - Fast inference with pre-trained models.
- - Training support for custom-designed spike-to-image models.
- - Specialized functions for processing spike data.
-
- > 📚Tutorials: https://spike-zoo.readthedocs.io/zh-cn/latest/#
-
- ## 🚩 Updates/Changelog
- * **25-02-02:** Release the `Spike-Zoo v0.2` code, which supports more methods and provides more usage options, such as training your own method from scratch.
- * **24-07-19:** Release the `Spike-Zoo v0.1` code for the baseline evaluation of SOTA methods.
-
- ## 🍾 Quick Start
- ### 1. Installation
- For users focused on **utilizing pretrained models for spike-to-image conversion**, we recommend installing Spike-Zoo using one of the following methods:
-
- * Install the latest stable version `0.2.3` from PyPI:
- ```
- pip install spikezoo
- ```
- * Install the latest development version `0.2.3` from the source code:
- ```
- git clone https://github.com/chenkang455/Spike-Zoo
- cd Spike-Zoo
- python setup.py install
- ```
-
- For users interested in **training their own spike-to-image model based on our framework**, we recommend cloning the repository and modifying the related code directly.
- ```
- git clone https://github.com/chenkang455/Spike-Zoo
- cd Spike-Zoo
- python setup.py develop
- ```
-
- ### 2. Inference
- Reconstructing images from a spike stream is straightforward with Spike-Zoo. Try the following code to run a single model:
- ``` python
- from spikezoo.pipeline import Pipeline, PipelineConfig
- import spikezoo as sz
- pipeline = Pipeline(
-     cfg=PipelineConfig(save_folder="results",version="v023"),
-     model_cfg=sz.METHOD.BASE,
-     dataset_cfg=sz.DATASET.BASE
- )
- ```
- You can also run multiple models at once by changing the pipeline (the `version` parameter corresponds to our different released versions in [Releases](https://github.com/chenkang455/Spike-Zoo/releases)):
- ``` python
- import spikezoo as sz
- from spikezoo.pipeline import EnsemblePipeline, EnsemblePipelineConfig
- pipeline = EnsemblePipeline(
-     cfg=EnsemblePipelineConfig(save_folder="results",version="v023"),
-     model_cfg_list=[
-         sz.METHOD.BASE,sz.METHOD.TFP,sz.METHOD.TFI,sz.METHOD.SPK2IMGNET,sz.METHOD.WGSE,
-         sz.METHOD.SSML,sz.METHOD.BSF,sz.METHOD.STIR,sz.METHOD.SPIKECLIP,sz.METHOD.SSIR],
-     dataset_cfg=sz.DATASET.BASE,
- )
- ```
- Having established the pipeline, we provide the following functions for working with these spike-to-image models.
-
- * I. Obtain the restoration metrics and save the recovered image from the given spike:
- ``` python
- # 1. spike-to-image from the given dataset
- pipeline.infer_from_dataset(idx = 0)
-
- # 2. spike-to-image from the given .dat file
- pipeline.infer_from_file(file_path = 'data/scissor.dat',width = 400,height=250)
-
- # 3. spike-to-image from the given spike
- spike = sz.load_vidar_dat("data/scissor.dat",width = 400,height = 250)
- pipeline.infer_from_spk(spike)
- ```
-
-
- * II. Save all images from the given dataset.
- ``` python
- pipeline.save_imgs_from_dataset()
- ```
-
- * III. Calculate the metrics for the specified dataset.
- ``` python
- pipeline.cal_metrics()
- ```
-
- * IV. Calculate the model statistics (params, FLOPs, latency) for the established pipeline.
- ``` python
- pipeline.cal_params()
- ```
-
- For detailed usage, please check [test_single.ipynb](examples/test/test_single.ipynb) and [test_ensemble.ipynb](examples/test/test_ensemble.ipynb).
-
- ### 3. Training
- We provide user-friendly code for training our `base` model (modified from `SpikeCLIP`) on the classic `REDS` dataset introduced in `Spk2ImgNet`:
- ``` python
- from spikezoo.pipeline import TrainPipelineConfig, TrainPipeline
- from spikezoo.datasets.reds_base_dataset import REDS_BASEConfig
- from spikezoo.models.base_model import BaseModelConfig
- pipeline = TrainPipeline(
-     cfg=TrainPipelineConfig(save_folder="results", epochs = 10),
-     dataset_cfg=REDS_BASEConfig(root_dir = "spikezoo/data/REDS_BASE"),
-     model_cfg=BaseModelConfig(),
- )
- pipeline.train()
- ```
- Training finishes in `2 minutes` on a single 4090 GPU, achieving `32.8 dB` in PSNR and `0.92` in SSIM.
-
- > 🌟 We encourage users to develop their own models with simple modifications to our framework; a tutorial will be released soon.
-
- We retrain all supported methods except `SPIKECLIP` on the REDS dataset (training scripts are in [examples/train_reds_base](examples/train_reds_base) and the evaluation script is in [test_REDS_base.py](examples/test/test_REDS_base.py)); the reported metrics are as follows:
-
- | Method | PSNR | SSIM | LPIPS | NIQE | BRISQUE | PIQE | Params (M) | FLOPs (G) | Latency (ms) |
- |----------------------|:-------:|:--------:|:---------:|:---------:|:----------:|:-------:|:------------:|:-----------:|:--------------:|
- | `tfi` | 16.503 | 0.454 | 0.382 | 7.289 | 43.17 | 49.12 | 0.00 | 0.00 | 3.60 |
- | `tfp` | 24.287 | 0.644 | 0.274 | 8.197 | 48.48 | 38.38 | 0.00 | 0.00 | 0.03 |
- | `spikeclip` | 21.873 | 0.578 | 0.333 | 7.802 | 42.08 | 54.01 | 0.19 | 23.69 | 1.27 |
- | `ssir` | 26.544 | 0.718 | 0.325 | 4.769 | 28.45 | 21.59 | 0.38 | 25.92 | 4.52 |
- | `ssml` | 33.697 | 0.943 | 0.088 | 4.669 | 32.48 | 37.30 | 2.38 | 386.02 | 244.18 |
- | `base` | 36.589 | 0.965 | 0.034 | 4.393 | 26.16 | 38.43 | 0.18 | 18.04 | 0.40 |
- | `stir` | 37.914 | 0.973 | 0.027 | 4.236 | 25.10 | 39.18 | 5.08 | 43.31 | 21.07 |
- | `wgse` | 39.036 | 0.978 | 0.023 | 4.231 | 25.76 | 44.11 | 3.81 | 415.26 | 73.62 |
- | `spk2imgnet` | 39.154 | 0.978 | 0.022 | 4.243 | 25.20 | 43.09 | 3.90 | 1000.50 | 123.38 |
- | `bsf` | 39.576 | 0.979 | 0.019 | 4.139 | 24.93 | 43.03 | 2.47 | 705.23 | 401.50 |
-
- ### 4. Model Usage
- We also provide a direct interface for users interested in using a spike-to-image model as part of their own work:
-
- ```python
- import spikezoo as sz
- from spikezoo.models.base_model import BaseModel, BaseModelConfig
- # input data
- spike = sz.load_vidar_dat("data/data.dat", width=400, height=250, out_format="tensor")
- spike = spike[None].cuda()
- print(f"Input spike shape: {spike.shape}")
- # net
- net = BaseModel(BaseModelConfig(model_params={"inDim": 41}))
- net.build_network(mode = "debug")
- # process
- recon_img = net(spike)
- print(recon_img.shape,recon_img.max(),recon_img.min())
- ```
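A minimal follow-up sketch for saving the reconstruction to disk, assuming `recon_img` from the block above is a `(1, 1, H, W)` tensor with values roughly in `[0, 1]` (the exact output range is not stated here); `opencv-python` is already a declared dependency:

```python
import cv2
import numpy as np

# Move the reconstruction to the CPU and drop the batch/channel dimensions.
img = recon_img.detach().cpu().numpy().squeeze()

# Clip to [0, 1] and write an 8-bit grayscale PNG.
cv2.imwrite("recon.png", (np.clip(img, 0.0, 1.0) * 255.0).astype(np.uint8))
```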
- For detailed usage, please check [test_model.ipynb](examples/test/test_model.ipynb).
-
- ### 5. Spike Utility
- #### I. Faster spike loading interface
- We provide a faster `load_vidar_dat` function implemented in `cpp` (by [@zeal-ye](https://github.com/zeal-ye)):
- ``` python
- import spikezoo as sz
- spike = sz.load_vidar_dat("data/scissor.dat",width = 400,height = 250,version='cpp')
- ```
- 🚀 Results on [test_load_dat.py](examples/test_load_dat.py) show that the `cpp` version is more than 10 times faster than the `python` version.
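A minimal timing sketch for reproducing this comparison locally; it assumes `version='python'` selects the pure-Python loader (only `version='cpp'` is shown above) and that `data/scissor.dat` is present:

```python
import time
import spikezoo as sz

def bench(version: str, runs: int = 5) -> float:
    """Return the average seconds per load of data/scissor.dat with the given backend."""
    start = time.perf_counter()
    for _ in range(runs):
        sz.load_vidar_dat("data/scissor.dat", width=400, height=250, version=version)
    return (time.perf_counter() - start) / runs

print(f"python backend: {bench('python'):.4f} s/load")
print(f"cpp backend:    {bench('cpp'):.4f} s/load")
```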
-
- #### II. Spike simulation pipeline
- We provide our overall spike simulation pipeline in [scripts](scripts/); modify the config in `run.sh` and run the following command to start the simulation:
- ``` bash
- bash run.sh
- ```
-
- #### III. Spike-related functions
- For other spike-related functions, please check [spike_utils.py](spikezoo/utils/spike_utils.py).
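As one illustration of the kind of processing these utilities support, here is a rough sketch (not a function from that file) of a firing-rate image, the idea behind the simple `tfp` baseline; it assumes `load_vidar_dat` returns a binary spike array shaped `(T, H, W)`:

```python
import numpy as np
import spikezoo as sz

# Average the binary spike stream over time to get a per-pixel firing rate in [0, 1].
spike = sz.load_vidar_dat("data/scissor.dat", width=400, height=250)
rate_img = np.asarray(spike, dtype=np.float32).mean(axis=0)
print(rate_img.shape, float(rate_img.min()), float(rate_img.max()))
```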
-
- ## 📅 TODO
- - [x] Support the overall pipeline for spike simulation.
- - [ ] Provide the tutorials.
- - [ ] Support more training settings.
- - [ ] Support more spike-based image reconstruction methods and datasets.
-
- ## 🤗 Supports
- Run the following code to list the supported models, datasets, and metrics:
- ``` python
- import spikezoo as sz
- print(sz.METHODS)
- print(sz.DATASETS)
- print(sz.METRICS)
- ```
- **Supported Models:**
- | Models | Source |
- | ---- | ---- |
- | `tfp`,`tfi` | Spike camera and its coding methods |
- | `spk2imgnet` | Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream |
- | `wgse` | Learning Temporal-Ordered Representation for Spike Streams Based on Discrete Wavelet Transforms |
- | `ssml` | Self-Supervised Mutual Learning for Dynamic Scene Reconstruction of Spiking Camera |
- | `ssir` | Spike Camera Image Reconstruction Using Deep Spiking Neural Networks |
- | `bsf` | Boosting Spike Camera Image Reconstruction from a Perspective of Dealing with Spike Fluctuations |
- | `stir` | Spatio-Temporal Interactive Learning for Efficient Image Reconstruction of Spiking Cameras |
- | `base`,`spikeclip` | Rethinking High-speed Image Reconstruction Framework with Spike Camera |
-
- **Supported Datasets:**
- | Datasets | Source |
- | ---- | ---- |
- | `reds_base` | Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream |
- | `uhsr` | Recognizing Ultra-High-Speed Moving Objects with Bio-Inspired Spike Camera |
- | `realworld` | `recVidarReal2019`,`momVidarReal2021` in [SpikeCV](https://github.com/Zyj061/SpikeCV) |
- | `szdata` | SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams |
-
-
- ## ✨‍ Acknowledgment
- Our code is built on the open-source projects of [SpikeCV](https://spikecv.github.io/), [IQA-Pytorch](https://github.com/chaofengc/IQA-PyTorch), [BasicSR](https://github.com/XPixelGroup/BasicSR) and [NeRFStudio](https://github.com/nerfstudio-project/nerfstudio). We appreciate the effort of the contributors to these repositories. Thanks to [@ruizhao26](https://github.com/ruizhao26), [@shiyan_chen](https://github.com/hnmizuho) and [@Leozhangjiyuan](https://github.com/Leozhangjiyuan) for their help in building this project.
-
- ## 📑 Citation
- If you find our code helpful to your research, please consider using the following citation:
- ```
- @misc{spikezoo,
-     title={{Spike-Zoo}: A Toolbox for Spike-to-Image Reconstruction},
-     author={Kang Chen and Zhiyuan Ye and Tiejun Huang and Zhaofei Yu},
-     year={2025},
-     howpublished = "[Online]. Available: \url{https://github.com/chenkang455/Spike-Zoo}"
- }
- ```