spikezoo 0.2.3.4__py3-none-any.whl → 0.2.3.6__py3-none-any.whl

Files changed (55)
  1. spikezoo/archs/__pycache__/__init__.cpython-39.pyc +0 -0
  2. spikezoo/archs/base/__pycache__/nets.cpython-39.pyc +0 -0
  3. spikezoo/archs/bsf/models/bsf/__pycache__/align.cpython-39.pyc +0 -0
  4. spikezoo/archs/bsf/models/bsf/__pycache__/bsf.cpython-39.pyc +0 -0
  5. spikezoo/archs/bsf/models/bsf/__pycache__/rep.cpython-39.pyc +0 -0
  6. spikezoo/archs/spikeclip/__pycache__/nets.cpython-39.pyc +0 -0
  7. spikezoo/archs/spk2imgnet/__pycache__/DCNv2.cpython-39.pyc +0 -0
  8. spikezoo/archs/spk2imgnet/__pycache__/align_arch.cpython-39.pyc +0 -0
  9. spikezoo/archs/spk2imgnet/__pycache__/nets.cpython-39.pyc +0 -0
  10. spikezoo/archs/ssir/models/__pycache__/layers.cpython-39.pyc +0 -0
  11. spikezoo/archs/ssir/models/__pycache__/networks.cpython-39.pyc +0 -0
  12. spikezoo/archs/ssml/__pycache__/cbam.cpython-39.pyc +0 -0
  13. spikezoo/archs/ssml/__pycache__/model.cpython-39.pyc +0 -0
  14. spikezoo/archs/stir/metrics/__pycache__/losses.cpython-39.pyc +0 -0
  15. spikezoo/archs/stir/models/__pycache__/Vgg19.cpython-39.pyc +0 -0
  16. spikezoo/archs/stir/models/__pycache__/networks_STIR.cpython-39.pyc +0 -0
  17. spikezoo/archs/stir/models/__pycache__/submodules.cpython-39.pyc +0 -0
  18. spikezoo/archs/stir/models/__pycache__/transformer_new.cpython-39.pyc +0 -0
  19. spikezoo/archs/stir/package_core/package_core/__pycache__/__init__.cpython-39.pyc +0 -0
  20. spikezoo/archs/stir/package_core/package_core/__pycache__/geometry.cpython-39.pyc +0 -0
  21. spikezoo/archs/stir/package_core/package_core/__pycache__/image_proc.cpython-39.pyc +0 -0
  22. spikezoo/archs/stir/package_core/package_core/__pycache__/losses.cpython-39.pyc +0 -0
  23. spikezoo/archs/stir/package_core/package_core/__pycache__/net_basics.cpython-39.pyc +0 -0
  24. spikezoo/archs/tfi/__pycache__/nets.cpython-39.pyc +0 -0
  25. spikezoo/archs/tfp/__pycache__/nets.cpython-39.pyc +0 -0
  26. spikezoo/archs/wgse/__pycache__/dwtnets.cpython-39.pyc +0 -0
  27. spikezoo/archs/wgse/__pycache__/submodules.cpython-39.pyc +0 -0
  28. spikezoo/archs/yourmodel/arch/__pycache__/net.cpython-39.pyc +0 -0
  29. spikezoo/archs/yourmodel/arch/net.py +35 -0
  30. spikezoo/datasets/__init__.py +20 -21
  31. spikezoo/datasets/base_dataset.py +26 -21
  32. spikezoo/datasets/{realworld_dataset.py → realdata_dataset.py} +5 -7
  33. spikezoo/datasets/reds_base_dataset.py +1 -1
  34. spikezoo/datasets/szdata_dataset.py +1 -5
  35. spikezoo/datasets/uhsr_dataset.py +1 -1
  36. spikezoo/datasets/yourdataset_dataset.py +23 -0
  37. spikezoo/models/__init__.py +12 -8
  38. spikezoo/models/base_model.py +10 -4
  39. spikezoo/models/bsf_model.py +0 -1
  40. spikezoo/models/spk2imgnet_model.py +0 -1
  41. spikezoo/models/stir_model.py +0 -1
  42. spikezoo/models/wgse_model.py +0 -1
  43. spikezoo/models/yourmodel_model.py +22 -0
  44. spikezoo/pipeline/base_pipeline.py +17 -10
  45. spikezoo/pipeline/ensemble_pipeline.py +2 -1
  46. spikezoo/pipeline/train_cfgs.py +3 -1
  47. spikezoo/pipeline/train_pipeline.py +12 -12
  48. spikezoo/utils/spike_utils.py +2 -2
  49. spikezoo-0.2.3.6.dist-info/METADATA +151 -0
  50. {spikezoo-0.2.3.4.dist-info → spikezoo-0.2.3.6.dist-info}/RECORD +53 -23
  51. spikezoo/data/base/train/spike/203_part4_key_id151.dat +0 -0
  52. spikezoo-0.2.3.4.dist-info/METADATA +0 -259
  53. {spikezoo-0.2.3.4.dist-info → spikezoo-0.2.3.6.dist-info}/LICENSE.txt +0 -0
  54. {spikezoo-0.2.3.4.dist-info → spikezoo-0.2.3.6.dist-info}/WHEEL +0 -0
  55. {spikezoo-0.2.3.4.dist-info → spikezoo-0.2.3.6.dist-info}/top_level.txt +0 -0
@@ -1,259 +0,0 @@
Metadata-Version: 2.2
Name: spikezoo
Version: 0.2.3.4
Summary: A deep learning toolbox for spike-to-image models.
Home-page: https://github.com/chenkang455/Spike-Zoo
Author: Kang Chen
Author-email: mrchenkang@stu.pku.edu.cn
Requires-Python: >=3.7
Description-Content-Type: text/markdown
License-File: LICENSE.txt
Requires-Dist: torch
Requires-Dist: requests
Requires-Dist: numpy
Requires-Dist: tqdm
Requires-Dist: scikit-image
Requires-Dist: lpips
Requires-Dist: pyiqa
Requires-Dist: opencv-python
Requires-Dist: thop
Requires-Dist: pytorch-wavelets
Requires-Dist: pytz
Requires-Dist: PyWavelets
Requires-Dist: pandas
Requires-Dist: pillow
Requires-Dist: scikit-learn
Requires-Dist: scipy
Requires-Dist: spikingjelly
Requires-Dist: setuptools
Dynamic: author
Dynamic: author-email
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

<p align="center">
<img src="imgs/spike-zoo.png" width="350"/>
<p>
<h5 align="center">

[![GitHub repo stars](https://img.shields.io/github/stars/chenkang455/Spike-Zoo?style=flat&logo=github&logoColor=whitesmoke&label=Stars)](https://github.com/chenkang455/Spike-Zoo/stargazers) [![GitHub Issues](https://img.shields.io/github/issues/chenkang455/Spike-Zoo?style=flat&logo=github&logoColor=whitesmoke&label=Issues)](https://github.com/chenkang455/Spike-Zoo/issues) <a href="https://badge.fury.io/py/spikezoo"><img src="https://badge.fury.io/py/spikezoo.svg" alt="PyPI version"></a> [![License](https://img.shields.io/badge/License-MIT-yellow)](https://github.com/chenkang455/Spike-Zoo)
<p>

<!-- <h2 align="center">
<a href="">⚡Spike-Zoo: A Toolbox for Spike-to-Image Reconstruction
</a>
</h2> -->

## 📖 About
⚡Spike-Zoo is the go-to library for state-of-the-art pretrained **spike-to-image** models designed to reconstruct images from spike streams. Whether you need a simple inference solution or want to train your own spike-to-image models, ⚡Spike-Zoo is a modular toolbox that supports both, with key features including:

- Fast inference with pre-trained models.
- Training support for custom-designed spike-to-image models.
- Specialized functions for processing spike data.

## 🚩 Updates/Changelog
* **25-02-02:** Released the `Spike-Zoo v0.2` code, which supports more methods and more usage options, such as training your own method from scratch.
* **24-07-19:** Released the `Spike-Zoo v0.1` code for basic evaluation of SOTA methods.

## 🍾 Quick Start
### 1. Installation
For users focused on **utilizing pretrained models for spike-to-image conversion**, we recommend installing Spike-Zoo using one of the following methods:

* Install the latest stable version `0.2.3` from PyPI:
```
pip install spikezoo
```
* Install the latest development version from the source code:
```
git clone https://github.com/chenkang455/Spike-Zoo
cd Spike-Zoo
python setup.py install
```

For users interested in **training their own spike-to-image model based on our framework**, we recommend cloning the repository and modifying the related code directly.
```
git clone https://github.com/chenkang455/Spike-Zoo
cd Spike-Zoo
python setup.py develop
```

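After either install route, an optional sanity check is to import the package and print the list of bundled methods (`sz.METHODS`, described in the Supports section below); this snippet is only a verification aid, not part of the installation itself:
``` python
import spikezoo as sz

# If the install succeeded, this prints the identifiers of the supported spike-to-image methods.
print(sz.METHODS)
```
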
### 2. Inference
Reconstructing images from a spike stream is straightforward with Spike-Zoo. Try the following code to run a single model:
``` python
from spikezoo.pipeline import Pipeline, PipelineConfig
import spikezoo as sz
pipeline = Pipeline(
    cfg=PipelineConfig(save_folder="results",version="v023"),
    model_cfg=sz.METHOD.BASE,
    dataset_cfg=sz.DATASET.BASE
)
```
You can also run multiple models at once by switching to the ensemble pipeline (the `version` parameter corresponds to the different versions we publish in [Releases](https://github.com/chenkang455/Spike-Zoo/releases)):
``` python
import spikezoo as sz
from spikezoo.pipeline import EnsemblePipeline, EnsemblePipelineConfig
pipeline = EnsemblePipeline(
    cfg=EnsemblePipelineConfig(save_folder="results",version="v023"),
    model_cfg_list=[
        sz.METHOD.BASE,sz.METHOD.TFP,sz.METHOD.TFI,sz.METHOD.SPK2IMGNET,sz.METHOD.WGSE,
        sz.METHOD.SSML,sz.METHOD.BSF,sz.METHOD.STIR,sz.METHOD.SPIKECLIP,sz.METHOD.SSIR],
    dataset_cfg=sz.DATASET.BASE,
)
```
Once a pipeline is established, the following functions are available:

* I. Obtain the restoration metrics and save the recovered image from the given spike input:
``` python
# 1. spike-to-image from the given dataset
pipeline.infer_from_dataset(idx = 0)

# 2. spike-to-image from the given .dat file
pipeline.infer_from_file(file_path = 'data/scissor.dat',width = 400,height=250)

# 3. spike-to-image from the given spike tensor
import spikezoo as sz
spike = sz.load_vidar_dat("data/scissor.dat",width = 400,height = 250)
pipeline.infer_from_spk(spike)
```

* II. Save all images from the given dataset:
``` python
pipeline.save_imgs_from_dataset()
```

* III. Calculate the metrics for the specified dataset:
``` python
pipeline.cal_metrics()
```

* IV. Calculate the model statistics (params, FLOPs, latency) for the established pipeline:
``` python
pipeline.cal_params()
```

For detailed usage, see [test_single.ipynb](examples/test/test_single.ipynb) and [test_ensemble.ipynb](examples/test/test_ensemble.ipynb).

### 3. Training
We provide user-friendly code for training our `base` model (modified from `SpikeCLIP`) on the classic `REDS` dataset introduced in `Spk2ImgNet`:
``` python
from spikezoo.pipeline import TrainPipelineConfig, TrainPipeline
from spikezoo.datasets.reds_base_dataset import REDS_BASEConfig
from spikezoo.models.base_model import BaseModelConfig
pipeline = TrainPipeline(
    cfg=TrainPipelineConfig(save_folder="results", epochs = 10),
    dataset_cfg=REDS_BASEConfig(root_dir = "spikezoo/data/REDS_BASE"),
    model_cfg=BaseModelConfig(),
)
pipeline.train()
```
Training finishes in about `2 minutes` on a single 4090 GPU, achieving `32.8 dB` PSNR and `0.92` SSIM.

> 🌟 We encourage users to develop their own models with simple modifications to our framework; a tutorial will be released soon.

We retrain all supported methods except `SPIKECLIP` on this REDS dataset (training scripts are in [examples/train_reds_base](examples/train_reds_base) and the evaluation script is [test_REDS_base.py](examples/test/test_REDS_base.py)), with our reported metrics as follows:

| Method | PSNR | SSIM | LPIPS | NIQE | BRISQUE | PIQE | Params (M) | FLOPs (G) | Latency (ms) |
|----------------------|:-------:|:--------:|:---------:|:---------:|:----------:|:-------:|:------------:|:-----------:|:--------------:|
| `tfi` | 16.503 | 0.454 | 0.382 | 7.289 | 43.17 | 49.12 | 0.00 | 0.00 | 3.60 |
| `tfp` | 24.287 | 0.644 | 0.274 | 8.197 | 48.48 | 38.38 | 0.00 | 0.00 | 0.03 |
| `spikeclip` | 21.873 | 0.578 | 0.333 | 7.802 | 42.08 | 54.01 | 0.19 | 23.69 | 1.27 |
| `ssir` | 26.544 | 0.718 | 0.325 | 4.769 | 28.45 | 21.59 | 0.38 | 25.92 | 4.52 |
| `ssml` | 33.697 | 0.943 | 0.088 | 4.669 | 32.48 | 37.30 | 2.38 | 386.02 | 244.18 |
| `base` | 36.589 | 0.965 | 0.034 | 4.393 | 26.16 | 38.43 | 0.18 | 18.04 | 0.40 |
| `stir` | 37.914 | 0.973 | 0.027 | 4.236 | 25.10 | 39.18 | 5.08 | 43.31 | 21.07 |
| `wgse` | 39.036 | 0.978 | 0.023 | 4.231 | 25.76 | 44.11 | 3.81 | 415.26 | 73.62 |
| `spk2imgnet` | 39.154 | 0.978 | 0.022 | 4.243 | 25.20 | 43.09 | 3.90 | 1000.50 | 123.38 |
| `bsf` | 39.576 | 0.979 | 0.019 | 4.139 | 24.93 | 43.03 | 2.47 | 705.23 | 401.50 |

### 4. Model Usage
We also provide a direct interface for users who want to use a spike-to-image model as part of their own work:

```python
import spikezoo as sz
from spikezoo.models.base_model import BaseModel, BaseModelConfig
# input data
spike = sz.load_vidar_dat("data/data.dat", width=400, height=250, out_format="tensor")
spike = spike[None].cuda()
print(f"Input spike shape: {spike.shape}")
# net
net = BaseModel(BaseModelConfig(model_params={"inDim": 41}))
net.build_network(mode = "debug")
# process
recon_img = net(spike)
print(recon_img.shape, recon_img.max(), recon_img.min())
```
For detailed usage, see [test_model.ipynb](examples/test/test_model.ipynb).
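
Building on the snippet above, the sketch below runs the same network over several `.dat` files and writes the reconstructions to disk. It is only an illustration: the file list, the output naming, and the assumption that the network returns a single-channel image in [0, 1] are ours, not guaranteed by the library.
```python
import cv2   # opencv-python is already a spikezoo dependency
import torch
import spikezoo as sz
from spikezoo.models.base_model import BaseModel, BaseModelConfig

net = BaseModel(BaseModelConfig(model_params={"inDim": 41}))
net.build_network(mode = "debug")

files = ["data/scissor.dat", "data/data.dat"]            # hypothetical input files
with torch.no_grad():
    for path in files:
        spike = sz.load_vidar_dat(path, width=400, height=250, out_format="tensor")
        recon = net(spike[None].cuda())                  # assumed output shape: (1, 1, H, W) in [0, 1]
        img = recon[0, 0].clamp(0, 1).mul(255).byte().cpu().numpy()
        cv2.imwrite(path.replace(".dat", "_recon.png"), img)
```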

### 5. Spike Utility
#### I. Faster spike loading interface
We provide a faster `load_vidar_dat` function implemented in `cpp` (by [@zeal-ye](https://github.com/zeal-ye)):
``` python
import spikezoo as sz
spike = sz.load_vidar_dat("data/scissor.dat",width = 400,height = 250,version='cpp')
```
🚀 Results from [test_load_dat.py](examples/test_load_dat.py) show that the `cpp` version is more than 10 times faster than the `python` version.
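
A quick way to check the speed-up on your own data is a direct timing comparison. This is a hedged sketch: it assumes the pure-Python loader is selected with `version='python'` (verify the accepted values against the `load_vidar_dat` signature).
``` python
import time
import spikezoo as sz

def timed_load(version):
    # Time a single load of the sample file with the chosen backend.
    t0 = time.perf_counter()
    sz.load_vidar_dat("data/scissor.dat", width=400, height=250, version=version)
    return time.perf_counter() - t0

t_py, t_cpp = timed_load("python"), timed_load("cpp")   # 'python' is an assumed option value
print(f"python: {t_py:.3f}s  cpp: {t_cpp:.3f}s  speed-up: {t_py / t_cpp:.1f}x")
```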

#### II. Spike simulation pipeline
We provide our overall spike simulation pipeline in [scripts](scripts/); modify the config in `run.sh` and run the following command to start the simulation:
``` bash
bash run.sh
```

#### III. Spike-related functions
For other spike-related functions, see [spike_utils.py](spikezoo/utils/spike_utils.py).

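As a flavor of this kind of spike processing, here is a small illustrative TFP-style reconstruction (averaging spike frames over a temporal window) written directly against the loaded tensor. It does not reproduce the `spike_utils.py` API, and the window size is an arbitrary choice.
``` python
import spikezoo as sz

# Load the spike stream as a (T, H, W) tensor, as in the sections above.
spike = sz.load_vidar_dat("data/scissor.dat", width=400, height=250, out_format="tensor")

# TFP-style reconstruction: the mean firing rate over a window approximates scene brightness.
t, win = spike.shape[0] // 2, 32                     # center frame and window length (arbitrary)
window = spike[max(t - win // 2, 0): t + win // 2].float()
tfp_img = window.mean(dim=0)                         # (H, W) image; in [0, 1] for a binary spike stream
print(tfp_img.shape, tfp_img.min(), tfp_img.max())
```
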
## 📅 TODO
- [x] Support the overall pipeline for spike simulation.
- [ ] Provide the tutorials.
- [ ] Support more training settings.
- [ ] Support more spike-based image reconstruction methods and datasets.

## 🤗 Supports
Run the following code to find our supported models, datasets and metrics:
``` python
import spikezoo as sz
print(sz.METHODS)
print(sz.DATASETS)
print(sz.METRICS)
```
**Supported Models:**
| Models | Source |
| ---- | ---- |
| `tfp`,`tfi` | Spike camera and its coding methods |
| `spk2imgnet` | Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream |
| `wgse` | Learning Temporal-Ordered Representation for Spike Streams Based on Discrete Wavelet Transforms |
| `ssml` | Self-Supervised Mutual Learning for Dynamic Scene Reconstruction of Spiking Camera |
| `ssir` | Spike Camera Image Reconstruction Using Deep Spiking Neural Networks |
| `bsf` | Boosting Spike Camera Image Reconstruction from a Perspective of Dealing with Spike Fluctuations |
| `stir` | Spatio-Temporal Interactive Learning for Efficient Image Reconstruction of Spiking Cameras |
| `base`,`spikeclip` | Rethinking High-speed Image Reconstruction Framework with Spike Camera |

**Supported Datasets:**
| Datasets | Source |
| ---- | ---- |
| `reds_base` | Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream |
| `uhsr` | Recognizing Ultra-High-Speed Moving Objects with Bio-Inspired Spike Camera |
| `realworld` | `recVidarReal2019`,`momVidarReal2021` in [SpikeCV](https://github.com/Zyj061/SpikeCV) |
| `szdata` | SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams |

## ✨ Acknowledgment
Our code is built on the open-source projects [SpikeCV](https://spikecv.github.io/), [IQA-Pytorch](https://github.com/chaofengc/IQA-PyTorch), [BasicSR](https://github.com/XPixelGroup/BasicSR) and [NeRFStudio](https://github.com/nerfstudio-project/nerfstudio). We appreciate the effort of the contributors to these repositories. Thanks to [@ruizhao26](https://github.com/ruizhao26), [@shiyan_chen](https://github.com/hnmizuho) and [@Leozhangjiyuan](https://github.com/Leozhangjiyuan) for their help in building this project.

## 📑 Citation
If you find our code helpful to your research, please consider citing:
```
@misc{spikezoo,
  title={{Spike-Zoo}: A Toolbox for Spike-to-Image Reconstruction},
  author={Kang Chen and Zhiyuan Ye},
  year={2025},
  howpublished = "[Online]. Available: \url{https://github.com/chenkang455/Spike-Zoo}"
}
```