spikezoo 0.2.1__py3-none-any.whl → 0.2.2__py3-none-any.whl
- spikezoo/utils/spike_utils.py +1 -1
- {spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/METADATA +35 -6
- {spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/RECORD +6 -6
- {spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/LICENSE.txt +0 -0
- {spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/WHEEL +0 -0
- {spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/top_level.txt +0 -0
spikezoo/utils/spike_utils.py
CHANGED
@@ -5,7 +5,7 @@ import os
 from .vidar_loader import load_vidar_dat_cpp
 from typing import Literal

-def load_vidar_dat(filename, height, width,remove_head=False, version:Literal['python','cpp'] = "python"):
+def load_vidar_dat(filename, height, width,remove_head=False, version:Literal['python','cpp'] = "python", out_format : Literal['array','tensor']="array",):
     """Load the spike stream from the .dat file."""
     # Spike decode
     if version == "python":
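The only code change in this release is the new `out_format` argument on `load_vidar_dat`. A minimal usage sketch, assuming the function keeps returning a NumPy array by default and returns a `torch.Tensor` when `out_format='tensor'` (calls go through the package-level alias shown in the README below):

``` python
import spikezoo as sz

# Default behaviour: the spike stream comes back as a NumPy array (out_format="array").
spike_arr = sz.load_vidar_dat("data/scissor.dat", width=400, height=250, version="cpp")

# New in 0.2.2: out_format="tensor" is assumed to return a torch.Tensor instead.
spike_ten = sz.load_vidar_dat("data/scissor.dat", width=400, height=250, version="cpp", out_format="tensor")
```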
@@ -1,6 +1,6 @@
|
|
1
1
|
Metadata-Version: 2.2
|
2
2
|
Name: spikezoo
|
3
|
-
Version: 0.2.
|
3
|
+
Version: 0.2.2
|
4
4
|
Summary: A deep learning toolbox for spike-to-image models.
|
5
5
|
Home-page: https://github.com/chenkang455/Spike-Zoo
|
6
6
|
Author: Kang Chen
|
@@ -36,12 +36,12 @@ Dynamic: requires-python
|
|
36
36
|
Dynamic: summary
|
37
37
|
|
38
38
|
<h2 align="center">
|
39
|
-
<a href=""
|
39
|
+
<a href="">⚡Spike-Zoo: A Toolbox for Spike-to-Image Reconstruction
|
40
40
|
</a>
|
41
41
|
</h2>
|
42
42
|
|
43
43
|
## 📖 About
|
44
|
-
⚡
|
44
|
+
⚡Spike-Zoo is the go-to library for state-of-the-art pretrained **spike-to-image** models designed to reconstruct images from spike streams. Whether you're looking for a simple inference solution or aiming to train your own spike-to-image models, ⚡Spike-Zoo is a modular toolbox that supports both, with key features including:
|
45
45
|
|
46
46
|
- Fast inference with pre-trained models.
|
47
47
|
- Training support for custom-designed spike-to-image models.
|
@@ -138,22 +138,51 @@ We finish the training with one 4090 GPU in `2 minutes`, achieving `34.7dB` in P
|
|
138
138
|
> 🌟 We encourage users to develop their models using our framework, with the tutorial being released soon.
|
139
139
|
|
140
140
|
### 4. Others
|
141
|
-
We provide a faster `load_vidar_dat` function implemented with `cpp` (by @zeal-ye):
|
141
|
+
We provide a faster `load_vidar_dat` function implemented with `cpp` (by [@zeal-ye](https://github.com/zeal-ye)):
|
142
142
|
``` python
|
143
143
|
import spikezoo as sz
|
144
144
|
spike = sz.load_vidar_dat("data/scissor.dat",width = 400,height = 250,version='cpp')
|
145
145
|
```
|
146
146
|
🚀 Results on [examples/test_load_dat.py](examples/test_load_dat.py) show that the `cpp` version is more than 10 times faster than the `python` version.
|
147
147
|
|
148
|
-
|
149
148
|
## 📅 TODO
|
150
149
|
- [ ] Provide the tutorials.
|
151
150
|
- [ ] Support more training settings.
|
152
151
|
- [ ] Support more spike-based image reconstruction methods and datasets.
|
153
152
|
- [ ] Support the overall pipeline for spike simulation.
|
154
153
|
|
154
|
+
## 🤗 Supports
|
155
|
+
Run the following code to find our supported models, datasets and metrics:
|
156
|
+
``` python
|
157
|
+
import spikezoo as sz
|
158
|
+
print(sz.get_models())
|
159
|
+
print(sz.get_datasets())
|
160
|
+
print(sz.get_metrics())
|
161
|
+
```
|
162
|
+
**Supported Models:**
|
163
|
+
| Models | Source
|
164
|
+
| ---- | ---- |
|
165
|
+
| `tfp`,`tfi` | Spike camera and its coding methods |
|
166
|
+
| `spk2imgnet` | Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream |
|
167
|
+
| `wgse` | Learning Temporal-Ordered Representation for Spike Streams Based on Discrete Wavelet Transforms |
|
168
|
+
| `ssml` | Self-Supervised Mutual Learning for Dynamic Scene Reconstruction of Spiking Camera |
|
169
|
+
| `spikeformer` | SpikeFormer: Image Reconstruction from the Sequence of Spike Camera Based on Transformer |
|
170
|
+
| `ssir` | Spike Camera Image Reconstruction Using Deep Spiking Neural Networks |
|
171
|
+
| `bsf` | Boosting Spike Camera Image Reconstruction from a Perspective of Dealing with Spike Fluctuations |
|
172
|
+
| `stir` | Spatio-Temporal Interactive Learning for Efficient Image Reconstruction of Spiking Cameras |
|
173
|
+
| `spikeclip` | Rethinking High-speed Image Reconstruction Framework with Spike Camera |
|
174
|
+
|
175
|
+
**Supported Datasets:**
|
176
|
+
| Datasets | Source
|
177
|
+
| ---- | ---- |
|
178
|
+
| `reds_small` | Spk2ImgNet: Learning to Reconstruct Dynamic Scene from Continuous Spike Stream |
|
179
|
+
| `uhsr` | Recognizing Ultra-High-Speed Moving Objects with Bio-Inspired Spike Camera |
|
180
|
+
| `realworld` | `recVidarReal2019`,`momVidarReal2021` in [SpikeCV](https://github.com/Zyj061/SpikeCV) |
|
181
|
+
| `szdata` | SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams |
|
182
|
+
|
183
|
+
|
155
184
|
## ✨ Acknowledgment
|
156
|
-
Our code is built on the open-source projects of [SpikeCV](https://spikecv.github.io/), [IQA-Pytorch](https://github.com/chaofengc/IQA-PyTorch), [BasicSR](https://github.com/XPixelGroup/BasicSR) and [NeRFStudio](https://github.com/nerfstudio-project/nerfstudio).We appreciate the effort of the contributors to these repositories. Thanks for @ruizhao26 and @Leozhangjiyuan for their help in building this project.
|
185
|
+
Our code is built on the open-source projects of [SpikeCV](https://spikecv.github.io/), [IQA-Pytorch](https://github.com/chaofengc/IQA-PyTorch), [BasicSR](https://github.com/XPixelGroup/BasicSR) and [NeRFStudio](https://github.com/nerfstudio-project/nerfstudio).We appreciate the effort of the contributors to these repositories. Thanks for [@ruizhao26](https://github.com/ruizhao26) and [@Leozhangjiyuan](https://github.com/Leozhangjiyuan) for their help in building this project.
|
157
186
|
|
158
187
|
## 📑 Citation
|
159
188
|
If you find our codes helpful to your research, please consider to use the following citation:
|
{spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/RECORD
CHANGED

@@ -202,10 +202,10 @@ spikezoo/utils/__init__.py,sha256=bYLlusAXwLCoY4s6nhVgviax9ioRA9aea8qgRmj2HpI,15
 spikezoo/utils/data_utils.py,sha256=mk1xeyIb7o_E1J7Z6-gtPq-rpKiMTxAWSTcvvPvVku8,2033
 spikezoo/utils/img_utils.py,sha256=0O9z58VzLxQEAuz-GGWCbpeHuHPOCpgBVjCBV9kf6sI,2257
 spikezoo/utils/other_utils.py,sha256=fKqs4zRxzQsIfmYZv02PZlVaGrmVEjq2KHTMrk_tBKY,2845
-spikezoo/utils/spike_utils.py,sha256=
+spikezoo/utils/spike_utils.py,sha256=0GY1hQCOCj0HDDwjXxrHykdjTKmPdb9rC_CexpRzwdk,3123
 spikezoo/utils/vidar_loader.cpython-39-x86_64-linux-gnu.so,sha256=uXqu7ME---cZRRU5LUcLiNrjjtlOjxNwWHyTIQ10BGg,199088
-spikezoo-0.2.
-spikezoo-0.2.
-spikezoo-0.2.
-spikezoo-0.2.
-spikezoo-0.2.
+spikezoo-0.2.2.dist-info/LICENSE.txt,sha256=ukEi8E0PKq1dQGTXHUflg3rppLymwAhr7il9x-0nPgg,1062
+spikezoo-0.2.2.dist-info/METADATA,sha256=j-XErZpa-tDx5wkFwwbOaHdcUjRuuGopldedu0hwEVk,7939
+spikezoo-0.2.2.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
+spikezoo-0.2.2.dist-info/top_level.txt,sha256=xF2iuOstrACJh43NW4dsTwIdgKfXPXAb_Xzl3M1ricM,9
+spikezoo-0.2.2.dist-info/RECORD,,
{spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/LICENSE.txt
File without changes

{spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/WHEEL
File without changes

{spikezoo-0.2.1.dist-info → spikezoo-0.2.2.dist-info}/top_level.txt
File without changes
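For completeness, a quick way to confirm which wheel ends up installed after upgrading (standard library only, no Spike-Zoo-specific assumptions):

``` python
from importlib.metadata import version

# Prints "0.2.2" once the new wheel is installed (e.g. via `pip install -U spikezoo`).
print(version("spikezoo"))
```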