spike-encoding 0.1.0__tar.gz

Metadata-Version: 2.3
Name: spike-encoding
Version: 0.1.0
Summary: A spike-encoding repository for converting conventional data to spiking data. Includes trainable encoding methods and PyTorch compatibility, as well as integration of some common datasets.
Author: Alexandru Vasilache, Jona Scholz
Author-email: Alexandru Vasilache <vasilache@fzi.de>, Jona Scholz <jona.scholz@kit.edu>
Requires-Dist: colorama>=0.4.6
Requires-Dist: gymnasium[classic-control]>=1.2.0
Requires-Dist: joblib>=1.5.2
Requires-Dist: optuna>=4.5.0
Requires-Dist: pandas>=2.3.2
Requires-Dist: scikit-learn>=1.7.1
Requires-Dist: torch>=2.8.0
Requires-Dist: torchmetrics>=1.8.2
Requires-Dist: tqdm>=4.67.1
Requires-Python: >=3.10
Description-Content-Type: text/markdown

# Spike Encoding
This repository contains common methods for encoding and generating spikes.

# Installation
To install this repository as a package, use
> pip install git+https://github.com/Alex-Vasilache/Spike-Encoding.git

**NOTE** This will also install torch and torchmetrics.

You can import it like any other package. For instance, you can import the StepForwardConverter as follows
> from encoding.step_forward_converter import StepForwardConverter

**NOTE** There may be compatibility issues with prior versions. If you are upgrading to a newer version, please run
> pip uninstall Spike-Encoding

and then install it again. Installing a newer version without uninstalling first may lead to strange errors.

# General overview
The repository provides common methods for encoding scalar values as spike trains. In many cases there is also an inverse method that decodes spike trains back to scalar values. The current implementations include

- Ben's spiker algorithm (BSA)[^1] - Encoding, Decoding & Optimization
- Step-forward encoding (SF)[^2] - Encoding, Decoding & Optimization
- Pulse-width modulation (PWM)[^3] - Encoding, Decoding & Optimization
- LIF-based encoding (LIF)[^4] - Encoding, Decoding & Optimization
- Gymnasium encoder[^5] - Encoding
- Bin encoder[^6] - Encoding

For each encoder, there are usage examples in the examples folder. In general, an encoder is used by creating an instance of its class and then calling its encode method. Optionally, parameters can be passed directly or determined through an optimization method. We will see this in more detail in the following sections.

# Ben's spiker algorithm (BSA)
BSA[^1] encodes signals into spikes by using a combination of FIR (Finite Impulse Response) filtering and error comparison. For each timestep, it compares the error between the signal and a potential spike's filter response. If adding a spike at the current timestep would reduce the overall error by more than a threshold amount, a spike is generated and the filter response is subtracted from the signal. This process continues for each timestep, effectively encoding the signal into a series of spikes that can later be decoded by applying the same FIR filter to the spike train.

The method has three main parameters that can be optimized:
- Filter order: Controls the length of the FIR filter
- Filter cutoff: Determines the frequency response of the filter
- Threshold: Sets how aggressive the spike generation should be

We will illustrate its usage with a simple hardcoded signal.

```python
import torch

signal = torch.tensor([[0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5]])
```

Then to encode the signal, we do

```python
from encoding.bens_spiker_algorithm import BensSpikerAlgorithm

bsa = BensSpikerAlgorithm()
spikes = bsa.encode(signal)

# returns [0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0.]
```

And to decode the spikes again, we can simply call the decode method

```python
reconstructed = bsa.decode(spikes)

# returns [0.5, 0.5, 0.5, 0.5, 0.5, 0.82, 0.87, 0.80, 0.72, 0.62, 0.53]
# NOTE: these outputs were rounded for easier display
```

The implementation also supports optimization of the parameters for a given signal. This is achieved by calling the optimize method.

```python
filter_order, filter_cutoff, threshold = bsa.optimize(signal)
```

These parameters can then be used to create an optimized instance of the BensSpikerAlgorithm.

```python
bsa = BensSpikerAlgorithm(threshold, filter_cutoff=filter_cutoff, filter_order=filter_order)
```
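For intuition, the encode/decode loop described above can also be sketched from scratch in NumPy. This is an illustrative sketch only; the names `bsa_encode`/`bsa_decode`, the filter choice, and the exact threshold rule are our assumptions, not the package's implementation:

```python
import numpy as np

def bsa_encode(signal, fir, threshold=0.1):
    """BSA-style rule: fire a spike at t when subtracting the FIR kernel
    from the residual signal reduces the absolute error by more than the
    threshold; otherwise stay silent."""
    residual = signal.astype(float).copy()
    spikes = np.zeros(len(residual))
    k = len(fir)
    for t in range(len(residual) - k + 1):
        window = residual[t:t + k]
        err_spike = np.abs(window - fir).sum()    # error if we spike at t
        err_no_spike = np.abs(window).sum()       # error if we stay silent
        if err_spike <= err_no_spike - threshold:
            spikes[t] = 1.0
            residual[t:t + k] -= fir              # remove the filter response
    return spikes

def bsa_decode(spikes, fir):
    """Decoding applies the same FIR filter to the spike train."""
    return np.convolve(spikes, fir)[:len(spikes)]
```

Because a spike is only emitted when it strictly reduces the residual error, convolving the spike train with the same filter yields a coarse reconstruction of the input.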

# Step-forward encoding (SF)
SF[^2] encodes signals into spikes by comparing signal values against an adaptive baseline plus/minus a threshold. For each timestep, if the signal value exceeds the baseline plus the threshold, an "up spike" is generated and the baseline is increased by the threshold amount. Similarly, if it falls below the baseline minus the threshold, a "down spike" is generated and the baseline is decreased. This adaptive baseline creates two complementary spike trains from which the original signal can be reconstructed by accumulating the changes represented by each spike.

The method has one main parameter that can be optimized:
- Threshold: Controls how far from the baseline the signal must deviate to generate a spike

We will illustrate its usage with a simple hardcoded signal.

```python
import torch

signal = torch.tensor([[0.1, 0.3, 0.2, 0.4, 0.8, 0.6, 0.7, 0.9, 0.5, 0.3, 0.2]])
```

Then to encode the signal, we do

```python
from encoding.step_forward_converter import StepForwardConverter

sf = StepForwardConverter(threshold=torch.tensor([0.1]))  # optional parameter, default value 0.5
spikes = sf.encode(signal)

# returns two spike trains (up/down spikes):
# up:   [0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
# down: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0, -1.0, -1.0]
```

And to decode the spikes again, we can simply call the decode method

```python
reconstructed = sf.decode(spikes)

# returns [0.0, 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3]
```

The implementation also supports optimization of the threshold parameter for a given signal. This is achieved by calling the optimize method.

```python
threshold = sf.optimize(signal)
```

This parameter can then be used to create an optimized instance of the StepForwardConverter.

```python
sf = StepForwardConverter(threshold=threshold)
```
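The rule itself is compact enough to sketch from scratch. The following is an illustrative NumPy version (our own `sf_encode`/`sf_decode`, not the package API); with a threshold of 0.1 and the baseline initialized to the first sample, it reproduces the spike trains and reconstruction from the example above:

```python
import numpy as np

def sf_encode(signal, threshold=0.1):
    """Step-forward rule: compare each value to an adaptive baseline
    and move the baseline by one threshold step per spike."""
    baseline = signal[0]
    up = np.zeros(len(signal))
    down = np.zeros(len(signal))
    for t, value in enumerate(signal):
        if value > baseline + threshold:
            up[t] = 1.0              # signal rose above the baseline band
            baseline += threshold
        elif value < baseline - threshold:
            down[t] = -1.0           # signal fell below the baseline band
            baseline -= threshold
    return up, down

def sf_decode(up, down, threshold=0.1, start=0.0):
    """Reconstruct by accumulating the step changes encoded by the spikes."""
    return start + np.cumsum(up + down) * threshold
```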

# Pulse-width modulation (PWM)
PWM[^3] encodes signals by comparing them against a carrier signal (typically a sawtooth wave) to generate spikes. When the input signal crosses the carrier signal, spikes are generated. The frequency of the carrier signal can be optimized to minimize reconstruction error. The method supports both unipolar (up spikes only) and bipolar (up and down spikes) encoding.

The method has several parameters:
- Frequency: Controls how often the carrier signal repeats, affecting spike density (optimizable)
- Scale Factor: Scaling applied to normalize the input signal amplitude
- Down Spike: Boolean flag to enable/disable bipolar encoding (True = bipolar, False = unipolar)

Here's an example using a simple signal:

```python
import torch

signal = torch.tensor([[0.2, 0.4, 0.6, 0.8, 1.0, 0.8, 0.6, 0.4, 0.2]])
```

To encode the signal:

```python
from encoding.pulse_width_modulation import PulseWidthModulation

# Create encoder with default frequency=1Hz
pwm = PulseWidthModulation(frequency=torch.tensor([1.0]))
spikes = pwm.encode(signal)

# Returns two spike trains (up/down spikes):
# up:   [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
# down: [0.0, 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0]
```

To decode the spikes back to a signal:

```python
reconstructed = pwm.decode(spikes)

# Returns approximately:
# [0.0, 0.3, 0.6, 0.6, 0.6, 0.6, 0.53, 0.47, 0.4]
```

The implementation supports optimization of the frequency parameter for a given signal:

```python
frequency = pwm.optimize(signal, trials=100)

# Create optimized encoder
pwm_opt = PulseWidthModulation(frequency=frequency)
```
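The carrier-crossing idea can be sketched as follows. This is an illustrative sketch only; the function name `pwm_encode`, the sawtooth definition over [0, 1), and the crossing test are our assumptions, not the package's implementation:

```python
import numpy as np

def pwm_encode(signal, frequency=1.0):
    """PWM-style rule: spike whenever the signal crosses a sawtooth carrier."""
    n = len(signal)
    t = np.arange(n) / n
    carrier = (t * frequency) % 1.0          # sawtooth carrier in [0, 1)
    diff = signal - carrier
    up = np.zeros(n)
    down = np.zeros(n)
    for i in range(1, n):
        if diff[i - 1] <= 0 < diff[i]:       # upward crossing
            up[i] = 1.0
        elif diff[i - 1] >= 0 > diff[i]:     # downward crossing
            down[i] = -1.0
    return up, down
```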

# LIF-based encoding (LIF)
LIF[^4] (Leaky Integrate-and-Fire) encoding treats the input signal as a current that increases a membrane potential. When this potential exceeds a predefined threshold, a spike is generated and the potential resets. Between spikes, the membrane potential decays according to a constant factor. The input signal must be normalized before encoding, as neither the threshold nor the decay rate adapts to different signal ranges. This approach creates a biologically plausible spike pattern that can effectively represent temporal dynamics in the signal.

The method has several parameters:
- Threshold: Controls how much voltage must accumulate before a spike is generated (optimizable)
- Down Spike: If set to True, the neuron can also generate spikes when the value falls below -threshold. This is not biologically plausible, but can be useful in some cases.

Here's an example using a simple signal:

```python
import torch

signal = torch.tensor([[0.5, 0.3, 0.1, 0.4, 0.8, 1.0, 0.7, 0.3, 0.6]])
```

To encode the signal:

```python
from encoding.lif_based_encoding import LIFBasedEncoding

# Create encoder with threshold=0.5 and membrane_constant=0.2
lif = LIFBasedEncoding(threshold=torch.tensor([0.5]), membrane_constant=torch.tensor([0.2]))
spikes = lif.encode(signal)

# Returns two spike trains (up/down spikes):
# up:   [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
# down: [0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

To decode the spikes back to a signal:

```python
reconstructed = lif.decode(spikes)

# Returns approximately:
# [0.55, 0.55, 0.32, 0.5, 0.54, 0.77, 0.59, 0.56, 0.55]
```

The implementation supports optimization of both the threshold and membrane constant parameters:

```python
# Optimize threshold and membrane constant
threshold, membrane_constant = lif.optimize(signal, trials=100)

# Create optimized encoder
lif_opt = LIFBasedEncoding(threshold=threshold, membrane_constant=membrane_constant)
```
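The integrate-fire-reset loop described above can be sketched in a few lines. This is an illustrative sketch (our own `lif_encode`, unipolar only, with the leak applied multiplicatively each step), not the package's implementation:

```python
import numpy as np

def lif_encode(signal, threshold=0.5, membrane_constant=0.9):
    """LIF rule: leak the membrane potential, integrate the input as a
    current, then fire and reset when the potential crosses the threshold."""
    v = 0.0
    spikes = np.zeros(len(signal))
    for t, current in enumerate(signal):
        v = membrane_constant * v + current  # leak, then integrate
        if v >= threshold:
            spikes[t] = 1.0
            v = 0.0                          # reset after a spike
    return spikes
```

With no leak (`membrane_constant=1.0`) this reduces to plain integrate-and-fire: a constant input of 0.2 against a threshold of 0.5 fires every third step.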

# GymnasiumEncoder
The purpose of this encoder is to convert scalar values to spike trains. For example, if your feature is in the range from 0 to 1 and you want to encode the value 0.5, you might get a spike train as follows

```
[[1], [0], [1], [0], [1]]
```
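One simple deterministic rate-coding scheme can be sketched as follows. This is an illustrative sketch (`rate_code` is our own name, not the encoder's internals): emit a number of spikes proportional to the scalar, spread as evenly as possible across the sequence:

```python
import numpy as np

def rate_code(value, seq_length):
    """Deterministic rate coding: emit round(value * seq_length) spikes,
    spread as evenly as possible across the sequence."""
    n_spikes = int(round(value * seq_length))
    train = np.zeros(seq_length, dtype=int)
    if n_spikes > 0:
        # evenly spaced spike positions over the sequence
        idx = np.linspace(0, seq_length - 1, n_spikes).round().astype(int)
        train[idx] = 1
    return train
```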

### Use with Gymnasium environments
The encoder works great with [gymnasium](https://gymnasium.farama.org/) environments. However, the documentation is incomplete. A small example is given below; for a more thorough example, see examples/example_cartpole.py.

Start with an observation from an environment such as CartPole. It might look like this
```python
observation = [0.00662101, -0.02290802, -0.00224132, 0.00596699]
```

If you want to encode it, you need to create a scaler and then the encoder.
```python
# assuming you have gymnasium imported
cartpole_env = gym.make("CartPole-v1")
scaler_factory = ScalerFactory()
scaler = scaler_factory.from_env(cartpole_env)
encoder = GymnasiumEncoder(
    cartpole_env.observation_space.shape[0],
    batch_size,    # integers of your choosing
    seq_length,
    scaler,
    rate_coder=True,
    step_coder=False,
    split_exc_inh=True,
)
```

Then to encode the observation, use encoder.encode as follows
```python
spike_train = encoder.encode(np.array([observation]))
```
Your spike_train will look something like this

```
[[[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[0 0 0 0 0 0 0 0]]
 [[1 1 0 1 0 0 1 0]]]
```

**NOTE** The observation is wrapped in a list before being converted to a numpy array. This is because the encoder supports batch processing.

If you want to use the step coder, you also need to create a converter.
```python
converter_factory = ConverterFactory(cartpole_env, scaler)
converter, converter_th = converter_factory.generate()
```

and create the encoder like this
```python
encoder = GymnasiumEncoder(
    cartpole_env.observation_space.shape[0],
    batch_size,
    seq_length,
    scaler,
    converter_th,
    converter,
    rate_coder=False,
    step_coder=True,
    split_exc_inh=True,
)
```

### Conversion method
To change the way the spikes are distributed in the spike train, use the spike_train_conversion_method argument. By default it is set to "deterministic". In the image below you can see it compared to Poisson encoding. ![image info](img/poisson_vs_deterministic.png) <figcaption align="center">A comparison of deterministic encoding (left) and Poisson encoding (right). The latter is more biologically plausible and stochastic.</figcaption>

What follows is an example of using the encoder with Poisson encoding
```python
encoder = GymnasiumEncoder(
    cartpole_env.observation_space.shape[0],
    batch_size,
    seq_length,
    scaler,
    spike_train_conversion_method="poisson"
)
```
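The difference between the two methods can be illustrated with a minimal sketch (our own `poisson_train`, not the encoder's internals): under Poisson-style coding, each timestep spikes independently with probability equal to the normalized firing rate, so the spike count is only correct on average rather than exactly:

```python
import numpy as np

def poisson_train(rate, seq_length, rng=None):
    """Poisson-style coding: each timestep fires independently with
    probability equal to the (normalized) firing rate."""
    if rng is None:
        rng = np.random.default_rng()
    return (rng.random(seq_length) < rate).astype(int)
```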

### Inverse inputs
Inverses of the input values may enable better predictions in low-spike-count scenarios. You can set a flag to create inverse inputs.

<details>
<summary>Detailed explanation.</summary>
Let's say you are using temperatures to predict the weather. Your temperature may be between 0° and 10° Celsius at this time of the year. When your temperature is 10°, your spike trains may look like this [[1], [1], [1], ...]. At 5° it will alternate evenly like this [[1], [0], [1], [1], [0], ...]. Now at 0° your input will be [[0], [0], [0], ...] (always zero). Since there are no input spikes for this input, it will not trigger anything in the network. However, a temperature of 0° will have a positive impact on whether or not it may snow. Therefore you may want inputs for the opposite signal as well (i.e. not just hotness but also coldness).
</details>

The encoder supports this with another flag. For every input you will receive an additional spike train with the inverse activity (high when the feature's value is low and vice versa). If you use split_exc_inh, both the positive and the negative channel will receive an inverse spike train (i.e. 1 scalar leads to 4 spike trains). It is used as follows

```python
encoder = GymnasiumEncoder(
    cartpole_env.observation_space.shape[0],
    batch_size,
    seq_length,
    scaler,
    spike_train_conversion_method="poisson",
    add_inverted_inputs=True
)
```
For a given firing rate of 0.9, the inverse will have a firing rate of 0.1. Inverses are appended at the end. Thus, the n inputs are doubled to 2*n, where the first n are the regular inputs and the ones that follow are their inverses.
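For deterministic binary trains, the effect can be sketched by simply flipping the spikes. Note this is an illustrative simplification (`add_inverted` is our own name; the real encoder inverts the firing rate before spike conversion, not the finished train):

```python
import numpy as np

def add_inverted(trains):
    """Append inverted partners: a train with firing rate r gets a
    complementary train with firing rate 1 - r (spiking exactly where
    the original is silent)."""
    return np.concatenate([trains, 1 - trains], axis=0)
```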

# BinEncoder (Gaussian Receptive Fields)
This class roughly implements Gaussian receptive fields. Essentially, instead of generating one spike train, it generates several spike trains that represent how close a value is to a set of anchor points.

Imagine you are encoding the brightness of a pixel. It can be between 0 and 255. The encoder first scales your input value to between 0 and 1. It then creates a number of bins within this range; how many is specified by a parameter. Depending on how close a given value is to a bin's center, the value of that bin is affected. For example, if the value is right in the center of the bin, the bin's value may be 1. If it is close to the bin, it could be 0.7. If it is far away, it will be 0. The drop-off follows a Gaussian curve.

In the figure below, you can see how each of the 5 bins reacts to the different input values. The value that would otherwise correspond to a firing rate of 0.6 yields roughly that rate for the green bin, but the red bin will have a lower firing rate for this particular input sample.
![image info](img/grf_overview.png) <figcaption align="center">GRF as implemented here. On the left we see how 5 bins react to a value of 0.6, as well as their receptive fields. On the right we see the same, but as a bar chart</figcaption>

## Example usage
In this example, we create a bin encoder and encode two features. One is between -2 and 2, the other between -5 and 5. We encode each one with 3 bins. This means we get 2 (features) * 3 (bins) = 6 spike trains.
```python
encoder = BinEncoder(
    10,
    min_values=np.array([-2, -5]),
    max_values=np.array([2, 5]),
    n_bins=3,
)
spike_train = encoder.encode(np.array([1.8, 0]))
```

The first 3 spike trains correspond to the first feature and the last 3 to the second one. The output should look as follows
```
[[[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]
 [[0 0 1 0 1 0]]]
```
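The Gaussian drop-off described above can be sketched as follows. This is an illustrative function; the evenly spaced bin placement and the width heuristic are our assumptions, not necessarily what BinEncoder uses internally:

```python
import numpy as np

def grf_activations(value, min_value, max_value, n_bins, sigma=None):
    """Gaussian receptive fields: each bin center responds with a Gaussian
    of the distance between the scaled input and the center."""
    x = (value - min_value) / (max_value - min_value)  # scale to [0, 1]
    centers = np.linspace(0.0, 1.0, n_bins)            # evenly spaced bins
    if sigma is None:
        sigma = 1.0 / (1.5 * (n_bins - 1))             # heuristic width
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
```

The resulting per-bin activations can then be converted into spike trains with any rate-based conversion method.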



# How to contribute
We are grateful for the support from our organizations and welcome contributions from the community! We hope this list of contributing organizations will grow much further as the project develops.

<p align="center">
  <img src="img/fzi_logo.png" alt="FZI Logo" height="150"/>
  <img src="img/kit_logo.png" alt="ITIV Logo" height="150"/>
</p>

If you're interested in improving this project, please feel free to clone the repository, make your changes, and submit a pull request. Make sure to tell us your organization if you want it added to the list. Check out our guidelines on testing and formatting in the sections below or browse the Issues tab. We look forward to your contributions!


## Testing
If you want to work on this repository, please note that we use unittest to test our components. You can run the tests in VS Code from the Testing tab: select unittest as the testing framework and the root directory as the directory to run from. The result should look like this

![image info](img/unittest_example.png) <figcaption align="center">An example of how the unit tests might look in your VS Code.</figcaption>

## Formatting
To ensure consistent formatting, please install the "Black Formatter" extension. Follow the instructions on the extension page to ensure it is active. Furthermore, please enable "Format on Save" if you are using VS Code, or the equivalent if you are using a different IDE.

# License and Copyright

Copyright © 2025 Alexandru Vasilache

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.


# Citation

If you use this repository in your research, please cite the following paper:

```bibtex
@misc{vasilache2025pytorchcompatiblespikeencodingframework,
      title={A PyTorch-Compatible Spike Encoding Framework for Energy-Efficient Neuromorphic Applications},
      author={Alexandru Vasilache and Jona Scholz and Vincent Schilling and Sven Nitzsche and Florian Kaelber and Johannes Korsch and Juergen Becker},
      year={2025},
      eprint={2504.11026},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2504.11026},
}
```


[^1]: B. Schrauwen and J. Van Campenhout, "BSA, a fast and accurate spike train encoding scheme," in Proceedings of the International Joint Conference on Neural Networks, 2003, vol. 4. IEEE, 2003, pp. 2825–2830.
[^2]: N. Kasabov, N. M. Scott, E. Tu, S. Marks, N. Sengupta, E. Capecci, M. Othman, M. G. Doborjeh, N. Murli, R. Hartono et al., "Evolving spatio-temporal data machines based on the NeuCube neuromorphic framework: Design methodology and selected applications," Neural Networks, vol. 78, pp. 1–14, 2016.
[^3]: S. Y. A. Yarga, J. Rouat, and S. Wood, "Efficient spike encoding algorithms for neuromorphic speech recognition," in Proceedings of the International Conference on Neuromorphic Systems 2022, 2022, pp. 1–8.
[^4]: A. Arriandiaga, E. Portillo, J. I. Espinosa-Ramos, and N. K. Kasabov, "Pulsewidth modulation-based algorithm for spike phase encoding and decoding of time-dependent analog data," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 10, pp. 3920–3931, 2019.
[^5]: The gymnasium encoder is a custom encoder specifically tailored for gymnasium environments.
[^6]: The bin encoder is based on Gaussian receptive fields and splits each input into multiple spike trains, as determined by the number of bins.