Event-Based Trajectory Prediction Using Spiking Neural Networks
2021 · Vol 15
Author(s): Guillaume Debat, Tushar Chauhan, Benoit R. Cottereau, Timothée Masquelier, Michel Paindavoine, ...

In recent years, event-based sensors have been combined with spiking neural networks (SNNs) to create a new generation of bio-inspired artificial vision systems. These systems can process spatio-temporal data in real time and are highly energy efficient. In this study, we used a new hybrid event-based camera in conjunction with a multi-layer spiking neural network trained with a spike-timing-dependent plasticity (STDP) learning rule. We showed that neurons learn from repeated and correlated spatio-temporal patterns in an unsupervised way and become selective to motion features such as direction and speed. This motion selectivity can then be used to predict ball trajectories by adding a simple read-out layer composed of polynomial regressions and trained in a supervised manner. Hence, we show that an SNN receiving inputs from an event-based sensor can extract relevant spatio-temporal patterns to process and predict ball trajectories.
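A minimal Python sketch of the two-stage pipeline described in this abstract is given below: pair-based STDP for the unsupervised stage and a polynomial fit for the supervised read-out. All parameter values (time constants, learning rates) and the toy trajectory data are illustrative assumptions, not those of the paper.

```python
# Sketch: unsupervised pair-based STDP followed by a supervised polynomial read-out.
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: pair-based STDP weight update for one synapse ----------------
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Update weight w given one pre/post spike-time pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                       # pre before post -> potentiation
        w += a_plus * np.exp(-dt / tau_plus)
    else:                            # post before pre -> depression
        w -= a_minus * np.exp(dt / tau_minus)
    return np.clip(w, w_min, w_max)

# Repeated, correlated pre/post pairings strengthen the synapse.
w = 0.5
for _ in range(200):
    t_pre = rng.uniform(0, 100)
    w = stdp_update(w, t_pre, t_pre + 5.0)     # post fires 5 ms after pre
print(f"weight after correlated pairings: {w:.3f}")

# --- Stage 2: polynomial regression read-out --------------------------------
# Toy stand-in for the motion-selective layer output: time stamps vs. ball
# x-position, fitted with a 2nd-order polynomial and extrapolated forward.
t = np.linspace(0.0, 1.0, 50)
x = 2.0 + 3.0 * t - 4.0 * t**2 + 0.05 * rng.standard_normal(t.size)
coeffs = np.polyfit(t, x, deg=2)               # supervised fit
x_pred = np.polyval(coeffs, 1.2)               # predict beyond the observed data
print(f"predicted position at t=1.2: {x_pred:.2f}")
```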

2020 · Vol 34 (02) · pp. 1316-1323
Author(s): Zuozhu Liu, Thiparat Chotibut, Christopher Hillar, Shaowei Lin

Motivated by the celebrated discrete-time model of nervous activity outlined by McCulloch and Pitts in 1943, we propose a novel continuous-time model, the McCulloch-Pitts network (MPN), for sequence learning in spiking neural networks. Our model has a local learning rule, such that synaptic weight updates depend only on information directly accessible to the synapse. By exploiting asymmetry in the connections between binary neurons, we show that the MPN can be trained to robustly memorize multiple spatiotemporal patterns of binary vectors, generalizing the ability of the symmetric Hopfield network to memorize static spatial patterns. In addition, we demonstrate that the model can efficiently learn sequences of binary pictures as well as generative models for experimental neural spike-train data. Our learning rule is consistent with spike-timing-dependent plasticity (STDP), thus providing theoretical grounding for the systematic design of biologically inspired networks with large and robust long-range sequence storage capacity.
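The asymmetric-connection idea can be illustrated with a toy Hopfield-style sequence memory, sketched below in Python. This is a simplified binary, discrete-time illustration of storing transitions between successive patterns, not the continuous-time MPN model or its local learning rule.

```python
# Toy asymmetric sequence memory: weights store pattern-to-successor
# transitions, so synchronous updates replay the stored sequence.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, seq_len = 64, 5

# Random sequence of +/-1 patterns to memorize.
patterns = rng.choice([-1, 1], size=(seq_len, n_neurons))

# Asymmetric weights: each pattern is associated with its successor.
W = np.zeros((n_neurons, n_neurons))
for t in range(seq_len - 1):
    W += np.outer(patterns[t + 1], patterns[t]) / n_neurons

# Recall: start from the first pattern and iterate synchronously.
state = patterns[0].copy()
for t in range(1, seq_len):
    state = np.where(W @ state >= 0, 1, -1)        # binary threshold update
    overlap = np.mean(state == patterns[t])        # fraction of matching bits
    print(f"step {t}: overlap with stored pattern = {overlap:.2f}")
```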


2014 · Vol 134 · pp. 269-279
Author(s): Nikola Kasabov, Valery Feigin, Zeng-Guang Hou, Yixiong Chen, Linda Liang, ...

2015 · Vol 43 (2) · pp. 327-343
Author(s): Banafsheh Rekabdar, Monica Nicolescu, Mircea Nicolescu, Mohammad Taghi Saffar, Richard Kelley

2020
Author(s): Sumedha Gandharava Dahl

In this dissertation, memristor-based spiking neural networks (SNNs) are used to analyze the effect of radiation on the spatio-temporal pattern recognition (STPR) capability of the networks. Two-terminal resistive memory devices (memristors) are used as synapses to manipulate conductivity paths in the network. Spike-timing-dependent plasticity (STDP) learning behavior results in pattern learning and is achieved using biphasic-shaped pre- and post-synaptic spikes. A TiO2-based non-linear drift memristor model designed in Verilog-A implements the synaptic behavior and is modified to include experimentally observed effects of state-altering, ionizing, and off-state-degradation radiation on the device. The impact of neuron "death" (disabled neuron circuits) due to radiation is also examined. In general, radiation interaction events distort the STDP learning curve undesirably, favoring synaptic potentiation. At lower short-term flux, the network is able to recover and relearn the pattern with consistent training, although some pixels may be affected due to stability issues. As the radiation flux and duration increase, they can overwhelm the leaky integrate-and-fire (LIF) post-synaptic neuron circuit, and the network does not learn the pattern. On the other hand, in the absence of the pattern, the radiation effects accumulate, and the system never regains stability. Neuron-death simulation results emphasize the importance of non-participating neurons during the learning process, showing that non-participating afferents contribute to improving the learning ability of the network. Instantaneous neuron death proves more detrimental to the network than when afferents die gradually over time, in which case the network retains its pattern-learning capability.
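A minimal Python sketch of a leaky integrate-and-fire (LIF) post-synaptic neuron driven by memristive synaptic weights is shown below, as a software stand-in for the Verilog-A circuit described in the abstract. The constants (membrane time constant, threshold, Poisson input rate) are illustrative assumptions rather than the dissertation's device parameters, and no radiation effects are modeled.

```python
# Sketch: LIF post-synaptic neuron integrating weighted pre-synaptic spikes.
import numpy as np

rng = np.random.default_rng(2)

dt, t_sim = 1.0, 500.0                  # time step and simulation length (ms)
tau_m, v_rest, v_thresh = 20.0, 0.0, 1.0
n_afferents = 100
weights = rng.uniform(0.0, 0.05, n_afferents)   # synaptic conductances (a.u.)

v = v_rest
spike_times = []
for step in range(int(t_sim / dt)):
    # Poisson input spikes at roughly 20 Hz per afferent.
    pre_spikes = rng.random(n_afferents) < 20e-3 * dt
    i_syn = weights[pre_spikes].sum()            # summed synaptic current
    # Leaky integration of the membrane potential.
    v += dt / tau_m * (v_rest - v) + i_syn
    if v >= v_thresh:                            # threshold crossing -> spike
        spike_times.append(step * dt)
        v = v_rest                               # reset after firing

print(f"post-synaptic neuron fired {len(spike_times)} times in {t_sim:.0f} ms")
```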

