Modeling the Dynamics of Spiking Networks with Memristor-Based STDP to Solve Classification Tasks

Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3237
Author(s):  
Alexander Sboev ◽  
Danila Vlasov ◽  
Roman Rybka ◽  
Yury Davydov ◽  
Alexey Serenko ◽  
...  

The problem of training spiking neural networks (SNNs) is relevant due to the ultra-low power consumption these networks could exhibit when implemented in neuromorphic hardware. The ongoing progress in the fabrication of memristors, a prospective basis for analogue synapses, gives relevance to studying the possibility of SNN learning on the basis of synaptic plasticity models obtained by fitting experimental measurements of memristor conductance change. The dynamics of memristor conductance are necessarily nonlinear, because conductance changes depend on the timings of spikes, which neurons emit in an all-or-none fashion. The ability to solve classification tasks was previously shown for spiking network models based on the bio-inspired local learning mechanism of spike-timing-dependent plasticity (STDP), as well as with plasticity that models the conductance change of nanocomposite (NC) memristors. In those studies, input data were presented to the network encoded into the intensities of Poisson input spike sequences. This work considers another approach for encoding input data into the input spike sequences presented to the network: temporal encoding, in which an input vector is transformed into the relative timing of individual input spikes. Since temporal encoding uses fewer input spikes, the processing of each input vector by the network can be faster and more energy-efficient. The aim of the current work is to show the applicability of temporal encoding to training spiking networks with three synaptic plasticity models: STDP, an NC memristor approximation, and a PPX memristor approximation. We assess the accuracy of the proposed approach on several benchmark classification tasks: Fisher’s Iris, Wisconsin breast cancer, and the pole balancing task (CartPole). The accuracies achieved by SNNs with memristor plasticity and with conventional STDP are comparable, and are on par with classic machine learning approaches.
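
A minimal sketch of latency-based temporal encoding of the kind the abstract describes: each feature of an input vector is mapped to the relative timing of a single input spike, with larger values firing earlier. The function name, the [0, t_max] window, and the linear mapping are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def latency_encode(x, t_max=20.0):
    """Map features in [0, 1] to one spike time per input neuron (ms)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    # Higher intensity -> earlier spike; zero intensity -> spike at t_max.
    return t_max * (1.0 - x)

# Example: a normalized Iris-like feature vector becomes four spike times.
print(latency_encode([0.8, 0.1, 0.55, 0.3]))  # [ 4. 18.  9. 14.]
```

Note how each input vector costs one spike per feature, versus many spikes per feature under Poisson rate encoding, which is where the speed and energy advantage comes from.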

2021 ◽  
Vol 17 (4) ◽  
pp. 1-21
Author(s):  
He Wang ◽  
Nicoleta Cucu Laurenciu ◽  
Yande Jiang ◽  
Sorin Cotofana

Design and implementation of artificial neuromorphic systems able to provide brain-akin computation and/or bio-compatible interfacing ability are crucial for understanding the human brain’s complex functionality and unleashing brain-inspired computation’s full potential. To this end, the realization of energy-efficient, low-area, and bio-compatible artificial synapses, which sustain the signal transmission between neurons, is of particular interest for any large-scale neuromorphic system. Graphene is a prime candidate material with excellent electronic properties, atomic dimensions, and low-energy envelope perspectives, which has already proven effective for logic gate implementations. Furthermore, distinct from any other materials used in current artificial synapse implementations, graphene is biocompatible, which offers perspectives for neural interfaces. In view of this, we investigate the feasibility of graphene-based synapses to emulate various synaptic plasticity behaviors and look into their potential area and energy consumption for large-scale implementations. In this article, we propose a generic graphene-based synapse structure that can emulate the fundamental synaptic functionalities, i.e., Spike-Timing-Dependent Plasticity (STDP) and Long-Term Plasticity. Additionally, the graphene synapse is programmable by means of a back-gate bias voltage and can exhibit either excitatory or inhibitory behavior. We investigate its capability to obtain different potentiation/depression time scales for STDP, with identical synaptic weight change amplitude, when the input spike duration varies. Our simulation results for various synaptic plasticities indicate that a maximum 30% synaptic weight change and potentiation/depression time scales ranging from [-1.5 ms, 1.1 ms] to [-32.2 ms, 24.1 ms] are achievable. We further explore the effect of our proposal at the Spiking Neural Network (SNN) level by performing NEST-based simulations of a small SNN implemented with 5 leaky integrate-and-fire neurons connected via graphene-based synapses. Our experiments indicate that the number of SNN firing events exhibits a strong connection with the synaptic plasticity type and varies monotonically with the input spike frequency. Moreover, for graphene-based Hebbian STDP and a spike duration of 20 ms, we obtain SNN behavior closely resembling that of the same SNN with biological STDP. The proposed graphene-based synapse requires a small area (max. 30 nm²), operates at low voltage (200 mV), and can emulate various plasticity types, which makes it an outstanding candidate for implementing large-scale brain-inspired computation systems.
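
To make the reported numbers concrete, here is a hedged pair-based STDP sketch loosely parameterized by the figures above (maximum ~30% weight change; asymmetric potentiation/depression windows). The exponential window shape and all constants are illustrative assumptions, not the graphene device model itself.

```python
import math

def stdp_dw(delta_t, a_max=0.30, tau_plus=24.1, tau_minus=32.2):
    """Relative weight change for a pre/post spike time difference (ms).

    delta_t = t_post - t_pre: positive -> potentiation, negative -> depression.
    a_max caps the weight change at +/-30%, as reported above.
    """
    if delta_t >= 0.0:
        return a_max * math.exp(-delta_t / tau_plus)
    return -a_max * math.exp(delta_t / tau_minus)

print(stdp_dw(5.0))   # small potentiation, below +0.30
print(stdp_dw(-5.0))  # small depression, above -0.30
```

Varying tau_plus and tau_minus between the two reported extremes would emulate the effect of changing the input spike duration in the device simulations.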


2020 ◽  
Vol 32 (7) ◽  
pp. 1408-1429
Author(s):  
Jakub Fil ◽  
Dominique Chu

The multispike tempotron (MST) is a powerful model of a single spiking neuron that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce a family of generalized neuron models (GNMs) that are a special case of the spike response model and much simpler and cheaper to simulate than the MST. We find that over a wide range of parameters, the GNM can learn at least as well as the MST does. We identify the temporal autocorrelation of the membrane potential as the most important ingredient of the GNM that enables it to classify multiple spatiotemporal patterns. We also interpret the GNM as a chemical system, thus conceptually bridging computation by neural networks with molecular information processing. We conclude the letter by proposing alternative training approaches for the GNM, including error trace learning and error backpropagation.
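
Since the GNM is described as a special case of the spike response model (SRM), a bare-bones SRM sketch helps fix ideas: the membrane potential is a weighted sum of exponential input kernels plus a reset kernel after each output spike, and the decaying kernels are exactly what gives the potential its temporal autocorrelation. The single-exponential kernel and all constants here are illustrative assumptions, not the paper's GNM.

```python
import numpy as np

def srm_run(spikes, w, tau_m=10.0, tau_r=5.0, theta=1.0, dt=1.0, T=100):
    """spikes: list of (input_index, time_ms); w: weight per input channel."""
    v = np.zeros(int(T / dt))
    t_out = []
    for step in range(len(v)):
        t = step * dt
        # Input kernels: each past input spike contributes a decaying trace.
        v[step] = sum(w[i] * np.exp(-(t - ts) / tau_m)
                      for i, ts in spikes if ts <= t)
        # Reset kernel: each past output spike pulls the potential down.
        v[step] -= sum(theta * np.exp(-(t - to) / tau_r) for to in t_out)
        if v[step] >= theta:
            t_out.append(t)
    return t_out

print(srm_run([(0, 2.0), (1, 3.0), (0, 4.0)], w=[0.6, 0.5]))  # -> [3.0]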


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 440
Author(s):  
Anup Vanarse ◽  
Adam Osseiran ◽  
Alexander Rassau ◽  
Peter van der Made

Current developments in artificial olfactory systems, also known as electronic nose (e-nose) systems, have benefited from advanced machine learning techniques that have significantly improved the conditioning and processing of multivariate feature-rich sensor data. These advancements are complemented by the application of bioinspired algorithms and architectures based on findings from neurophysiological studies focusing on the biological olfactory pathway. The application of spiking neural networks (SNNs), and of concepts from neuromorphic engineering in general, is one of the key factors that have led to the design and development of efficient bioinspired e-nose systems. However, only a limited number of studies have focused on deploying these models on a natively event-driven hardware platform that exploits the benefits of neuromorphic implementation, such as ultra-low-power consumption and real-time processing, for simplified integration in a portable e-nose system. In this paper, we extend our previously reported neuromorphic encoding and classification approach to a real-world dataset that consists of sensor responses from a commercial e-nose system when exposed to eight different types of malts. We show that the proposed SNN-based classifier was able to deliver 97% accurate classification results at a maximum latency of 0.4 ms per inference with a power consumption of less than 1 mW when deployed on neuromorphic hardware. One of the key advantages of the proposed neuromorphic architecture is that the entire functionality, including pre-processing, event encoding, and classification, can be mapped onto the neuromorphic system-on-a-chip (NSoC) to develop power-efficient and highly accurate real-time e-nose systems.
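
As a hedged illustration of what "event encoding" of sensor data can look like, here is a threshold-crossing ("delta") encoder, a common neuromorphic front-end for slowly varying signals such as e-nose responses. The abstract does not specify the encoder used on the NSoC, so the threshold and the ON/OFF polarity scheme below are illustrative assumptions.

```python
def delta_encode(samples, threshold=0.05):
    """Emit (sample_index, polarity) events whenever a sensor trace
    moves by at least `threshold` from its last reference level."""
    events, ref = [], samples[0]
    for i, s in enumerate(samples[1:], start=1):
        while s - ref >= threshold:    # ON events for rises
            ref += threshold
            events.append((i, +1))
        while ref - s >= threshold:    # OFF events for falls
            ref -= threshold
            events.append((i, -1))
    return events

print(delta_encode([0.00, 0.04, 0.12, 0.11, 0.02]))  # [(2, 1), (2, 1), (4, -1)]
```

The appeal of such encoders on event-driven hardware is that quiet sensor periods generate no events at all, so no computation or power is spent on them.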


Author(s):  
Mihai A. Petrovici ◽  
Anna Schroeder ◽  
Oliver Breitwieser ◽  
Andreas Grubl ◽  
Johannes Schemmel ◽  
...  

2020 ◽  
Author(s):  
Franz Scherr ◽  
Christoph Stöckl ◽  
Wolfgang Maass

Understanding how one-shot learning can be accomplished through synaptic plasticity in neural networks of the brain is a major open problem. We propose that approximations to BPTT in recurrent networks of spiking neurons (RSNNs), such as e-prop, cannot achieve this because their local synaptic plasticity is gated by learning signals that are rather ad hoc from a biological perspective: random projections of instantaneously arising losses at the network outputs, analogous to Broadcast Alignment for feedforward networks. In contrast, synaptic plasticity in the brain is gated by learning signals such as dopamine, which are emitted by specialized brain areas, e.g., the VTA. These brain areas have arguably been optimized by evolution to gate synaptic plasticity in such a way that fast learning of survival-relevant tasks is enabled. We found that a corresponding model architecture, where learning signals are emitted by a separate RSNN that is optimized to facilitate fast learning, enables one-shot learning via local synaptic plasticity in RSNNs for large families of learning tasks. The same learning approach also supports fast spike-based learning of posterior probabilities of potential input sources, thereby providing a new basis for probabilistic reasoning in RSNNs. Our new learning approach also addresses an open problem in neuromorphic engineering, where on-chip one-shot learning capability is highly desirable for spike-based neuromorphic devices but has so far not been achieved. Our method can easily be mapped onto neuromorphic hardware, thereby solving this problem.
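
A schematic of the three-factor update structure discussed above may help: each synapse keeps a local eligibility trace, and a learning signal (here a plain scalar argument standing in for the output of the separate, optimized RSNN) gates whether the trace is converted into a weight change. The trace dynamics, learning rate, and function names are illustrative assumptions, not the paper's exact e-prop variant.

```python
import numpy as np

def three_factor_step(w, elig, pre, post, learning_signal,
                      decay=0.9, lr=0.01):
    """One synaptic update: the trace accumulates local pre/post activity
    coincidences; the (possibly delayed) learning signal gates plasticity."""
    elig = decay * elig + np.outer(post, pre)  # purely local quantity
    w = w + lr * learning_signal * elig        # gated weight change
    return w, elig

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(3, 4))
elig = np.zeros_like(w)
pre, post = rng.random(4), rng.random(3)
w, elig = three_factor_step(w, elig, pre, post, learning_signal=0.5)
```

The paper's key move, on this reading, is replacing the random-projection learning signal of e-prop with one produced by a network that was itself optimized to make such gated updates support one-shot learning.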


2018 ◽  
Author(s):  
Wilten Nicola ◽  
Claudia Clopath

The hippocampus is capable of rapidly learning incoming information, even if that information is observed only once. Further, this information can be replayed in a compressed format, in either forward or reverse order, during Sharp Wave Ripples (SPW-Rs). We leveraged state-of-the-art techniques for training recurrent spiking networks to demonstrate how primarily inhibitory networks of neurons in CA3 and CA1 can: 1) generate internal theta sequences or “time cells” to bind externally elicited spikes in the presence of septal inhibition, 2) reversibly compress the learned representation in the form of a SPW-R when septal inhibition is removed, 3) generate and refine gamma assemblies during SPW-R-mediated compression, and 4) regulate the inter-ripple-interval timing between SPW-Rs in ripple clusters. From the fast time scale of neurons to the slow time scale of behaviors, inhibitory networks serve as the scaffolding for one-shot learning by replaying, reversing, refining, and regulating spike sequences.


2021 ◽  
Author(s):  
Mohammad Dehghani Habibabadi ◽  
Klaus Richard Pawelzik

Spiking model neurons can be set up to respond selectively to specific spatio-temporal spike patterns by optimization of their input weights. It is unknown, however, whether existing synaptic plasticity mechanisms can achieve this temporal mode of neuronal coding and computation. Here it is shown that changes of synaptic efficacy that tend to balance excitatory and inhibitory synaptic inputs can make neurons sensitive to particular input spike patterns. Simulations demonstrate that a combination of Hebbian mechanisms, hetero-synaptic plasticity, and synaptic scaling is sufficient for self-organizing sensitivity to spatio-temporal spike patterns that repeat in the input. In networks, the inclusion of hetero-synaptic plasticity leads to specialization and faithful representation of pattern sequences by a group of target neurons. Pattern detection is found to be robust against a range of distortions and noise. Furthermore, the resulting balance of excitatory and inhibitory inputs protects the memory for a specific pattern from being overwritten during ongoing learning when the pattern is not present. These results not only provide an explanation for experimental observations of balanced excitation and inhibition in cortex but also promote the plausibility of precise temporal coding in the brain.
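
A toy combination of the three mechanisms named above, for a single target neuron: Hebbian potentiation of recently active inputs, hetero-synaptic depression of the remaining synapses, and multiplicative synaptic scaling toward a fixed total input. All rates and the rate-based formulation are illustrative assumptions; the paper's actual rules operate on precise spike timings.

```python
import numpy as np

def plasticity_step(w, pre_active, post_active,
                    lr_hebb=0.05, lr_hetero=0.02, w_total=1.0):
    if post_active:
        w = w + lr_hebb * pre_active            # Hebbian: strengthen co-active inputs
        w = w - lr_hetero * (1.0 - pre_active)  # hetero-synaptic: weaken the rest
    w = np.clip(w, 0.0, None)
    return w * (w_total / w.sum())              # synaptic scaling (normalization)

w = np.full(5, 0.2)
w = plasticity_step(w, pre_active=np.array([1., 1., 0., 0., 0.]), post_active=True)
print(w)  # pattern inputs end up above the scaled baseline
```

Note how scaling keeps the summed weights constant, which is one simple way the excitatory drive can stay balanced while the selectivity for the repeated pattern grows.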


2021 ◽  
pp. 1-27
Author(s):  
Friedemann Zenke ◽  
Tim P. Vogels

Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
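
A minimal surrogate-gradient spike nonlinearity in PyTorch illustrates the method studied above, assuming the common fast-sigmoid surrogate (as in SuperSpike): the forward pass is the nondifferentiable Heaviside step, while the backward pass substitutes a smooth derivative whose scale `beta` is precisely the kind of design parameter the paper finds can substantially affect learning.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    beta = 10.0  # surrogate scale; the study shows this choice matters

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()            # hard spike on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surr = 1.0 / (SurrGradSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surr         # smooth derivative on the backward pass

v = torch.randn(8, requires_grad=True)
SurrGradSpike.apply(v).sum().backward()
print(v.grad)  # nonzero gradients despite the step-function forward pass
```

Swapping the expression in `backward` for a different surrogate shape, or changing `beta`, is exactly the kind of variation the paper evaluates systematically.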

