input spike
Recently Published Documents


TOTAL DOCUMENTS: 34 (five years: 8)

H-INDEX: 12 (five years: 1)

Mathematics ◽ 2021 ◽ Vol 9 (24) ◽ pp. 3237
Author(s): Alexander Sboev ◽ Danila Vlasov ◽ Roman Rybka ◽ Yury Davydov ◽ Alexey Serenko ◽ ...

The problem of training spiking neural networks (SNNs) is relevant because of the ultra-low power consumption these networks could achieve when implemented in neuromorphic hardware. The ongoing progress in the fabrication of memristors, a prospective basis for analogue synapses, motivates the study of SNN learning based on synaptic plasticity models obtained by fitting experimental measurements of memristor conductance change. The dynamics of memristor conductance are necessarily nonlinear, because conductance changes depend on the timing of spikes, which neurons emit in an all-or-none fashion. The ability to solve classification tasks was previously shown for spiking network models based on the bio-inspired local learning mechanism of spike-timing-dependent plasticity (STDP), as well as with plasticity that models the conductance change of nanocomposite (NC) memristors; in those studies, input data were encoded into the intensities of Poisson input spike sequences. This work considers a different approach to encoding input data into the spike sequences presented to the network: temporal encoding, in which an input vector is transformed into the relative timing of individual input spikes. Since temporal encoding uses fewer input spikes, the processing of each input vector by the network can be faster and more energy-efficient. The aim of the current work is to show the applicability of temporal encoding to training spiking networks with three synaptic plasticity models: STDP, the NC memristor approximation, and the PPX memristor approximation. We assess the accuracy of the proposed approach on several benchmark classification tasks: Fisher’s Iris, Wisconsin breast cancer, and the pole-balancing task (CartPole). The accuracies achieved by SNNs with memristor plasticity and with conventional STDP are comparable, and both are on par with classic machine-learning approaches.
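
To make the contrast concrete, here is a minimal Python sketch of the two encodings (the function names, the 20 ms window, and the assumption of features normalized to [0, 1] are illustrative, not taken from the paper): temporal encoding emits one precisely timed spike per input channel, while Poisson rate encoding emits a whole spike train per channel.

```python
import numpy as np

def temporal_encode(x, t_max=20.0):
    """Latency (temporal) encoding: one spike per input channel.
    Larger feature values fire earlier, so the input vector becomes
    the relative timing of individual spikes. Assumes x in [0, 1]."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return (1.0 - x) * t_max  # one spike time per channel, in ms

def poisson_encode(x, t_max=20.0, max_rate=200.0, rng=None):
    """Rate-encoding baseline: per-channel Poisson spike trains whose
    intensity scales with the feature value (many spikes per channel)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    trains = []
    for xi in x:
        n_spikes = rng.poisson(xi * max_rate * t_max / 1000.0)
        trains.append(np.sort(rng.uniform(0.0, t_max, size=n_spikes)))
    return trains

x = np.array([0.9, 0.2, 0.5])
print(temporal_encode(x))                    # three spike times, one per channel
print([len(t) for t in poisson_encode(x)])   # spike counts per channel
```

One spike per channel, regardless of the feature value, is what makes the temporal code cheaper per input vector.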


2021
Author(s): Mohammad Dehghani Habibabadi ◽ Klaus Richard Pawelzik

Spiking model neurons can be set up to respond selectively to specific spatio-temporal spike patterns by optimization of their input weights. It is unknown, however, whether existing synaptic plasticity mechanisms can achieve this temporal mode of neuronal coding and computation. Here it is shown that changes of synaptic efficacies which tend to balance excitatory and inhibitory synaptic inputs can make neurons sensitive to particular input spike patterns. Simulations demonstrate that a combination of Hebbian mechanisms, hetero-synaptic plasticity, and synaptic scaling is sufficient for self-organized sensitivity to spatio-temporal spike patterns that repeat in the input. In networks, the inclusion of hetero-synaptic plasticity leads to specialization and faithful representation of pattern sequences by a group of target neurons. Pattern detection is found to be robust against a range of distortions and noise. Furthermore, the resulting balance of excitatory and inhibitory inputs protects the memory of a specific pattern from being overwritten during ongoing learning when the pattern is not present. These results not only provide an explanation for experimental observations of balanced excitation and inhibition in cortex but also support the plausibility of precise temporal coding in the brain.
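
A rate-based toy sketch of how the three named mechanisms could interact (the update rule, the firing criterion, and all parameter values are assumptions for illustration; the paper's actual model is spiking):

```python
import numpy as np

def plasticity_step(w, pre, post, eta=0.01, w_total=None):
    """One update combining the three mechanisms named above: Hebbian
    potentiation of co-active synapses, hetero-synaptic depression of
    synapses that were silent while the neuron fired, and synaptic
    scaling back to a fixed total weight."""
    w = w + eta * post * pre                  # Hebbian potentiation
    w = w - 0.2 * eta * post * (pre == 0)     # hetero-synaptic depression
    w = np.clip(w, 0.0, None)
    if w_total is not None:
        w = w * (w_total / w.sum())           # homeostatic synaptic scaling
    return w

rng = np.random.default_rng(1)
n = 20
pattern = (rng.random(n) < 0.4).astype(float)    # the repeating input pattern
w = np.full(n, 0.5)
for _ in range(500):
    # half the trials present the pattern, half present fresh noise
    pre = pattern if rng.random() < 0.5 else (rng.random(n) < 0.4).astype(float)
    post = float(pre @ w > 0.35 * w.sum())       # crude firing criterion
    w = plasticity_step(w, pre, post, w_total=n * 0.5)

print(w[pattern == 1].mean(), w[pattern == 0].mean())  # pattern synapses dominate
```

Because the repeating pattern recruits the same synapses on every presentation while noise potentiates synapses diffusely, the scaling step redistributes weight toward the pattern synapses and the unit becomes selective.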


2021 ◽ Vol 17 (4) ◽ pp. 1-21
Author(s): He Wang ◽ Nicoleta Cucu Laurenciu ◽ Yande Jiang ◽ Sorin Cotofana

Design and implementation of artificial neuromorphic systems able to provide brain-like computation and/or bio-compatible interfacing are crucial for understanding the human brain’s complex functionality and unleashing the full potential of brain-inspired computation. To this end, the realization of energy-efficient, low-area, and bio-compatible artificial synapses, which sustain signal transmission between neurons, is of particular interest for any large-scale neuromorphic system. Graphene is a prime candidate material, with excellent electronic properties, atomic dimensions, and a low-energy envelope, and it has already proven effective for logic-gate implementations. Furthermore, distinct from the other materials used in current artificial synapse implementations, graphene is biocompatible, which opens perspectives for neural interfaces. In view of this, we investigate the feasibility of graphene-based synapses to emulate various synaptic plasticity behaviors and look into their potential area and energy consumption for large-scale implementations. In this article, we propose a generic graphene-based synapse structure that can emulate the fundamental synaptic functionalities, i.e., Spike-Timing-Dependent Plasticity (STDP) and Long-Term Plasticity. Additionally, the graphene synapse is programmable by means of a back-gate bias voltage and can exhibit both excitatory and inhibitory behavior. We investigate its capability to obtain different potentiation/depression time scales for STDP, with identical synaptic-weight-change amplitude, as the input spike duration varies. Our simulation results, for various synaptic plasticities, indicate that a maximum 30% synaptic weight change and potentiation/depression time-scale ranges from [-1.5 ms, 1.1 ms] to [-32.2 ms, 24.1 ms] are achievable. We further explore the effect of our proposal at the Spiking Neural Network (SNN) level by performing NEST-based simulations of a small SNN implemented with five leaky integrate-and-fire neurons connected via graphene-based synapses. Our experiments indicate that the number of SNN firing events depends strongly on the synaptic plasticity type and varies monotonically with the input spike frequency. Moreover, for graphene-based Hebbian STDP and a spike duration of 20 ms, we obtain SNN behavior closely resembling that of the same SNN with biological STDP. The proposed graphene-based synapse requires a small area (max. 30 nm²), operates at low voltage (200 mV), and can emulate various plasticity types, which makes it an outstanding candidate for implementing large-scale brain-inspired computation systems.
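
For orientation, here is a generic pair-based STDP window with the quoted maximum amplitude and the two quoted time-scale settings plugged in as parameters (the exponential form and all names are illustrative assumptions, not the paper's device equations):

```python
import numpy as np

def stdp_weight_change(dt, a_plus=0.3, a_minus=0.3, tau_plus=1.1, tau_minus=1.5):
    """Pair-based STDP window: relative weight change as a function of
    dt = t_post - t_pre (ms), capped at a 30% amplitude as reported.
    Positive dt gives potentiation, negative dt gives depression."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),    # potentiation branch
                    -a_minus * np.exp(dt / tau_minus))  # depression branch

# Short input spikes -> narrow window; long (20 ms) spikes -> wide window.
for duration_ms, (tau_p, tau_m) in {1: (1.1, 1.5), 20: (24.1, 32.2)}.items():
    dts = np.array([-2.0, -0.5, 0.5, 2.0]) * tau_m
    print(duration_ms,
          stdp_weight_change(dts, tau_plus=tau_p, tau_minus=tau_m).round(3))
```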


2020 ◽ Vol 32 (10) ◽ pp. 1863-1900
Author(s): Cunle Qian ◽ Xuyun Sun ◽ Yueming Wang ◽ Xiaoxiang Zheng ◽ Yiwen Wang ◽ ...

Modeling spike train transformation among brain regions helps in designing a cognitive neural prosthesis that restores lost cognitive functions. Existing methods analyze the nonlinear dynamic spike train transformation between two cortical areas with low computational efficiency, whereas a real-time neural prosthesis requires computational efficiency, performance stability, and good interpretability of the neural firing patterns that modulate target spike generation. We propose a binless kernel machine in the point-process framework to describe nonlinear dynamic spike train transformations. Our approach embeds the binless kernel to efficiently capture the feedforward dynamics of spike trains and maps the input spike timings into a reproducing kernel Hilbert space (RKHS). An inhomogeneous Bernoulli process is combined with a kernel logistic regression that operates on the binless kernel to generate an output spike train as a point process. Weights of the proposed model are estimated by maximizing the log likelihood of output spike trains in the RKHS, which admits a globally optimal solution. To reduce computational complexity, we design a streaming-based clustering algorithm to extract typical and important spike train features. The cluster centers and their weights enable visualization of the important input spike train patterns that excite or inhibit output neuron firing. We test the proposed model on both synthetic data and real spike train data recorded from the dorsal premotor cortex and the primary motor cortex of a monkey performing a center-out task. Performance is evaluated by discrete-time rescaling Kolmogorov-Smirnov tests. Our model outperforms existing methods with higher stability regardless of weight initialization and analyzes neural patterns from spike timing more efficiently, requiring less historical input (50%). Meanwhile, the typical spike train patterns selected according to the weights are validated to encode output spikes driven by the spike train of a single input neuron and by the interaction of two input neurons.
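
A minimal sketch of the modeling idea (the kernel choice, the cluster centers, and all numbers are illustrative assumptions, not the paper's fitted model): a binless exponential kernel compares raw spike timings without binning, and a logistic link applied to kernel evaluations against a few representative patterns yields the per-slot Bernoulli firing probability.

```python
import numpy as np

def spike_train_kernel(s, t, tau=10.0):
    """Binless exponential kernel between two spike-timing lists (ms):
    the sum over spike pairs of exp(-|s_i - t_j| / tau). No binning of
    the timings is involved."""
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = np.abs(np.subtract.outer(np.asarray(s, float), np.asarray(t, float)))
    return float(np.exp(-d / tau).sum())

def firing_probability(history, centers, weights, bias, tau=10.0):
    """Per-slot Bernoulli firing probability via kernel logistic
    regression on the recent input spike timings; `centers` stand in
    for the cluster centers extracted by the streaming algorithm."""
    k = np.array([spike_train_kernel(history, c, tau) for c in centers])
    return 1.0 / (1.0 + np.exp(-(weights @ k + bias)))

centers = [[2.0, 8.0], [15.0]]                 # two "typical" input patterns
weights, bias = np.array([0.8, -1.2]), -0.5    # one excites, one inhibits
print(firing_probability([1.5, 7.5], centers, weights, bias))  # higher
print(firing_probability([14.0], centers, weights, bias))      # lower
```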


2020
Author(s): Casali Stefano ◽ Tognolina Marialuisa ◽ D’Angelo Egidio

Long-term synaptic plasticity, in the form of either potentiation or depression (LTP or LTD), is thought to provide the substrate for adaptive computations in brain circuits. Although the molecular and cellular processes of plasticity have been clarified to a considerable extent at individual synapses, very little is known about the spatiotemporal organization of LTP and LTD in local microcircuits. Here, we have combined multi-spot two-photon laser microscopy and realistic modeling to map the distribution of plasticity in multi-neuronal units of the cerebellar granular layer activated by stimulating an afferent mossy fiber bundle. The units, composed of ~300 active neurons connected to ~50 glomeruli, showed potentiation concentrated in the core and depression in the periphery. This plasticity was effectively accounted for by an NMDA receptor- and calcium-dependent induction rule and was regulated by local microcircuit mechanisms in the inhibitory Golgi cell loops. The organization of LTP and LTD created effective spatial filters tuning the time delay and gain of spike retransmission at the cerebellum input stage, and it provided a plausible basis for the spatiotemporal recoding of input spike patterns anticipated by the motor learning theory.
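
A minimal sketch of a calcium-threshold induction rule of the kind invoked here, with calcium falling off from the core of the activated unit toward its periphery (thresholds, rates, and the calcium profile are illustrative assumptions, not the paper's fitted rule):

```python
import numpy as np

def calcium_induction_rule(ca, theta_ltd=0.3, theta_ltp=0.6,
                           ltp_rate=0.01, ltd_rate=0.005):
    """Two-threshold calcium rule: high calcium gives LTP, intermediate
    calcium gives LTD, low calcium leaves the synapse unchanged."""
    ca = np.asarray(ca, dtype=float)
    dw = np.zeros_like(ca)
    dw[ca >= theta_ltp] = ltp_rate                        # core: LTP
    dw[(ca >= theta_ltd) & (ca < theta_ltp)] = -ltd_rate  # periphery: LTD
    return dw

# NMDA-driven calcium decays with distance from the center of the unit,
# so potentiation concentrates in the core and depression in the periphery.
distance = np.linspace(0.0, 1.0, 11)
ca = np.exp(-3.0 * distance)
print(np.column_stack([distance, ca, calcium_induction_rule(ca)]).round(3))
```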


2019 ◽ Vol 114 (1) ◽ pp. 43-61
Author(s): Pau Vilimelis Aceituno ◽ Masud Ehsani ◽ Jürgen Jost

Latency reduction in postsynaptic spikes is a well-known effect of spike-timing-dependent plasticity (STDP). We extend this notion to long postsynaptic spike trains on single neurons, showing that, for a fixed input spike train, STDP reduces the number of postsynaptic spikes and concentrates the remaining ones. We then study the consequences of this phenomenon in terms of coding, finding that the mechanism improves the neural code by increasing the signal-to-noise ratio and lowering the metabolic cost of frequent stimuli. Finally, we illustrate that the reduction in postsynaptic latencies can lead to the emergence of predictions.
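
The effect is easy to reproduce in a toy model: repeatedly presenting a fixed input spike train to a leaky integrate-and-fire neuron under pair-based STDP shifts weight toward early inputs, so postsynaptic spikes tend to become fewer and earlier (every parameter below is an illustrative assumption, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, T = 100, 100                            # inputs, trial length (ms)
input_times = rng.integers(0, T, size=n_in)   # the fixed input spike train
w = np.full(n_in, 0.12)

def run_trial(w, v_th=1.0, tau_m=10.0):
    """Leaky integrate-and-fire pass over the fixed input train;
    returns the postsynaptic spike times."""
    v, spikes = 0.0, []
    for t in range(T):
        v = v * np.exp(-1.0 / tau_m) + w[input_times == t].sum()
        if v >= v_th:
            spikes.append(t)
            v = 0.0
    return spikes

for epoch in range(60):
    post = run_trial(w)
    if epoch % 20 == 0:
        print(epoch, "spikes:", len(post), "first:", post[0] if post else None)
    for t_post in post:                       # pair-based STDP with slight
        dt = t_post - input_times             # depression dominance
        w = w + np.where(dt >= 0, 0.010 * np.exp(-dt / 20.0),
                         -0.012 * np.exp(dt / 20.0))
    w = np.clip(w, 0.0, 0.3)
```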


2019 ◽ Vol 31 (12) ◽ pp. 2523-2561
Author(s): Lili Su ◽ Chia-Jung Chang ◽ Nancy Lynch

Winner-take-all (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based k-WTA model wherein n randomly generated input spike trains compete with each other based on their underlying firing rates and k winners are supposed to be selected. We slot time evenly, with each slot of length 1 ms, and model the n input spike trains as n independent Bernoulli processes. We analytically characterize the minimum waiting time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the waiting time: to guarantee a (minimax) decision error of at most δ (where 0 < δ < 1/2), the waiting time of any WTA circuit is lower-bounded by a quantity expressed in terms of a finite set of rates R and a difficulty parameter D_R of the WTA task with respect to R for independent input spike trains; D_R itself is independent of n, k, and δ. We then design a simple WTA circuit whose waiting time meets a matching upper bound, provided that the local memory of each output neuron is sufficiently long. It turns out that for any fixed rate set R, this decision time is order-optimal (i.e., it matches the lower bound up to a multiplicative constant factor) in terms of its scaling in n, k, and δ.
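
The Bernoulli input model is easy to simulate. The sketch below estimates by Monte Carlo how the decision error of the most naive k-WTA readout (count spikes for t_wait slots and pick the top k) falls as the waiting time grows; the readout, rates, and all names are illustrative assumptions, not the paper's circuit.

```python
import numpy as np

def wta_error(rates, k, t_wait, trials=2000, seed=0):
    """Monte-Carlo estimate of the decision error of a naive k-WTA
    readout: observe each input for t_wait 1-ms Bernoulli slots and
    select the k inputs with the most spikes (random tie-breaking)."""
    rng = np.random.default_rng(seed)
    rates = np.asarray(rates, dtype=float)
    true_winners = set(np.argsort(rates)[-k:])
    errors = 0
    for _ in range(trials):
        counts = rng.binomial(t_wait, rates)
        order = rng.permutation(len(rates))   # randomize ties
        picked = order[np.argsort(counts[order], kind="stable")][-k:]
        errors += set(picked) != true_winners
    return errors / trials

rates = [0.05, 0.05, 0.05, 0.10]              # n = 4 inputs, k = 1
for t in (50, 200, 800):                      # longer waiting, lower error
    print(t, wta_error(rates, k=1, t_wait=t))
```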


2017 ◽ Vol 29 (8) ◽ pp. 2021-2029
Author(s): Josue Orellana ◽ Jordan Rodu ◽ Robert E. Kass

Much attention has been paid to the question of how Bayesian integration of information could be implemented by a simple neural mechanism. We show that population vectors based on point-process inputs combine evidence in a form that closely resembles Bayesian inference, with each input spike carrying information about the tuning of the input neuron. We also show that population vectors can combine information relatively accurately in the presence of noisy synaptic encoding of tuning curves.
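
A minimal numeric sketch of such a population-vector readout, with each input spike voting with its neuron's preferred direction and inputs drawn as Poisson point processes (the cosine tuning model and all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
preferred = rng.uniform(0.0, 2 * np.pi, size=n)  # tuning of each input neuron

def population_vector(spike_counts, preferred):
    """Each input spike votes with its neuron's preferred direction;
    the angle of the vector sum is the population-vector estimate."""
    x = spike_counts @ np.cos(preferred)
    y = spike_counts @ np.sin(preferred)
    return np.arctan2(y, x)

theta = 1.0                                      # true stimulus direction
rates = np.exp(2.0 * np.cos(preferred - theta))  # von Mises-like tuning curves
counts = rng.poisson(rates * 0.5)                # point-process (Poisson) inputs
print(population_vector(counts, preferred))      # estimate close to theta = 1.0
```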

