output spike
Recently Published Documents

TOTAL DOCUMENTS: 28 (five years: 7)

H-INDEX: 12 (five years: 1)

2021 ◽  
Vol 17 (11) ◽  
pp. e1009558
Author(s):  
Eilam Goldenberg Leleo ◽  
Idan Segev

The output of neocortical layer 5 pyramidal cells (L5PCs) is expressed by a train of single spikes with intermittent bursts of multiple spikes at high frequencies. The bursts are the result of nonlinear dendritic properties, including Na+, Ca2+, and NMDA spikes, that interact with the ~10,000 synapses impinging on the neuron’s dendrites. Output spike bursts are thought to implement key dendritic computations, such as coincidence detection of bottom-up inputs (arriving mostly at the basal tree) and top-down inputs (arriving mostly at the apical tree). In this study we used a detailed nonlinear model of an L5PC receiving excitatory and inhibitory synaptic inputs to explore the conditions for generating bursts and for modulating their properties. We established the excitatory input conditions on the basal versus the apical tree that favor bursting and show that there are two distinct types of bursts: bursts of 3 or more spikes firing at <200 Hz, generated by stronger excitatory input to the basal than to the apical tree, and bursts of ~2 spikes at ~250 Hz, generated by prominent apical tuft excitation. Localized and well-timed dendritic inhibition on the apical tree differentially modulates Na+, Ca2+, and NMDA spikes and, consequently, finely controls the burst output. Finally, we explored the implications of the different burst classes and their respective dendritic inhibition for regulating synaptic plasticity.
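The two burst classes described in this abstract lend themselves to a simple post-hoc classification of simulated output spike trains. A minimal Python sketch follows; the 20 ms burst-grouping gap is an assumption for illustration, not a parameter taken from the paper.

```python
"""Sketch: classifying output-spike bursts by size and intra-burst rate.

Thresholds follow the two burst classes described in the abstract
(3+ spikes below 200 Hz vs. ~2 spikes near 250 Hz); the 20 ms
burst-grouping gap is an assumption, not from the paper.
"""

def group_bursts(spike_times_ms, max_gap_ms=20.0):
    """Group spikes separated by less than max_gap_ms into bursts."""
    bursts, current = [], [spike_times_ms[0]]
    for t in spike_times_ms[1:]:
        if t - current[-1] < max_gap_ms:
            current.append(t)
        else:
            bursts.append(current)
            current = [t]
    bursts.append(current)
    return bursts

def classify_burst(burst):
    """Label a burst by spike count and mean intra-burst frequency (Hz)."""
    if len(burst) < 2:
        return "single spike"
    mean_isi_ms = (burst[-1] - burst[0]) / (len(burst) - 1)
    freq_hz = 1000.0 / mean_isi_ms
    if len(burst) >= 3 and freq_hz < 200.0:
        return "basal-driven burst (3+ spikes, <200 Hz)"
    if len(burst) == 2 and freq_hz >= 200.0:
        return "apical-tuft burst (~2 spikes, ~250 Hz)"
    return "unclassified"

spikes = [10.0, 18.0, 26.0, 120.0, 124.0, 300.0]  # spike times in ms
for b in group_bursts(spikes):
    print(b, "->", classify_burst(b))
```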


2021 ◽  
Vol 15 ◽  
Author(s):  
Iulia-Maria Comşa ◽  
Luca Versari ◽  
Thomas Fischbacher ◽  
Jyrki Alakuijala

Spiking neural networks with temporal coding schemes process information based on the relative timing of neuronal spikes. In supervised learning tasks, temporal coding allows learning through backpropagation with exact derivatives, and achieves accuracies on par with conventional artificial neural networks. Here we introduce spiking autoencoders with temporal coding and pulses, trained using backpropagation to store and reconstruct images with high fidelity from compact representations. We show that spiking autoencoders with a single layer are able to effectively represent and reconstruct images from the neuromorphically-encoded MNIST and FMNIST datasets. We explore the effect of different spike time target latencies, data noise levels and embedding sizes, as well as the classification performance from the embeddings. The spiking autoencoders achieve results similar to or better than conventional non-spiking autoencoders. We find that inhibition is essential in the functioning of the spiking autoencoders, particularly when the input needs to be memorised for a longer time before the expected output spike times. To reconstruct images with a high target latency, the network learns to accumulate negative evidence and to use the pulses as excitatory triggers for producing the output spikes at the required times. Our results highlight the potential of spiking autoencoders as building blocks for more complex biologically-inspired architectures. We also provide open-source code for the model.
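As a concrete illustration of temporal coding, here is a minimal sketch of intensity-to-latency encoding of an image, in the spirit of the neuromorphically-encoded MNIST inputs described above. The linear intensity-to-latency map is an assumed scheme, not necessarily the encoding used in the paper.

```python
import numpy as np

def latency_encode(image, t_max=1.0):
    """Latency coding: encode pixel intensities in [0, 1] as spike times.
    Brighter pixels spike earlier; fully dark pixels spike at t_max.
    The linear intensity-to-latency map is an assumption."""
    image = np.clip(np.asarray(image, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - image)

img = np.array([[1.0, 0.5],
                [0.25, 0.0]])
print(latency_encode(img))
# brightest pixel (1.0) -> earliest spike time 0.0
# darkest pixel (0.0) -> latest spike time t_max
```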


2020 ◽  
Vol 32 (10) ◽  
pp. 1863-1900
Author(s):  
Cunle Qian ◽  
Xuyun Sun ◽  
Yueming Wang ◽  
Xiaoxiang Zheng ◽  
Yiwen Wang ◽  
...  

Modeling spike train transformation among brain regions helps in designing a cognitive neural prosthesis that restores lost cognitive functions. Existing methods analyze the nonlinear dynamic spike train transformation between two cortical areas with low computational efficiency. The application of a real-time neural prosthesis requires computational efficiency, performance stability, and better interpretation of the neural firing patterns that modulate target spike generation. We propose the binless kernel machine in the point-process framework to describe nonlinear dynamic spike train transformations. Our approach embeds the binless kernel to efficiently capture the feedforward dynamics of spike trains and maps the input spike timings into a reproducing kernel Hilbert space (RKHS). An inhomogeneous Bernoulli process is designed to combine with a kernel logistic regression that operates on the binless kernel to generate an output spike train as a point process. Weights of the proposed model are estimated by maximizing the log likelihood of output spike trains in RKHS, which allows a globally optimal solution. To reduce computational complexity, we design a streaming-based clustering algorithm to extract typical and important spike train features. The cluster centers and their weights enable the visualization of the important input spike train patterns that drive or inhibit output neuron firing. We test the proposed model on both synthetic data and real spike train data recorded from the dorsal premotor cortex and the primary motor cortex of a monkey performing a center-out task. Performance is evaluated by discrete-time rescaling Kolmogorov-Smirnov tests. Our model outperforms existing methods with higher stability regardless of weight initialization and demonstrates higher efficiency in analyzing neural patterns from spike timing with less historical input (50%). Meanwhile, the typical spike train patterns selected according to their weights are validated to encode the output spikes, both from the spike train of a single input neuron and from the interaction of two input neurons.
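The combination of a binless spike-timing kernel with kernel logistic regression can be sketched as follows. The Gaussian pairwise kernel here is an illustrative stand-in, not the paper's exact kernel definition, and the cluster centers and weights are hypothetical values.

```python
import math

def spike_timing_kernel(s1, s2, tau=10.0):
    """Binless similarity between two spike-time lists (ms): sum of
    Gaussian interactions over all spike pairs. An illustrative
    stand-in for the paper's binless kernel, not its definition."""
    return sum(math.exp(-((t1 - t2) ** 2) / (2 * tau ** 2))
               for t1 in s1 for t2 in s2)

def firing_probability(input_train, centers, weights, bias=0.0, tau=10.0):
    """Kernel logistic regression: Bernoulli spike probability for the
    current step from kernel evaluations against cluster-center spike
    patterns (the 'typical' input patterns from the abstract)."""
    z = bias + sum(w * spike_timing_kernel(input_train, c, tau)
                   for w, c in zip(weights, centers))
    return 1.0 / (1.0 + math.exp(-z))

centers = [[5.0, 15.0], [40.0]]   # hypothetical typical input patterns
weights = [0.8, -1.2]             # one excitatory, one inhibitory pattern
p = firing_probability([4.0, 16.0], centers, weights)
print(f"P(spike) = {p:.3f}")
```

An input resembling the positively weighted pattern drives the output probability up; one resembling the negatively weighted pattern suppresses it, mirroring the "drive or inhibit" interpretation of the cluster weights.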


2020 ◽  
Vol 29 (16) ◽  
pp. 2020009
Author(s):  
P. Manikandan ◽  
B. Bindu

A cap-less voltage spike detection and correction circuit for a flipped voltage follower (FVF)-based low dropout regulator (LDO) is proposed in this paper. The transients in the output voltage are controlled by the pull-up currents [Formula: see text] and [Formula: see text] and pull-down currents [Formula: see text] and [Formula: see text]. These currents are dynamic current sources that are activated only during the transient period, and the noise contributed by these current sources at steady state is zero. These currents increase/decrease based on the intermediate FVF node voltage [Formula: see text]. The proposed circuit detects the output voltage via [Formula: see text] and controls the power MOSFET gate and output capacitances by changing the pull-up and pull-down currents whenever the load changes. The proposed circuit consumes a small additional bias current in the steady state and achieves a shorter settling time and a lower output spike voltage. The LDO is simulated in a 180 nm technology, and the simulation results show a good load transient response with a 190 ns settling time and a 170 mV voltage spike over the 1 mA to 100 mA load current range.


2019 ◽  
Vol 29 (08) ◽  
pp. 1850059 ◽  
Author(s):  
Marie Bernert ◽  
Blaise Yvert

Bio-inspired computing using artificial spiking neural networks promises performances outperforming currently available computational approaches. Yet, the number of applications of such networks remains limited due to the absence of generic training procedures for complex pattern recognition, which require the design of dedicated architectures for each situation. We developed a spike-timing-dependent plasticity (STDP) spiking neural network (SSN) to address spike-sorting, a central pattern recognition problem in neuroscience. This network is designed to process an extracellular neural signal in an online and unsupervised fashion. The signal stream is continuously fed to the network and processed through several layers to output spike trains matching the truth after a short learning period requiring only few data. The network features an attention mechanism to handle the scarcity of action potential occurrences in the signal, and a threshold adaptation mechanism to handle patterns with different sizes. This method outperforms two existing spike-sorting algorithms at low signal-to-noise ratio (SNR) and can be adapted to process several channels simultaneously in the case of tetrode recordings. Such attention-based STDP network applied to spike-sorting opens perspectives to embed neuromorphic processing of neural data in future brain implants.
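The plasticity rule at the core of such a network is pair-based STDP. A minimal sketch of the weight update follows; the learning rates, time constant, and weight bounds are illustrative values, not the parameters of the paper's spike-sorting network.

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update. dt = t_post - t_pre in ms:
    pre-before-post (dt > 0) potentiates, post-before-pre depresses,
    both decaying exponentially with the pairing interval.
    Parameter values are illustrative, not from the paper."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(w_max, max(w_min, w))

w = 0.5
w = stdp_update(w, dt=5.0)    # causal pairing -> weight increases
w = stdp_update(w, dt=-5.0)   # anti-causal pairing -> weight decreases
print(w)
```

Because the update depends only on local pre/post spike timing, it can run online and unsupervised on a continuous signal stream, which is what makes it attractive for embedded spike-sorting.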


2017 ◽  
Vol 117 (4) ◽  
pp. 1749-1760 ◽  
Author(s):  
Javier Rodriguez-Falces ◽  
Francesco Negro ◽  
Dario Farina

We investigated whether correlation measures derived from pairs of motor unit (MU) spike trains are reliable indicators of the degree of common synaptic input to motor neurons. Several 50-s isometric contractions of the biceps brachii muscle were performed at different target forces ranging from 10 to 30% of the maximal voluntary contraction relying on force feedback. Forty-eight pairs of MUs were examined at various force levels. Motor unit synchrony was assessed by cross-correlation analysis using three indexes: the output correlation as the peak of the cross-histogram (ρ), and the number of synchronous spikes per second (CIS) and per trigger (E). Individual analysis of MU pairs revealed that ρ, CIS, and E were most often positively associated with discharge rate (87, 85, and 76% of the MU pairs, respectively) and negatively with interspike interval variability (69, 65, and 62% of the MU pairs, respectively). Moreover, the behavior of the synchronization indexes with discharge rate (and interspike interval variability) varied greatly among the MU pairs. These results were consistent with theoretical predictions, which showed that the output correlation between pairs of spike trains depends on the statistics of the input current and on motor neuron intrinsic properties that differ for different motor neuron pairs. In conclusion, the synchronization between MU firing trains is necessarily caused by the (functional) common input to motor neurons, but it is not possible to infer the degree of shared common input to a pair of motor neurons on the basis of correlation measures of their output spike trains. NEW & NOTEWORTHY The strength of correlation between output spike trains is only poorly associated with the degree of common input to the population of motor neurons: motor unit synchronization is necessarily caused by the (functional) common input, but the degree of shared input cannot be inferred from correlation measures of the output spike trains alone.
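The CIS and E indexes used above can be derived from coincidence counts between the two spike trains. A minimal sketch, assuming a ±1 ms coincidence window (an assumed convention, not necessarily the window used in the study):

```python
def sync_indexes(train_a, train_b, duration_s, window_ms=1.0):
    """Count near-coincident spikes between two motor-unit spike trains
    (times in ms) and derive CIS (synchronous spikes per second) and
    E (synchronous spikes per trigger, with train_a as the reference).
    The +/-1 ms coincidence window is an assumed convention."""
    sync = sum(1 for ta in train_a
               if any(abs(ta - tb) <= window_ms for tb in train_b))
    return {"CIS": sync / duration_s, "E": sync / len(train_a)}

a = [10.0, 100.0, 250.0, 400.0, 800.0]  # reference (trigger) MU, ms
b = [10.5, 99.2, 300.0, 401.0, 950.0]   # second MU, ms
print(sync_indexes(a, b, duration_s=1.0))
```

In practice these counts are read off the peak of the cross-histogram rather than raw coincidences, but the normalizations (per second for CIS, per trigger spike for E) are the same.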


2015 ◽  
Vol 27 (12) ◽  
pp. 2548-2586 ◽  
Author(s):  
Brian Gardner ◽  
Ioana Sporea ◽  
André Grüning

Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode for input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.
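Evaluating how well a trained network's output spike train matches a target pattern requires a distance between spike trains. The van Rossum distance is one standard choice and is sketched below; the paper's own error measure may differ.

```python
import math

def van_rossum_distance(train1, train2, tau=10.0, dt=0.1, t_max=100.0):
    """Van Rossum distance between two spike trains (times in ms):
    convolve each train with a causal exponential kernel and integrate
    the squared difference of the filtered traces. A standard way to
    score how closely an output spike train matches a target pattern."""
    n = int(t_max / dt)

    def filtered(train):
        trace = [0.0] * n
        for i in range(n):
            t = i * dt
            trace[i] = sum(math.exp(-(t - s) / tau) for s in train if s <= t)
        return trace

    f1, f2 = filtered(train1), filtered(train2)
    return math.sqrt(dt / tau * sum((x - y) ** 2 for x, y in zip(f1, f2)))

target = [20.0, 50.0, 80.0]
close = [21.0, 49.0, 81.0]   # each spike off by 1 ms
far = [5.0, 35.0, 95.0]      # each spike off by 15 ms
print(van_rossum_distance(close, target) < van_rossum_distance(far, target))  # True
```

Because the distance varies smoothly with spike-time shifts, measures of this kind are also what make gradient-based training of precise output spike timings tractable.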

