A Model for Fast Analog Computation Based on Unreliable Synapses

2000 ◽  
Vol 12 (7) ◽  
pp. 1679-1704 ◽  
Author(s):  
Wolfgang Maass ◽  
Thomas Natschläger

We investigate through theoretical analysis and computer simulations the consequences of unreliable synapses for fast analog computations in networks of spiking neurons, with analog variables encoded by the current firing activities of pools of spiking neurons. Our results suggest a possible functional role for the well-established unreliability of synaptic transmission on the network level. We also investigate computations on time series and Hebbian learning in this context of space-rate coding in networks of spiking neurons with unreliable synapses.
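The space-rate code described here can be illustrated in a few lines (an illustrative toy, not the authors' network model; the pool size, fan-in, and release probability are invented parameters): an analog value is encoded as the fraction of neurons firing in a pool, and each synapse transmits a spike only with some probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def pool_rate_step(x, pool_size=1000, fan_in=50, release_prob=0.3):
    """One space-rate step: an analog value x in [0, 1] is encoded as the
    fraction of neurons in a presynaptic pool that fire in a short time bin;
    each synapse then transmits its spike only with probability release_prob."""
    pre_spikes = rng.random(pool_size) < x                 # pool firing activity
    # each postsynaptic neuron samples fan_in random presynaptic partners
    partners = rng.integers(0, pool_size, (pool_size, fan_in))
    released = rng.random((pool_size, fan_in)) < release_prob
    drive = (pre_spikes[partners] & released).sum(axis=1)  # stochastic input
    # decode: average transmitted activity, rescaled by the expected gain
    return drive.mean() / (fan_in * release_prob)

estimates = [pool_rate_step(0.6) for _ in range(20)]
print(np.mean(estimates))   # clusters tightly around the encoded value 0.6
```

Although each individual synapse fails most of the time, the pool average recovers the encoded value, which is the network-level reading of synaptic unreliability the abstract points to.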

1993 ◽  
Vol 5 (6) ◽  
pp. 869-884 ◽  
Author(s):  
David S. Touretzky ◽  
A. David Redish ◽  
Hank S. Wan

O'Keefe (1991) has proposed that spatial information in rats might be represented as phasors: phase and amplitude of a sine wave encoding angle and distance to a landmark. We describe computer simulations showing that operations on phasors can be efficiently realized by arrays of spiking neurons that recode the temporal dimension of the sine wave spatially. Some cells in motor and parietal cortex exhibit response properties compatible with this proposal.
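The phasor operations themselves reduce to complex arithmetic, as in this sketch (plain phasor algebra, not the paper's spiking-array implementation; the update rules are the standard egocentric ones):

```python
import cmath

# O'Keefe-style phasor: amplitude encodes distance to a landmark,
# phase encodes its egocentric bearing.
def make_phasor(distance, angle):
    return distance * cmath.exp(1j * angle)

def after_rotation(phasor, phi):
    # Turning the animal by phi shifts every landmark phase by -phi.
    return phasor * cmath.exp(-1j * phi)

def after_translation(phasor, displacement):
    # Moving the animal subtracts the displacement phasor.
    return phasor - displacement

p0 = make_phasor(2.0, cmath.pi / 2)              # landmark 2 m away, 90 deg left
p1 = after_rotation(p0, cmath.pi / 2)            # turn toward it: bearing ~0
p2 = after_translation(p1, make_phasor(1.0, 0))  # step 1 m forward
print(abs(p2))                                   # remaining distance: ~1.0
```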


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 500 ◽  
Author(s):  
Sergey A. Lobov ◽  
Andrey V. Chernyshov ◽  
Nadia P. Krilova ◽  
Maxim O. Shamshin ◽  
Victor B. Kazantsev

One of the modern trends in the design of human–machine interfaces (HMI) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can encode the input signal simultaneously in the spiking frequency rate and in the latency of spike generation. For such mixed temporal-rate coding, the SNN should implement learning that works properly for both types of coding. On this basis, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that Hebbian learning through pair-based and triplet-based spike-timing-dependent plasticity (STDP) rules is feasible for temporal coding, but not for rate coding. Synaptic competition that depresses poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that coherent use of triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographic (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition ensures the "winner takes all" principle among classifier neurons. The SNN also provides a gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulating the target classifier neuron synchronously with the network input. In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, close to that of a multilayer perceptron trained by error backpropagation.
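As an illustrative aside, the pair-based STDP rule and the activity-dependent forgetting function can be sketched as follows (a minimal sketch; the parameters `a_plus`, `a_minus`, `tau`, and `rate` are invented, and the triplet rule and full network are omitted):

```python
import numpy as np

def pair_stdp(pre_times, post_times, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: every pre/post spike pair changes the weight by an
    amount that decays exponentially with the timing difference (ms)."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)   # pre before post: LTP
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)   # post before pre: LTD
    return dw

def forget(w, activity, rate=0.001):
    """Activity-dependent 'forgetting': depress a weight in proportion to
    overall neuron activity, inducing competition among synapses."""
    return w - rate * activity * w

print(pair_stdp([10.0], [15.0]) > 0)   # causal pattern strengthens: True
print(pair_stdp([15.0], [10.0]) < 0)   # anti-causal pattern weakens: True
```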


2020 ◽  
Vol 11 (4) ◽  
pp. 590-600 ◽  
Author(s):  
Satoshi Moriya ◽  
Hideaki Yamamoto ◽  
Ayumi Hirano-Iwata ◽  
Shigeru Kubota ◽  
Shigeo Sato

2018 ◽  
Vol 84 (2) ◽  
pp. 65-73 ◽  
Author(s):  
Xuehong Chen ◽  
Meng Liu ◽  
Xiaolin Zhu ◽  
Jin Chen ◽  
Yanfei Zhong ◽  
...  

2017 ◽  
Vol 13 (2) ◽  
Author(s):  
Michael Malcolm

Only about a quarter of child abuse reports are ultimately substantiated, which has caused some concern among policymakers and the general public. But previous literature suggests that unsubstantiated and substantiated reports may not be much different from each other in terms of child outcomes. We present a Bayesian theoretical analysis of the data-generating process underlying maltreatment substantiation, and then take a new empirical approach by examining the statistical time-series relationship between substantiated and unsubstantiated reports. We show that the two series are cointegrated. This suggests that unsubstantiated reports are not mostly malicious or unfounded, but that they emanate from the same signals as verifiable, substantiated abuse.
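The cointegration argument can be illustrated on simulated data (a hedged sketch, not the article's report series: both series are generated from one latent random-walk signal, and the first Engle–Granger step, an OLS regression, then leaves a stationary residual):

```python
import numpy as np

rng = np.random.default_rng(1)

# Both report series share one latent trending 'signal' (a random walk),
# consistent with the cointegration finding: each series is the common
# signal plus stationary noise.
n = 5000
signal = np.cumsum(rng.normal(size=n))               # latent abuse signal
substantiated   = 0.25 * signal + rng.normal(size=n)
unsubstantiated = 0.75 * signal + rng.normal(size=n)

# Engle-Granger first step: regress one series on the other by least squares.
beta = np.polyfit(substantiated, unsubstantiated, 1)
residual = unsubstantiated - np.polyval(beta, substantiated)

# If the series are cointegrated, the residual is stationary: its variance
# stays tiny relative to that of the trending series themselves.
print(residual.var() / unsubstantiated.var())        # far below 1
```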


2002 ◽  
Vol 124 (3) ◽  
pp. 667-675 ◽  
Author(s):  
Ning Fang

Two patterns of chip curl, namely up-curl and side-curl, have been widely recognized in machining operations. This paper presents a third pattern of chip curl, called lateral-curl. The rotation axis of chip lateral-curl is perpendicular to the rotation axes of up- and side-curl. The essential differences between the chip lateral-curl concept and the "chip-twisting" concept presented in other related studies are illustrated. Based on an analytical vector analysis, a new kinematic characterization is presented for the natural (or born) lateral-curl of the chip that is associated with flat-faced tool machining. It is demonstrated that chip forms (or shapes) are determined by four governing variables: the chip up-, lateral-, and side-curl radii and the chip side-flow angle. A method to measure the chip lateral-curl radius indirectly is presented. The effect of chip lateral-curl on chip forms is investigated through cutting tests, theoretical analysis, and computer simulations.
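As a loose numerical illustration only (not the paper's kinematic characterization, which rests on a full vector analysis and also involves the side-flow angle), the three curl components can be treated as curvatures about mutually perpendicular axes and combined into a resultant curl radius:

```python
import math

def chip_curvature(r_up, r_lateral, r_side):
    """Illustrative sketch: treat up-, lateral-, and side-curl as curvatures
    (1/r) about three mutually perpendicular axes and combine them into a
    single resultant curvature vector."""
    k = (1.0 / r_up, 1.0 / r_lateral, 1.0 / r_side)
    k_net = math.sqrt(sum(c * c for c in k))
    return k, 1.0 / k_net            # components and net curl radius

# With negligible lateral- and side-curl, the chip reduces to pure up-curl.
_, r = chip_curvature(r_up=5.0, r_lateral=1e9, r_side=1e9)
print(round(r, 3))                   # 5.0
```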


1989 ◽  
Vol 43 (5) ◽  
pp. 855-860 ◽  
Author(s):  
Jun Uozumi ◽  
Toshimitsu Asakura

Estimation errors accompanying component spectra calculated by means of the concentration-spectrum correlation method are investigated through theoretical analysis and computer simulations. The discussion concentrates on a modified version of the method, which operates under the constraint that the sum of all component concentrations in a sample is unity. As for the basic method, which was treated in an earlier paper [Appl. Spectrosc. 43, 74 (1989)], the estimation error consists of a superposition of the other component spectra, each multiplied by a weighting factor. In this case, however, the weighting factor is a function of five sample statistics: the averages and standard deviations of the concentrations of both the objective and the interfering components, and the correlation coefficient of these two components. It is shown again that the nonparametric statistical technique called the bootstrap is useful as a tool for discriminating true peaks from false ones in the estimated spectra.
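A toy version of the correlation estimate can be reproduced numerically (an illustrative sketch with synthetic Gaussian-peak spectra, not the paper's data; in this two-component closed system the interfering spectrum enters the estimate with a weighting factor of -1, since the two concentrations are perfectly anticorrelated):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-component system whose concentrations sum to unity.
wl = np.linspace(0.0, 1.0, 200)                       # wavelength axis
s1 = np.exp(-((wl - 0.3) / 0.05) ** 2)                # objective component
s2 = np.exp(-((wl - 0.7) / 0.05) ** 2)                # interfering component

c1 = rng.uniform(0.2, 0.8, 500)                       # sample concentrations
spectra = np.outer(c1, s1) + np.outer(1.0 - c1, s2)   # closed system: c2 = 1 - c1

# Correlation-method estimate: covariance between the objective concentration
# and the intensity at each wavelength, rescaled by the concentration variance.
est = ((c1 - c1.mean())[:, None] * (spectra - spectra.mean(0))).mean(0) / c1.var()

# The estimate recovers s1 superposed with s2 weighted by -1, so the true
# peak survives while the interference appears as a negative peak.
print(int(np.argmax(est)) == int(np.argmax(s1)))      # True
```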


2005 ◽  
Vol 17 (11) ◽  
pp. 2337-2382 ◽  
Author(s):  
Robert Legenstein ◽  
Christian Naeger ◽  
Wolfgang Maass

Spiking neurons are very flexible computational modules: with different values of their adjustable synaptic parameters, they can implement an enormous variety of different transformations F from input spike trains to output spike trains. In this letter we examine to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm in which, during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that, in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogous to the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic neuron models, dynamic synapses, and more general input distributions. 
In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.
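The teacher-forcing setup can be sketched with a trace-based pair STDP rule (an illustrative toy, not the letter's neuron or synapse models; the threshold, learning rates, and time constant are invented): the postsynaptic spike train is clamped to a teacher signal generated by target weights, and STDP is left to rediscover which inputs matter.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_bins, tau = 20, 20000, 10.0
a_plus, a_minus = 0.002, 0.001
decay = np.exp(-1.0 / tau)

w_target = np.zeros(n_in)
w_target[: n_in // 2] = 1.0                     # inputs the teacher listens to
rates = rng.uniform(0.01, 0.05, n_in)           # Poisson rate per time bin

pre = rng.random((n_bins, n_in)) < rates        # input spike trains
post = (pre @ w_target) >= 2                    # clamped teacher spike train

w = np.full(n_in, 0.5)
trace_pre, trace_post = np.zeros(n_in), 0.0
for t in range(n_bins):
    trace_pre *= decay
    trace_post *= decay
    trace_pre += pre[t]
    w -= a_minus * pre[t] * trace_post          # LTD: post earlier, pre now
    if post[t]:
        w += a_plus * trace_pre                 # LTP: pre earlier/now, post now
        trace_post += 1.0
    np.clip(w, 0.0, 1.0, out=w)

# Synapses that drive the teacher signal end up stronger on average.
print(w[w_target == 1].mean() > w[w_target == 0].mean())   # True
```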

