Spike-timing-dependent synaptic modification induced by natural spike trains

Nature ◽  
2002 ◽  
Vol 416 (6879) ◽  
pp. 433-438 ◽  
Author(s):  
Robert C. Froemke ◽  
Yang Dan


2003 ◽  
Vol 15 (10) ◽  
pp. 2399-2418 ◽  
Author(s):  
Zhao Songnian ◽  
Xiong Xiaoyun ◽  
Yao Guozheng ◽  
Fu Zhi

Based on the synchronized responses of neuronal populations in the visual cortex to external stimuli, we propose a computational model consisting primarily of a neuronal phase-locked loop (NPLL) and a multiscale operator. The former reveals the function of synchronous oscillations in the visual cortex: regardless of whether the spike trains carry an average firing-rate code, a spike-timing code, or a rate-time code, the NPLL can decode the original visual information from spike trains modulated by the external stimulus, because the voltage-controlled oscillator (VCO) included in the NPLL can precisely track the spike trains and their instantaneous variations; that is, the VCO makes a copy of the external stimulus pattern. The latter describes the multiscale properties of visual information processing, not merely edge and contour detection. By combining the NPLL with the multiscale operator and maximum likelihood estimation, we prove that the model, acting as a neural decoder, implements an optimal algorithm for decoding visual information from neuronal spike trains at the system level. The model is also supported by a growing body of neurobiological experiments on stimulus-specific neuronal oscillations and synchronized responses of neuronal populations in the visual cortex. In addition, we discuss how visual acuity and the multiresolution property of vision can be described by the wavelet transform. The results indicate that the model provides a deeper understanding of the role of synchronized responses in decoding visual information.
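To make the tracking idea concrete, here is a minimal sketch of a phase-locked-loop-style rate tracker: a "VCO" estimate is steered by the low-pass-filtered mismatch between the incoming population spike train and the current estimate. The loop structure, gains, and population size are illustrative assumptions, not the authors' NPLL implementation.

```python
import numpy as np

# Illustrative sketch only (all parameters are assumptions, not the authors' NPLL):
# a VCO-like rate estimate is steered by the low-pass-filtered error between the
# observed population spike counts and the estimate, so it ends up "copying" the
# stimulus pattern that modulates the spike trains.
rng = np.random.default_rng(0)
dt = 1e-3                                             # time step (s)
t = np.arange(0.0, 4.0, dt)
rate_in = 40 + 20 * np.sin(2 * np.pi * 0.5 * t)       # stimulus-modulated rate (Hz)
n_cells = 50                                          # size of the responding population
counts = rng.binomial(n_cells, np.clip(rate_in * dt, 0.0, 1.0))  # spikes per time step

tau_lp = 0.1        # loop-filter time constant (s)
gain = 10.0         # "VCO" gain: how strongly the filtered error steers the estimate
err_lp = 0.0
est = 40.0          # running rate estimate (Hz)
est_trace = np.empty_like(t)

for i in range(t.size):
    inst_rate = counts[i] / (n_cells * dt)            # instantaneous population rate (Hz)
    err_lp += dt / tau_lp * ((inst_rate - est) - err_lp)   # detector + loop filter
    est += dt * gain * err_lp                         # "VCO": integrate the control signal
    est_trace[i] = est

# est_trace approximately reproduces rate_in, i.e., the loop recovers ("copies")
# the stimulus pattern from the spike trains.
```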


2009 ◽  
Vol 21 (5) ◽  
pp. 1259-1276 ◽  
Author(s):  
Timothée Masquelier ◽  
Rudy Guyonneau ◽  
Simon J. Thorpe

Recently it has been shown that a repeating arbitrary spatiotemporal spike pattern hidden in equally dense distracter spike trains can be robustly detected and learned by a single neuron equipped with spike-timing-dependent plasticity (STDP) (Masquelier, Guyonneau, & Thorpe, 2008). To be precise, the neuron becomes selective to successive coincidences of the pattern. Here we extend this scheme to a more realistic scenario with multiple repeating patterns and multiple STDP neurons “listening” to the incoming spike trains. These “listening” neurons are in competition: as soon as one fires, it strongly inhibits the others through lateral connections (one-winner-take-all mechanism). This tends to prevent the neurons from learning the same (parts of the) repeating patterns, as shown in simulations. Instead, the population self-organizes, trying to cover the different patterns or coding one pattern by the successive firings of several neurons, and a powerful distributed coding scheme emerges. Taken together, these results illustrate how the brain could easily encode and decode information in the spike times, a theory referred to as temporal coding, and how STDP could play a key role by detecting repeating patterns and generating selective response to them.
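A bare-bones sketch of the competitive scheme described here may help; the trace-based additive STDP rule, the hard reset that stands in for lateral inhibition, and all parameter values are assumptions rather than the authors' exact model.

```python
import numpy as np

# Simplified sketch (assumed parameters and STDP rule, not the authors' model):
# several integrate-and-fire-like neurons share the same afferents; the first to
# reach threshold "fires" and resets the others (one-winner-take-all), and only
# the winner applies STDP on that time step.
rng = np.random.default_rng(1)
n_in, n_out, dt, T = 200, 3, 1e-3, 2.0
a_plus, a_minus = 0.01, 0.012          # LTP / LTD amplitudes
tau_trace, tau_m, theta = 0.02, 0.01, 8.0
w = rng.uniform(0.0, 0.5, (n_out, n_in))
v = np.zeros(n_out)                    # membrane potentials (arbitrary units)
pre_trace = np.zeros(n_in)             # decaying memory of recent presynaptic spikes

for step in range(int(T / dt)):
    pre = (rng.random(n_in) < 20 * dt).astype(float)   # 20 Hz Poisson afferents
    pre_trace = pre_trace * np.exp(-dt / tau_trace) + pre
    v += dt / tau_m * (-v) + w @ pre                   # leaky integration of the input
    winner = int(np.argmax(v))
    if v[winner] >= theta:                             # one-winner-take-all
        v[:] = 0.0                                     # winner spikes; all are reset
        # STDP for the winner only: potentiate inputs that were recently active
        # (pre before post), depress those that were silent.
        w[winner] += a_plus * pre_trace - a_minus * (pre_trace < 0.01)
        np.clip(w[winner], 0.0, 1.0, out=w[winner])

# Over repeated presentations of embedded patterns, different neurons tend to
# become selective to different patterns (or pattern segments).
```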


1999 ◽  
Vol 11 (7) ◽  
pp. 1537-1551 ◽  
Author(s):  
Carlos D. Brody

Peaks in spike train correlograms are usually taken as indicative of spike timing synchronization between neurons. Strictly speaking, however, a peak merely indicates that the two spike trains were not independent. Two biologically plausible ways of departing from independence that are capable of generating peaks very similar to spike timing peaks are described here: covariations over trials in response latency and covariations over trials in neuronal excitability. Since peaks due to these interactions can be similar to spike timing peaks, interpreting a correlogram may be a problem with ambiguous solutions. What peak shapes do latency or excitability interactions generate? When are they similar to spike timing peaks? When can they be ruled out from having caused an observed correlogram peak? These are the questions addressed here. The previous article in this issue proposes quantitative methods to tell cases apart when latency or excitability covariations cannot be ruled out.
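The ambiguity is easy to reproduce in a toy simulation. In the sketch below (all numbers are assumptions chosen for illustration), two cells fire independently within each trial but share a trial-to-trial latency jitter, and their cross-correlogram nonetheless develops a central peak.

```python
import numpy as np

# Toy illustration (assumed parameters): spiking that is independent within each
# trial, but with a response latency that covaries across trials, still yields a
# peaked cross-correlogram without any spike-timing synchronization.
rng = np.random.default_rng(2)
n_trials, bin_s = 200, 1e-3
ccg = np.zeros(201)                      # lags from -100 ms to +100 ms

for _ in range(n_trials):
    latency = rng.normal(0.0, 0.02)      # shared latency jitter (s) on this trial
    # each cell fires a small burst around the (shared) latency; the spike times
    # themselves are drawn independently for the two cells
    t1 = 0.1 + latency + rng.normal(0.0, 0.01, size=8)
    t2 = 0.1 + latency + rng.normal(0.0, 0.01, size=8)
    for s1 in t1:
        lags = np.round((t2 - s1) / bin_s).astype(int)
        for k in lags[(lags >= -100) & (lags <= 100)]:
            ccg[k + 100] += 1

# ccg peaks near zero lag purely because of the latency covariation.
```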


2005 ◽  
Vol 17 (11) ◽  
pp. 2337-2382 ◽  
Author(s):  
Robert Legenstein ◽  
Christian Naeger ◽  
Wolfgang Maass

Spiking neurons are very flexible computational modules: with different values of their adjustable synaptic parameters, they can implement an enormous variety of transformations F from input spike trains to output spike trains. We examine in this letter to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where, during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that, in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogous to the well-known condition for learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons and dynamic synapses and for more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data, where STDP modulates the initial release probability of dynamic synapses.
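The training setup can be sketched as follows; the pair-based additive STDP rule and all constants are illustrative assumptions, not the letter's synapse or neuron models.

```python
import numpy as np

# Sketch of STDP with teacher forcing (assumed pair-based rule and parameters):
# during training the postsynaptic spike train is clamped to the target signal,
# and every pre/post pairing updates the corresponding synaptic weight.
rng = np.random.default_rng(3)
n_in, dt, T = 50, 1e-3, 5.0
tau_plus, tau_minus = 0.02, 0.02       # STDP time constants (s)
a_plus, a_minus = 0.005, 0.0055        # LTP / LTD amplitudes
w = rng.uniform(0.0, 1.0, n_in)
rates = rng.uniform(5.0, 40.0, n_in)   # Poisson input rates (Hz)

pre_trace = np.zeros(n_in)             # decaying traces of presynaptic spikes
post_trace = 0.0                       # decaying trace of the clamped output spikes

for step in range(int(T / dt)):
    pre = (rng.random(n_in) < rates * dt).astype(float)
    post = float(rng.random() < 20 * dt)   # teacher forcing: clamped target output (20 Hz here)
    pre_trace = pre_trace * np.exp(-dt / tau_plus) + pre
    post_trace = post_trace * np.exp(-dt / tau_minus) + post
    # pre-before-post pairings potentiate; post-before-pre pairings depress
    w += a_plus * pre_trace * post - a_minus * post_trace * pre
    np.clip(w, 0.0, 1.0, out=w)
```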


2013 ◽  
Vol 110 (7) ◽  
pp. 1672-1688 ◽  
Author(s):  
Bertrand Fontaine ◽  
Victor Benichoux ◽  
Philip X. Joris ◽  
Romain Brette

A challenge for sensory systems is to encode natural signals that vary in amplitude by orders of magnitude. The spike trains of neurons in the auditory system must represent the fine temporal structure of sounds despite a tremendous variation in sound level in natural environments. It has been shown in vitro that the transformation from dynamic signals into precise spike trains can be accurately captured by simple integrate-and-fire models. In this work, we show that the in vivo responses of cochlear nucleus bushy cells to sounds across a wide range of levels can be precisely predicted by deterministic integrate-and-fire models with adaptive spike threshold. Our model can predict both the spike timings and the firing rate in response to novel sounds, across a large input level range. A noisy version of the model accounts for the statistical structure of spike trains, including the reliability and temporal precision of responses. Spike threshold adaptation was critical to ensure that predictions remain accurate at different levels. These results confirm that simple integrate-and-fire models provide an accurate phenomenological account of spike train statistics and emphasize the functional relevance of spike threshold adaptation.
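As a rough illustration of the model class (not the fitted bushy-cell model; the equations and constants below are generic assumptions), an integrate-and-fire neuron whose threshold jumps after each spike and tracks the membrane potential keeps its spike timing locked to the stimulus envelope as the input level grows.

```python
import numpy as np

# Generic integrate-and-fire neuron with an adaptive spike threshold
# (assumed equations and constants, not the fitted model from the study).
def lif_adaptive_threshold(i_in, dt=1e-4, tau_m=5e-3, tau_th=5e-3,
                           v_rest=0.0, th0=1.0, th_jump=0.5, alpha=0.3):
    v, th = v_rest, th0
    spike_times = []
    for k, i_k in enumerate(i_in):
        v += dt / tau_m * (v_rest - v + i_k)                  # leaky integration
        th += dt / tau_th * (th0 + alpha * max(v, 0.0) - th)  # threshold tracks the potential
        if v >= th:
            spike_times.append(k * dt)
            v = v_rest                                        # reset after a spike
            th += th_jump                                     # spike-triggered threshold jump
    return np.array(spike_times)

# The same temporal envelope presented at two input levels: because the threshold
# adapts to the overall level, spiking remains locked to the envelope in both cases.
t = np.arange(0.0, 0.2, 1e-4)
envelope = 1.5 + np.sin(2 * np.pi * 50 * t)
spikes_low = lif_adaptive_threshold(2.0 * envelope)
spikes_high = lif_adaptive_threshold(6.0 * envelope)
```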


2013 ◽  
Vol 25 (7) ◽  
pp. 1853-1869 ◽  
Author(s):  
Takumi Uramoto ◽  
Hiroyuki Torikai

Spike-timing-dependent plasticity (STDP) is a form of synaptic modification that depends on the relative timing of presynaptic and postsynaptic spikes. In this letter, we propose a simple calcium-based STDP model, described by an ordinary differential equation with only three state variables: the intracellular calcium concentration, the fraction of NMDARs in the open state, and the synaptic weight. We show that, in spite of its simplicity, the model can reproduce the properties of plasticity that have been experimentally measured in various brain areas (e.g., layer 2/3 and 5 visual cortical slices, hippocampal cultures, and layer 2/3 somatosensory cortical slices) in response to various patterns of presynaptic and postsynaptic spikes. In addition, comparisons with other STDP models are made, and the significance and advantages of the proposed model are discussed.
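For orientation, here is a hedged sketch of what such a three-variable model can look like; the functional forms, thresholds, and constants below are illustrative assumptions, not the authors' equations.

```python
# Illustrative three-variable calcium-based plasticity sketch (assumed forms and
# constants, not the authors' model): presynaptic spikes open NMDAR-like channels,
# postsynaptic spikes let calcium in (more so when NMDARs are open), and the weight
# is potentiated at high calcium and depressed at intermediate calcium.
def simulate(pre_spikes, post_spikes, T=0.3, dt=1e-4):
    ca, nmda, w = 0.0, 0.0, 0.5          # calcium, open-NMDAR fraction, synaptic weight
    tau_ca, tau_nmda = 0.02, 0.05        # decay time constants (s)
    theta_p, theta_d = 0.6, 0.3          # potentiation / depression calcium thresholds
    for k in range(int(T / dt)):
        t = k * dt
        pre = any(abs(t - s) < dt / 2 for s in pre_spikes)
        post = any(abs(t - s) < dt / 2 for s in post_spikes)
        nmda += -dt / tau_nmda * nmda + (0.5 if pre else 0.0)   # glutamate opens NMDARs
        ca += -dt / tau_ca * ca                                  # calcium decays
        if post:
            ca += 0.35 + 1.0 * nmda      # voltage-gated influx plus NMDAR-mediated influx
        if ca > theta_p:
            w += dt * 5.0 * (1.0 - w)    # high calcium: potentiation
        elif ca > theta_d:
            w -= dt * 1.0 * w            # intermediate calcium: depression
    return w

w_after_ltp = simulate(pre_spikes=[0.10], post_spikes=[0.11])   # pre-before-post: w rises
w_after_ltd = simulate(pre_spikes=[0.11], post_spikes=[0.10])   # post-before-pre: w falls
```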


1998 ◽  
Vol 80 (6) ◽  
pp. 3345-3351 ◽  
Author(s):  
Carlos D. Brody

Brody, Carlos D. Slow covariations in neuronal resting potentials can lead to artefactually fast cross-correlations in their spike trains. J. Neurophysiol. 80: 3345–3351, 1998. A model of two lateral geniculate nucleus (LGN) cells, which interact only through slow (tens of seconds) covariations in their resting membrane potentials, is used here to investigate the effect of such covariations on cross-correlograms taken during stimulus-driven conditions. Despite the slow timescale of the interactions, the model generates cross-correlograms with peak widths in the range of 25–200 ms. These bear a striking resemblance to those reported in studies of LGN cells by Sillito et al., which were taken at the time as evidence of a fast spike timing synchronization interaction; the model highlights the possibility that those correlogram peaks may have been caused by a mechanism other than spike synchronization. Slow resting potential covariations are suggested instead as the dominant generating mechanism. How can a slow interaction generate covariogram peaks with a width 100–1,000 times thinner than its timescale? Broad peaks caused by slow interactions are modulated by the cells' poststimulus time histograms (PSTHs). When the PSTHs have thin peaks (e.g., tens of milliseconds), the cross-correlogram peaks generated by slow interactions will also be thin; such peaks are easily misinterpretable as being caused by fast interactions. Although this point is explored here in the context of LGN recordings, it is a general point and applies elsewhere. When cross-correlogram peak widths are of the same order of magnitude as PSTH peak widths, experiments designed to reveal short-timescale interactions must be interpreted with the issue of possible contributions from slower interactions in mind.
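The width argument can be made explicit under simple assumptions (this is a schematic reading, not the paper's exact derivation): if on each trial the two cells fire as independent inhomogeneous Poisson processes with rates g1·P1(t) and g2·P2(t), where P1 and P2 are the PSTHs and the gains g1, g2 covary slowly across trials, then the shuffle-corrected covariogram is approximately

```latex
% Schematic form under the assumptions stated above (not the paper's derivation):
V(\tau) \;\approx\; \operatorname{Cov}(g_1, g_2) \int P_1(t)\, P_2(t+\tau)\, dt
```

so the peak inherits the width of the PSTH peaks (tens of milliseconds) rather than the timescale of the slow covariations (tens of seconds).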


2001 ◽  
Vol 13 (1) ◽  
pp. 35-67 ◽  
Author(s):  
Walter Senn ◽  
Henry Markram ◽  
Misha Tsodyks

The precise times of occurrence of individual pre- and postsynaptic action potentials are known to play a key role in the modification of synaptic efficacy. Based on stimulation protocols of two synaptically connected neurons, we infer an algorithm that reproduces the experimental data by modifying the probability of vesicle discharge as a function of the relative timing of spikes in the pre- and postsynaptic neurons. The primary feature of this algorithm is an asymmetry with respect to the direction of synaptic modification depending on whether the presynaptic spikes precede or follow the postsynaptic spike. Specifically, if the presynaptic spike occurs up to 50 ms before the postsynaptic spike, the probability of vesicle discharge is upregulated, while the probability of vesicle discharge is downregulated if the presynaptic spike occurs up to 50 ms after the postsynaptic spike. When neurons fire irregularly with Poisson spike trains at constant mean firing rates, the probability of vesicle discharge converges toward a characteristic value determined by the pre- and postsynaptic firing rates. On the other hand, if the mean rates of the Poisson spike trains slowly change with time, our algorithm predicts modifications in the probability of release that generalize Hebbian and Bienenstock-Cooper-Munro rules. We conclude that the proposed spike-based synaptic learning algorithm provides a general framework for regulating neurotransmitter release probability.
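A compact sketch of such a timing-dependent update of the release probability is given below; the exponential window shape, learning rates, and bounds are illustrative assumptions rather than the authors' kinetic scheme.

```python
import numpy as np

# Sketch (assumed window shape and rates, not the authors' kinetic model):
# pre-before-post pairings within ~50 ms increase the vesicle release
# probability, post-before-pre pairings within ~50 ms decrease it.
def update_release_prob(p, pre_times, post_times,
                        window=0.05, lr_up=0.05, lr_down=0.05):
    """Apply all pre/post spike pairings to the release probability p."""
    for t_pre in pre_times:
        for t_post in post_times:
            lag = t_post - t_pre                     # > 0: pre precedes post
            if 0.0 < lag <= window:
                p += lr_up * (1.0 - p) * np.exp(-lag / window)    # upregulate
            elif -window <= lag < 0.0:
                p -= lr_down * p * np.exp(lag / window)           # downregulate
    return float(np.clip(p, 0.0, 1.0))

# Pairings at +10 ms push p up; pairings at -10 ms push it back down.
p = 0.3
p = update_release_prob(p, pre_times=[0.00, 0.10, 0.20], post_times=[0.01, 0.11, 0.21])
p = update_release_prob(p, pre_times=[0.31, 0.41], post_times=[0.30, 0.40])
```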

