Reading spike timing without a clock: intrinsic decoding of spike trains

2014 ◽  
Vol 369 (1637) ◽  
pp. 20120467 ◽  
Author(s):  
Stefano Panzeri ◽  
Robin A. A. Ince ◽  
Mathew E. Diamond ◽  
Christoph Kayser

The precise timing of action potentials of sensory neurons relative to the time of stimulus presentation carries substantial sensory information that is lost or degraded when these responses are summed over longer time windows. However, it is unclear whether and how downstream networks can access information in precise time-varying neural responses. Here, we review approaches to test the hypothesis that the activity of neural populations provides the temporal reference frames needed to decode temporal spike patterns. These approaches are based on comparing the single-trial stimulus discriminability obtained from neural codes defined with respect to network-intrinsic reference frames to the discriminability obtained from codes defined relative to the experimenter's computer clock. Application of this formalism to auditory, visual and somatosensory data shows that information carried by millisecond-scale spike times can be decoded robustly even with little or no independent external knowledge of stimulus time. In cortex, key components of such intrinsic temporal reference frames include dedicated neural populations that signal stimulus onset with reliable and precise latencies, and low-frequency oscillations that can serve as reference for partitioning extended neuronal responses into informative spike patterns.
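
As a toy illustration of the comparison described above, the sketch below (Python, with invented numbers and a made-up two-stimulus encoder, not the authors' data or method) decodes a spike time either against the raw recording clock, with stimulus onset unknown, or relative to a network-intrinsic onset response:

```python
# Sketch: decoding spike times against an intrinsic vs. external reference frame.
# All numbers and the toy encoder are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

def simulate_trial(stim):
    """One trial: an onset-signalling population fires at a reliable latency,
    and an informative neuron fires at a stimulus-dependent delay after it."""
    onset = rng.uniform(0, 50)               # unknown absolute stimulus time (ms)
    ref = onset + 10 + rng.normal(0, 0.5)    # intrinsic reference: precise onset response
    informative = ref + (5 if stim == 0 else 12) + rng.normal(0, 1.0)
    return onset, ref, informative

stims = rng.integers(0, 2, n_trials)
trials = np.array([simulate_trial(s) for s in stims])
onset, ref, spike = trials.T

# Code 1: raw clock time (stimulus onset unavailable to the decoder, so the
# onset jitter swamps the informative timing). Code 2: intrinsic reference.
external = spike
intrinsic = spike - ref

def decode(feature, stims):
    """Leave-one-out nearest-class-mean decoding of the stimulus."""
    correct = 0
    for i in range(len(feature)):
        mask = np.arange(len(feature)) != i
        means = [feature[mask & (stims == s)].mean() for s in (0, 1)]
        correct += int(np.argmin([abs(feature[i] - m) for m in means]) == stims[i])
    return correct / len(feature)

print(f"external clock (onset unknown): {decode(external, stims):.2f}")
print(f"intrinsic reference frame:      {decode(intrinsic, stims):.2f}")
```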

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Leila Drissi-Daoudi ◽  
Adrien Doerig ◽  
Michael H. Herzog

Abstract Sensory information must be integrated over time to perceive, for example, motion and melodies. Here, to study temporal integration, we used the sequential metacontrast paradigm, in which two expanding streams of lines are presented. When a line in one stream is offset, observers perceive all other lines to be offset too, even though they are straight. When more lines are offset, the offsets integrate mandatorily, i.e., observers cannot report the individual offsets. We show that mandatory integration lasts for up to 450 ms, depending on the observer. Importantly, integration occurs only when offsets are presented within a discrete window of time. Even stimuli in close spatio-temporal proximity do not integrate if they fall in different windows. A window of integration starts with stimulus onset, and integration in the next window has similar characteristics. We present a two-stage computational model based on discrete time windows that captures these effects.
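
A minimal sketch of the discrete-window idea, assuming a single fixed window length anchored at stimulus onset (the paper reports observer-dependent durations of up to 450 ms); the stimuli and window length are illustrative:

```python
# Sketch of a discrete-window integration stage: offsets presented within the
# same window sum mandatorily; offsets in different windows stay separate.
import numpy as np

WINDOW_MS = 450  # upper bound on the integration window reported in the paper

def integrate(offsets):
    """offsets: list of (time_ms, signed_offset). Returns one percept per window."""
    percepts = {}
    for t, v in offsets:
        w = int(t // WINDOW_MS)               # windows anchored at stimulus onset (t=0)
        percepts[w] = percepts.get(w, 0) + v  # mandatory summation within a window
    return percepts

# Two opposite offsets inside one window cancel; across a window boundary they
# are perceived separately even though they are close in time.
print(integrate([(100, +1), (300, -1)]))   # {0: 0}  -> integrated, cancels
print(integrate([(400, +1), (500, -1)]))   # {0: 1, 1: -1} -> separate percepts
```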


2012 ◽  
Vol 24 (4) ◽  
pp. 819-829 ◽  
Author(s):  
Henry Railo ◽  
Niina Salminen-Vaparanta ◽  
Linda Henriksson ◽  
Antti Revonsuo ◽  
Mika Koivisto

Chromatic information is processed by the visual system both at an unconscious level and at a level that results in conscious perception of color. It remains unclear whether both conscious and unconscious processing of chromatic information depend on activity in the early visual cortex, or whether unconscious chromatic processing can also rely on other neural mechanisms. In this study, the contribution of early visual cortex activity to conscious and unconscious chromatic processing was examined using single-pulse TMS delivered in three time windows 40–100 msec after stimulus onset, in three conditions: conscious color recognition, forced-choice discrimination of consciously invisible color, and unconscious color priming. We found that conscious perception and both measures of unconscious processing of chromatic information depended on activity in early visual cortex 70–100 msec after stimulus presentation. Unconscious forced-choice discrimination was above chance only when participants reported perceiving some stimulus features (but not color).


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1521
Author(s):  
Jihoon Lee ◽  
Seungwook Yoon ◽  
Euiseok Hwang

With the development of the internet of things (IoT), the power grid has become intelligent through massive deployments of IoT sensors, such as smart meters. Installed smart meters can collect large amounts of data to improve grid visibility and situational awareness, but their limited storage and communication capacities constrain this infrastructure in the IoT environment. Alleviating these problems requires a variety of efficient compression techniques. Deep-learning-based compression techniques such as auto-encoders (AEs) have recently been deployed for this purpose. However, the compression performance of existing models is limited when the spectral properties of high-frequency sampled power data vary widely over time. This paper proposes an AE compression model, based on a frequency selection method, that improves reconstruction quality while maintaining the compression ratio (CR). For efficient data compression, the proposed method selectively applies customized compression models, depending on the spectral properties of the corresponding time windows. The framework involves two primary steps: (i) division of the power data into a series of time windows with specified spectral properties (high-frequency, medium-frequency, or low-frequency dominance) and (ii) separate training and selective application of the AE models, so that each model compresses the power data whose frequency characteristics it suits best. In simulations on the Dutch residential energy dataset, the frequency-selective AE model shows significantly higher reconstruction performance than the existing model at the same CR. In addition, the proposed model reduces the computational complexity involved in the learning process.
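
The two-step framework can be pictured with a short sketch. Here the per-band "models" are stand-in spectral truncators rather than the paper's trained AEs, and the sampling rate, window length, and band edges are all assumptions:

```python
# Sketch of frequency-selective routing: (i) label each window by its dominant
# band, (ii) apply a band-specific compressor. Toy compressors stand in for AEs.
import numpy as np

FS = 1000                # sampling rate (Hz), assumed
WIN = 256                # samples per time window, assumed
BANDS = {"low": (0, 50), "mid": (50, 200), "high": (200, FS // 2)}

def dominant_band(window):
    """Step (i): label a window by where most of its spectral energy lies."""
    spec = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    energy = {name: spec[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in BANDS.items()}
    return max(energy, key=energy.get)

def make_compressor(keep):
    """Toy per-band 'model': keep only the `keep` largest rFFT coefficients."""
    def compress(window):
        spec = np.fft.rfft(window)
        spec[np.argsort(np.abs(spec))[:-keep]] = 0
        return np.fft.irfft(spec, n=len(window))
    return compress

# Step (ii): one customized model per spectral class, selected per window.
models = {"low": make_compressor(8), "mid": make_compressor(16),
          "high": make_compressor(32)}

signal = np.random.default_rng(1).normal(size=4 * WIN)   # stand-in power data
windows = signal.reshape(-1, WIN)
reconstructed = np.concatenate([models[dominant_band(w)](w) for w in windows])
```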


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2461
Author(s):  
Alexander Kuc ◽  
Vadim V. Grubov ◽  
Vladimir A. Maksimenko ◽  
Natalia Shusharina ◽  
Alexander N. Pisarchik ◽  
...  

Perceptual decision-making requires transforming sensory information into decisions. Ambiguity in the sensory input affects perceptual decisions, inducing specific time-frequency patterns in EEG (electroencephalogram) signals. This paper uses a wavelet-based method to analyze how ambiguity affects EEG features during a perceptual decision-making task. We observe that parietal and temporal beta-band wavelet power monotonically increases throughout the perceptual process. Ambiguity induces high frontal beta-band power 0.3–0.6 s post-stimulus onset, which may reflect an increasing reliance on top-down mechanisms that facilitate the accumulation of decision-relevant sensory features. Finally, this study analyzes the perceptual process using a mixed within-trial and within-subject design: we first identify significant percept-related changes in each subject and then test their significance at the group level. The observed beta-band biomarkers are thus pronounced in single EEG trials and may serve as control commands for a brain-computer interface (BCI).
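
A sketch of the kind of wavelet power computation involved, using a hand-rolled complex Morlet wavelet on a synthetic single trial; the sampling rate, frequency grid, and trial layout are assumptions, not the authors' pipeline:

```python
# Sketch: beta-band (~15-30 Hz) wavelet power in a post-stimulus window.
import numpy as np

FS = 250                                   # EEG sampling rate (Hz), assumed

def morlet_power(signal, freq, n_cycles=7):
    """Power of `signal` at `freq` via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / FS)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.pi) * sigma_t * FS         # rough amplitude normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

rng = np.random.default_rng(2)
trial = rng.normal(size=2 * FS)            # 2 s single trial, stimulus at t=0

# Average power over a few beta frequencies, then over the 0.3-0.6 s window.
beta = np.mean([morlet_power(trial, f) for f in range(15, 31, 3)], axis=0)
window = slice(int(0.3 * FS), int(0.6 * FS))
print("mean beta power, 0.3-0.6 s:", beta[window].mean())
```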


2002 ◽  
Vol 87 (4) ◽  
pp. 1749-1762 ◽  
Author(s):  
Shigeto Furukawa ◽  
John C. Middlebrooks

Previous studies have demonstrated that the spike patterns of cortical neurons vary systematically as a function of sound-source location such that the response of a single neuron can signal the location of a sound source throughout 360° of azimuth. The present study examined specific features of spike patterns that might transmit information related to sound-source location. Analysis was based on responses of well-isolated single units recorded from cortical area A2 in α-chloralose-anesthetized cats. Stimuli were 80-ms noise bursts presented from loudspeakers in the horizontal plane; source azimuths ranged through 360° in 20° steps. Spike patterns were averaged across samples of eight trials. A competitive artificial neural network (ANN) identified sound-source locations by recognizing spike patterns; the ANN was trained using the learning vector quantization learning rule. The information about stimulus location that was transmitted by spike patterns was computed from joint stimulus-response probability matrices. Spike patterns were manipulated in various ways to isolate particular features. Full-spike patterns, which contained all spike-count information and spike timing with 100-μs precision, transmitted the most stimulus-related information. Transmitted information was sensitive to disruption of spike timing on a scale of more than ∼4 ms and was reduced by an average of ∼35% when spike-timing information was obliterated entirely. In a condition in which all but the first spike in each pattern were eliminated, transmitted information decreased by an average of only ∼11%. In many cases, that condition showed essentially no loss of transmitted information. Three unidimensional features were extracted from spike patterns. Of those features, spike latency transmitted ∼60% more information than that transmitted either by spike count or by a measure of latency dispersion. Information transmission by spike patterns recorded on single trials was substantially reduced compared with the information transmitted by averages of eight trials. In a comparison of averaged and nonaveraged responses, however, the information transmitted by latencies was reduced by only ∼29%, whereas information transmitted by spike counts was reduced by 79%. Spike counts clearly are sensitive to sound-source location and could transmit information about sound-source locations. Nevertheless, the present results demonstrate that the timing of the first poststimulus spike carries a substantial amount, probably the majority, of the location-related information present in spike patterns. The results indicate that any complete model of the cortical representation of auditory space must incorporate the temporal characteristics of neuronal response patterns.
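
The information measure used here is the standard mutual information computed over the joint stimulus-response probability matrix; a minimal sketch with an invented confusion matrix:

```python
# Sketch: transmitted (mutual) information in bits from a joint
# stimulus-response probability matrix; the counts below are illustrative.
import numpy as np

def transmitted_information(joint):
    """I(S;R) in bits from a (possibly unnormalized) joint matrix p(s, r)."""
    joint = joint / joint.sum()
    ps = joint.sum(axis=1, keepdims=True)    # marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)    # marginal p(r)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# Toy confusion counts: rows = actual azimuths, columns = decoded azimuths.
counts = np.array([[40.0, 8.0, 2.0],
                   [6.0, 38.0, 6.0],
                   [3.0, 9.0, 38.0]])
print(f"{transmitted_information(counts):.3f} bits")
```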


2005 ◽  
Vol 93 (3) ◽  
pp. 1718-1729 ◽  
Author(s):  
Neeraj J. Gandhi ◽  
Desiree K. Bonadonna

Following the initial sensory response to stimulus presentation, activity in many saccade-related burst neurons along the oculomotor neuraxis is observed as a gradually increasing low-frequency discharge hypothesized to encode both the timing and the metrics of the impending eye movement. When the activity reaches an activation threshold, these cells discharge a high-frequency burst, inhibit the pontine omnipause neurons (OPNs), and trigger a high-velocity eye movement known as a saccade. We tested whether early cessation of OPN activity, prior to when it ordinarily pauses, effectively lowers the threshold and prematurely triggers a movement of modified metrics and/or dynamics. Relying on the observation that OPN discharge ceases not only during saccades but also during blinks, air-puffs were delivered to one eye to evoke blinks as monkeys performed standard oculomotor tasks. We observed a linear relationship between blink and saccade onsets when the blink occurred shortly after the cue to initiate the movement but before the average reaction time. Blinks that preceded and overlapped with the cue increased saccade latency. Blinks evoked during the overlap period of the delayed saccade task, when target location is known but a saccade cannot yet be initiated for correct performance, failed to trigger saccades prematurely. Furthermore, when saccade and blink execution coincided temporally, the peak velocity of the eye movement was attenuated, and its initial velocity was correlated with its latency. Despite these perturbations, saccade accuracy was maintained across all blink times and task types. Collectively, these results support the notion that temporal features of the low-frequency activity encode aspects of a premotor command and imply that inhibition of OPNs alone is not sufficient to trigger saccades.
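
A minimal sketch of the accumulate-to-threshold account described above, treating the blink-induced release of OPN inhibition as an effectively lowered threshold; all rates, noise levels, and thresholds are invented for illustration:

```python
# Sketch: a noisy low-frequency ramp triggers the burst on threshold crossing;
# lowering the threshold (the hypothesized blink effect) shortens the latency.
import numpy as np

def saccade_latency(threshold, rate=1.0, noise=0.1, dt=1.0, seed=0):
    """Time (ms) for a noisy linear ramp to reach `threshold`."""
    rng = np.random.default_rng(seed)
    level, t = 0.0, 0.0
    while level < threshold:
        level += (rate + rng.normal(0, noise)) * dt / 100
        t += dt
    return t

print("normal threshold: ", saccade_latency(threshold=2.0), "ms")
print("lowered by blink: ", saccade_latency(threshold=1.2), "ms")
```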


2011 ◽  
Vol 23 (12) ◽  
pp. 3972-3982 ◽  
Author(s):  
Mathias Scharinger ◽  
William J. Idsardi ◽  
Samantha Poe

Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information, including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in the lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher-dimensional representations in auditory cortex.
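
One way to picture the reported model comparison is as competing regressions of a dipole coordinate on acoustic predictors alone versus acoustic plus featural predictors, scored by AIC; the predictors and data below are invented for illustration and are not the authors' measurements:

```python
# Sketch: AIC comparison of an acoustics-only vs. acoustic + featural linear
# model of a dipole coordinate; synthetic data with a featural contribution.
import numpy as np

rng = np.random.default_rng(3)
n = 80
f1, f2 = rng.normal(500, 100, n), rng.normal(1500, 300, n)   # acoustic formants
backness = (f2 < 1500).astype(float)                         # featural predictor
coord = 0.002 * f1 + 2.0 * backness + rng.normal(0, 0.5, n)  # dipole coordinate

def aic(X, y):
    """AIC of an ordinary least-squares fit y ~ X (lower is better)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    k = X.shape[1] + 1                       # coefficients + noise variance
    return len(y) * np.log(rss / len(y)) + 2 * k

print("acoustic only:      ", aic(np.column_stack([f1, f2]), coord))
print("acoustic + featural:", aic(np.column_stack([f1, f2, backness]), coord))
```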


2021 ◽  
Vol 11 (11) ◽  
pp. 1506
Author(s):  
Annalisa Tosoni ◽  
Emanuele Cosimo Altomare ◽  
Marcella Brunetti ◽  
Pierpaolo Croce ◽  
Filippo Zappasodi ◽  
...  

One fundamental principle of the brain's functional organization is the elaboration of sensory information for the specification of the action plans most appropriate for interaction with the environment. Using an incidental go/no-go priming paradigm, we previously showed a facilitation effect for the execution of a walking-related action in response to far vs. near objects/locations in the extrapersonal space; this effect has been called "macro-affordance" to reflect the role of locomotion in the coverage of extrapersonal distance. Here, we investigated the neurophysiological underpinnings of this effect by recording scalp electroencephalography (EEG) from 30 human participants during the same paradigm. A whole-brain analysis indicated significant modulation of the event-related potentials (ERPs) during both prime and target stimulus presentation. Specifically, consistent with a mechanism of action anticipation and automatic activation of affordances, a stronger ERP was observed in response to prime images framing the environment from a far vs. near distance, and this modulation was localized in dorso-medial motor regions. In addition, an inversion of polarity for far vs. near conditions was observed during the subsequent target period in dorso-medial parietal regions associated with spatially directed foot-related actions. These findings were interpreted within the framework of embodied models of brain functioning as arising from a mechanism of motor anticipation and subsequent prediction error, guided by the preferential affordance relationship between the distant, large-scale environment and locomotion. More generally, our findings reveal a sensory-motor mechanism for the processing of walking-related environmental affordances.


2019 ◽  
Author(s):  
David A. Tovar ◽  
Micah M. Murray ◽  
Mark T. Wallace

Abstract Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, due to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction-time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement Our world is filled with an ever-changing milieu of sensory information that we seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses together to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects which were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
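
A minimal sketch of time-resolved representational similarity analysis of the kind named above, building one correlation-distance representational dissimilarity matrix (RDM) per time point from condition-mean EEG patterns; the data shapes and time windows are assumptions:

```python
# Sketch: time-resolved RDMs over condition-mean EEG patterns, then a
# comparison of representational geometry between two assumed time windows.
import numpy as np

rng = np.random.default_rng(4)
n_cond, n_chan, n_time = 6, 32, 100        # e.g., 6 objects x 32 channels x 100 samples
patterns = rng.normal(size=(n_cond, n_chan, n_time))   # condition-mean EEG

def rdm_at(patterns, t):
    """Pairwise correlation-distance RDM across conditions at time point t."""
    x = patterns[:, :, t]
    return 1 - np.corrcoef(x)              # (n_cond, n_cond) dissimilarities

early = np.mean([rdm_at(patterns, t) for t in range(10, 20)], axis=0)
late = np.mean([rdm_at(patterns, t) for t in range(35, 50)], axis=0)
iu = np.triu_indices(n_cond, k=1)          # upper triangle, excluding diagonal
print("early/late RDM correlation:", np.corrcoef(early[iu], late[iu])[0, 1])
```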


Perception ◽  
1993 ◽  
Vol 22 (8) ◽  
pp. 963-970 ◽  
Author(s):  
Piotr Jaśkowski

The point of subjective simultaneity and simple reaction time were compared for stimuli with different rise times. It was found that these measures behave differently. To explain this result, it is suggested that in temporal-order judgment the subject takes into account not only the stimulus onset but also other events connected with stimulus presentation.

