Cortical Columnar Processing in the Rat Whisker-to-Barrel System

1999 ◽  
Vol 82 (4) ◽  
pp. 1808-1817 ◽  
Author(s):  
Joshua C. Brumberg ◽  
David J. Pinto ◽  
Daniel J. Simons

Controlled whisker stimulation and single-unit recordings were used to elucidate response transformations that occur during the processing of tactile information from ventral posterior medial thalamus (VPM) through cortical columns in the rat whisker/barrel cortex. Whiskers were deflected either alone, using punctate ramp-and-hold stimuli, or in combination with a random noise vibration applied simultaneously to two or more neighboring whiskers. Quantitative data were obtained from five anatomically defined groups of neurons based on their location: VPM, layer IV barrels, layer IV septa, supragranular laminae, and infragranular laminae. Neurons in each of these populations displayed characteristic properties related to their response latency and time course, relative magnitudes of responses evoked by stimulus onset versus offset, strength of excitatory responses evoked by the noise stimulus, and/or the degree to which the noise stimulus, when applied to neighboring whiskers, suppressed or facilitated responses evoked by the columnar whisker. Results indicate that within layer IV itself there are at least two anatomically distinct networks, barrel and septum, that independently process afferent information, transforming thalamic input in similar but quantitatively distinguishable ways. Transformed signals are passed on to circuits in supragranular and infragranular laminae. In the case of supragranular neurons, evidence suggests that circuits there function in a qualitatively different fashion from those in layer IV, diminishing response differentials between weak and strong inputs rather than enhancing them. Compared to layer IV, the greater heterogeneity of receptive field properties in nongranular layers suggests the existence of multiple, operationally distinct local circuits in the output layers of the cortical column.
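A minimal sketch of the kind of single-unit response measures this abstract describes (response latency and relative ON vs. OFF response magnitude), computed from spike times. The window boundaries and the spike train are illustrative assumptions, not values from the study.

```python
def spike_count(spike_times_ms, start_ms, end_ms):
    """Number of spikes in the half-open window [start_ms, end_ms)."""
    return sum(start_ms <= t < end_ms for t in spike_times_ms)

def response_latency(spike_times_ms, onset_ms=0.0):
    """Time of the first spike after stimulus onset, or None if absent."""
    post = [t for t in spike_times_ms if t >= onset_ms]
    return min(post) if post else None

def on_off_ratio(spike_times_ms, on_window=(0, 25), off_window=(200, 225)):
    """Relative magnitude of onset- vs. offset-evoked spike counts."""
    on = spike_count(spike_times_ms, *on_window)
    off = spike_count(spike_times_ms, *off_window)
    return on / off if off else float("inf")

# Hypothetical spike train: strong ON response, weaker OFF response.
spikes = [5.2, 7.1, 8.4, 12.0, 204.5, 210.3]
print(response_latency(spikes))   # 5.2
print(on_off_ratio(spikes))       # 4 ON spikes / 2 OFF spikes = 2.0
```

With measures like these, barrel, septum, and VPM populations can be compared on the same footing across deflection conditions.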

2002 ◽  
Vol 88 (1) ◽  
pp. 422-437 ◽  
Author(s):  
Anne J. Blood ◽  
Nader Pouratian ◽  
Arthur W. Toga

Characterization of neurovascular relationships is critical to accurate interpretation of functional neuroimaging data. We have previously observed spatial uncoupling of optical intrinsic signal imaging (OIS) and evoked potential (EP) responses in rodent barrel cortex following simultaneous whisker and forelimb stimulation, leading to changes in OIS response magnitude. To further test the hypothesis that this uncoupling may have resulted from “passive” overspill of perfusion-related responses between functional regions, we conducted the present study using temporally staggered rather than simultaneous whisker and forelimb stimulation. This paradigm minimized overlap of neural responses in barrel cortex and forelimb primary somatosensory cortex (SI), while maintaining overlap of vascular response time courses between regions. When contrasted with responses to 1.5-s lone-whisker stimulation, staggered whisker and forelimb stimulation resulted in broadening of barrel cortex OIS response time course in the temporal direction of forelimb stimulation. OIS response peaks were also temporally shifted toward the forelimb stimulation period; time-to-peak was shorter (relative to whisker stimulus onset) when forelimb stimulation preceded whisker stimulation and longer when forelimb stimulation followed whisker stimulation. In contrast with OIS and EP magnitude decreases previously observed during simultaneous whisker/forelimb stimulation, barrel cortex OIS response magnitude increased during staggered stimulation, and no detectable changes in underlying EP activity were observed. Spatial extent of barrel cortex OIS responses also increased during staggered stimulation. These findings provide further evidence for spatial uncoupling of OIS and EP responses, and emphasize the influence of temporal stimulus properties on the effects of this uncoupling.
It is hypothesized that spatial uncoupling is a result of passive overspill of perfusion-related responses into regions distinct from those which are functionally active. It will be important to consider potential influences of this uncoupling when designing and interpreting functional imaging studies that use hemodynamic responses to infer underlying neural activity.
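A small sketch of the time-to-peak measure used above: given a sampled response time course, find the time of its maximum relative to stimulus onset. The sampling period and the synthetic trace are assumptions for illustration only.

```python
def time_to_peak(signal, sample_period_s, onset_s=0.0):
    """Time (s) from stimulus onset to the sample with the largest value."""
    peak_index = max(range(len(signal)), key=signal.__getitem__)
    return peak_index * sample_period_s - onset_s

# Hypothetical OIS time course sampled at 10 Hz: a rise to a peak
# 0.5 s after onset, then a decay.
trace = [0.0, 0.1, 0.3, 0.6, 0.9, 1.0, 0.8, 0.5, 0.2, 0.1]
print(time_to_peak(trace, 0.1))  # 0.5
```

Comparing this scalar across staggered-stimulation conditions is one way to capture the temporal shifts the abstract reports.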


2012 ◽  
Vol 24 (2) ◽  
pp. 521-529 ◽  
Author(s):  
Frank Oppermann ◽  
Uwe Hassler ◽  
Jörg D. Jescheniak ◽  
Thomas Gruber

The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate for a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse–cheese) or not (e.g., crown–mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.


2004 ◽  
Vol 92 (5) ◽  
pp. 3030-3042 ◽  
Author(s):  
Jay Hegdé ◽  
David C. Van Essen

The firing rate of visual cortical neurons typically changes substantially during a sustained visual stimulus. To assess whether, and to what extent, the information about shape conveyed by neurons in visual area V2 changes over the course of the response, we recorded the responses of V2 neurons in awake, fixating monkeys while presenting a diverse set of static shape stimuli within the classical receptive field. We analyzed the time course of various measures of responsiveness and stimulus-related response modulation at the level of individual cells and of the population. For a majority of V2 cells, the response modulation was maximal during the initial transient response (40–80 ms after stimulus onset). During the same period, the population response was relatively correlated, in that V2 cells tended to respond similarly to specific subsets of stimuli. Over the ensuing 80–100 ms, the signal-to-noise ratio of individual cells generally declined, but to a lesser degree than the evoked-response rate during the corresponding time bins, and the response profiles became decorrelated for many individual cells. Concomitantly, the population response became substantially decorrelated. Our results indicate that the information about stimulus shape evolves dynamically and relatively rapidly in V2 during static visual stimulation in ways that may contribute to form discrimination.
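Two of the population measures described above can be sketched in simplified form: a signal-to-noise ratio for a time bin (variance of mean responses across stimuli over mean within-stimulus variance) and the correlation between a cell's response profiles in two time bins, whose decline would indicate decorrelation. The definitions and data here are simplified illustrations, not the paper's exact analysis.

```python
import math
import statistics

def snr(responses):
    """Variance of mean responses across stimuli divided by the mean
    within-stimulus (trial-to-trial) variance.
    `responses` maps stimulus -> list of single-trial firing rates."""
    means = [statistics.mean(trials) for trials in responses.values()]
    noise = statistics.mean(statistics.variance(t) for t in responses.values())
    return statistics.variance(means) / noise

def pearson(xs, ys):
    """Correlation between two response profiles over the same stimuli."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

# Hypothetical cell: response profiles in an early and a late time bin
# that have become largely decorrelated.
early = [10.0, 6.0, 2.0, 8.0]
late = [4.0, 5.0, 3.0, 2.0]
print(pearson(early, early))  # 1.0
print(pearson(early, late))   # close to 0

responses = {"shape_A": [9.0, 11.0], "shape_B": [1.0, 3.0]}
print(snr(responses))  # 16.0
```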

2013 ◽  
Vol 26 (4) ◽  
pp. 371-386 ◽  
Author(s):  
Yi-Chuan Chen ◽  
Charles Spence

The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
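A minimal sketch of how the congruency effects described above can be expressed: facilitation and inhibition are deviations of mean reaction times from the noise-baseline condition, computed separately at each SOA. The RT values are invented for illustration; they are not data from the study.

```python
import statistics

def congruency_effects(congruent_rts, incongruent_rts, baseline_rts):
    """Positive facilitation = faster than baseline with congruent sounds;
    positive inhibition = slower than baseline with incongruent sounds."""
    base = statistics.mean(baseline_rts)
    facilitation = base - statistics.mean(congruent_rts)
    inhibition = statistics.mean(incongruent_rts) - base
    return facilitation, inhibition

# Hypothetical RTs (ms) at an SOA where the sound leads by 240 ms.
fac, inh = congruency_effects([520, 540], [600, 620], [560, 580])
print(fac, inh)  # 40.0 40.0
```

Tabulating these two quantities across SOAs is what lets the semantic (congruency-dependent) and nonsemantic (congruency-independent) components of the auditory influence be separated.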


2019 ◽  
Vol 116 (6) ◽  
pp. 2027-2032 ◽  
Author(s):  
Jasper H. Fabius ◽  
Alessio Fracasso ◽  
Tanja C. W. Nijboer ◽  
Stefan Van der Stigchel

Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it is outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms that have been used previously are suboptimal to measure the speed of spatiotopic updating. In this study, we used a fast gaze-contingent paradigm, using high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements are typically observed in natural viewing.


2020 ◽  
Vol 32 (1) ◽  
pp. 111-123 ◽  
Author(s):  
Mattson Ogg ◽  
Thomas A. Carlson ◽  
L. Robert Slevc

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
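A toy sketch of time-resolved decoding in the spirit of the multivariate pattern analysis described above: at each time point, classify held-out trials by the nearest class centroid in sensor space and report leave-one-out accuracy. The tiny data set and the nearest-centroid rule are simplifying assumptions; the study itself used real MEG sensor data and a different classifier.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decode_at_timepoint(trials):
    """Leave-one-out nearest-centroid accuracy.
    `trials` is a list of (label, sensor_vector) pairs at one time point."""
    correct = 0
    for i, (label, vec) in enumerate(trials):
        rest = [t for j, t in enumerate(trials) if j != i]
        centroids = {}
        for lab in {l for l, _ in rest}:
            vecs = [v for l, v in rest if l == lab]
            centroids[lab] = [sum(col) / len(col) for col in zip(*vecs)]
        guess = min(centroids, key=lambda lab: euclidean(vec, centroids[lab]))
        correct += guess == label
    return correct / len(trials)

# Hypothetical two-sensor data: categories separate well at this time point,
# as decoding does from ~90 msec onward in the study.
trials = [("env", [1.0, 0.1]), ("env", [0.9, 0.2]), ("env", [1.1, 0.0]),
          ("instr", [0.1, 1.0]), ("instr", [0.2, 0.9]), ("instr", [0.0, 1.1])]
print(decode_at_timepoint(trials))  # 1.0
```

Repeating this at every sample yields the decoding time course from which onset (90 msec) and peak (155 msec) latencies are read off.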


2018 ◽  
Vol 71 (5) ◽  
pp. 1112-1124 ◽  
Author(s):  
Vera Heyer ◽  
Dana Kornishova

Semantic transparency has been a focus of psycholinguistic research for decades, yet the controversy about the time course of the application of morpho-semantic information during the processing of morphologically complex words remains unresolved. This study reports two masked priming studies with English -ness and Russian -ost’ nominalisations, investigating how semantic transparency modulates native speakers’ morphological priming effects at short and long stimulus onset asynchronies (SOAs). In both languages, we found increased morphological priming for nominalisations at the transparent end of the scale (e.g., paleness–pale) in comparison to items at the opaque end of the scale (e.g., business–busy), but only at longer prime durations. The present findings are in line with models that posit an initial phase of morpho-orthographic (semantically blind) decomposition.
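A bare-bones sketch of how a morphological priming effect is quantified in paradigms like this one: the effect is the mean reaction-time difference between an unrelated-prime and a related-prime condition, computed separately per SOA and item type. The RTs below are invented for illustration.

```python
import statistics

def priming_effect(related_rts, unrelated_rts):
    """Positive values mean the related prime speeded target recognition."""
    return statistics.mean(unrelated_rts) - statistics.mean(related_rts)

# Hypothetical pattern at a long SOA: transparent pairs (paleness-pale)
# prime more than opaque pairs (business-busy).
transparent = priming_effect([610, 630], [670, 690])
opaque = priming_effect([650, 662], [668, 680])
print(transparent, opaque)  # 60.0 18.0
```

The transparency-by-SOA interaction the study reports amounts to this difference growing with prime duration for transparent items only.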


2004 ◽  
Vol 16 (3) ◽  
pp. 503-522 ◽  
Author(s):  
Matthias M. Müller ◽  
Andreas Keil

In the present study, subjects selectively attended to the color of checkerboards in a feature-based attention paradigm. Induced gamma band responses (GBRs), the induced alpha band, and the event-related potential (ERP) were analyzed to uncover neuronal dynamics during selective feature processing. Replicating previous ERP findings, the selection negativity (SN) with a latency of about 160 msec was extracted. Furthermore, and similarly to previous EEG studies, a gamma band peak in a time window between 290 and 380 msec was found. This peak had its major energy in the 55- to 70-Hz range and was significantly larger for the attended color. Contrary to previous human induced gamma band studies, a much earlier 40- to 50-Hz peak in a time window between 160 and 220 msec after stimulus onset and, thus, concurrent with the SN was prominent, with significantly more energy for attended as opposed to unattended color. The induced alpha band (9.8–11.7 Hz), on the other hand, exhibited a marked suppression for attended color in a time window between 450 and 600 msec after stimulus onset. A comparison of the time course of the 40- to 50-Hz and 55- to 70-Hz induced GBRs, the induced alpha band, and the ERP revealed temporal coincidences for changes in the morphology of these brain responses. Despite these similarities in the time domain, the cortical source configuration was found to discriminate between induced GBRs and the SN. Our results suggest that large-scale synchronous high-frequency brain activity as measured in the human GBR plays a specific role in attentive processing of stimulus features.
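The evoked/induced distinction underlying these analyses can be sketched as follows: evoked power is computed on the trial average (only phase-locked activity survives averaging), while induced power is the mean single-trial power minus the evoked part. The single-frequency Fourier projection and the toy antiphase trials are illustrative assumptions, not the study's wavelet-based method.

```python
import cmath
import math

def band_power(signal, fs, freq):
    """Power of `signal` at one frequency via a discrete Fourier projection."""
    n = len(signal)
    coef = sum(x * cmath.exp(-2j * cmath.pi * freq * k / fs)
               for k, x in enumerate(signal))
    return abs(coef / n) ** 2

def evoked_and_induced(trials, fs, freq):
    """Split mean single-trial power into evoked and induced components."""
    average = [sum(col) / len(trials) for col in zip(*trials)]
    evoked = band_power(average, fs, freq)
    total = sum(band_power(t, fs, freq) for t in trials) / len(trials)
    return evoked, total - evoked

# Two toy trials of a 40-Hz oscillation in antiphase: the average cancels,
# so the activity is purely induced, not evoked.
fs, freq, n = 1000, 40.0, 1000
t1 = [math.sin(2 * math.pi * freq * k / fs) for k in range(n)]
t2 = [-x for x in t1]
evoked, induced = evoked_and_induced([t1, t2], fs, freq)
print(round(evoked, 6), round(induced, 6))  # 0.0 0.25
```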


2006 ◽  
Vol 18 (9) ◽  
pp. 1488-1497 ◽  
Author(s):  
James W. Tanaka ◽  
Tim Curran ◽  
Albert L. Porterfield ◽  
Daniel Collins

Electrophysiological studies using event-related potentials have demonstrated that face stimuli elicit a greater negative brain potential in right posterior recording sites 170 msec after stimulus onset (N170) relative to nonface stimuli. Results from repetition priming paradigms have shown that repeated exposures of familiar faces elicit a larger negative brainwave (N250r) at inferior temporal sites compared to repetitions of unfamiliar faces. However, less is known about the time course and learning conditions under which the N250 face representation is acquired. In the familiarization phase of the Joe/no Joe task, subjects studied a target “Joe” face (“Jane” for female subjects) and, during the course of the experiment, identified a series of sequentially presented faces as either Joe or not Joe. The critical stimulus conditions included the subject's own face, a same-sex Joe (Jane) face, and a same-sex “other” face. The main finding was that the subject's own face produced a focal negative deflection (N250) in posterior channels relative to nontarget faces. The task-relevant Joe target face was not differentiated from other nontarget faces in the first half of the experiment. However, in the second half, the Joe face produced an N250 response that was similar in magnitude to the own face. These findings suggest that the N250 indexes two types of face memories: a preexperimentally familiar face representation (i.e., the “own face”) and a newly acquired face representation (i.e., the Joe/Jane face) that was formed during the course of the experiment.

