The time course of different surround suppression mechanisms

2018 ◽  
Author(s):  
Michael-Paul Schallmo ◽  
Alex M. Kale ◽  
Scott O. Murray

Abstract What we see depends on the spatial context in which it appears. Previous work has linked the reduction of perceived stimulus contrast in the presence of surrounding stimuli to the suppression of neural responses in early visual cortex. It has also been suggested that this surround suppression depends on at least two separable neural mechanisms, one ‘low-level’ and one ‘higher-level,’ which can be differentiated by their response characteristics. In a recent study, we found evidence consistent with these two suppression mechanisms using psychophysical measurements of perceived contrast. Here, we used EEG to demonstrate for the first time that neural responses in the human occipital lobe also show evidence of two separable suppression mechanisms. Eighteen adults (10 female and 8 male) each participated in a total of 3 experimental sessions, in which they viewed visual stimuli through a mirror stereoscope. The first session was used to definitively identify the C1 component, while the second and third comprised the main experiment. ERPs were measured in response to center gratings either with no surround, or with surrounding gratings oriented parallel or orthogonal, and presented either in the same eye (monoptic) or opposite eye (dichoptic). We found that the earliest ERP component (C1; ∼60 ms) was suppressed in the presence of surrounding stimuli, but that this suppression did not depend on surround configuration, suggesting a low-level suppression mechanism which is not tuned for relative orientation. A later response component (N1; ∼160 ms) showed stronger surround suppression for parallel and monoptic stimulus configurations, consistent with our earlier psychophysical results and a higher-level, binocular, orientation-tuned suppression mechanism. We conclude that these two surround suppression mechanisms have distinct response time courses in the human visual system, which can be differentiated using electrophysiology.
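
To make the comparison concrete, here is a minimal, hypothetical sketch of how component-wise surround suppression could be quantified from condition-averaged ERPs. The time windows, toy waveforms, and the suppression index below are illustrative assumptions, not the authors' analysis pipeline.

```python
# Illustrative sketch (not the authors' pipeline): quantify surround suppression
# of the C1 (~60 ms) and N1 (~160 ms) ERP components from condition-averaged
# waveforms. Time windows, toy waveforms, and the index itself are assumptions.
import numpy as np

TIMES = np.arange(-100, 500)                      # ms relative to stimulus onset
WINDOWS = {"C1": (50, 80), "N1": (140, 180)}      # assumed analysis windows (ms)

def mean_amplitude(erp, window):
    """Mean amplitude (µV) of a 1-D ERP within a time window given in ms."""
    lo, hi = window
    return erp[(TIMES >= lo) & (TIMES <= hi)].mean()

def suppression_index(center_only, with_surround, window):
    """Fractional response reduction relative to the no-surround condition."""
    return 1.0 - mean_amplitude(with_surround, window) / mean_amplitude(center_only, window)

def toy_erp(c1_amp, n1_amp):
    """Toy ERP: a C1-like positivity near 60 ms and an N1-like negativity near 160 ms."""
    return (c1_amp * np.exp(-0.5 * ((TIMES - 60) / 10.0) ** 2)
            - n1_amp * np.exp(-0.5 * ((TIMES - 160) / 20.0) ** 2))

erps = {                                          # made-up amplitudes, for demonstration only
    "no_surround": toy_erp(2.0, 3.0),
    "parallel_monoptic": toy_erp(1.4, 1.5),
    "orthogonal_dichoptic": toy_erp(1.4, 2.7),
}

for comp, win in WINDOWS.items():
    for cond in ("parallel_monoptic", "orthogonal_dichoptic"):
        si = suppression_index(erps["no_surround"], erps[cond], win)
        print(f"{comp} suppression, {cond}: {si:.2f}")
```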

2021 ◽  
Author(s):  
Evi Hendrikx ◽  
Jacob Paul ◽  
Martijn van Ackooij ◽  
Nathan van der Stoep ◽  
Ben Harvey

Abstract Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. Tuned neural responses to visual event timing have been found in areas of the association cortices implicated in these processes. Here we ask whether and where the human brain derives these timing-tuned responses from the responses of early visual cortex, which monotonically increase with event duration and frequency. Using 7T fMRI and neural model-based analyses, we find a gradual transition from monotonically increasing to timing-tuned neural responses beginning in area MT/V5. Therefore, successive stages of visual processing gradually derive timing-tuned response components from the inherent modulation of sensory responses by event timing. This additional timing-tuned response component was independent of retinotopic location. We propose that this hierarchical derivation of timing-tuned responses from sensory processing areas quantifies sensory event timing while abstracting temporal representations from the spatial properties of their inputs.
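
For illustration only, the sketch below contrasts the two model families the abstract refers to: a response that increases monotonically with event duration and frequency versus a response tuned to a preferred duration and frequency. The specific parameterization is an assumption, not the fitted neural model from the study.

```python
# Illustrative sketch: two candidate response models for visual event timing.
# Parameter names and functional forms are assumptions, not the fitted models.
import numpy as np

def monotonic_response(duration, frequency, w_dur=1.0, w_freq=1.0):
    """Response that increases monotonically with event duration (s) and frequency (Hz),
    as described for early visual cortex."""
    return w_dur * duration + w_freq * frequency

def tuned_response(duration, frequency, pref_dur=0.4, pref_freq=2.0,
                   sigma_dur=0.2, sigma_freq=1.0):
    """2-D Gaussian tuning over duration and frequency, as described for
    timing-tuned responses in association cortex."""
    return np.exp(-0.5 * (((duration - pref_dur) / sigma_dur) ** 2
                          + ((frequency - pref_freq) / sigma_freq) ** 2))

durations = np.linspace(0.05, 1.0, 5)     # s
frequencies = np.linspace(0.5, 4.0, 5)    # Hz
D, F = np.meshgrid(durations, frequencies)
print("monotonic:\n", np.round(monotonic_response(D, F), 2))
print("tuned:\n", np.round(tuned_response(D, F), 2))
```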


2009 ◽  
Vol 101 (3) ◽  
pp. 1463-1479 ◽  
Author(s):  
Rui Kimura ◽  
Izumi Ohzawa

Responses of a visual neuron to optimally oriented stimuli can be suppressed by a superposition of another grating with a different orientation. This effect is known as cross-orientation suppression. However, it is still not clear whether the effect is intracortical in origin or a reflection of subcortical processes. To address this issue, we measured spatiotemporal responses to a plaid pattern, a superposition of two gratings, as well as to individual component gratings (optimal and mask) using a subspace reverse-correlation method. Suppression for the plaid was evaluated by comparing the response to that for the optimal grating. For component stimuli, excitatory and negative responses were defined as responses more positive and negative, respectively, than that to a blank stimulus. The suppressive effect for plaids was observed in the vast majority of neurons. However, only ∼30% of neurons showed the negative response to mask-only gratings. The magnitudes of negative responses to mask-only stimuli were correlated with the degree of suppression for plaid stimuli. Comparing the latencies, we found that the suppression for the plaids starts at about the same time or slightly later than the response onset for the optimal grating and reaches its maximum at about the same time as the peak latency for the mask-only grating. Based on these results, we propose that in addition to the suppressive effect originating at the subcortical stage, delayed suppressive signals derived from the intracortical networks act on the neuron to generate cross-orientation suppression.
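
As a rough sketch of the kind of comparison described above (not the authors' code), the example below computes a plaid suppression index and simple onset and peak latencies from toy response time courses; the latency criterion and all waveforms are assumptions.

```python
# Illustrative sketch: compare toy time courses for an optimal grating, a mask
# grating, and their plaid, then estimate a suppression index and latencies.
import numpy as np

TIMES = np.arange(0, 200)   # ms after stimulus onset (assumed)

def suppression_index(resp_optimal, resp_plaid):
    """Fractional reduction of the peak plaid response relative to the optimal grating."""
    return 1.0 - resp_plaid.max() / resp_optimal.max()

def onset_latency(resp, baseline, threshold_sd=2.0):
    """First time point exceeding baseline mean + threshold_sd * SD (assumed criterion)."""
    crit = baseline.mean() + threshold_sd * baseline.std()
    above = np.flatnonzero(resp > crit)
    return TIMES[above[0]] if above.size else None

def peak_latency(resp):
    return TIMES[np.argmax(resp)]

# Toy responses: the plaid response is a slightly delayed, scaled-down optimal response,
# and the mask-only response is a small negative deflection.
optimal = np.exp(-0.5 * ((TIMES - 60) / 15.0) ** 2)
plaid = 0.7 * np.exp(-0.5 * ((TIMES - 65) / 15.0) ** 2)
mask_neg = -0.3 * np.exp(-0.5 * ((TIMES - 90) / 20.0) ** 2)
baseline = np.random.default_rng(1).normal(0, 0.02, 50)

print("suppression index:", round(suppression_index(optimal, plaid), 2))
print("optimal onset (ms):", onset_latency(optimal, baseline))
print("plaid peak (ms):", peak_latency(plaid), "| mask peak (ms):", TIMES[np.argmin(mask_neg)])
```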


1999 ◽  
Vol 64 (1) ◽  
pp. 95-103 ◽  
Author(s):  
Samuel Deurveilher ◽  
Iroudayanadin S Delamanche ◽  
Bernard Hars ◽  
Patrick Breton ◽  
Elizabeth Hennevin

2019 ◽  
Author(s):  
Mareike Bayer ◽  
Oksana Berhe ◽  
Isabel Dziobek ◽  
Tom Johnstone

Abstract The faces of those most personally relevant to us are our primary source of social information, making their timely perception a priority. Recent research indicates that gender, age and identity of faces can be decoded from EEG/MEG data within 100 ms. Yet the time course and neural circuitry involved in representing the personal relevance of faces remain unknown. We applied simultaneous EEG-fMRI to examine neural responses to emotional faces of female participants’ romantic partners, friends, and a stranger. Combining EEG and fMRI in cross-modal representational similarity analyses, we provide evidence that representations of personal relevance start prior to structural encoding at 100 ms in visual cortex, but also in prefrontal and midline regions involved in value representation, and monitoring and recall of self-relevant information. Representations related to romantic love emerged after 300 ms. Our results add to an emerging body of research that suggests that models of face perception need to be updated to account for rapid detection of personal relevance in cortical circuitry beyond the core face processing network.
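
A minimal, generic sketch of cross-modal representational similarity analysis follows, assuming simulated EEG and fMRI condition patterns; the data shapes, distance metric, and Spearman comparison are illustrative choices, not the authors' exact pipeline.

```python
# Illustrative sketch of cross-modal RSA: correlate time-resolved EEG
# representational dissimilarity matrices (RDMs) with an fMRI ROI RDM.
# All data, shapes, and metric choices here are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_channels, n_times, n_voxels = 6, 64, 200, 500

eeg = rng.normal(size=(n_conditions, n_channels, n_times))   # condition-averaged EEG topographies
fmri_roi = rng.normal(size=(n_conditions, n_voxels))         # condition patterns in one ROI

fmri_rdm = pdist(fmri_roi, metric="correlation")             # 1 - r between condition patterns

similarity_timecourse = np.empty(n_times)
for t in range(n_times):
    eeg_rdm = pdist(eeg[:, :, t], metric="correlation")      # EEG RDM at each time point
    rho, _ = spearmanr(eeg_rdm, fmri_rdm)                    # EEG-fMRI RDM correspondence
    similarity_timecourse[t] = rho

print("peak EEG-fMRI RDM correlation at time index:", int(np.argmax(similarity_timecourse)))
```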


2016 ◽  
Vol 28 (4) ◽  
pp. 643-655 ◽  
Author(s):  
Matthias M. Müller ◽  
Mireille Trautmann ◽  
Christian Keitel

Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention follow identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex, to investigate the temporal dynamics of attention shifts from color or orientation toward color or orientation. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (a particular color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the originally attended feature (color or orientation). This pattern was evident both in SSVEP amplitude modulations and in the time course of the behavioral data. Overall, our results suggest that the neural dynamics of feature-selective attention shifts differ depending on both the feature being shifted from (color or orientation) and the shifting destination, namely toward color or toward orientation.
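
For orientation, the sketch below shows the generic frequency-tagging step behind SSVEP analyses: estimating amplitude at each stimulus flicker frequency from an EEG segment. The sampling rate, tagging frequencies, and single-channel toy signal are assumptions; the study used time-resolved amplitude estimates rather than a single fixed analysis window.

```python
# Illustrative sketch (not the authors' analysis): extract SSVEP amplitude at
# each assumed tagging frequency from an EEG segment via the Fourier spectrum.
import numpy as np

SRATE = 500.0                         # Hz, assumed sampling rate
TAG_FREQS = [8.57, 10.0, 12.0, 15.0]  # Hz, assumed flicker frequencies of the four RDKs

def ssvep_amplitudes(eeg_segment, srate, freqs):
    """Amplitude spectrum of a 1-D EEG segment, sampled at each tagging frequency."""
    n = eeg_segment.size
    spectrum = np.abs(np.fft.rfft(eeg_segment)) * 2.0 / n
    fft_freqs = np.fft.rfftfreq(n, d=1.0 / srate)
    return {f: spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs}

# Toy 2-s segment: a larger response at one tagged frequency plus noise.
t = np.arange(0, 2.0, 1.0 / SRATE)
rng = np.random.default_rng(2)
segment = (1.0 * np.sin(2 * np.pi * 10.0 * t)     # attended stimulus, larger SSVEP (toy)
           + 0.4 * np.sin(2 * np.pi * 12.0 * t)   # unattended stimulus (toy)
           + rng.normal(0, 0.5, t.size))

for f, amp in ssvep_amplitudes(segment, SRATE, TAG_FREQS).items():
    print(f"{f:5.2f} Hz: amplitude ~ {amp:.2f} µV")
```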


2020 ◽  
Vol 117 (45) ◽  
pp. 28442-28451 ◽  
Author(s):  
Monzilur Rahman ◽  
Ben D. B. Willmore ◽  
Andrew J. King ◽  
Nicol S. Harper

Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression perform similarly to the best-performing biophysically detailed models of the auditory periphery, and more consistently well over diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
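
The "simple model" described above can be sketched generically as a log-spaced spectrogram with approximately logarithmic compression; the parameter values below (band count, frequency range, window and hop sizes, compression offset) are assumptions for illustration, not those used in the paper. In the study, such representations were then passed to encoding models fitted to predict cortical responses.

```python
# Illustrative sketch of a simple cochlear front end: power in log-spaced
# frequency bands over time, followed by approximately logarithmic compression.
# All parameter values are assumptions.
import numpy as np

def log_spaced_spectrogram(waveform, srate, n_bands=32, fmin=500.0, fmax=22000.0,
                           win_ms=20.0, hop_ms=10.0, eps=1e-3):
    """Log-compressed power in log-spaced frequency bands over time."""
    win = int(srate * win_ms / 1000.0)
    hop = int(srate * hop_ms / 1000.0)
    edges = np.geomspace(fmin, fmax, n_bands + 1)          # log-spaced band edges
    freqs = np.fft.rfftfreq(win, d=1.0 / srate)
    frames = []
    for start in range(0, waveform.size - win + 1, hop):
        seg = waveform[start:start + win] * np.hanning(win)
        power = np.abs(np.fft.rfft(seg)) ** 2
        frames.append([power[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return np.log(np.asarray(frames).T + eps)              # ~logarithmic compression

# Toy waveform: 1 s of noise at 44.1 kHz, just to show the output shape.
rng = np.random.default_rng(3)
wave = rng.normal(0, 1, 44100)
cochleagram = log_spaced_spectrogram(wave, srate=44100)
print("spectrogram shape (bands x frames):", cochleagram.shape)
```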


1999 ◽  
Vol 82 (2) ◽  
pp. 963-977 ◽  
Author(s):  
Donald B. Katz ◽  
S. A. Simon ◽  
Aaron Moody ◽  
Miguel A. L. Nicolelis

Reorganization of the somatosensory system was quantified by simultaneously recording from single-unit neural ensembles in the whisker regions of the ventral posterior medial (VPM) nucleus of the thalamus and the primary somatosensory (SI) cortex in anesthetized rats before, during, and after injecting capsaicin under the skin of the lip. Capsaicin, a compound that excites and then inactivates a subset of peripheral C and Aδ fibers, triggered increases in spontaneous firing of thalamocortical neurons (10–15 min after injection), as well as rapid reorganization of the whisker representations in both the VPM and SI. During the first hour after capsaicin injection, 57% of the 139 recorded neurons either gained or lost at least one whisker response in their receptive fields (RFs). Capsaicin-related changes continued to emerge for ≥6 h after the injection: Fifty percent of the single-neuron RFs changed between 1–2 and 5–6 h after capsaicin injection. Most (79%) of these late changes represented neural responses that had remained unchanged in the first postcapsaicin mapping; just under 20% of these late changes appeared in neurons that had previously shown no plasticity of response. The majority of the changes (55% immediately after injection, 66% 6 h later) involved “unmasking” of new tactile responses. RF change rates were comparable in SI and VPM (57% and 49%, respectively). Population analysis indicated that the reorganization was associated with a lessening of the “spatial coupling” between cortical neurons: a significant reduction in firing covariance that could be related to distances between neurons. This general loss of spatial coupling, in conjunction with increases in spontaneous firing, may create a situation that is favorable for the induction of synaptic plasticity. Our results indicate that the selective inactivation of a peripheral nociceptor subpopulation can induce rapid and long-evolving (≥6 h) shifts in the balance of inhibition and excitation in the somatosensory system. The time course of these processes suggests that thalamic and cortical plasticity is not a linear reflection of spinal and brainstem changes that occur following the application of capsaicin.
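
As an illustrative sketch (not the authors' analysis), "spatial coupling" can be operationalized as the relationship between pairwise firing covariance and the distance between recorded neurons; all data below are simulated.

```python
# Illustrative sketch: relate pairwise firing covariance to inter-neuron distance.
# Positions, spike counts, and the Pearson summary are simulated assumptions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_neurons, n_bins = 30, 2000

positions = rng.uniform(0, 1000, size=(n_neurons, 2))            # recording-site positions (µm, assumed)
rates = rng.poisson(5, size=(n_neurons, n_bins)).astype(float)    # binned spike counts (toy)

cov = np.cov(rates)                                # pairwise firing covariance
iu = np.triu_indices(n_neurons, k=1)               # unique neuron pairs
pair_cov = cov[iu]
pair_dist = np.linalg.norm(positions[iu[0]] - positions[iu[1]], axis=1)

r, p = pearsonr(pair_dist, pair_cov)
print(f"covariance vs. distance: r = {r:.2f}, p = {p:.3f}")
# A weaker distance dependence after capsaicin would correspond to the reported
# loss of spatial coupling between neurons.
```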

