Early Visual Cortex Dynamics during Top–Down Modulated Shifts of Feature-Selective Attention

2016
Vol 28 (4)
pp. 643–655
Author(s):
Matthias M. Müller
Mireille Trautmann
Christian Keitel

Shifting attention from one color to another color, or from color to another feature dimension such as shape or orientation, is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention follow identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex, to investigate the temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and to respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue than shifts toward orientation, regardless of the original feature (i.e., color or orientation). This pattern was paralleled by SSVEP amplitude modulations as well as by the time course of the behavioral data. Overall, our results suggest that neural dynamics differ during shifts of attention depending on the feature shifted from (color or orientation) and on the respective shifting destination, namely whether attention is shifted toward color or toward orientation.
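
As an illustration of how frequency-tagged SSVEP amplitudes of this kind are typically quantified, here is a minimal sketch in Python/NumPy. It assumes a single occipital EEG channel and a known tagging frequency; the function name and parameters are hypothetical and do not come from the study itself.

```python
import numpy as np

def ssvep_amplitude(eeg, sfreq, tag_freq):
    """Estimate SSVEP amplitude at one tagging frequency from a single EEG segment.

    eeg      : 1-D array, samples from one (e.g., occipital) electrode
    sfreq    : sampling rate in Hz
    tag_freq : flicker frequency of the RDK of interest, in Hz
    """
    n = len(eeg)
    # Hann window to reduce spectral leakage, then single-sided amplitude spectrum
    windowed = (eeg - eeg.mean()) * np.hanning(n)
    spectrum = np.abs(np.fft.rfft(windowed)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    # Amplitude at the frequency bin closest to the tagging frequency
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]
```

Repeating this per flicker frequency, in successive time windows relative to the shift cue, yields the kind of attended-versus-unattended amplitude time courses reported above.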

2020
pp. 1–10
Author(s):
Paula Vieweg
Matthias M. Müller

In an exploratory study, we investigated the time course of attentional selection shifts in feature-based attention in early visual cortex by means of steady-state visual evoked potentials (SSVEPs). To this end, we presented four flickering random dot kinematograms composed of red or blue, horizontally or vertically oriented bars. Given the oscillatory nature of SSVEPs, we were able to investigate the neural temporal dynamics of facilitation and inhibition/suppression when participants shifted attention either within a feature dimension (i.e., color to color) or between feature dimensions (i.e., color to orientation). Extending a previous study of our laboratory [Müller, M. M., Trautmann, M., & Keitel, C. Early visual cortex dynamics during top–down modulated shifts of feature-selective attention. Journal of Cognitive Neuroscience, 28, 643–655, 2016] to a full factorial design, we replicated a critical finding of that study: Facilitation of color was quickest, regardless of the origin of the shift (from color or orientation). Furthermore, facilitation of the newly to-be-attended feature and inhibition/suppression of the then to-be-ignored feature do not occur instantaneously as a single time-invariant process; rather, the shift is biphasic, with a considerable delay between the two processes. Interestingly, inhibition/suppression of the to-be-ignored feature after the shifting cue had a much longer latency for between- compared with within-dimensional shifts (by about 130–150 msec). We label the study exploratory for two reasons: (a) as in our precursor study, we found no attentional modulation of the SSVEP amplitude time course for orientation, and (b) the signal-to-noise ratio of single trials was too poor to allow reliable statistical testing of the latencies, which were instead obtained with running t tests on averaged data.
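
The latency estimation from running t tests on averaged data that the authors mention could, in principle, look like the following sketch: at every time point the attended and to-be-ignored amplitude time courses are compared across participants, and the latency is taken as the onset of the first sufficiently long run of significant samples. All names and thresholds here are illustrative assumptions, not the authors' actual analysis code.

```python
import numpy as np
from scipy.stats import ttest_rel

def running_ttest_latency(attended, ignored, times, alpha=0.05, min_run=10):
    """First time at which two SSVEP amplitude time courses diverge reliably.

    attended, ignored : (n_subjects, n_times) baseline-corrected amplitudes
    times             : (n_times,) time axis in msec relative to the shift cue
    min_run           : required number of consecutive significant samples
    """
    pvals = np.array([ttest_rel(attended[:, t], ignored[:, t]).pvalue
                      for t in range(attended.shape[1])])
    significant = pvals < alpha
    run = 0
    for t, sig in enumerate(significant):
        run = run + 1 if sig else 0
        if run >= min_run:
            return times[t - min_run + 1]   # onset of the significant run
    return None  # no reliable divergence found
```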


2009
Vol 101 (3)
pp. 1463–1479
Author(s):
Rui Kimura
Izumi Ohzawa

Responses of a visual neuron to optimally oriented stimuli can be suppressed by superposition of another grating with a different orientation, an effect known as cross-orientation suppression. However, it is still not clear whether the effect is intracortical in origin or a reflection of subcortical processes. To address this issue, we measured spatiotemporal responses to a plaid pattern, a superposition of two gratings, as well as to the individual component gratings (optimal and mask), using a subspace reverse-correlation method. Suppression for the plaid was evaluated by comparing its response with that for the optimal grating alone. For component stimuli, excitatory and negative responses were defined as responses more positive and more negative, respectively, than the response to a blank stimulus. The suppressive effect for plaids was observed in the vast majority of neurons; however, only ∼30% of neurons showed a negative response to mask-only gratings. The magnitude of the negative response to mask-only stimuli was correlated with the degree of suppression for plaid stimuli. Comparing latencies, we found that suppression for plaids starts at about the same time as, or slightly later than, the response onset for the optimal grating and reaches its maximum at about the same time as the peak latency for the mask-only grating. Based on these results, we propose that, in addition to a suppressive effect originating at the subcortical stage, delayed suppressive signals derived from intracortical networks act on the neuron to generate cross-orientation suppression.
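
For readers unfamiliar with reverse correlation, the core computation can be sketched as follows: stimuli drawn from a fixed set are flashed in rapid succession, and for every stimulus and every latency the mean spike count that follows is tabulated, yielding a stimulus-by-latency response kernel. This is a generic illustration under assumed variable names, not the authors' subspace implementation.

```python
import numpy as np

def reverse_correlation_kernel(stim_ids, spikes, n_stim, max_lag):
    """Mean spike count as a function of stimulus identity and latency (in frames).

    stim_ids : (n_frames,) index of the grating shown in each frame
    spikes   : (n_frames,) spike count recorded in each frame
    Returns  : (n_stim, max_lag) kernel; row = stimulus, column = latency
    """
    n_frames = len(stim_ids)
    summed = np.zeros((n_stim, max_lag))
    counts = np.zeros((n_stim, max_lag))
    for lag in range(max_lag):
        frames = np.arange(n_frames - lag)
        # accumulate spikes occurring `lag` frames after each stimulus presentation
        np.add.at(summed[:, lag], stim_ids[frames], spikes[frames + lag])
        np.add.at(counts[:, lag], stim_ids[frames], 1)
    return summed / np.maximum(counts, 1)
```

Rows of the kernel above or below the row for a blank stimulus would then correspond to the excitatory and negative responses defined in the abstract.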


1994
Vol 76 (2)
pp. 616–626
Author(s):
J. H. Bates
A. M. Lauzon
G. S. Dechman
G. N. Maksym
T. F. Schuessler

We measured tracheal pressure (Ptr) and tracheal flow (V) in open-chest, anesthetized, paralyzed dogs. The lungs were maintained at a fixed volume (initial positive end-expiratory pressure 0.5 kPa) for 80 s while small-amplitude oscillations in V at 1 and 6 Hz were applied simultaneously at the tracheal opening. A bolus of histamine was given intravenously at the start of the oscillation period. The time course of lung elastic recoil pressure (Pel) was obtained by passing a running average over Ptr to smooth out its oscillations. The oscillations themselves were separated into their 1- and 6-Hz components, as were those in V. By fitting models to the 1- and 6-Hz components of Ptr and V by recursive least squares, we obtained time courses of lung resistance at 6 Hz (RL6), dynamic lung elastance at 1 Hz (EL1), and the difference between dynamic lung resistance at 1 and 6 Hz (RL1-RL6). In four dogs we studied the effects of histamine doses of 0.05, 1.0, and 20 mg. We found that Pel increased quickly and plateaued, RL6 continued to increase throughout the oscillation period, and EL1 exhibited features of both Pel and RL6. Furthermore, the ratio of RL1-RL6 to EL1 was qualitatively similar in time course to Pel. We explain these varied time courses in terms of regional ventilation inhomogeneity that develops throughout the lung as the response to histamine progresses. In four dogs we also studied the effect of reducing the initial positive end-expiratory pressure by 0.25 kPa and found that the changes in RL6, EL1, and RL1-RL6 were greatly magnified, presumably because of the reduced forces of parenchymal interdependence.
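
To make the impedance quantities concrete, the sketch below shows how resistance and dynamic elastance at a single oscillation frequency can be recovered from pressure and flow by ordinary least-squares fitting of sinusoids. The study itself used a recursive least-squares estimator to obtain running time courses; applying this fit in a sliding window would approximate that. Function names and sign conventions are assumptions for illustration.

```python
import numpy as np

def phasor(x, t, f):
    """Complex amplitude of the component of x(t) at frequency f (Hz), cosine reference."""
    design = np.column_stack([np.cos(2 * np.pi * f * t),
                              np.sin(2 * np.pi * f * t),
                              np.ones_like(t)])
    c, s, _ = np.linalg.lstsq(design, x, rcond=None)[0]
    return c - 1j * s

def resistance_elastance(ptr, flow, t, f):
    """Lung resistance and dynamic elastance at frequency f.

    ptr, flow : tracheal pressure and flow sampled at times t (in seconds)
    Uses Z = P/V'; R = Re(Z), E = -2*pi*f*Im(Z) (inertance neglected).
    """
    impedance = phasor(ptr, t, f) / phasor(flow, t, f)
    return impedance.real, -2 * np.pi * f * impedance.imag
```

Evaluating this fit at 1 and 6 Hz over successive short windows would give running estimates analogous to the RL6, EL1, and RL1-RL6 time courses described above.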


2020
Author(s):
Frederik Geweke
Emilia Pokta
Viola S. Störmer

Spatial attention can be deployed exogenously, based on salient events in the environment, or endogenously, based on current task goals. Numerous studies have compared the time courses of these two types of attention and have demonstrated that exogenous attention is fast and transient, whereas endogenous attention is relatively slow but sustained. In the present study, we investigated whether and how the temporal dynamics of exogenous and endogenous attention differ depending on where attention is deployed in the visual field, in particular at locations near to or far from fixation. Across a series of experiments, we measured attentional shift times for each type of attention and found overall slower deployment of endogenous relative to exogenous attention, in line with previous research. Importantly, we also consistently found that it takes longer to deploy attention at more distant locations than at nearby locations, regardless of how attention was instigated. Overall, our results suggest that the temporal limits of attentional deployment across different spatial distances are similar for exogenous and endogenous attention, pointing to shared constraints underlying both attentional modes.


2017
Vol 29 (4)
pp. 619–627
Author(s):
Norman Forschack
Søren K. Andersen
Matthias M. Müller

A key property of feature-based attention is global facilitation of the attended feature throughout the visual field. Previously, we presented superimposed red and blue randomly moving dot kinematograms (RDKs), each flickering at a different frequency, to elicit frequency-specific steady-state visual evoked potentials (SSVEPs) that allowed us to analyze neural dynamics in early visual cortex when participants shifted attention to one of the two colors. Results showed amplification of the attended color and suppression of the unattended color as measured by SSVEP amplitudes. Here, we tested whether the suppression of the unattended color also operates globally. To this end, we presented superimposed flickering red and blue RDKs in the center of a screen and a red and a blue RDK in the left and right periphery, respectively, also flickering at different frequencies. Participants shifted attention to one color of the superimposed central RDKs to discriminate coherent motion events in the attended color RDK from those in the unattended one, whereas the peripheral RDKs were task irrelevant. SSVEP amplitudes elicited by the centrally presented RDKs confirmed the previous findings of amplification and suppression. For the peripherally located RDKs, we found the expected SSVEP amplitude increase relative to the precue baseline when their color matched that of the centrally attended RDK. We found no reduction in SSVEP amplitude relative to the precue baseline when the peripheral color matched the unattended color of the central RDKs, indicating that, while facilitation in feature-based attention operates globally, suppression seems to be linked to the location of focused attention.


2011
Vol 23 (8)
pp. 2046–2058
Author(s):
Helen E. Payne
Harriet A. Allen

Selective attention is critical for controlling the input to mental processes. Attentional mechanisms act not only to select relevant stimuli but also to exclude irrelevant stimuli, and there is evidence that we can actively ignore irrelevant information. We measured neural activity related to successfully ignoring distracters (using preview search) and found increases in both the precuneus and primary visual cortex during preparation to ignore distracters. We also found reductions in activity in fronto-parietal regions while previewing distracters and a reduction in activity in early visual cortex during search when a subset of items was successfully excluded from search, both associated with precuneus activity. These results are consistent with the proposal that actively excluding distracters has two components: an initial stage in which distracters are encoded and a subsequent stage in which further processing of these items is inhibited. Our findings suggest that the precuneus controls this process and can modulate activity in visual cortex as early as V1.


2007
Vol 19 (4)
pp. 587–593
Author(s):
Notger G. Müller
Andreas Kleinschmidt

A stimulus that suddenly appears in the corner of the eye inevitably captures our attention, and this in turn leads to faster detection of a second stimulus presented at the same position shortly thereafter. After about 250 msec, however, this effect reverses and the second stimulus is detected faster when it appears far away from the first. Here, we report a potential physiological correlate of this time-dependent attentional facilitation and inhibition. We measured activity in the visual cortex representations of the second (target) stimulus' location as a function of the stimulus onset asynchrony (SOA) and the spatial distance separating the target from the preceding cue stimulus. At an SOA of 100 msec, the target yielded larger responses when it was presented near the cue than when it appeared far away from it. At an SOA of 850 msec, however, the response to the target was more pronounced when it appeared far away from the cue. Our data show how the neural substrate of visual orienting is guided by immediately preceding sensory experience and how a fast-reacting brain system modulates sensory processing by briefly increasing and subsequently decreasing responsiveness in parts of the visual cortex. We propose that these activity modulations are the neural correlate of the sequence of perceptual facilitation and inhibition after attentional capture.


2014
Vol 26 (10)
pp. 2370–2384
Author(s):
Ramakrishna Chakravarthi
Thomas A. Carlson
Julie Chaffin
Jeremy Turret
Rufin VanRullen

Objects occupy space. How does the brain represent the spatial location of objects? Retinotopic early visual cortex has precise location information but can only segment simple objects, whereas higher visual areas can resolve complex objects but have only coarse location information. Thus, the coarse location of complex objects might be represented either by (a) feedback from higher areas to early retinotopic areas or by (b) coarse position encoding in higher areas. We tested these alternatives by presenting various kinds of first-order (edge-defined) and second-order (texture-defined) objects. We applied multivariate classifiers to the pattern of EEG amplitudes across the scalp at a range of time points to trace the temporal dynamics of coarse location representation. For edge-defined objects, peak classification performance was high and early and thus attributable to the retinotopic layout of early visual cortex. For texture objects, it was low and late. Crucially, despite these differences in peak performance and timing, training a classifier on one object type and testing it on the others revealed that the scalp topography at peak performance was the same for both first- and second-order objects. That is, the same location information, encoded by early visual areas, was available for both edge-defined and texture objects, albeit at different time points. These results indicate that the locations of complex objects such as textures, although not represented in the bottom–up sweep, are encoded later by neural patterns resembling the bottom–up ones. We conclude that feedback mechanisms play an important role in the coarse location representation of complex objects.
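
A minimal version of the time-resolved decoding and cross-classification approach described here can be sketched with scikit-learn: a linear classifier is trained on the scalp pattern at each time point, and a classifier trained on one object type (at its peak time) is then tested on another. Variable names and the choice of classifier are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def decoding_timecourse(X, y, cv=5):
    """Location-decoding accuracy at every time point.

    X : (n_trials, n_channels, n_times) EEG amplitudes; y : (n_trials,) location labels
    """
    accuracy = np.empty(X.shape[2])
    for t in range(X.shape[2]):
        accuracy[t] = cross_val_score(LinearDiscriminantAnalysis(),
                                      X[:, :, t], y, cv=cv).mean()
    return accuracy

def cross_object_decoding(X_train, y_train, X_test, y_test, t_train, t_test):
    """Train on one object type at its peak time, test on another at its own peak time."""
    clf = LinearDiscriminantAnalysis().fit(X_train[:, :, t_train], y_train)
    return clf.score(X_test[:, :, t_test], y_test)
```

Above-chance cross-object accuracy with matching topographies is the kind of evidence the abstract uses to argue that the same location code serves both object types.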


2018
Vol 30 (11)
pp. 1559–1576
Author(s):
Seyed-Mahdi Khaligh-Razavi
Radoslaw Martin Cichy
Dimitrios Pantazis
Aude Oliva

Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. To do so, we applied representational similarity analysis to fMRI and MEG data to obtain views of brain activity with high spatial and high temporal resolution, respectively. Analysis of the fMRI data revealed that a distributed and partly overlapping set of cortical regions extending from occipital to ventral and medial temporal cortex represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size more strongly than most other regions. Further analysis of the detailed representational format revealed differences among the regions involved in processing animacy. Analysis of the MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG–fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis showed that the neural dynamics of processing animacy and size were distinct from those of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.
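
The MEG–fMRI fusion by representational similarity that the abstract refers to rests on a simple idea: build a representational dissimilarity matrix (RDM) from the fMRI pattern of a region and correlate it with RDMs computed from the MEG sensor pattern at every time point. The sketch below illustrates that logic under assumed array shapes and names; it is not the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed RDM (1 - Pearson r between all pairs of condition patterns).

    patterns : (n_conditions, n_features) response pattern per condition
    """
    return pdist(patterns, metric='correlation')

def meg_fmri_fusion(meg_patterns, fmri_patterns):
    """Time course of similarity between one region's fMRI RDM and the MEG RDMs.

    meg_patterns  : (n_times, n_conditions, n_sensors)
    fmri_patterns : (n_conditions, n_voxels) for one region of interest
    """
    roi_rdm = rdm(fmri_patterns)
    # Spearman correlation between the region's RDM and the MEG RDM at each time point
    return np.array([spearmanr(rdm(meg_patterns[t]), roi_rdm)[0]
                     for t in range(meg_patterns.shape[0])])
```

A high fusion value at a given time point indicates that the geometry of the MEG response at that moment resembles the geometry of that region's fMRI response, which is how the persistent parahippocampal representation described above would show up.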

