Unconscious Processing of Unattended Features in Human Visual Cortex

2013 ◽  
Vol 25 (3) ◽  
pp. 329-337 ◽  
Author(s):  
Tatiana Aloi Emmanouil ◽  
Philip Burton ◽  
Tony Ro

Unconscious processing has been convincingly demonstrated for task-relevant feature dimensions. However, it is possible that the visual system is capable of more complex unconscious operations, extracting visual features even when they are unattended and task irrelevant. In the current study, we addressed this question by measuring unconscious priming using a task in which human participants attended to a target object's shape while ignoring its color. We measured both behavioral priming effects and priming-related fMRI activations from primes that were unconsciously presented using metacontrast masking. The results showed faster RTs and decreases in fMRI activation only when the primes were identical to the targets, indicating that primes were processed both in the attended shape and the unattended color dimensions. Reductions in activation were observed in early visual areas, including primary visual cortex, as well as in feature-responsive areas for shape and color. These results indicate that multiple features can be unconsciously encoded and possibly bound using the same visual networks activated by consciously perceived images.

2021 ◽  
Author(s):  
Jiageng Chen ◽  
Paul S Scotti ◽  
Emma W Dowd ◽  
Julie D Golomb

Visual attention plays an essential role in selecting task-relevant and ignoring task-irrelevant information, for both object features and their locations. In the real world, multiple objects with multiple features are often simultaneously present in a scene. When spatial attention selects an object, how are the task-relevant and task-irrelevant features represented in the brain? Previous literature has shown conflicting results on whether and how irrelevant features are represented in visual cortex. In an fMRI task, we used a modified inverted encoding model (IEM, e.g., Sprague & Serences, 2015) to test whether we can reconstruct the task-relevant and task-irrelevant features of spatially attended objects in a multi-feature (color + orientation), multi-item display. Subjects were briefly shown an array of three colored, oriented gratings. Subjects were instructed as to which feature (color or orientation) was relevant before each block, and on each trial were asked to report the task-relevant feature of the object that appeared at a spatially pre-cued location, using a continuous color or orientation wheel. By applying the IEM, we achieved reliable feature reconstructions for the task-relevant features of the attended object from visual ROIs (V1 and V4v) and the intraparietal sulcus. Preliminary searchlight analyses showed that task-irrelevant features of attended objects could be reconstructed from activity in some intraparietal areas, but the reconstructions were much weaker and less reliable compared with task-relevant features. These results suggest that both relevant and irrelevant features may be represented in visual and parietal cortex, but in different forms. Our method provides potential tools to noninvasively measure unattended feature representations and probe the extent to which spatial attention acts as a "glue" to bind task-relevant and task-irrelevant features.
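The inverted encoding model referenced in this abstract can be illustrated with a minimal, self-contained numpy sketch: simulate voxels as mixtures of orientation-tuned channels, fit the channel-to-voxel weights by least squares, then invert the fit to reconstruct the stimulus from held-out data. The channel count, basis shape, and noise level below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ORI, N_CH, N_VOX = 180, 8, 50

# Idealized channel basis: rectified sinusoids raised to a power, evenly
# tiling orientation space (0-179 deg).
centers = np.arange(N_CH) * N_ORI / N_CH
theta = np.arange(N_ORI)
dist = np.abs((theta[None, :] - centers[:, None] + N_ORI / 2) % N_ORI - N_ORI / 2)
basis = np.clip(np.cos(np.pi * dist / N_ORI), 0, None) ** (N_CH - 1)  # (N_CH, N_ORI)

def channel_responses(oris):
    """Idealized channel responses to a set of orientations (degrees)."""
    return basis[:, oris].T  # (n_trials, N_CH)

# Simulate voxels as random mixtures of channels plus measurement noise.
mix = rng.random((N_CH, N_VOX))
train_oris = rng.integers(0, N_ORI, 200)
B_train = channel_responses(train_oris) @ mix + 0.1 * rng.standard_normal((200, N_VOX))

# Training: estimate channel-to-voxel weights W by least squares (B = C @ W).
W, *_ = np.linalg.lstsq(channel_responses(train_oris), B_train, rcond=None)

# Inversion: estimate channel responses for held-out data, then convert the
# channel estimates back into an orientation reconstruction.
test_oris = rng.integers(0, N_ORI, 50)
B_test = channel_responses(test_oris) @ mix + 0.1 * rng.standard_normal((50, N_VOX))
C_hat = B_test @ W.T @ np.linalg.inv(W @ W.T)
recon = C_hat @ basis  # (n_test, N_ORI)

# Circular error between reconstruction peaks and the true orientations.
peaks = recon.argmax(axis=1)
err = np.abs((peaks - test_oris + N_ORI / 2) % N_ORI - N_ORI / 2)
print("mean absolute reconstruction error (deg):", err.mean())
```

The same train-then-invert logic extends to color by substituting a circular color space for orientation.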


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Paz Har-shai Yahav ◽  
Elana Zion Golumbic

Paying attention to one speaker in noisy environments can be extremely difficult, because to-be-attended and task-irrelevant speech compete for processing resources. We tested whether this competition is restricted to acoustic-phonetic interference or extends to competition for linguistic processing as well. Neural activity was recorded using magnetoencephalography as human participants were instructed to attend to natural speech presented to one ear while task-irrelevant stimuli were presented to the other. The task-irrelevant stimuli consisted either of random sequences of syllables or of syllables structured to form coherent sentences, using hierarchical frequency-tagging. We find that the phrasal structure of structured task-irrelevant stimuli was represented in the neural response in left inferior frontal and posterior parietal regions, indicating that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Additionally, neural tracking of to-be-attended speech in left inferior frontal regions was enhanced when competing with structured task-irrelevant stimuli, suggesting inherent competition between the two streams for linguistic processing.
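The frequency-tagging logic here is that any periodic linguistic structure (e.g., phrases) shows up as a spectral peak at its presentation rate, on top of the acoustic syllable rate. A minimal simulation of that signature (the rates, amplitudes, and noise level are assumed values, not the study's parameters):

```python
import numpy as np

FS, DUR = 100.0, 60.0                       # sampling rate (Hz) and duration (s)
t = np.arange(0, DUR, 1 / FS)
SYL_RATE, PHRASE_RATE = 4.0, 1.0            # assumed tagging frequencies (Hz)

rng = np.random.default_rng(1)

# Toy "neural" responses: both track the syllable rhythm, but only the response
# to structured speech carries an extra component at the phrase rate.
random_syll = np.sin(2 * np.pi * SYL_RATE * t) + 0.5 * rng.standard_normal(t.size)
structured = (np.sin(2 * np.pi * SYL_RATE * t)
              + 0.6 * np.sin(2 * np.pi * PHRASE_RATE * t)
              + 0.5 * rng.standard_normal(t.size))

def amp_at(x, freq):
    """Single-sided FFT amplitude at the bin closest to `freq`."""
    f = np.fft.rfftfreq(x.size, 1 / FS)
    amp = 2 * np.abs(np.fft.rfft(x)) / x.size
    return amp[np.argmin(np.abs(f - freq))]

print("phrase-rate peak, random syllables:", round(amp_at(random_syll, PHRASE_RATE), 3))
print("phrase-rate peak, structured speech:", round(amp_at(structured, PHRASE_RATE), 3))
```

Only the structured signal shows a clear phrase-rate peak, which is the spectral fingerprint the study looks for in the neural data.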


2020 ◽  
Author(s):  
Casey L Roark ◽  
Matthew Lehet ◽  
Frederic Dick ◽  
Lori L. Holt

The ability to form coherent acoustic categories is thought to be a fundamental cognitive process underlying speech perception. Auditory categories can be learned via passive exposure or overt training. Recent investigations have shown that listeners can also learn categories incidentally, based on associations between sounds and the visual objects, actions, and events in the world. However, much remains unknown about the conditions under which this incidental learning occurs. Across five conditions, we manipulated how auditory categories were associated with visual features that informed behavioral responses in an incidental learning task. Category learning was unaffected by variation in irrelevant visual features and was robust to manipulations of the motor response. However, learning was disrupted when the auditory categories were associated with task-irrelevant, rather than task-relevant, visual features. Together, these results demonstrate that incidental learning is driven by consistent relationships between auditory category information and task-relevant visual features.


2019 ◽  
Author(s):  
Carsen Stringer ◽  
Michalis Michaelos ◽  
Marius Pachitariu

Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known whether this “noise” extends across large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds are nearly 100 times smaller than those reported behaviorally in mice. The discrepancy between neural and behavioral discrimination could not be explained by the types of stimuli we used, by behavioral states, or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are set not by neural noise in sensory cortex, but by the limitations of downstream decoders.
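The population decoding behind such threshold estimates can be sketched with a toy simulation: generate noisy responses from an orientation-tuned population and measure how well a linear decoder separates two nearby orientations. The tuning curves, firing rates, and mean-difference decoder below are illustrative assumptions, not the recorded data or the authors' decoder.

```python
import numpy as np

rng = np.random.default_rng(2)
N_NEURONS, N_TRIALS = 500, 400  # trials per stimulus

def population_response(theta_deg, n_trials):
    """Orientation-tuned population with independent Poisson spiking noise."""
    pref = np.linspace(0, 180, N_NEURONS, endpoint=False)
    tuning = np.exp(np.cos(2 * np.deg2rad(theta_deg - pref)) / 0.3)
    rates = 5 * tuning / tuning.max()  # peak rate of 5 spikes per trial
    return rng.poisson(rates, size=(n_trials, N_NEURONS)).astype(float)

def accuracy(dtheta):
    """Held-out accuracy of a mean-difference linear decoder for +/- dtheta/2."""
    half = N_TRIALS // 2
    a = population_response(90 - dtheta / 2, N_TRIALS)
    b = population_response(90 + dtheta / 2, N_TRIALS)
    w = b[:half].mean(0) - a[:half].mean(0)            # decoder trained on first half
    c = (b[:half].mean(0) + a[:half].mean(0)) @ w / 2  # midpoint decision criterion
    correct = (a[half:] @ w < c).sum() + (b[half:] @ w > c).sum()
    return correct / N_TRIALS

for d in (0.5, 2.0, 8.0):
    print(f"delta = {d:4.1f} deg -> accuracy = {accuracy(d):.2f}")
```

Sweeping the orientation difference and reading off where accuracy crosses a criterion (e.g., 75%) gives a discrimination threshold analogous to the one reported for the recorded populations.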


Author(s):  
Sunyoung Park ◽  
John T. Serences

Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared to the highly precise stimulus-driven responses observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used fMRI in human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. Then, we examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. While V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, mid-level areas such as V4 showed relatively smaller differences. Overall, this interaction between visual area (e.g., V1 vs. V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of the areas that generate and relay top-down feedback signals.


2017 ◽  
Author(s):  
Nadine Dijkstra ◽  
Pim Mostert ◽  
Floris P. de Lange ◽  
Sander Bosch ◽  
Marcel A. J. van Gerven

Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. We show that, contrary to perception, the onset of imagery is characterized by broad temporal generalization. Furthermore, there is consistent overlap between imagery and perceptual processing around 150 ms and from 300 ms after stimulus onset, presumably reflecting completion of the feedforward sweep and perceptual stabilization, respectively. These results indicate that during imagery either the complete representation is activated at once and does not include low-level visual areas, or the order in which visual features are activated is less fixed and more flexible than during perception. These findings have important implications for our understanding of the neural mechanisms of visual imagery.
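Temporal generalization, the analysis behind the "broad generalization" claim, trains a decoder at one time point and tests it at every other, yielding a time-by-time accuracy matrix: a narrow diagonal indicates rapidly changing stages, while broad off-diagonal accuracy indicates a sustained representation. A toy numpy sketch with simulated sensor data (here constructed to show the diagonal, perception-like structure; all dimensions and the mean-difference decoder are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
N_TRIALS, N_SENS, N_TIME = 100, 30, 40

# Toy data: two stimulus classes whose class-specific sensor patterns change
# at every time point, producing a diagonal generalization matrix.
patterns = rng.standard_normal((N_TIME, N_SENS))
X = rng.standard_normal((2, N_TRIALS, N_TIME, N_SENS))
for cls in (0, 1):
    X[cls] += (1 if cls else -1) * patterns  # broadcast over trials

train, test = slice(0, 50), slice(50, None)

def decode_matrix(X):
    """Accuracy of a mean-difference decoder trained at t1 and tested at t2."""
    w = X[1, train].mean(0) - X[0, train].mean(0)  # one decoder per time point
    acc = np.zeros((N_TIME, N_TIME))
    for t1 in range(N_TIME):
        for t2 in range(N_TIME):
            s0 = X[0, test, t2] @ w[t1]  # class means are symmetric around 0,
            s1 = X[1, test, t2] @ w[t1]  # so 0 is the decision criterion
            acc[t1, t2] = ((s0 < 0).mean() + (s1 > 0).mean()) / 2
    return acc

acc = decode_matrix(X)
diag = acc.trace() / N_TIME
off = (acc.sum() - acc.trace()) / (N_TIME * (N_TIME - 1))
print(f"diagonal accuracy = {diag:.2f}, off-diagonal accuracy = {off:.2f}")
```

A sustained (imagery-like) representation would instead reuse the same pattern across time points, pushing the off-diagonal accuracy up toward the diagonal.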


Author(s):  
Xiaolian Li ◽  
Qi Zhu ◽  
Wim Vanduffel

The visuotopic organization of dorsal visual cortex rostral to area V2 in primates has been a longstanding source of controversy. Using sub-millimeter phase-encoded retinotopic fMRI mapping, we recently provided evidence for a surprisingly similar visuotopic organization in dorsal visual cortex of macaques compared to previously published maps in New World monkeys (Zhu and Vanduffel, Proc Natl Acad Sci USA 116:2306–2311, 2019). Although individual quadrant representations could be robustly delineated in that study, their grouping into hemifield representations remains a major challenge. Here, we combined in vivo high-resolution myelin density mapping based on MR imaging (400 µm isotropic resolution) with fine-grained retinotopic fMRI to quantitatively compare myelin densities across retinotopically defined visual areas in macaques. Complementing previously documented differences in population receptive-field (pRF) size and visual field signs, myelin densities of both quadrants of the dorsolateral posterior area (DLP) and area V3A are significantly different from those of dorsal and ventral area V3. Moreover, no differences in myelin density were observed between the two matching quadrants of areas DLP, V3A, V1, V2 and V4, respectively. This was not the case, however, for the dorsal and ventral quadrants of area V3, which showed significant differences in MR-defined myelin density, corroborating evidence from previous myelin staining studies. Interestingly, the pRF sizes and visual field signs of the two quadrant representations in V3 do not differ. Although myelin density correlates with curvature and anticorrelates with cortical thickness when measured across the entire cortex, exactly as in humans, the myelin density results in the visual areas cannot be explained by variability in cortical thickness and curvature between these areas.
The present myelin density results largely support our previous model to group the two quadrants of DLP and V3A, rather than grouping DLP- with V3v into a single area VLP, or V3d with V3A+ into DM.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Domenica Veniero ◽  
Joachim Gross ◽  
Stephanie Morand ◽  
Felix Duecker ◽  
Alexander T. Sack ◽  
...  

Voluntary allocation of visual attention is controlled by top-down signals generated within the Frontal Eye Fields (FEFs) that can change the excitability of lower-level visual areas. However, the mechanism through which this control is achieved remains elusive. Here, we emulated the generation of an attentional signal using single-pulse transcranial magnetic stimulation to activate the FEFs and tracked its consequences over the visual cortex. First, we documented changes to brain oscillations using electroencephalography and found evidence for a phase reset over occipital sites at beta frequency. We then probed for perceptual consequences of this top-down triggered phase reset and assessed its anatomical specificity. We show that FEF activation leads to cyclic modulation of visual perception and of extrastriate, but not primary, visual cortex excitability, again at beta frequency. We conclude that top-down signals originating in FEF causally shape visual cortex activity and perception through mechanisms of oscillatory realignment.
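A phase reset of the kind reported here is commonly quantified with inter-trial phase coherence (ITPC): the resultant length of per-trial phase estimates at a frequency of interest, which jumps when a stimulus realigns ongoing oscillations to a common phase. A toy numpy sketch (the beta frequency, window lengths, noise level, and reset magnitude are illustrative assumptions, not the study's EEG analysis):

```python
import numpy as np

rng = np.random.default_rng(4)
FS, N_TRIALS, F_BETA = 250.0, 60, 20.0  # sampling rate (Hz), trials, beta frequency
t = np.arange(-0.2, 0.6, 1 / FS)        # time relative to the "TMS pulse" (s)

# Toy trials: ongoing beta oscillation with a random phase before the pulse,
# realigned to a common phase afterwards, plus sensor noise.
start_phases = rng.uniform(0, 2 * np.pi, N_TRIALS)
trials = np.array([np.where(t < 0,
                            np.sin(2 * np.pi * F_BETA * t + p),
                            np.sin(2 * np.pi * F_BETA * t))
                   for p in start_phases])
trials += 0.3 * rng.standard_normal(trials.shape)

def phase_per_trial(mask):
    """Phase at F_BETA in a window: project each trial onto a complex carrier."""
    carrier = np.exp(-2j * np.pi * F_BETA * t[mask])
    return np.angle(trials[:, mask] @ carrier)

def itpc(phases):
    """Inter-trial phase coherence: resultant length of unit phasors."""
    return np.abs(np.mean(np.exp(1j * phases)))

pre = itpc(phase_per_trial(t < -0.02))
post = itpc(phase_per_trial((t > 0.02) & (t < 0.5)))
print(f"ITPC pre = {pre:.2f}, post = {post:.2f}")
```

Before the pulse the phases are random and ITPC stays near zero; after the reset the trials share a phase and ITPC approaches one.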

