Spatial Attention Shifting to Fearful Faces Depends on Visual Awareness in Attentional Blink: An ERP Study

2021 ◽  
Author(s):  
Zeguo Qiu ◽  
Stefanie Becker ◽  
Alan Pegna

2021 ◽
Author(s):
Daniel Jenkins

Multisensory integration describes the cognitive processes by which information from various perceptual domains is combined to create coherent percepts. For consciously aware perception, multisensory integration can be inferred when information in one perceptual domain influences subjective experience in another. Yet the relationship between integration and awareness is not well understood. One current question is whether multisensory integration can occur in the absence of perceptual awareness. Because there is no subjective experience for unconscious perception, researchers have had to develop novel tasks to infer integration indirectly. For instance, Palmer and Ramsey (2012) presented auditory recordings of spoken syllables alongside videos of faces speaking either the same or different syllables, while masking the videos to prevent visual awareness. The conjunction of matching voices and faces predicted the location of a subsequent Gabor grating (target) on each trial. Participants indicated the location/orientation of the target more accurately when it appeared in the cued location (where it appeared on 80% of trials), and the authors therefore inferred that auditory and visual speech events were integrated in the absence of visual awareness. In this thesis, I investigated whether these findings generalise to the integration of auditory and visual expressions of emotion. In Experiment 1, I presented spatially informative cues in which congruent facial and vocal emotional expressions predicted the target location, with and without visual masking. I found no evidence of spatial cueing in either awareness condition. To investigate the lack of spatial cueing, in Experiment 2 I repeated the task with aware participants only, and had half of those participants explicitly report the emotional prosody. A significant spatial-cueing effect was found only when participants reported the emotional prosody, suggesting that audiovisual congruence can cue spatial attention during aware perception. It remains unclear whether audiovisual congruence can cue spatial attention without awareness, and whether such effects genuinely imply multisensory integration.
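As a concrete illustration of the dependent measure in such a cueing paradigm, the sketch below simulates valid and invalid trials and computes the cueing effect as an accuracy difference. Only the 80% cue validity comes from the abstract; the baseline accuracy, the size of the cueing benefit, and all names are illustrative assumptions, not the thesis's code.

```python
# Minimal sketch of a spatial-cueing analysis (illustrative assumptions only).
import random

random.seed(1)

def simulate_trial(cueing_benefit=0.10, baseline=0.75):
    """One trial: the cue is valid on 80% of trials; a valid cue adds a
    small accuracy benefit if attention was indeed shifted to the cue."""
    valid = random.random() < 0.80
    p_correct = baseline + (cueing_benefit if valid else 0.0)
    return valid, random.random() < p_correct

trials = [simulate_trial() for _ in range(10_000)]

def accuracy(validity):
    subset = [correct for valid, correct in trials if valid == validity]
    return sum(subset) / len(subset)

# A positive valid-minus-invalid difference is the spatial-cueing effect.
print(f"valid: {accuracy(True):.3f}, invalid: {accuracy(False):.3f}, "
      f"effect: {accuracy(True) - accuracy(False):+.3f}")
```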


PLoS ONE ◽  
2014 ◽  
Vol 9 (7) ◽  
pp. e101608 ◽  
Author(s):  
Dandan Zhang ◽  
Yunzhe Liu ◽  
Chenglin Zhou ◽  
Yuming Chen ◽  
Yuejia Luo

PLoS ONE ◽  
2019 ◽  
Vol 14 (3) ◽  
pp. e0212998 ◽  
Author(s):  
Jiaqing Chen ◽  
Jagjot Kaur ◽  
Hana Abbas ◽  
Ming Wu ◽  
Wenyi Luo ◽  
...  

2008 ◽  
Vol 28 (10) ◽  
pp. 2667-2679 ◽  
Author(s):  
V. Wyart ◽  
C. Tallon-Baudry

Author(s):  
Qiuxia Lai ◽  
Yu Li ◽  
Ailing Zeng ◽  
Minhao Liu ◽  
Hanqiu Sun ◽  
...  

The selective visual attention mechanism in the human visual system (HVS) restricts the amount of information to reach visual awareness for perceiving natural scenes, allowing near real-time information processing with limited computational capacity. This kind of selectivity acts as an ‘Information Bottleneck (IB)’, which seeks a trade-off between information compression and predictive accuracy. However, such information constraints are rarely explored in the attention mechanism for deep neural networks (DNNs). In this paper, we propose an IB-inspired spatial attention module for DNN structures built for visual recognition. The module takes as input an intermediate representation of the input image, and outputs a variational 2D attention map that minimizes the mutual information (MI) between the attention-modulated representation and the input, while maximizing the MI between the attention-modulated representation and the task label. To further restrict the information bypassed by the attention map, we quantize the continuous attention scores to a set of learnable anchor values during training. Extensive experiments show that the proposed IB-inspired spatial attention mechanism can yield attention maps that neatly highlight the regions of interest while suppressing backgrounds, and bootstrap standard DNN structures for visual recognition tasks (e.g., image classification, fine-grained recognition, cross-domain classification). The attention maps are interpretable for the decision making of the DNNs as verified in the experiments. Our code is available at this https URL.
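The abstract names two ingredients: a variational 2D attention map trained with a mutual-information penalty, and quantization of the continuous scores to a set of learnable anchors. A minimal PyTorch sketch of that combination follows; the module name, the 1x1-conv parameterization, the Gaussian prior with its KL bound, and the straight-through quantizer are illustrative assumptions, not the authors' released code.

```python
# Sketch of an IB-inspired spatial attention module (illustrative, assumed design).
import torch
import torch.nn as nn

class IBSpatialAttention(nn.Module):
    """Variational 2D attention map with quantized, learnable anchor values."""
    def __init__(self, in_channels: int, num_anchors: int = 4):
        super().__init__()
        # Per-pixel mean and log-variance of the attention logits.
        self.mu = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.logvar = nn.Conv2d(in_channels, 1, kernel_size=1)
        # Learnable anchor values the continuous scores are snapped to.
        self.anchors = nn.Parameter(torch.linspace(0.0, 1.0, num_anchors))

    def forward(self, x):  # x: (B, C, H, W)
        mu, logvar = self.mu(x), self.logvar(x)
        # Reparameterised sample keeps the map stochastic during training.
        eps = torch.randn_like(mu) if self.training else torch.zeros_like(mu)
        scores = torch.sigmoid(mu + eps * torch.exp(0.5 * logvar))
        # Snap each score to its nearest anchor; straight-through estimator
        # passes gradients to both the scores and the anchors.
        idx = (scores.unsqueeze(-1) - self.anchors).abs().argmin(dim=-1)
        quantized = self.anchors[idx]
        attn = quantized + scores - scores.detach()
        # KL-to-prior term: an upper bound used to minimize MI with the input;
        # MI with the task label is maximized by the ordinary task loss.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return x * attn, kl
```

In training, the returned KL term would be added to the task loss with a trade-off weight, e.g. `loss = cross_entropy + beta * kl`, mirroring the IB trade-off between compression and predictive accuracy.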


2010 ◽  
Vol 8 (6) ◽  
pp. 767-767
Author(s):  
V. Wyart ◽  
C. Tallon-Baudry

2020 ◽  
Vol 45 (7) ◽  
pp. 601-608
Author(s):  
Fábio Silva ◽  
Nuno Gomes ◽  
Sebastian Korb ◽  
Gün R Semin

Exposure to body odors (chemosignals) collected under different emotional states (i.e., emotional chemosignals) can modulate the visual system, biasing visual perception. Recent research has suggested that exposure to fear body odors results in generally faster access to visual awareness for different emotional facial expressions (i.e., fear, happy, and neutral). In the present study, we aimed to replicate and extend these findings by testing whether the effects are limited to fear odor, introducing a second negative body odor, namely disgust. We compared the time that three emotional facial expressions (fear, disgust, and neutral) took to reach visual awareness during a breaking continuous flash suppression paradigm, across three body odor conditions (fear, disgust, and neutral). We found that fear body odors do not trigger overall faster access to visual awareness, but instead sped up access to awareness specifically for fearful facial expressions. Disgust odor, on the other hand, had no effect on awareness thresholds for facial expressions. These findings contrast with prior results, suggesting that the capacity of fear body odors to induce visual processing adjustments is specific to fear cues. Furthermore, our results support a unique ability of fear body odors to induce such visual processing changes compared with other negative emotional chemosignals (i.e., disgust). These conclusions raise interesting questions about how fear odor might interact with the visual processing stream, while also opening avenues for future research.
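For readers unfamiliar with breaking continuous flash suppression (b-CFS), the dependent measure is simply the time a suppressed face takes to become visible, compared across odor-by-expression cells. A toy sketch with invented numbers (nothing here is the study's data or analysis code):

```python
# Toy sketch of the b-CFS measure: mean breakthrough time per odor x expression
# cell. Numbers are invented; the reported pattern is selectively faster
# breakthrough for fearful faces under the fear body odor.
from statistics import mean

# (body odor condition, facial expression, breakthrough time in seconds)
trials = [
    ("fear", "fear", 2.1), ("fear", "disgust", 2.7), ("fear", "neutral", 2.6),
    ("disgust", "fear", 2.5), ("disgust", "disgust", 2.7), ("disgust", "neutral", 2.6),
    ("neutral", "fear", 2.5), ("neutral", "disgust", 2.7), ("neutral", "neutral", 2.6),
]

for odor in ("fear", "disgust", "neutral"):
    cell_means = {
        expression: mean(t for o, e, t in trials if o == odor and e == expression)
        for expression in ("fear", "disgust", "neutral")
    }
    print(odor, cell_means)
```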

