Attention Capture by Angry Faces Depends on the Distribution of Attention

2021 ◽  
Author(s):  
Joshua James Foster

<p>The threat-capture hypothesis posits a threat-detection system that automatically directs visual attention to threat-related stimuli (e.g., angry facial expressions) in the environment. Importantly, this system is theorised to operate preattentively, processing all input across the visual field in parallel, prior to the operation of selective attention. The threat-capture hypothesis generates two predictions. First, because the threat-detection system directs attention to threat automatically, threat stimuli should capture attention when they are task-irrelevant and the observer has no intention to attend to them. Second, because the threat-detection system operates preattentively, threat stimuli should capture attention even when attention is engaged elsewhere. This thesis tested these predictions using behavioural measures of attention capture in conjunction with the N2pc, an event-related potential (ERP) index of attentional selection. Experiment 1 tested the first prediction of the threat-capture hypothesis – that threat stimuli capture attention when they are task-irrelevant. Participants performed a dot-probe task in which pairs of face cues – one angry and one neutral – preceded a lateral target. On some trials, the faces were Fourier phase-scrambled to control for low-level visual properties. Consistent with the threat-capture hypothesis, an N2pc was observed for angry faces, suggesting that they captured attention despite being completely task-irrelevant. Interestingly, this effect remained when the faces were Fourier phase-scrambled, suggesting that low-level visual properties drive attention capture by angry faces. Experiments 2A and 2B tested the second prediction of the threat-capture hypothesis – that threat stimuli capture attention when it is engaged elsewhere. Participants performed a primary task in which they searched a column of letters at fixation for a target letter. The perceptual load of this task was manipulated – yielding high- and low-load conditions – to ensure that attentional resources could be fully consumed by the task. Task-irrelevant angry faces interfered with task performance when the perceptual load of the task was high but not when it was low (Experiment 2A). Similarly, angry faces elicited an N2pc, indicating that they captured attention, but only when perceptual load was high and when the faces were phase-scrambled (Experiment 2B). These experiments further suggest that low-level visual factors are important in attention capture by angry faces. These results appear inconsistent with the threat-capture hypothesis and suggest that angry faces do not necessarily capture attention when it is engaged elsewhere.</p>
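The N2pc used throughout this work is conventionally measured as a contralateral-minus-ipsilateral difference wave at posterior electrodes (commonly PO7/PO8) roughly 200–300 ms after stimulus onset. A minimal NumPy sketch of that computation on simulated data follows; the sampling rate, electrode pairing, and measurement window are illustrative assumptions, not the thesis's actual recording parameters.

```python
import numpy as np

# Simulated single-trial ERPs at a left/right posterior electrode pair
# (e.g., PO7/PO8). Hypothetical parameters: 250 Hz sampling rate,
# epoch from 0 to 500 ms after cue onset, 100 trials of pure noise.
rng = np.random.default_rng(0)
fs = 250
times = np.arange(0, 0.5, 1 / fs)              # seconds
n_trials = 100

left_chan = rng.normal(0, 1, (n_trials, times.size))
right_chan = rng.normal(0, 1, (n_trials, times.size))
# Side of the angry face on each trial: 0 = left, 1 = right.
face_side = rng.integers(0, 2, n_trials)

def n2pc_amplitude(left, right, side, times, window=(0.2, 0.3)):
    """Mean contralateral-minus-ipsilateral difference in the N2pc
    window. 'Contralateral' is the electrode opposite the angry face:
    the right electrode when the face is on the left, and vice versa."""
    contra = np.where(side[:, None] == 0, right, left)
    ipsi = np.where(side[:, None] == 0, left, right)
    diff = (contra - ipsi).mean(axis=0)        # grand-average difference wave
    mask = (times >= window[0]) & (times < window[1])
    return diff[mask].mean()

amp = n2pc_amplitude(left_chan, right_chan, face_side, times)
```

With noise-only input the difference wave hovers around zero; a reliable negative deflection in the measurement window is what would indicate attentional selection of the lateral face.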



2021 ◽  
Author(s):  
Zhang Xiaojun ◽  
Li Yingcai ◽  
Zhang Fuqiang ◽  
Zhang Qian ◽  
Han Li

2017 ◽  
Author(s):  
Nicolas Burra ◽  
Dirk Kerzel ◽  
David Munoz ◽  
Didier Grandjean ◽  
Leonardo Ceravolo

Salient vocalizations, especially aggressive voices, are believed to attract attention due to an automatic threat-detection system. However, studies assessing the temporal dynamics of auditory spatial attention to aggressive voices have been lacking. Using event-related potential markers of auditory spatial attention (the N2ac and LPCpc), we show that attentional processing of threatening vocal signals is enhanced at two different stages of auditory processing. As early as 200 ms post-stimulus onset, attentional orienting/engagement is enhanced for threatening as compared to happy vocal signals. Subsequently, from around 400 ms post-stimulus onset, the reorienting of auditory attention to the center of the screen (or disengagement from the target) is enhanced. This latter effect is consistent with the need to optimize perception by balancing the intake of stimulation from the left and right auditory space. Our results extend the scope of theories from the visual to the auditory modality by showing that threatening stimuli also bias early spatial attention in audition. Although not the focus of the present work, we observed that the attentional enhancement was more pronounced in female than in male participants.



2010 ◽  
Vol 2 (2) ◽  
pp. 51-67
Author(s):  
Zafar Sultan ◽  
Paul Kwan

In this paper, a hybrid identity fusion model at the decision level is proposed for Simultaneous Threat Detection Systems. The hybrid model comprises mathematical and statistical data fusion engines: Dempster-Shafer, Extended Dempster, and Generalized Evidential Processing (GEP). Simultaneous Threat Detection Systems improve the threat detection rate by 39%. In terms of efficiency and performance, a comparison of the three inference engines showed that GEP is the best data fusion model, increasing the precision of threat detection from 56% to 95%. Furthermore, set cover packing was used as a middle-tier data fusion tool to discover reduced-size groups of threat data. Set cover provided a significant improvement, reducing the threat population from 2272 to 295, which lowered the cost and time of evidential processing when determining the combined probability mass of the proposed Multiple Simultaneous Threat Detection System. This technique is particularly relevant to online and Internet-dependent applications, including portals.
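The combination step at the core of the Dempster-Shafer engine is Dempster's rule: evidence from independent sources is multiplied over intersecting focal elements, conflicting mass is discarded, and the result is renormalised. A minimal sketch follows; the two-element frame {threat, benign} and the sensor mass assignments are illustrative assumptions, not values from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to mass) with Dempster's rule of combination: mass on
    empty intersections is treated as conflict and discarded, and
    the remaining masses are renormalised to sum to 1."""
    combined = {}
    conflict = 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    k = 1.0 - conflict           # normalisation constant
    return {s: v / k for s, v in combined.items()}

THREAT, BENIGN = frozenset({"threat"}), frozenset({"benign"})
EITHER = THREAT | BENIGN         # total ignorance

# Two hypothetical sensors, each leaving some mass on "either"
# (their residual uncertainty).
sensor_a = {THREAT: 0.6, EITHER: 0.4}
sensor_b = {THREAT: 0.7, BENIGN: 0.1, EITHER: 0.2}

fused = dempster_combine(sensor_a, sensor_b)
# Fused mass on THREAT is (0.42 + 0.12 + 0.28) / 0.94 ≈ 0.87,
# higher than either sensor's individual belief.
```

The renormalisation by 1 - conflict is what makes Dempster's rule sensitive to disagreement between sources, which is one motivation for the extended variants the abstract compares.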


2019 ◽  
Vol 14 (7) ◽  
pp. 727-735 ◽  
Author(s):  
Annett Schirmer ◽  
Maria Wijaya ◽  
Esther Wu ◽  
Trevor B Penney

This pre-registered event-related potential study explored how vocal emotions shape visual perception as a function of attention and listener sex. Visual task displays occurred in silence or with a neutral or an angry voice. Voices were task-irrelevant in a single-task block, but had to be categorized by speaker sex in a dual-task block. In the single task, angry voices increased the occipital N2 component relative to neutral voices in women, but not men. In the dual task, angry voices relative to neutral voices increased occipital N1 and N2 components, as well as accuracy, in women and marginally decreased accuracy in men. Thus, in women, vocal anger produced a strong, multifaceted visual enhancement comprising attention-dependent and attention-independent processes, whereas in men, it produced a small, behavior-focused visual processing impairment that was strictly attention-dependent. In sum, these data indicate that attention and listener sex critically modulate whether and how vocal emotions shape visual perception.


2009 ◽  
Vol 9 (12) ◽  
pp. 12-12 ◽  
Author(s):  
S. Taya ◽  
W. J. Adams ◽  
E. W. Graf ◽  
N. Lavie
