Searching for emotion: A top-down set governs attentional orienting to facial expressions

2020 ◽  
Vol 204 ◽  
pp. 103024
Author(s):  
Hannah L. Delchau ◽  
Bruce K. Christensen ◽  
Ottmar V. Lipp ◽  
Richard O'Kearney ◽  
Kavindu H. Bandara ◽  
...  
2006 ◽  
Vol 18 (7) ◽  
pp. 1120-1132 ◽  
Author(s):  
Christopher Summerfield ◽  
Jennifer A. Mangels

Attention is a necessary condition for the formation of new episodic memories, yet little is known about how dissociable attentional mechanisms for “top-down” and “bottom-up” orienting contribute to encoding. Here, subjects performed an intentional encoding task in which to-be-learned items were interspersed with irrelevant stimuli such that subjects could anticipate the appearance of some study items but not others. Subjects were more likely to later remember stimuli whose appearance was predictable at encoding. Electroencephalographic data were acquired during the study phase of the experiment to assess how synchronous neural activity related to later memory for predictable stimuli (to which attention could be oriented in a top-down fashion) and unpredictable stimuli (which rely to a greater extent on bottom-up attentional orienting). Over left frontal regions, gamma-band activity (25–55 Hz) early (∼150 msec) in the epoch was a robust predictor of later memory for predictable items, consistent with an emerging view that links high-frequency neural synchrony to top-down attention. By contrast, later (∼400 msec) theta-band activity (4–8 Hz) over the left and midline frontal cortex predicted subsequent memory for unpredictable items, suggesting a role in bottom-up attentional orienting. These results reveal for the first time the contribution of dissociable attentional mechanisms to successful encoding and contribute to a growing literature dedicated to understanding the role of neural synchrony in cognition.
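
As an illustration of the kind of analysis this abstract describes, the sketch below estimates band-limited power in a single EEG epoch with a Butterworth filter and Hilbert envelope, then averages it in windows around the reported latencies (~150 ms for the gamma band, ~400 ms for the theta band). The sampling rate, epoch length, and synthetic data are assumptions made for illustration; this is not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): band-limited power in a
# single EEG epoch via Butterworth bandpass filtering and the Hilbert envelope.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500                         # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / FS)    # 1-s epoch, stimulus onset at t = 0 (assumed)

def band_power(epoch, lo, hi, fs=FS):
    """Instantaneous power of the epoch filtered to the [lo, hi] Hz band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, epoch))
    return np.abs(analytic) ** 2

# Synthetic single-trial epoch standing in for one frontal channel.
rng = np.random.default_rng(0)
epoch = rng.normal(size=t.size)

gamma = band_power(epoch, 25, 55)   # 25-55 Hz, as in the abstract
theta = band_power(epoch, 4, 8)     # 4-8 Hz, as in the abstract

# Average power in windows centred on the reported latencies.
def window_mean(power, centre, half_width=0.05):
    mask = (t > centre - half_width) & (t < centre + half_width)
    return power[mask].mean()

gamma_150 = window_mean(gamma, 0.15)   # candidate predictor for predictable items
theta_400 = window_mean(theta, 0.40)   # candidate predictor for unpredictable items
print(gamma_150, theta_400)
```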


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e1677 ◽  
Author(s):  
Carlo Fantoni ◽  
Sara Rigutti ◽  
Walter Gerbino

Fantoni & Gerbino (2014) showed that subtle postural shifts associated with reaching can have a strong hedonic impact and affect how actors experience facial expressions of emotion. Using a novel Motor Action Mood Induction Procedure (MAMIP), they found consistent congruency effects in participants who performed a facial emotion identification task after a sequence of visually-guided reaches: a face perceived as neutral in a baseline condition appeared slightly happy after comfortable actions and slightly angry after uncomfortable actions. However, skeptics about the penetrability of perception (Zeimbekis & Raftopoulos, 2015) would consider such evidence insufficient to demonstrate that observer’s internal states induced by action comfort/discomfort affect perception in a top-down fashion. The action-modulated mood might have produced a back-end memory effect capable of affecting post-perceptual and decision processing, but not front-end perception. Here, we present evidence that performing a facial emotion detection (not identification) task after MAMIP exhibits systematic mood-congruent sensitivity changes, rather than response bias changes attributable to cognitive set shifts; i.e., we show that observer’s internal states induced by bodily action can modulate affective perception. The detection threshold for happiness was lower after fifty comfortable than uncomfortable reaches, while the detection threshold for anger was lower after fifty uncomfortable than comfortable reaches. Action valence induced an overall sensitivity improvement in detecting subtle variations of congruent facial expressions (happiness after positive comfortable actions, anger after negative uncomfortable actions), in the absence of significant response bias shifts. Notably, both comfortable and uncomfortable reaches impact sensitivity in an approximately symmetric way relative to a baseline inaction condition. All of these constitute compelling evidence of a genuine top-down effect on perception: specifically, facial expressions of emotion are penetrable by action-induced mood. Affective priming by action valence is a candidate mechanism for the influence of observer’s internal states on properties experienced as phenomenally objective and yet loaded with meaning.
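
The sensitivity-versus-bias distinction at the heart of this argument comes from signal detection theory. The sketch below, using made-up hit and false-alarm counts rather than the study's data, shows how d′ (sensitivity) and the criterion c (response bias) are computed, so that a lower detection threshold corresponds to a higher d′ at an unchanged criterion.

```python
# Minimal equal-variance signal detection sketch: d' indexes perceptual
# sensitivity, c indexes response bias. Counts below are invented, not study data.
from scipy.stats import norm

def sdt_indices(hits, misses, fas, crs):
    """Return (d_prime, criterion) with a log-linear correction for 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Hypothetical happiness-detection counts after comfortable vs uncomfortable reaches.
print(sdt_indices(hits=42, misses=8, fas=10, crs=40))   # higher d': lower threshold
print(sdt_indices(hits=33, misses=17, fas=10, crs=40))  # lower d', similar criterion
```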


2010 ◽  
Vol 34 (6) ◽  
pp. 547-553 ◽  
Author(s):  
Jukka Leppänen ◽  
Mikko J. Peltola ◽  
Mirjami Mäntymaa ◽  
Mikko Koivuluoma ◽  
Anni Salminen ◽  
...  

To examine the ontogeny of emotion–attention interactions, we investigated whether infants exhibit adult-like biases in automatic and voluntary attentional processes towards fearful facial expressions. Heart rate and saccadic eye movements were measured from 7-month-old infants (n = 42) while viewing non-face control stimuli, and neutral, happy, and fearful facial expressions flanked after 1000 ms by a peripheral distractor. Relative to neutral and happy expressions, fearful expressions resulted in a greater cardiac deceleration response during the first 1000 ms of face-viewing and in a relatively long-lasting suppression of face-to-distractor saccades. The results suggest that the neural architecture for the integration of emotional significance with automatic attentional orienting as well as more voluntary attentional prioritization processes is present early in life.
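
A minimal sketch of the two dependent measures described above, with invented numbers rather than infant data: cardiac deceleration over the first 1000 ms of face viewing (baseline heart rate minus face-viewing heart rate, derived from inter-beat intervals) and the probability of a face-to-distractor saccade per expression condition.

```python
# Illustrative sketch (not the authors' analysis) of cardiac deceleration and
# face-to-distractor saccade probability. All values are invented.
import numpy as np

def mean_hr(ibis_ms):
    """Mean heart rate (beats/min) from inter-beat intervals in milliseconds."""
    return 60000.0 / np.mean(ibis_ms)

baseline_ibis = np.array([430, 435, 428])   # pre-stimulus inter-beat intervals
face_ibis = np.array([450, 455, 460])       # first ~1000 ms of face viewing
deceleration = mean_hr(baseline_ibis) - mean_hr(face_ibis)  # positive = HR slowing

# Saccade data: True where gaze shifted from the face to the distractor.
saccades = {"fearful": [False, False, True, False],
            "neutral": [True, True, False, True]}
p_shift = {cond: float(np.mean(trials)) for cond, trials in saccades.items()}
print(round(deceleration, 1), p_shift)  # more deceleration / fewer shifts for fear
```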


2018 ◽  
Vol 72 (4) ◽  
pp. 729-741 ◽  
Author(s):  
Manuel G Calvo ◽  
Eva G Krumhuber ◽  
Andrés Fernández-Martín

A happy facial expression makes a person look (more) trustworthy. Do perceptions of happiness and trustworthiness rely on the same face regions and visual attention processes? In an eye-tracking study, eye movements and fixations were recorded while participants judged the un/happiness or the un/trustworthiness of dynamic facial expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. A smiling mouth and happy eyes enhanced perceived happiness and trustworthiness similarly, with a greater contribution of the smile relative to the eyes. This comparable judgement output for happiness and trustworthiness was reached through shared as well as distinct attentional mechanisms: (a) entry times and (b) initial fixation thresholds for each face region were equivalent for both judgements, thereby revealing the same attentional orienting in happiness and trustworthiness processing. However, (c) greater and (d) longer fixation density for the mouth region in the happiness task, and for the eye region in the trustworthiness task, demonstrated different selective attentional engagement. Relatedly, (e) mean fixation duration across face regions was longer in the trustworthiness task, thus showing increased attentional intensity or processing effort.
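
The gaze measures named in (a) through (e) can be computed from fixation records assigned to face regions. The sketch below illustrates entry time and fixation density for a region of interest; the fixation list is hypothetical, not data from the study.

```python
# Illustrative sketch of two of the gaze measures named above, computed from
# fixations labelled by face region. Records are hypothetical.
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float       # time since trial onset
    duration_ms: float
    region: str           # "eyes", "mouth", or "other"

fixations = [
    Fixation(180, 220, "eyes"), Fixation(410, 310, "mouth"),
    Fixation(730, 260, "mouth"), Fixation(1000, 240, "eyes"),
]

def entry_time(fixs, region):
    """Latency of the first fixation entering a region (None if never entered)."""
    times = [f.onset_ms for f in fixs if f.region == region]
    return min(times) if times else None

def fixation_density(fixs, region):
    """Share of total fixation time spent in a region."""
    total = sum(f.duration_ms for f in fixs)
    return sum(f.duration_ms for f in fixs if f.region == region) / total

print(entry_time(fixations, "mouth"), fixation_density(fixations, "mouth"))
```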


Author(s):  
Hyunwoong Ko ◽  
Kisun Kim ◽  
Minju Bae ◽  
Myo-Geong Seo ◽  
Gieun Nam ◽  
...  

The ability to express and recognize emotion via facial expressions is well known to change with age. The present study investigated differences in facial recognition and facial expression between the elderly (n = 57) and the young (n = 115) and measured how each group uses different facial muscles for each emotion with the Facial Action Coding System (FACS). In the facial recognition task, the elderly did not recognize facial expressions better than young people and reported stronger feelings of fear and sadness from the photographs. In the facial expression task, the elderly rated all of their facial expressions as stronger than the young did, but in fact expressed strong expressions for fear and anger. Furthermore, the elderly used more muscles in the lower face when making facial expressions than younger people did. These results help to better understand how facial recognition and expression change in the elderly, and show that the elderly do not effectively execute top-down processing concerning facial expression.
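
For readers unfamiliar with FACS, the sketch below illustrates the kind of region-level summary the lower-face finding rests on: counting how many coded action units (AUs) fall in the upper versus lower face. The upper/lower grouping used here is a simplification, and the coded AU lists are hypothetical.

```python
# Illustrative sketch, not the study's coding pipeline: summarise coded FACS
# action units by face region. The upper-face set is a simplified grouping.
UPPER_FACE_AUS = {1, 2, 4, 5, 6, 7}   # brow / eye-region AUs (simplified)

def face_region_counts(aus):
    upper = sum(1 for au in aus if au in UPPER_FACE_AUS)
    return {"upper": upper, "lower": len(aus) - upper}

elderly_anger = [4, 5, 7, 9, 17, 23, 24]   # hypothetical coded AUs
young_anger = [4, 5, 7, 23]
print(face_region_counts(elderly_anger), face_region_counts(young_anger))
```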


2021 ◽  
Author(s):  
Agnieszka Wykowska

Attentional orienting towards others’ gaze direction or pointing has been well investigated in laboratory conditions. However, less is known about the operation of attentional mechanisms in online naturalistic social interaction scenarios. It is equally plausible that following social directional cues (gaze, pointing) occurs reflexively, and/or that it is influenced by top-down cognitive factors. In a mobile eye-tracking experiment, we show that under natural interaction conditions overt attentional orienting is not necessarily reflexively triggered by pointing gestures or a combination of gaze shifts and pointing gestures. We found that participants conversing with an experimenter, who, during the interaction, would play out pointing gestures as well as directional gaze movements, continued to mostly focus their gaze on the face of the experimenter, demonstrating the significance of attending to the face of the interaction partner – in line with effective top-down control over reflexive orienting of attention in the direction of social cues.


2007 ◽  
Vol 14 (1) ◽  
pp. 159-165 ◽  
Author(s):  
Sowon Hahn ◽  
Scott D. Gronlund

2020 ◽  
Author(s):  
Abdulaziz Abubshait ◽  
Ali Momen ◽  
Eva Wiese

Understanding and reacting to others’ nonverbal social signals, such as changes in gaze direction (i.e., gaze cues), is essential for social interactions, as it is important for processes such as joint attention and mentalizing. Although attentional orienting in response to gaze cues has a strong reflexive component, accumulating evidence shows that it can be top-down controlled by context information regarding the signals’ social relevance. For example, when a gazer is believed to be an entity “with a mind” (i.e., mind perception), people exert more top-down control on attentional orienting. Although increasing an agent’s physical human-likeness can enhance mind perception, it could have negative consequences for top-down control of social attention when a gazer’s physical appearance is categorically ambiguous (i.e., difficult to categorize as human or nonhuman), as resolving this ambiguity would require cognitive resources that could otherwise be used to top-down control attentional orienting. To examine this question, we used mouse-tracking to explore whether categorically ambiguous agents are associated with increased processing costs (Experiment 1), whether categorically ambiguous stimuli negatively impact top-down control of social attention (Experiment 2), and whether resolving the conflict related to the agent’s categorical ambiguity (through exposure) would restore top-down control of attentional orienting (Experiment 3). The findings suggest that categorically ambiguous stimuli are associated with cognitive conflict, which negatively impacts the ability to exert top-down control over attentional orienting in a counterpredictive gaze-cueing paradigm; this negative impact, however, is attenuated when participants are pre-exposed to the stimuli prior to the gaze-cueing task. Taken together, these findings suggest that manipulating physical human-likeness is a powerful way to affect mind perception in human-robot interaction, but has diminishing returns for social attention when the agent is categorically ambiguous, due to the drainage of cognitive resources and the impairment of top-down control.
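
Mouse-tracking processing costs are commonly indexed by how far the cursor path deviates from a straight line between its start and end points. The sketch below computes maximum deviation for one trajectory; the coordinates are invented, and this is a generic conflict index rather than the authors' exact measure.

```python
# Illustrative sketch of a standard mouse-tracking conflict index: maximum
# perpendicular deviation from the straight start-to-end line. Larger values
# are typically read as greater cognitive conflict. Coordinates are invented.
import numpy as np

def max_deviation(x, y):
    """Maximum perpendicular distance of (x, y) samples from the start-end line."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    # 2-D cross product of the start-end vector with each start-to-sample vector.
    cross = dx * (y - y[0]) - dy * (x - x[0])
    return np.max(np.abs(cross)) / np.hypot(dx, dy)

x = [0.0, 0.1, 0.3, 0.6, 1.0]
y = [0.0, 0.4, 0.7, 0.9, 1.0]
print(max_deviation(x, y))  # deviation toward the unselected response option
```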

