The image features of emotional faces that predict the initial eye movement to a face

2021, Vol 11 (1)
Author(s): S. M. Stuit, T. M. Kootstra, D. Terburg, C. van den Boomen, M. J. van der Smagt, ...

Abstract: Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their low-level image features rather than in terms of the emotional content (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the initial eye movement towards one out of two simultaneously presented faces. Interestingly, the identified features serve as better predictors than the emotional content of the expressions. We therefore propose that our modelling approach can further specify which visual features drive these and other behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
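A minimal sketch of the two feature types named in this abstract, an overall spatial-frequency (SF) profile and localized HOG descriptors, is given below in Python. The image size, number of SF bins, HOG cell parameters, and the helper name extract_face_features are illustrative assumptions, not the authors' actual pipeline.

```python
# Rough sketch of the two feature types described above: an overall
# spatial-frequency (SF) profile and localized HOG descriptors.
# Parameters (image size, cell size, number of SF bins) are assumptions.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def extract_face_features(face_img, n_sf_bins=20):
    """face_img: 2D grayscale array of a single face stimulus."""
    img = resize(face_img, (128, 128), anti_aliasing=True)

    # Overall SF profile: radially averaged amplitude spectrum.
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(amp.shape) // 2
    y, x = np.indices(amp.shape)
    radius = np.hypot(y - cy, x - cx)
    bins = np.linspace(0, radius.max(), n_sf_bins + 1)
    sf_profile = np.array([
        amp[(radius >= lo) & (radius < hi)].mean()
        for lo, hi in zip(bins[:-1], bins[1:])
    ])

    # Localized HOG features: oriented gradient histograms per cell.
    hog_vec = hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(1, 1), feature_vector=True)

    return np.concatenate([sf_profile, hog_vec])
```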

2020
Author(s): Sjoerd Stuit, Timo Kootstra, David Terburg, Carlijn van den Boomen, Maarten van der Smagt, ...

Abstract: Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the first selected face out of two simultaneously presented faces. In other words, we show which visual features predict selection between two faces. Interestingly, the identified features serve as better predictors than the semantic label of the expressions. We therefore propose that our modelling approach can further specify which visual features drive the behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
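As a companion to the feature sketch above, the classification step described here could look roughly like the following. The use of scikit-learn logistic regression, the per-trial feature contrast (left face minus right face), and the cross-validation scheme are assumptions for illustration, not the authors' reported method.

```python
# Sketch of the classification step: predict, per trial, which of two
# simultaneously presented faces receives the first eye movement, from
# the difference between the two faces' feature vectors.
# Classifier choice and data layout are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_first_saccade(left_feats, right_feats, chose_left):
    """
    left_feats, right_feats: (n_trials, n_features) feature arrays for the
        left and right face on each trial.
    chose_left: (n_trials,) boolean array, True if the first saccade
        landed on the left face.
    Returns mean cross-validated decoding accuracy.
    """
    X = left_feats - right_feats          # trial-wise feature contrast
    y = chose_left.astype(int)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    return scores.mean()

# Example usage with random placeholder data:
# rng = np.random.default_rng(0)
# acc = decode_first_saccade(rng.normal(size=(200, 84)),
#                            rng.normal(size=(200, 84)),
#                            rng.random(200) > 0.5)
```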


Author(s): Christian PADILLA-NAVARRO, Carlos ZARATE-TREJO, Georges KHALAF, Pascal FALLAVOLLITA

Alexithymia is a condition that partially or completely deprives an individual of the ability to identify and describe emotions and to express affect through their actions. The condition has been the subject of numerous research projects that study its characteristics, implications, and possibilities for prevention, and that attempt to quantify both how individuals experience this construct and how they respond to particular stimuli. Other reviewed studies aimed to relate the performance of subjects diagnosed with alexithymia on tasks requiring the recognition of emotional facial expressions to their score on the Toronto Alexithymia Scale (TAS), a metric frequently used to evaluate the presence or absence of alexithymia in an individual. This work presents a review of the articles that study this connection, as well as articles describing the state of the art in artificial intelligence algorithms applied to the treatment or prevention of secondary alexithymia.
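The connection described above between recognition performance and TAS scores is typically examined with a simple correlational analysis. The sketch below only illustrates that style of analysis; the participant data, variable names, and use of scipy's Pearson correlation are hypothetical placeholders and do not reproduce any reviewed study.

```python
# Minimal sketch of relating facial-emotion recognition accuracy to
# Toronto Alexithymia Scale (TAS) scores; all values below are
# hypothetical placeholders, not data from any study.
import numpy as np
from scipy.stats import pearsonr

# One row per participant: TAS-20 total score and proportion of
# emotional facial expressions correctly recognized.
tas_scores = np.array([38, 52, 61, 45, 70, 33, 58, 49, 66, 41])
recognition_acc = np.array([0.82, 0.71, 0.64, 0.78, 0.55,
                            0.88, 0.69, 0.75, 0.60, 0.80])

r, p = pearsonr(tas_scores, recognition_acc)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```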


2010, Vol 25 (1), pp. 111-120
Author(s): Lemke Leyman, Rudi De Raedt, Roel Vaeyens, Renaat M. Philippaerts

2021, Vol 15
Author(s): Teresa Sollfrank, Oona Kohnen, Peter Hilfiker, Lorena C. Kegel, Hennric Jokeit, ...

This study aimed to examine whether the cortical processing of emotional faces is modulated by the computerization of face stimuli ("avatars") in a group of 25 healthy participants. Subjects passively viewed 128 static and dynamic facial expressions of female and male actors and their respective avatars in neutral or fearful conditions. Event-related potentials (ERPs), as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS), were derived from the EEG recorded during the task. All ERP features, except for the very early N100, differed in their response to avatar and actor faces. Whereas the N170 showed differences only for the neutral avatar condition, later potentials (N300 and LPP) differed in both emotional conditions (neutral and fear) and for both presented agents (actor and avatar). In addition, we found that the avatar faces elicited significantly stronger reactions than the actor faces for theta and alpha oscillations. Theta EEG frequencies in particular responded specifically to visual emotional stimulation and were sensitive to the emotional content of the face, whereas the alpha frequency was modulated by all stimulus types. We conclude that computerized avatar faces affect both ERP components and ERD/ERS, and evoke neural effects that differ from those elicited by real faces. This was true even though the avatars were replicas of the human faces and their expressions shared similar characteristics.
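ERD/ERS, as referred to in this abstract, is conventionally computed as the percentage change in band-limited power relative to a pre-stimulus baseline. The sketch below shows that computation for a single EEG channel with scipy; the frequency band, sampling rate, and baseline window are assumed values, and the authors' actual analysis pipeline is not specified here.

```python
# Sketch of a classic ERD/ERS computation for one EEG channel:
# band-pass filter, Hilbert envelope for instantaneous power, then
# percentage change relative to a pre-stimulus baseline.
# Band limits, sampling rate, and baseline window are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_ers(epochs, sfreq=250.0, band=(8.0, 12.0), baseline=(-0.5, 0.0),
            tmin=-1.0):
    """
    epochs: (n_trials, n_samples) array for one channel, time-locked to
        stimulus onset at t = 0; tmin is the time of the first sample.
    Returns the ERD/ERS time course in percent (negative = ERD).
    """
    b, a = butter(4, np.array(band) / (sfreq / 2.0), btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2
    mean_power = power.mean(axis=0)               # average over trials

    times = tmin + np.arange(epochs.shape[-1]) / sfreq
    base = (times >= baseline[0]) & (times < baseline[1])
    ref = mean_power[base].mean()
    return 100.0 * (mean_power - ref) / ref
```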


Neurology, 2020, Vol 95 (19), pp. e2635-e2647
Author(s): Lindsay D. Oliver, Chloe Stewart, Kristy Coleman, James H. Kryklywy, Robert Bartha, ...

Objective: To determine whether intranasal oxytocin, alone or in combination with instructed mimicry of facial expressions, would augment neural activity in patients with frontotemporal dementia (FTD) in brain regions associated with empathy, emotion processing, and the simulation network, as indexed by blood oxygen level-dependent (BOLD) signal during fMRI.

Methods: In a placebo-controlled, randomized crossover design, 28 patients with FTD received 72 IU intranasal oxytocin or placebo and then completed an fMRI facial expression mimicry task.

Results: Oxytocin alone and in combination with instructed mimicry increased activity in regions of the simulation network and in limbic regions associated with emotional expression processing.

Conclusions: The findings demonstrate latent capacity to augment neural activity in affected limbic and other frontal and temporal regions during social cognition in patients with FTD, and support the promise and need for further investigation of these interventions as therapeutics in FTD.

ClinicalTrials.gov identifier: NCT01937013.

Classification of evidence: This study provides Class III evidence that a single dose of 72 IU intranasal oxytocin augments BOLD signal in patients with FTD during viewing of emotional facial expressions.


2018, Vol 24 (4), pp. 565-575
Author(s): Orrie Dan, Iris Haimov, Kfir Asraf, Kesem Nachum, Ami Cohen

Objective: The present study sought to investigate whether young adults with ADHD have more difficulty recognizing emotional facial expressions compared with young adults without ADHD, and whether such a difference worsens following sleep deprivation. Method: Thirty-one young men (M = 25.6) with (n = 15) or without (n = 16) a diagnosis of ADHD were included in this study. The participants were instructed to sleep 7 hr or more each night for one week, and their sleep quality was monitored via actigraph. Subsequently, the participants were kept awake in a controlled environment for 30 hr. The participants completed a visual emotional morph task twice: at the beginning and at the end of this period. The task included presentation of interpolated face stimuli ranging from neutral facial expressions to fully emotional facial expressions of anger, sadness, or happiness, allowing for assessment of the intensity threshold for recognizing these facial emotional expressions. Results: Actigraphy data demonstrated that while the nightly sleep duration of the participants with ADHD was similar to that of participants without ADHD, their sleep efficiency was poorer. At the onset of the experiment, there were no differences in recognition thresholds between the participants with ADHD and those without ADHD. Following sleep deprivation, however, the ADHD group required clearer facial expressions to recognize the presence of angry, sad, and, to a lesser extent, happy faces. Conclusion: Among young adults with ADHD, sleep deprivation may hinder the processing of emotional facial stimuli.
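The intensity threshold mentioned in this abstract (the minimum morph level at which an expression is reliably recognized) is commonly estimated by fitting a psychometric function to recognition rates across morph levels. The sketch below uses a logistic fit with scipy; the 50%-recognition criterion, parameter names, and starting values are illustrative assumptions rather than the study's actual procedure.

```python
# Sketch of estimating a recognition-intensity threshold from a morph
# task: fit a logistic psychometric function to the proportion of
# "emotion recognized" responses at each morph level, then read off the
# level yielding 50% recognition. Criterion and parameters are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: recognition probability vs. morph level."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def recognition_threshold(morph_levels, prop_recognized):
    """
    morph_levels: morph intensities (e.g. 0 = neutral, 1 = full expression).
    prop_recognized: proportion of trials on which the emotion was reported.
    Returns the morph level at which recognition reaches 50% (x0).
    """
    (x0, k), _ = curve_fit(logistic, morph_levels, prop_recognized,
                           p0=[0.5, 10.0], maxfev=10000)
    return x0

# Hypothetical usage:
# levels = np.linspace(0.0, 1.0, 11)
# props = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.6,
#                   0.8, 0.9, 0.95, 0.98, 1.0])
# print(recognition_threshold(levels, props))
```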

