The Visual Features of Emotional Faces That Predict Forced Choice Selection of Faces

Author(s): Sjoerd Stuit, Timo Kootstra, David Terburg, Carlijn van den Boomen, Maarten van der Smagt, ...

Abstract: Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the first selected face out of two simultaneously presented faces. In other words, we show which visual features predict selection between two faces. Interestingly, the identified features serve as better predictors than the semantic labels of the expressions. We therefore propose that our modelling approach can further specify which visual features drive the behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
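A minimal sketch of the classification approach described above, assuming scikit-image's hog and scikit-learn's LinearSVC; the image pairs (faces_a, faces_b) and choice labels (chosen) are hypothetical placeholders, not the authors' data:

```python
# Sketch: predict which of two faces is selected, from localized HOG features.
# Assumes grayscale face images as 2-D NumPy arrays of equal size.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def hog_features(img):
    # Localized Histograms of Oriented Gradients: gradient-orientation
    # histograms pooled over small spatial cells of the face image.
    return hog(img, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1), feature_vector=True)

# faces_a, faces_b: paired stimuli; chosen: 0 if face A was selected, 1 if B.
X = np.array([np.concatenate([hog_features(a), hog_features(b)])
              for a, b in zip(faces_a, faces_b)])
y = np.array(chosen)

# A linear classifier with cross-validation; above-chance accuracy means the
# HOG features carry information that predicts selection between the faces.
print(cross_val_score(LinearSVC(), X, y, cv=5).mean())
```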

2021, Vol 11 (1)
Author(s): S. M. Stuit, T. M. Kootstra, D. Terburg, C. van den Boomen, M. J. van der Smagt, ...

Abstract: Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their low-level image features rather than in terms of the emotional content (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the initial eye movement towards one out of two simultaneously presented faces. Interestingly, the identified features serve as better predictors than the emotional content of the expressions. We therefore propose that our modelling approach can further specify which visual features drive these and other behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
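A hedged sketch of one common way to quantify the overall spatial-frequency content of an image, via the radially averaged power spectrum of a 2-D FFT; the abstract does not specify the authors' exact SF descriptor, so this illustrates the idea rather than their method:

```python
# Sketch: overall spatial-frequency (SF) profile of a face image as the
# radially averaged power spectrum of its 2-D Fourier transform.
import numpy as np

def sf_profile(img, n_bins=32):
    # Power spectrum with the zero-frequency component shifted to the centre.
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    # Each pixel's distance from the centre is its spatial frequency.
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    # Mean power per frequency band: a compact SF descriptor of the image.
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins), 1, n_bins)
    p = power.ravel()
    return np.array([p[idx == i].mean() for i in range(1, n_bins + 1)])
```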


2018, Vol 24 (4), pp. 565-575
Author(s): Orrie Dan, Iris Haimov, Kfir Asraf, Kesem Nachum, Ami Cohen

Objective: The present study sought to investigate whether young adults with ADHD have more difficulty recognizing emotional facial expressions compared with young adults without ADHD, and whether such a difference worsens following sleep deprivation. Method: Thirty-one young men (M = 25.6) with (n = 15) or without (n = 16) a diagnosis of ADHD were included in this study. The participants were instructed to sleep 7 hr or more each night for one week, and their sleep quality was monitored via actigraph. Subsequently, the participants were kept awake in a controlled environment for 30 hr. The participants completed a visual emotional morph task twice: at the beginning and at the end of this period. The task included presentation of interpolated face stimuli ranging from neutral facial expressions to fully emotional facial expressions of anger, sadness, or happiness, allowing for assessment of the intensity threshold for recognizing these facial emotional expressions. Results: Actigraphy data demonstrated that while the nightly sleep duration of the participants with ADHD was similar to that of participants without ADHD, their sleep efficiency was poorer. At the onset of the experiment, there were no differences in recognition thresholds between the participants with ADHD and those without ADHD. Following sleep deprivation, however, the ADHD group required clearer facial expressions to recognize the presence of angry, sad, and, to a lesser extent, happy faces. Conclusion: Among young adults with ADHD, sleep deprivation may hinder the processing of emotional facial stimuli.
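One hedged reading of the intensity-threshold measure: sweep the morph intensities from neutral towards the full expression and take the lowest intensity at which the emotion is reliably identified. A minimal sketch under that assumption; the response data and the 0.75 criterion are illustrative, not the study's actual procedure:

```python
# Sketch: recognition intensity threshold from morph-task responses.
# `responses` maps morph intensity (0.0-1.0) to the proportion of trials at
# that intensity on which the emotion was correctly identified.

def recognition_threshold(responses, criterion=0.75):
    # Lowest morph intensity at which accuracy reaches the criterion;
    # a higher threshold means clearer expressions are needed.
    for intensity in sorted(responses):
        if responses[intensity] >= criterion:
            return intensity
    return None  # criterion never reached at any tested intensity

# Example: {0.2: 0.10, 0.4: 0.50, 0.6: 0.80, 0.8: 0.95, 1.0: 1.0} -> 0.6
```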


2007, Vol 38 (10), pp. 1475-1483
Author(s): K. S. Kendler, L. J. Halberstadt, F. Butera, J. Myers, T. Bouchard, ...

Background: While the role of genetic factors in self-report measures of emotion has been frequently studied, we know little about the degree to which genetic factors influence emotional facial expressions. Method: Twenty-eight pairs of monozygotic (MZ) and dizygotic (DZ) twins from the Minnesota Study of Twins Reared Apart were shown three emotion-inducing films, and their facial responses were recorded. These recordings were blindly scored by trained raters. Ranked correlations between twins were calculated controlling for age and sex. Results: Twin pairs were significantly correlated for facial expressions of general positive emotions, happiness, surprise and anger, but not for general negative emotions, sadness, disgust, or average emotional intensity. MZ pairs (n = 18) were more correlated than DZ pairs (n = 10) for most but not all emotional expressions. Conclusions: Since these twin pairs had minimal contact with each other prior to testing, these results support significant genetic effects on the facial display of at least some human emotions in response to standardized stimuli. The small sample size resulted in estimated twin correlations with very wide confidence intervals.
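A minimal sketch of ranked twin correlations controlling for age and sex: rank-transform the scores, regress out the covariates, and correlate the residuals. The input arrays are hypothetical, and this is one standard construction rather than necessarily the authors' exact procedure:

```python
# Sketch: Spearman-style twin correlation with age and sex partialled out.
import numpy as np
from scipy.stats import rankdata, pearsonr

def residualize(ranked, covariates):
    # Least-squares regression of ranked scores on the covariates (plus an
    # intercept); returns the part the covariates cannot explain.
    X = np.column_stack([np.ones(len(ranked))] + covariates)
    beta, *_ = np.linalg.lstsq(X, ranked, rcond=None)
    return ranked - X @ beta

def partial_spearman(twin1, twin2, age, sex):
    r1 = residualize(rankdata(twin1), [age, sex])
    r2 = residualize(rankdata(twin2), [age, sex])
    return pearsonr(r1, r2)  # correlation and p-value of the residual ranks

# Example (hypothetical arrays):
# r, p = partial_spearman(anger_twin1, anger_twin2, pair_age, pair_sex)
```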


2016, Vol 29 (8), pp. 749-771
Author(s): Min Hooi Yong, Ted Ruffman

Dogs respond to human emotional expressions. However, it is unknown whether dogs can match emotional faces to voices in an intermodal matching task or whether they show preferences for looking at certain emotional facial expressions over others, similar to human infants. We presented 52 domestic dogs and 24 seven-month-old human infants with two different human emotional facial expressions of the same gender simultaneously, while they listened to a human voice expressing an emotion that matched one of them. Consistent with most matching studies, neither dogs nor infants looked longer at the matching emotional stimuli, yet dogs and humans demonstrated an identical pattern of looking less at sad faces when paired with happy or angry faces (irrespective of the vocal stimulus), with no preference for happy versus angry faces. Discussion focuses on why dogs and infants might have an aversion to sad faces, or alternatively, heightened interest in angry and happy faces.


Author(s): Christian PADILLA-NAVARRO, Carlos ZARATE-TREJO, Georges KHALAF, Pascal FALLAVOLLITA

Alexithymia is a condition that partially or completely deprives an individual of the ability to identify and describe emotions and to show affective connotations in their actions. This condition has motivated research projects that study its characteristics, prevention, and implications, and that attempt to quantify an individual's experience of this construct and the responses they give to certain stimuli. Other reviewed studies aimed to find a connection between the responses of subjects diagnosed with alexithymia when presented with dynamic emotional facial expressions to recognize and their score on the Toronto Alexithymia Scale (TAS), a metric frequently used to evaluate the presence or absence of alexithymia in an individual. This work reviews the articles that study this connection, as well as articles describing the state of the art in artificial intelligence algorithms applied to the treatment or prevention of secondary alexithymia.


2002, Vol 8 (1), pp. 130-135
Author(s): JOCELYN M. KEILLOR, ANNA M. BARRETT, GREGORY P. CRUCIAN, SARAH KORTENKAMP, KENNETH M. HEILMAN

The facial feedback hypothesis suggests that facial expressions are either necessary or sufficient to produce emotional experience. Researchers have noted that the ideal test of the necessity aspect of this hypothesis would be an evaluation of emotional experience in a patient suffering from a bilateral facial paralysis; however, this condition is rare and no such report has been documented. We examined the role of facial expressions in the determination of emotion by studying a patient (F.P.) suffering from a bilateral facial paralysis. Despite her inability to convey emotions through facial expressions, F.P. reported normal emotional experience. When F.P. viewed emotionally evocative slides, her reactions were not dampened relative to the normative sample. F.P. retained her ability to detect, discriminate, and image emotional expressions. These findings are not consistent with theories stating that feedback from an active face is necessary to experience emotion, or to process emotional facial expressions. (JINS, 2002, 8, 130–135.)


2010, Vol 1 (3)
Author(s): Roy Kessels, Pieter Spee, Angelique Hendriks

Abstract: Previous studies have shown deficits in the perception of static emotional facial expressions in individuals with autism spectrum disorders (ASD), but results are inconclusive. Possibly, using dynamic facial stimuli expressing emotions at different levels of intensity may produce more robust results, since these resemble the expression of emotions in daily life to a greater extent. Thirty young adolescents with high-functioning ASD (IQ > 85) and 30 age- and intelligence-matched controls (ages between 12 and 15) performed the Emotion Recognition Task, in which morphs were presented on a computer screen, depicting facial expressions of the six basic emotions (happiness, disgust, fear, anger, surprise and sadness) at nine levels of emotional intensity (20–100%). The results showed no overall group difference on the ERT, apart from slightly worse performance on the perception of the emotions fear (p < 0.03) and disgust (p < 0.05). No interaction was found between intensity level of the emotions and group. High-functioning individuals with ASD perform similarly to matched controls on the perception of dynamic facial emotional expressions, even at low intensities of emotional expression. These findings are in agreement with other recent studies showing that emotion perception deficits in high-functioning ASD may be less pronounced than previously thought.
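A minimal sketch of the intensity manipulation behind such morph stimuli, as a linear cross-fade between a neutral face and a full-intensity expression; real morphing software typically also warps facial geometry, so this only illustrates the intensity dimension, and the image variables are hypothetical:

```python
# Sketch: emotional-intensity morphs by linear pixel interpolation.
import numpy as np

def intensity_morph(neutral, emotional, intensity):
    # intensity 0.2-1.0 corresponds to the 20-100% levels described above.
    return (1.0 - intensity) * neutral + intensity * emotional

# Nine intensity levels from 20% to 100%, as in the Emotion Recognition Task.
levels = np.linspace(0.2, 1.0, 9)
# morphs = [intensity_morph(neutral_img, angry_img, a) for a in levels]
```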


2019
Author(s): Jacob Israelashvlili

Previous research has found that individuals vary greatly in emotion differentiation, that is, the extent to which they distinguish between different emotions when reporting on their own feelings. Building on previous work that has shown that emotion differentiation is associated with individual differences in intrapersonal functions, the current study asks whether emotion differentiation is also related to interpersonal skills. Specifically, we examined whether individuals who are high in emotion differentiation would be more accurate in recognizing others’ emotional expressions. We report two studies in which we used an established paradigm tapping negative emotion differentiation and several emotion recognition tasks. In Study 1 (N = 363), we found that individuals high in emotion differentiation were more accurate in recognizing others’ emotional facial expressions. Study 2 (N = 217) replicated this finding using emotion recognition tasks with varying amounts of emotional information. These findings suggest that the knowledge we use to understand our own emotional experience also helps us understand the emotions of others.


2018, Vol 11 (2), pp. 16-33
Author(s): A.V. Zhegallo

The study investigates recognition of emotional facial expressions presented in peripheral vision, with exposure durations shorter than the latency of a saccade towards the presented image. Recognition under peripheral viewing reproduced the characteristic patterns of incorrect responses: mutual confusion is common among the expressions of fear, anger, and surprise. When recognition conditions worsened, calmness and grief were added to the set of mutually confused expressions. The identification of happiness deserves special attention, because it can be mistaken for other facial expressions, while other expressions are never recognized as happiness. Individual recognition accuracy varied from 0.29 to 0.80. A sufficient condition for high recognition accuracy was recognizing the facial expressions with peripheral vision alone, without making a saccade towards the presented face image.
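The asymmetric confusions described above are what an expression-by-response confusion matrix makes visible. A minimal sketch with hypothetical trial arrays; the label set follows the expressions named in the abstract:

```python
# Sketch: confusion matrix for an expression-recognition experiment.
import numpy as np

labels = ["happiness", "fear", "anger", "surprise", "calmness", "grief"]
index = {name: i for i, name in enumerate(labels)}

def confusion_matrix(presented, reported):
    m = np.zeros((len(labels), len(labels)), dtype=int)
    for p, r in zip(presented, reported):
        m[index[p], index[r]] += 1  # row = shown expression, column = response
    return m

# Row-normalising puts per-expression accuracy on the diagonal; off-diagonal
# asymmetries reveal one-way confusions such as the happiness pattern above.
```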


2013, Vol 6 (4)
Author(s): Banu Cangöz, Arif Altun, Petek Aşkar, Zeynel Baran, Sacide Güzin Mazman

The main objective of the study is to investigate the effects of model age, observer gender, and lateralization on visual screening patterns while looking at emotional facial expressions. Data were collected through eye-tracking methodology. The areas of interest were set to include the eyes, nose, and mouth. The selected eye metrics were first fixation duration, total fixation duration, and fixation count. These eye-tracking metrics were recorded for different emotional expressions (sad, happy, neutral) and conditions (age of model, part of face, and lateralization). The results revealed that participants looked at older faces for shorter durations and fixated on them less than on younger faces. The study also showed that when participants were asked to passively look at facial expressions, the eyes were the important areas for determining sadness and happiness, whereas the eyes and nose were important for determining neutral expressions. The longest-fixated facial area was the eyes for both young and old models. Lastly, the hemispheric lateralization hypothesis regarding emotional face processing was supported.
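A minimal sketch of how the three fixation metrics named above can be computed for one area of interest (AOI); the fixation records and AOI bounds are hypothetical, not the study's actual data format:

```python
# Sketch: first fixation duration, total fixation duration, and fixation
# count for a rectangular AOI (e.g. a box around the eyes).
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float         # gaze position in screen coordinates
    y: float
    duration: float  # milliseconds

def aoi_metrics(fixations, aoi):
    # Assumes fixations are in chronological order.
    x0, y0, x1, y1 = aoi  # (x_min, y_min, x_max, y_max)
    hits = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    return {
        "first_fixation_duration": hits[0].duration if hits else 0.0,
        "total_fixation_duration": sum(f.duration for f in hits),
        "fixation_count": len(hits),
    }
```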

