Relative preservation of the recognition of positive facial expression “happiness” in Alzheimer disease

2012 ◽  
Vol 25 (1) ◽  
pp. 105-110 ◽  
Author(s):  
Yohko Maki ◽  
Hiroshi Yoshida ◽  
Tomoharu Yamaguchi ◽  
Haruyasu Yamaguchi

Abstract
Background: A positivity recognition bias has been reported for facial expressions, as well as for memory and visual stimuli, in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients remains controversial, with possible involvement of confounding factors such as deficits in the spatial processing of non-emotional facial features and in the verbal processing needed to name emotions. We therefore examined whether recognition of positive facial expressions is preserved in AD patients, using a new method that eliminated the influence of these confounding factors.
Methods: Sensitivity to the six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were used as stimuli. To eliminate factors related to verbal processing, participants were asked to match stimulus and answer images, avoiding the use of verbal labels.
Results: For recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognizing sadness, surprise, and anger. ANC were less sensitive than YNC in recognizing surprise, anger, and disgust. Within the AD group, sensitivity to happiness was significantly higher than sensitivity to the other five expressions.
Conclusions: In AD patients, recognition of happiness was relatively preserved: happiness was the most sensitively recognized expression and resisted the influences of both age and disease.

1989 ◽  
Vol 68 (2) ◽  
pp. 443-452 ◽  
Author(s):  
Patricia T. Riccelli ◽  
Carol E. Antila ◽  
J. Alexander Dale ◽  
Herbert L. Klions

Two studies examined the relation between facial expression, cognitive induction of mood, and perception of mood in women undergraduates. In Exp. 1, 20 subjects were randomly assigned to a group instructed to make exaggerated facial expressions (Demand Group) and 20 to a group given no such instructions (Nondemand Group). All subjects completed a modified Velten (1968) elation- and depression-induction sequence. Ratings of depression on the Multiple Affect Adjective Checklist (MAACL) increased during the depression condition and decreased during the elation condition. Electromyographic measures of the zygomatic and corrugator muscles, and corresponding action-unit measures from visual scoring with the Facial Action Scoring System, showed that subjects in the Demand Group made more facial expressions than those in the Nondemand Group. Demand Group subjects also rated their depression as more severe during the depression slides than did the other group; no such effect was noted during the elation condition. In Exp. 2, 16 women were randomly assigned to a group instructed to make facial expressions contradictory to those expected on the depression and elation tasks (Contradictory Expression Group), and another 16 to a group given no instructions about facial expressions (Nondemand Group). All subjects completed the depression- and elation-induction sequence of Exp. 1. No differences were found between groups on the MAACL depression ratings for either induction, but both groups rated depression higher after the depression condition and lower after the elation condition. Electromyographic and facial action scores verified that subjects in the Contradictory Expression Group were making the requested contradictory facial expressions during the mood-induction sequences. It was concluded that the primary influence on emotion came from the cognitive mood-induction sequences. Facial expressions appeared to modify emotion in only one case: depression was exacerbated by frowning. A contradictory facial expression did not affect the rating of an emotion.


2019 ◽  
Vol 6 (1) ◽  
pp. 96-136
Author(s):  
Anne Burkus-Chasson

Abstract Historians of Chinese literature and philosophy have written extensively about the significance of emotion (qing 情) in late Ming times (1522–1644). But how did a pictorial image manifest emotion, and how were its visible signs of emotion conceptualized? This article considers the dilemma that painters faced in general when they represented an expressive body: How could the display of emotion in gesture and facial expression be contained within the bounds of propriety? The author examines, in particular, how Chen Hongshou 陳洪綬 (1598–1652) resolved this dilemma in two figural paintings, one of which represents a sorrowful woman, and the other, a worried drunkard. She argues that Chen's representation of sorrow and anxiety was inextricably tied to the pictorial conventions utilized by the print designers of his day to illustrate dramatic, emotionally charged moments in a story. Hence, Chen animated the actors in his paintings with emphatic gestures and poses, complementing their expressive bodies with more subtly shaped facial features. But Chen's incorporation into his painting of what was at the time readily identified as “print” disturbed his figural compositions. As much as the expressive figures aroused an empathetic response from his viewers, the juxtaposition of incompatible manners of representation also made his work seem strange and preposterous. However, Chen's visual laments were derived from poems and historical anecdotes. The author argues that the verbal texts to which he alluded not only enhanced and justified his viewers' empathetic response to his paintings but also enabled him to delineate emotions that were otherwise eschewed by the painters of his time.


2021 ◽  
Vol 10 (6) ◽  
pp. 3802-3805
Author(s):  
Akshata Raut

Precise facial-expression analysis is a crucial element in the study of social interaction: the facial features a person produces convey thoughts and feelings to the viewer, arousing or heightening emotional sensitivity. This study uses Virtual Reality (VR) to evaluate facial expression with the Azure Kinect in adults with a Class I molar relationship. The study will be conducted in a Human Research Lab; 196 participants aged over 18 with a Class I molar relationship will be selected per the eligibility criteria. The research will demonstrate the different tools and applications available by testing their precision and relevance in determining facial expressions.


Emotion recognition is of growing significance today. Among the many ways to perform it, facial expression detection stands out because expressions are a spontaneous arousal of mental state rather than a conscious effort. Emotions often rule us, shaping our choices, actions, and perceptions. Happiness, sadness, fear, disgust, anger, neutral, and surprise are the seven basic emotion categories a human expresses most frequently. In this era of automation and human-computer interaction, making machines detect emotions is a difficult and tedious job. Facial expressions are the medium through which emotions are shown, and colour, orientation, lighting, and posture all play a significant role in detecting them. The movements associated with the eyes, nose, and lips play a major role in differentiating facial features, which are then classified and compared against the trained data. In this paper, we construct a Convolutional Neural Network (CNN) model and recognise different emotions on a particular dataset. We measure the accuracy of the model, with the main aim of minimising the loss. We use the Adam optimizer, sparse categorical crossentropy as the loss function, and softmax as the output activation. The results we obtained are quite accurate and can be used for further research in this field.
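As a concrete illustration, the following is a minimal Keras sketch of the kind of CNN this abstract describes. The layer widths and the 48×48 grayscale input (typical of facial-expression datasets such as FER-2013) are assumptions; the abstract specifies only the Adam optimizer, sparse categorical crossentropy loss, and a softmax output.

```python
# Hedged sketch of a CNN for 7-class facial-expression recognition.
# Input shape and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),          # 48x48 grayscale face crops (assumed)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),    # the seven emotion classes
])

model.compile(
    optimizer="adam",                          # Adam optimizer, as in the abstract
    loss="sparse_categorical_crossentropy",    # integer class labels
    metrics=["accuracy"],
)
# Training would then be, e.g.:
# model.fit(x_train, y_train, epochs=20, validation_data=(x_val, y_val))
```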


2020 ◽  
Author(s):  
Maurryce Starks ◽  
Anna Shafer-Skelton ◽  
Michela Paradiso ◽  
Aleix M. Martinez ◽  
Julie Golomb

The “spatial congruency bias” is a behavioral phenomenon where two objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb et al., 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, two real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge two faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location.


2018 ◽  
Vol 122 (4) ◽  
pp. 1432-1448 ◽  
Author(s):  
Charlott Maria Bodenschatz ◽  
Anette Kersting ◽  
Thomas Suslow

Orientation of gaze toward specific regions of the face such as the eyes or the mouth helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while completing an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants had to evaluate subsequently presented neutral faces. Results of an awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed but briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression. After a happy facial expression, participants oriented their gaze more rapidly to the mouth region of the neutral mask. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and the dwell time on the mouth region was longest for happy facial expressions. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.


2020 ◽  
Vol 8 (2) ◽  
pp. 68-84
Author(s):  
Naoki Imamura ◽  
Hiroki Nomiya ◽  
Teruhisa Hochin

A facial expression intensity measure has been proposed to quantify the degree of facial expression so that impressive scenes can be retrieved from lifelog videos. The intensity is calculated from the correlation of facial features with each facial expression. However, this correlation has not been determined objectively; it should be determined statistically, based on the contribution scores of the facial features needed for expression recognition. The proposed method therefore recognizes facial expressions with a neural network and calculates the contribution score of each input toward the output. The authors first improve some of the facial features. They then verify the scores by comparing how recognition accuracy changes as useful and useless features are removed, and process the scores statistically. As a result, they extract the facial features that are genuinely useful from the neural network.
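The abstract does not spell out how the contribution score is computed. One common gradient-based approximation (input saliency) is sketched below in TensorFlow under that assumption: each input feature is scored by the magnitude of the gradient of the predicted class score with respect to that feature.

```python
# Hedged sketch: gradient-based contribution (saliency) scores for the
# inputs of a trained expression-recognition network. The paper's exact
# scoring method may differ; this is one standard approximation.
import tensorflow as tf

def feature_contributions(model, features):
    """features: tf.Tensor of shape (1, n_features) of facial features."""
    x = tf.convert_to_tensor(features)
    with tf.GradientTape() as tape:
        tape.watch(x)                           # track gradients w.r.t. the input
        probs = model(x)                        # (1, n_classes) expression probabilities
        top = tf.reduce_max(probs, axis=-1)     # score of the predicted expression
    grads = tape.gradient(top, x)               # d(score) / d(feature)
    return tf.abs(grads)[0]                     # larger magnitude = larger contribution
```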


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Eleonora Meister ◽  
Claudia Horn-Hofmann ◽  
Miriam Kunz ◽  
Eva G. Krumhuber ◽  
Stefan Lautenbacher

Abstract
Objectives: The decoding of facial expressions of pain plays a crucial role in pain diagnostics and clinical decision making. Decoding studies require that facial expressions of pain be presented in a flexible and controllable fashion. Computer models (avatars) of human facial expressions of pain allow specific facial features to be manipulated systematically. The aim of the present study was to investigate whether avatars can show realistic facial expressions of pain and how the sex of the avatars influences the decoding of pain by human observers.
Methods: For that purpose, 40 female (mean age: 23.9 years) and 40 male (mean age: 24.6 years) observers watched 80 short videos of computer-generated avatars presenting the five clusters of facial expressions of pain (four active and one stoic cluster) identified by Kunz and Lautenbacher (2014). After each clip, observers rated the intensity of pain the avatar seemed to experience and their certainty of judgement, i.e. whether the shown expression truly represented pain.
Results: Three of the four active facial clusters were similarly accepted as valid expressions of pain by the observers, whereas only one cluster (“raised eyebrows”) was disregarded. The sex of the observed avatars influenced the decoding of pain, as indicated by higher intensity and certainty ratings for female avatars.
Conclusions: The existence of several valid facial expressions of pain was corroborated in avatars, which contradicts the idea of a single uniform pain face. The observers' ratings of the avatars' pain were influenced by the avatars' sex, resembling known observer biases for humans. Avatars appear to be a suitable method for research on the decoding of facial expressions of pain, closely mirroring the known forms of human facial expression.


PC-based face recognition is a sophisticated and well-established technique that is widely used for authentication. In practice, however, facial expressions vary from one situation to another. Here we detect five universal expressions (smile, sadness, anger, surprise, and neutral) by studying face geometry on facial data with varying expressions, determining which type of facial expression has been produced. We ran experiments to measure the accuracy of several machine learning methods under changes to the face images, such as changes of expression, as needed for training and recognition. Our objective is to combine the features of neutral facial expressions with other expressive face images (smiling, angry, sad) to improve accuracy.
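A minimal sketch of the kind of accuracy comparison this abstract describes: several off-the-shelf classifiers trained on face-geometry feature vectors labeled by expression, compared on held-out accuracy. The specific classifiers, the split ratio, and the feature extraction step are assumptions, not details taken from the paper.

```python
# Hedged sketch: compare the test accuracy of several classifiers on
# face-geometry features labeled by expression. Classifier choices and
# the 75/25 split are illustrative assumptions.
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def compare_classifiers(X, y):
    """X: (n_samples, n_features) face-geometry features; y: expression labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    for name, clf in [("SVM", SVC()),
                      ("kNN", KNeighborsClassifier()),
                      ("Random forest", RandomForestClassifier())]:
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"{name}: {acc:.3f}")
```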


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261666
Author(s):  
Ryota Kobai ◽  
Hiroki Murakami

Self-focus is a type of cognitive processing that maintains negative emotions, and bodily feedback is also essential for maintaining emotions. This study investigated the effect of interactions between self-focused attention and facial expressions on emotions. The results indicated that the control facial-expression manipulation after self-focus reduced happiness scores, whereas the smiling manipulation after self-focus marginally increased them. Facial expressions did not affect positive emotions after the other-focus manipulation, however. These findings suggest that self-focus plays a pivotal role in facial expressions' effect on positive emotions: self-focus alone is insufficient to alter positive emotions, and its interaction with facial expressions is crucial.

