Differences in Facial Expressions between Spontaneous and Posed Smiles: Automated Method by Action Units and Three-Dimensional Facial Landmarks

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1199
Author(s):  
Seho Park ◽  
Kunyoung Lee ◽  
Jae-A Lim ◽  
Hyunwoong Ko ◽  
Taehoon Kim ◽  
...  

Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further confirm the discrete movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles using three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions of happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities in the upper face than in the lower face, whereas posed smiles showed higher intensities in the lower part of the face. Furthermore, the 3D facial landmark technique revealed that the left eyebrow displayed stronger intensity during spontaneous smiles than the right eyebrow. These findings suggest a potential application of landmark-based emotion recognition: spontaneous smiles can be distinguished from posed smiles by measuring relative intensities between the upper and lower face, with a focus on left-sided asymmetry in the upper region.
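The upper-versus-lower intensity comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the 68-point layout and the index ranges for the upper and lower face are assumptions (a dlib-style scheme), and "intensity" is approximated here as the mean landmark displacement between a neutral frame and the expression apex.

```python
import numpy as np

# Assumed dlib-style 68-point layout: indices 17-47 cover brows/eyes/nose
# (upper face), 48-67 the mouth (lower face). The paper's exact
# segmentation of the 68 regions is not specified here.
UPPER = slice(17, 48)
LOWER = slice(48, 68)

def region_intensity(neutral, apex, region):
    """Mean Euclidean displacement of 3D landmarks in one region
    between a neutral frame and the expression apex."""
    disp = np.linalg.norm(apex[region] - neutral[region], axis=1)
    return disp.mean()

def compare_smile(neutral, apex):
    """Return (upper, lower) movement intensities for one smile,
    given two (68, 3) landmark arrays."""
    return (region_intensity(neutral, apex, UPPER),
            region_intensity(neutral, apex, LOWER))
```

Under the study's finding, `compare_smile` would yield a higher first value for spontaneous smiles and a higher second value for posed ones.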

2012 ◽  
Vol 25 (0) ◽  
pp. 46-47
Author(s):  
Kazumichi Matsumiya

Adaptation to a face belonging to a facial category, such as expression, causes a subsequently presented neutral face to be perceived as belonging to the opposite facial category. This is referred to as the face aftereffect (FAE) (Leopold et al., 2001; Rhodes et al., 2004; Webster et al., 2004). The FAE is generally thought of as a visual phenomenon. However, recent studies have shown that humans can haptically recognize a face (Kilgour and Lederman, 2002; Lederman et al., 2007). Here, I investigated whether FAEs could occur in haptic perception of faces. Three types of facial expressions (happy, sad and neutral) were generated using computer-graphics software, and three-dimensional masks of these faces were made from epoxy-cured resin for use in the experiments. An adaptation facemask was positioned on the left side of a table in front of the participant, and a test facemask was placed on the right. During adaptation, participants haptically explored the adaptation facemask with their eyes closed for 20 s, after which they haptically explored the test facemask for 5 s. Participants were then requested to classify the test facemask as either happy or sad. The experiment was performed under two adaptation conditions: (1) adaptation to a happy facemask and (2) adaptation to a sad facemask. In both cases, the expression of the test facemask was neutral. The results indicate that adaptation to a haptic face belonging to a specific facial expression causes a subsequently touched neutral face to be perceived as having the opposite facial expression, suggesting that FAEs can be observed in haptic perception of faces.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly in the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and studies that used facial expressions in profile view relied on a between-subjects design or on children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and in profile view, using a within-subjects experimental design. Method The sample comprised 132 Italian university students (88 female, Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, viz., frontal and in profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were registered. Results Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.


Author(s):  
Virgilio F. Ferrario ◽  
Chiarella Sforza ◽  
Carlo E. Poggio ◽  
Massimiliano Cova ◽  
Gianluca Tartaglia

Objective In this investigation, the precision of a commercial three-dimensional digitizer in the detection of facial landmarks in human adults was assessed. Methods Fifty landmarks were identified and marked on the faces of five men and five women, and on a stone cast of the face of one man. For each subject, the three-dimensional coordinates of the landmarks were obtained twice using an electromagnetic three-dimensional digitizer, and the duplicate digitizations were superimposed using common orientations and centers of gravity. Metric differences between homologous landmarks were assessed, and Dahlberg's error was computed. Results For both men and women, the error was 1.05% of the nasion-mid-tragion distance, while for the cast it was 0.9%. When the duplicate digitizations were used to mathematically reconstruct the faces and several distances, angles, volumes, and surfaces were computed, more than 80% of the measurements had coefficients of variation lower than 1%. Conclusions The digitizer can assess the coordinates of facial landmarks with sufficient precision, and reliable measurements can be obtained.
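Dahlberg's error used in the Methods above is the standard double-determination formula, sqrt(Σ(x₁ − x₂)² / 2n) over n repeated measurements. A minimal sketch of the computation (the function name and array interface are my own):

```python
import numpy as np

def dahlberg_error(first, second):
    """Dahlberg's double-determination error between two repeated
    measurement series: sqrt(sum((x1 - x2)^2) / (2 * n))."""
    first = np.asarray(first, dtype=float)
    second = np.asarray(second, dtype=float)
    diff = first - second
    return np.sqrt((diff ** 2).sum() / (2 * diff.size))
```

In the study this value was then expressed as a percentage of the nasion-mid-tragion distance to make it comparable across faces.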


2019 ◽  
Vol 29 (10) ◽  
pp. 1441-1451 ◽  
Author(s):  
Melina Nicole Kyranides ◽  
Kostas A. Fanti ◽  
Maria Petridou ◽  
Eva R. Kimonis

Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people's eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; M age = 19.95, SD = 1.01 years; 50% female) with primary vs. secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. Accuracy and number of fixations on areas of interest (forehead, eyes, and mouth) while viewing six dynamic emotions were assessed. A visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those identified in the low-anxious primary-CU group showed reduced overall fixations to fearful and painful facial expressions compared to those in the high-anxious secondary-CU group. This difference was not specific to a region of the face (i.e., eyes or mouth). The findings point to the importance of investigating both accuracy and eye gaze fixations, since individuals in the primary and secondary groups were only differentiated in the way they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Rebecca Ort ◽  
Philipp Metzler ◽  
Astrid L. Kruse ◽  
Felix Matthews ◽  
Wolfgang Zemann ◽  
...  

Ample data exist about the high precision of three-dimensional (3D) scanning devices and their acquisition of facial surface data. However, a question remains as to which facial landmarks can be identified reliably in 3D images taken under clinical circumstances. Sources of error may be technical, user-dependent, or patient- and anatomy-related. Based on clinical 3D photos taken with the 3dMDface system, the intra-observer repeatability of 27 facial landmarks was evaluated in six cleft lip (CL) infants and one non-CL infant, based on a total of over 1,100 measurements. Data acquisition was sometimes challenging but successful in all patients. The mean error was 0.86 mm, with a range of 0.39 mm (exocanthion) to 2.21 mm (soft gonion). Typically, a landmark provided a small mean error but still showed quite high variance across measurements; the exocanthion, for example, ranged from 0.04 mm to 0.93 mm. Conversely, relatively imprecise landmarks can still provide accurate data with respect to specific spatial planes. One must be aware that the degree of precision depends on the landmarks and spatial planes in question. In clinical investigations, the degree of reliability of the landmarks evaluated should be taken into account. Additional reliability can be achieved through repeated measurements.
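Intra-observer repeatability of the kind reported here can be quantified as the mean distance between repeated identifications of each landmark. A hedged sketch, assuming only that the repeated picks are stored as an array of shape (sessions, landmarks, 3) in millimetres; the paper's exact aggregation is not specified:

```python
import numpy as np

def landmark_repeatability(picks):
    """picks: array of shape (n_sessions, n_landmarks, 3) holding
    repeated identifications of the same landmarks on one 3D photo.
    Returns the per-landmark mean pairwise distance (mm) across
    sessions -- a simple repeatability (precision) estimate."""
    n = picks.shape[0]
    pair_dists = [np.linalg.norm(picks[i] - picks[j], axis=1)
                  for i in range(n) for j in range(i + 1, n)]
    return np.mean(pair_dists, axis=0)
```

Averaging this vector over landmarks would give a summary figure comparable to the study's 0.86 mm mean error.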


2020 ◽  
Author(s):  
Anna Kosovicheva ◽  
Peter J. Bex

The binocular coordination of eye movements in a three-dimensional environment involves a combination of saccade and vergence movements. To maintain binocular accuracy and control in the face of sensory and motor changes (that occur with e.g. normal aging, surgery, corrective lenses), the oculomotor system must adapt in response to manifest visual errors. This may be achieved through a combination of binocular and monocular mechanisms, including the recalibration of saccade and vergence amplitudes in response to different visual errors induced in each eye (Maiello, Harrison, & Bex, 2016). This work has used a double-step paradigm to recalibrate eye movements in response to visual errors produced by dichoptic target steps (e.g., leftward in the left eye and rightward in the right eye). Here, we evaluated the immediate perceptual effects of this adaptation. Experiment 1 measured localization errors following adaptation, by comparing the apparent locations of pre- and post- saccadic probes. Consistent with previous work showing localization errors following saccadic adaptation, our results demonstrated that adaptation to a dichoptic step produces different localization errors in the two eyes. Furthermore, in Experiment 2, this effect was reduced for a vergence shift in the absence of a saccade, indicating that saccade programming is responsible for a large component of this illusory shift. Experiment 3 measured post-saccadic stereopsis thresholds and indicated that, unlike localization judgments, adaptation did not influence stereoacuity. Together, these results demonstrate novel dichoptic visual errors following oculomotor adaptation, and point to monocular and binocular mechanisms involved in the maintenance of binocular coordination.


2018 ◽  
Vol 7 (4) ◽  
pp. 2325
Author(s):  
Banita . ◽  
Dr Poonam Tanwar

Face recognition is of great interest to researchers in image processing and computer graphics. In recent years, several factors that clearly affect the face model have gained attention: ageing, universal facial expressions, and muscle movement. In medical terminology, facial paralysis can be peripheral or central depending on the level of the motor neuron lesion, which can be below the nucleus of the nerve or supranuclear. Medical therapies used for facial paralysis include electroacupuncture, electrotherapy, laser acupuncture, and manual acupuncture, the traditional form of acupuncture. Imaging plays a great role in evaluating the degree of paralysis and also in face recognition. There is wide research on facial expressions and facial recognition, but limited research work is available on facial paralysis. The House-Brackmann grading system is one of the simplest and easiest methods to evaluate the degree of facial paralysis. During evaluation, common facial expressions are recorded and further evaluated by considering the focal points of the left or right side of the face. This paper presents a classification of face recognition and the respective fuzzy rules used to remove uncertainty from the result after evaluation of facial paralysis.
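The fuzzy-rule idea mentioned above can be illustrated with triangular membership functions over a facial-asymmetry score. This is purely an illustrative sketch: the score, the class names, and the boundaries below are assumptions of mine, not the paper's rules or the actual House-Brackmann cut-offs.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b,
    zero at or outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def paralysis_grade(asymmetry):
    """Map a 0-1 facial-asymmetry score to fuzzy memberships in three
    coarse, House-Brackmann-style classes. Boundaries are hypothetical."""
    return {
        "normal":   tri(asymmetry, -0.01, 0.0, 0.35),
        "moderate": tri(asymmetry, 0.20, 0.50, 0.80),
        "severe":   tri(asymmetry, 0.65, 1.00, 1.01),
    }
```

Overlapping memberships are the point of the fuzzy formulation: a borderline score contributes partially to two classes instead of being forced into one, which is how such rules remove hard-threshold uncertainty.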


Author(s):  
Maida Koso-Drljević ◽  
Meri Miličević

The aim of the study was to test two assumptions about the lateralization of the processing of emotional facial expressions, the assumption of right-hemisphere dominance and the valence assumption, and to examine the influence of the gender of the presented stimulus (chimera) and of depression as an emotional state of the participants. The sample consisted of 83 female students, with an average age of 20 years. Participants solved the Task of Recognizing Emotional Facial Expressions on a computer and then completed the DASS-21 Depression subscale. The results of the study partially confirmed the valence assumption for the dependent variable of response accuracy. Participants recognized the emotion of sadness more accurately than happiness when it was presented on the left side of the face, which is consistent with the valence hypothesis, according to which the right hemisphere is responsible for recognizing negative emotions. However, for the right side of the face, participants recognized sadness and happiness equally accurately, which is not consistent with the valence hypothesis. The main effect of the gender of the chimera was statistically significant for response accuracy: recognition accuracy was higher for male chimeras than for female ones. A statistically significant negative correlation was obtained between the sides of the face (left and right) and the score on the depression subscale for the dependent variable of reaction time: the higher the score on the depression subscale, the slower (longer) the reaction time to the presented chimera, on both the left and the right side.


2021 ◽  
Author(s):  
Sarah McCrackin ◽  
Jelena Ristic ◽  
Florence Mayrand ◽  
Francesca Capozzi

With the widespread adoption of masks, there is a need for understanding how facial obstruction affects emotion recognition. We asked 120 participants to identify emotions from faces with and without masks. We also examined if recognition performance was related to autistic traits and personality. Masks impacted recognition of expressions with diagnostic lower face features the most and those with diagnostic upper face features the least. Persons with higher autistic traits were worse at identifying unmasked expressions, while persons with lower extraversion and higher agreeableness were better at recognizing masked expressions. These results show that different features play different roles in emotion recognition and suggest that obscuring features affects social communication differently as a function of autistic traits and personality.


2020 ◽  
Vol 41 (2) ◽  
pp. 183-196
Author(s):  
Fernando Gordillo León ◽  
Miguel Ángel Pérez Nieto ◽  
Lilia Mestas Hernández ◽  
José M. Arana Martínez ◽  
Gabriela Castillo Parra ◽  
...  

The effective detection of facial expressions that alert us to a possible threat is adaptive. Hence, studies on face sampling have analysed how this process occurs, with evidence showing that the eyes focus mainly on the upper side of the face; nevertheless, no clear determination has been made of the relationship between detection efficacy (speed and accuracy) and the way in which emotions are visually tracked on the face. A sequential priming task was therefore conducted in which the four quadrants of the face were displayed consecutively, for 50 ms each, in different orders (24 sequences). The results reveal a quicker response when the priming sequence begins in the upper part of the face, continues downward to the right-hand side, and then follows an anti-clockwise direction. The results are discussed in the light of studies using the eye-tracking technique.

