QUANTIFYING FACIAL EXPRESSION USING ARTIFICIAL INTELLIGENCE IN CLASS 1 MOLAR RELATIONSHIP: A NOVEL RESEARCH PROTOCOL

2021 ◽  
Vol 10 (6) ◽  
pp. 3802-3805
Author(s):  
Akshata Raut

Precise facial detection and analysis is a crucial element in the study of social interaction. For the viewer, the facial features a person produces convey the underlying thoughts and feelings, arousing or heightening emotional sensitivity. This study uses Virtual Reality (VR) to evaluate facial expression with Azure Kinect in adults with a Class I molar relationship. It will be conducted in the Human Research Lab on participants with a Class I molar relationship. A total of 196 participants above 18 years of age will be selected as per the eligibility criteria. The research will assess the different tools and applications available by testing their precision and relevance for determining facial expressions.

Author(s):  
Ralph Reilly ◽  
Andrew Nyaboga ◽  
Carl Guynes

Facial Information Science is becoming a discipline in its own right, attracting not only computer scientists but also graphic animators and psychologists, all of whom require knowledge of how people make and interpret facial expressions (Zeng, 2009). Computer advancements enhance researchers' ability to study facial expression. Digitized, computer-displayed faces can now be used in studies. Current advancements facilitate not only the researcher's ability to accurately display information but also the automatic recording of the subject's reactions. With increasing interest in Artificial Intelligence and man-machine communications, what importance does the gender of the user play in the design of today's multi-million dollar applications? Does research suggest that men and women respond differently to the "gender" of computer-displayed images? Can this knowledge be used effectively to design applications specifically for use by men or women? This research is an attempt to understand these questions while studying whether automatic, or pre-attentive, processing plays a part in the identification of facial expressions.


2012 ◽  
Vol 25 (1) ◽  
pp. 105-110 ◽  
Author(s):  
Yohko Maki ◽  
Hiroshi Yoshida ◽  
Tomoharu Yamaguchi ◽  
Haruyasu Yamaguchi

Background: Positivity recognition bias has been reported for facial expressions as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients by adapting a new method that eliminated the influence of these confounding factors.

Methods: Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels.

Results: In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than to the other five expressions.

Conclusions: In AD patients, recognition of happiness was relatively preserved; recognition of happiness was the most sensitive and was preserved against the influences of age and disease.


Emotion recognition is of significance in the modern scenario. Among the many ways to perform it, one is through facial expression detection, since expressions are a spontaneous arousal of mental state rather than a conscious effort. Emotions often rule us through the choices, actions, and perceptions that result from the emotions we are overpowered by. Happiness, sadness, fear, disgust, anger, neutral, and surprise are the seven basic emotions most frequently expressed by humans. In this era of automation and human-computer interaction, making machines detect emotions is a difficult and tedious job. Facial expressions are the medium through which emotions are shown. For detecting a person's facial expression, colour, orientation, lighting, and posture are of significant importance, and the movements associated with the eyes, nose, lips, etc. play a major role in differentiating the facial features. These facial features are then classified and compared against the trained data. In this paper, we construct a Convolutional Neural Network (CNN) model and recognise different emotions on a particular dataset. We evaluate the accuracy of the model, with the main aim of minimising the loss. We use the Adam optimizer, sparse categorical cross-entropy as the loss function, and softmax as the activation function. The results obtained are quite accurate and can be used for further research in this field.
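A minimal sketch of a CNN of the kind described above, using the Adam optimizer, sparse categorical cross-entropy, and a softmax output; the 48x48 grayscale input shape and layer sizes are illustrative assumptions, not the authors' exact architecture:

```python
import tensorflow as tf

# Minimal CNN sketch for 7-class facial expression recognition.
# Input shape (48x48 grayscale) and layer sizes are assumed for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # seven basic emotions
])

# Adam optimizer and sparse categorical cross-entropy, as stated in the abstract.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training call (dataset loading omitted):
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```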


2018 ◽  
Vol 122 (4) ◽  
pp. 1432-1448 ◽  
Author(s):  
Charlott Maria Bodenschatz ◽  
Anette Kersting ◽  
Thomas Suslow

Orientation of gaze toward specific regions of the face such as the eyes or the mouth helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while completing an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants had to evaluate subsequently presented neutral faces. Results of an awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed but briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression. After a happy facial expression, participants oriented their gaze more rapidly to the mouth region of the neutral mask. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and the dwell time on the mouth region was longest for happy facial expressions. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.
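As an illustration of the kind of dependent measure reported here, the sketch below computes dwell time on rectangular eye and mouth regions from a stream of gaze samples; the region coordinates and the 500 Hz sampling rate are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical areas of interest (x_min, y_min, x_max, y_max) in screen pixels.
AOIS = {"eyes": (300, 200, 500, 280), "mouth": (340, 380, 460, 440)}
SAMPLE_DURATION_MS = 1000 / 500  # assuming a 500 Hz eye tracker

def dwell_times(gaze_xy):
    """Sum the time that gaze samples fall inside each area of interest."""
    totals = {name: 0.0 for name in AOIS}
    for x, y in gaze_xy:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += SAMPLE_DURATION_MS
    return totals

# Example: gaze samples recorded while viewing the neutral mask.
samples = np.array([[320, 210], [410, 250], [400, 400], [700, 100]])
print(dwell_times(samples))  # {'eyes': 4.0, 'mouth': 2.0}
```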


2020 ◽  
Vol 8 (2) ◽  
pp. 68-84
Author(s):  
Naoki Imamura ◽  
Hiroki Nomiya ◽  
Teruhisa Hochin

Facial expression intensity has been proposed to quantify the degree of facial expression in order to retrieve impressive scenes from lifelog videos. The intensity is calculated from the correlation of facial features with each facial expression. However, this correlation is not determined objectively; it should be determined statistically, based on the contribution score of the facial features necessary for expression recognition. Therefore, the proposed method recognizes facial expressions using a neural network and calculates the contribution score of each input toward the output. First, the authors improve some facial features. They then verify the scores by comparing how accuracy changes when useful and useless features are removed, and process the scores statistically. As a result, they extract useful facial features from the neural network.
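A minimal sketch of one way such a contribution score could be estimated, using the gradient of the top predicted class probability with respect to each input feature (a saliency-style proxy; the network architecture and feature dimensions are assumptions, not the authors' method):

```python
import numpy as np
import tensorflow as tf

NUM_FEATURES = 68 * 2   # e.g. x/y coordinates of facial landmarks (assumption)
NUM_CLASSES = 6         # basic facial expressions

# Small dense network standing in for the expression recognizer.
net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

def contribution_scores(features):
    """Gradient magnitude of the top predicted class w.r.t. each input feature."""
    x = tf.convert_to_tensor(features[None, :], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        probs = net(x)
        score = tf.reduce_max(probs[0])  # probability of the top predicted class
    grads = tape.gradient(score, x)
    return tf.abs(grads)[0].numpy()      # one score per facial feature

# Example with random feature values:
print(contribution_scores(np.random.rand(NUM_FEATURES)).shape)  # (136,)
```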


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Eleonora Meister ◽  
Claudia Horn-Hofmann ◽  
Miriam Kunz ◽  
Eva G. Krumhuber ◽  
Stefan Lautenbacher

Objectives: The decoding of facial expressions of pain plays a crucial role in pain diagnostics and clinical decision making. For decoding studies, it is necessary to present facial expressions of pain in a flexible and controllable fashion. Computer models (avatars) of human facial expressions of pain allow specific facial features to be manipulated systematically. The aim of the present study was to investigate whether avatars can show realistic facial expressions of pain and how the sex of the avatars influences the decoding of pain by human observers.

Methods: For that purpose, 40 female (mean age: 23.9 years) and 40 male (mean age: 24.6 years) observers watched 80 short videos showing computer-generated avatars presenting the five clusters of facial expressions of pain (four active and one stoic cluster) identified by Kunz and Lautenbacher (2014). After each clip, observers were asked to rate the intensity of pain the avatars seemed to experience and the certainty of their judgement, i.e. whether the shown expression truly represented pain.

Results: Three of the four active facial clusters were similarly accepted as valid expressions of pain by the observers, whereas only one cluster ("raised eyebrows") was disregarded. The sex of the observed avatars influenced the decoding of pain, as indicated by increased intensity and elevated certainty ratings for female avatars.

Conclusions: The assumption of different valid facial expressions of pain could be corroborated in avatars, which contradicts the idea of a single uniform pain face. The observers' ratings of the avatars' pain were influenced by the avatars' sex, resembling known observer biases for humans. The use of avatars appears to be a suitable method in research on the decoding of the facial expression of pain, closely mirroring the known forms of human facial expressions.


Information ◽  
2020 ◽  
Vol 11 (10) ◽  
pp. 485
Author(s):  
Hind A. Alrubaish ◽  
Rachid Zagrouba

The human mood has a temporary effect on face shape due to the movement of facial muscles. Happiness, sadness, fear, anger, and other emotional conditions may affect the reliability of a face biometric system. Most current studies on facial expressions are concerned with the accuracy of classifying subjects based on their expressions. This study investigated the effect of facial expressions on the reliability of a face biometric system to find out which facial expression puts the system at greater risk. Moreover, it identified a set of facial features with the lowest facial deformation caused by facial expressions, which can be generalized during the recognition process regardless of which facial expression is presented. To achieve this goal, an analysis of 22 facial features between the neutral face and the six universal facial expressions was performed. The results show that face biometric systems are affected by facial expressions: the disgust expression achieved the most dissimilar score, while the sad expression achieved the least dissimilar score. Additionally, the study identified the top five and top ten facial features with the lowest facial deformation across all facial expressions. Furthermore, the relativity score showed less variance between samples when the top facial features were used. The results of this study minimize the false rejection rate of the face biometric system and consequently allow the system's acceptance threshold to be raised to maximize the intrusion detection rate without affecting user convenience.
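A sketch of how per-feature deformation scores might be computed and the most stable features selected; the feature values and the mean-absolute-deviation measure are illustrative assumptions, not the study's exact metric:

```python
import numpy as np

# Rows: 6 universal expressions; columns: 22 measured facial features.
# Random values stand in for real measurements; the neutral face is the reference.
rng = np.random.default_rng(0)
neutral = rng.normal(size=22)
expressions = neutral + rng.normal(scale=0.3, size=(6, 22))

# Deformation of each feature: mean absolute deviation from the neutral face
# across all expressions (an assumed dissimilarity measure).
deformation = np.abs(expressions - neutral).mean(axis=0)

# Indices of the top-5 and top-10 most stable features (lowest deformation).
top5 = np.argsort(deformation)[:5]
top10 = np.argsort(deformation)[:10]
print("most stable features:", top5)
```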


2014 ◽  
Vol 513-517 ◽  
pp. 4043-4046
Author(s):  
Ji Zheng Yan ◽  
Zhi Liang Wang ◽  
Yan Yan

Whether industrial or civilian, advanced intelligent robots are a focus of Artificial Intelligence (AI), especially those that have humanoid emotion and can show anthropomorphic facial expressions; our research therefore focuses on how to design a humanoid robot head that shows emotion to human beings. In this paper, we discuss three issues in turn. Issue 1: what approaches and theories can give a robot humanoid emotion? Issue 2: how can the robot show anthropomorphic facial expressions? Issue 3: what is the mechanical structure of the robot head? For issue 1, through analysis and comparison we choose Artificial Psychology as the means and guidance. For issue 2, we study the Facial Action Coding System (FACS), make innovative use of it, and further optimize the combination of control points to construct facial expressions. For issue 3, we divide the head into four parts, each of which is driven by servos. Finally, we build a robot head according to this concept and design. Through experiments and correction, we achieve the expected goals for advanced intelligent robots.
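Purely as an illustration of driving a robot head's expressions from combinations of FACS action units, the sketch below maps a few action units to servo angles; the unit-to-servo assignments and angle values are invented for the example and are not the authors' control scheme:

```python
# Hypothetical mapping from FACS action units to servo angles (degrees).
AU_TO_SERVO = {
    1:  ("inner_brow_servo", 30),   # AU1: inner brow raiser
    6:  ("cheek_servo", 25),        # AU6: cheek raiser
    12: ("lip_corner_servo", 45),   # AU12: lip corner puller
    26: ("jaw_servo", 20),          # AU26: jaw drop
}

# An expression is a set of active action units, e.g. happiness ~ AU6 + AU12.
HAPPINESS = [6, 12]

def servo_commands(action_units, neutral_angle=0):
    """Translate active action units into per-servo target angles."""
    commands = {name: neutral_angle for name, _ in AU_TO_SERVO.values()}
    for au in action_units:
        if au in AU_TO_SERVO:
            servo, angle = AU_TO_SERVO[au]
            commands[servo] = angle
    return commands

print(servo_commands(HAPPINESS))
```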


2020 ◽  
Vol 10 (11) ◽  
pp. 4002
Author(s):  
Sathya Bursic ◽  
Giuseppe Boccignone ◽  
Alfio Ferrara ◽  
Alessandro D’Amelio ◽  
Raffaella Lanzarotti

When automatic facial expression recognition is applied to video sequences of speaking subjects, recognition accuracy has been noted to be lower than with video sequences of still subjects. This effect, known as the speaking effect, arises during spontaneous conversations because, along with the affective expressions, the speech articulation process influences facial configurations. In this work we ask whether, aside from facial features, other cues relating to the articulation process would increase emotion recognition accuracy when added as input to a deep neural network model. We develop two neural networks that classify facial expressions in speaking subjects from the RAVDESS dataset: a spatio-temporal CNN and a GRU-cell RNN. They are first trained on facial features only, and afterwards on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that with DNNs the addition of articulation-related features increases classification accuracy by up to 12%, the increase being greater when more consecutive frames are provided as input to the model.
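A minimal sketch of the recurrent variant, assuming per-frame facial features concatenated with articulation cues from a lip-reading model; the feature dimensions, frame count, and layer sizes are assumptions, not the paper's exact configuration:

```python
import tensorflow as tf

FRAMES = 15          # consecutive frames per clip (assumption)
FACE_DIM = 128       # facial feature vector per frame (assumption)
ARTIC_DIM = 64       # articulation cues from a lip-reading model (assumption)
NUM_EMOTIONS = 8     # RAVDESS emotion classes

face_in = tf.keras.Input(shape=(FRAMES, FACE_DIM))
artic_in = tf.keras.Input(shape=(FRAMES, ARTIC_DIM))

# Concatenate the two feature streams frame by frame, then model the
# temporal dynamics with a GRU-cell RNN.
x = tf.keras.layers.Concatenate(axis=-1)([face_in, artic_in])
x = tf.keras.layers.GRU(128)(x)
out = tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax")(x)

model = tf.keras.Model([face_in, artic_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```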


Author(s):  
Yogesh Kumar ◽  
Shashi Kant Verma ◽  
Sandeep Sharma

Optimization of features is vital for effectively detecting facial expressions. This research work optimizes facial features by employing an improved quantum-inspired gravitational search algorithm (IQI-GSA). The improvement to the quantum-inspired gravitational search algorithm (QIGSA) is made to handle local optima trapping. The QIGSA is an amalgamation of quantum computing and the gravitational search algorithm, and it has a strong global search ability for optimization problems compared with the gravitational search algorithm alone. Despite this global search ability, the QIGSA can become trapped in local optima in later iterations. This work adopts the IQI-GSA approach to handle local optima and stochastic characteristics while maintaining a balance between exploration and exploitation. The IQI-GSA is used to select optimized features from the set of features extracted with the LGBP method (a hybrid approach combining local binary patterns with the Gabor filter). System performance is analyzed for automated facial expression recognition with a deep convolutional neural network (DCNN) classifier. Extensive experimental evaluation is conducted on the benchmark Japanese Female Facial Expression (JAFFE), Radboud Faces Database (RaFD), and Karolinska Directed Emotional Faces (KDEF) datasets. To determine the effectiveness of the proposed facial expression recognition system, results are also evaluated for feature optimization with GSA and QIGSA. The evaluation results clearly demonstrate that the system with IQI-GSA outperforms GSA, QIGSA, and existing techniques on the utilized datasets.
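A sketch of an LGBP-style feature extraction step (Gabor filtering followed by local binary patterns) using scikit-image; the filter frequencies, LBP parameters, and histogram pooling are assumptions, not the paper's exact settings:

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def lgbp_features(gray_face, frequencies=(0.1, 0.2, 0.3), points=8, radius=1):
    """Gabor-filter the face image, then pool an LBP histogram per filtered map."""
    histograms = []
    for freq in frequencies:
        real, _ = gabor(gray_face, frequency=freq)  # real part of the Gabor response
        lbp = local_binary_pattern(real, points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                               density=True)
        histograms.append(hist)
    return np.concatenate(histograms)  # candidate feature vector for selection

# Example: a hypothetical 64x64 grayscale face crop.
face = np.random.rand(64, 64)
print(lgbp_features(face).shape)  # (30,) for 3 frequencies x 10 bins
```

The resulting feature vectors would then be the candidates passed to the feature-selection stage (here, the IQI-GSA) before classification.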

