facial emotions
Recently Published Documents


TOTAL DOCUMENTS: 262 (FIVE YEARS: 83)
H-INDEX: 35 (FIVE YEARS: 4)

2022 · Vol 70 (1) · pp. 781-800
Author(s): Ishaani Priyadarshini, Chase Cotton

2021 · Vol 12
Author(s): Amal Taamallah, Soumeyya Halayem, Olfa Rajhi, Malek Ghazzai, Mohamed Moussa, ...

Background: Facial expressions transmit information about emotional state, facilitating communication and regulation in interpersonal relationships. Accurate recognition of facial expressions is essential for social adaptation and is impaired in children with autism spectrum disorders (ASD). The aim of our study was to validate the “Recognition of Facial Emotions: Tunisian Test for Children” among Tunisian children, in order to assess facial emotion recognition in children with ASD.

Methods: We conducted a cross-sectional study among neurotypical children from the general population. The final version of our test consisted of a static subtest of 114 photographs and a dynamic subtest of 36 videos expressing the six basic emotions (happiness, anger, sadness, disgust, fear, and surprise), presented by actors of different ages and genders. The test items were coded according to Ekman’s Facial Action Coding System. The validation study addressed content validity, construct validity, and reliability.

Results: We included 116 neurotypical children aged 7 to 12 years (54 boys and 62 girls). The reliability analysis showed good internal consistency for each subtest: Cronbach’s alpha was 0.88 for the static subtest and 0.85 for the dynamic subtest. Exploratory factor analysis of the emotion and intensity items showed that the distribution of items into sub-domains matched their theoretical distribution. Age was significantly correlated with the mean overall score for both subtests (p < 10⁻³), whereas gender was not significantly correlated with the overall score (p = 0.15). High-intensity photographs were better recognized, and happiness was the most recognized emotion in both subtests. The overall score of the dynamic subtest was significantly higher than that of the static subtest (p < 10⁻³).

Conclusion: This work provides clinicians with a reliable tool to assess recognition of facial emotions in typically developing children.
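The internal-consistency figures reported above (Cronbach’s alpha of 0.88 and 0.85) can be reproduced from item-level data with a short script. The following is a minimal sketch, assuming a children × items matrix of 0/1 recognition scores; the data below are randomly generated placeholders, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_children x n_items) matrix of 0/1 recognition scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of test items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the children's total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 116 children x 114 static-subtest items, each scored 0/1.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(116, 114))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```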


2021 · Vol 11 (22) · pp. 10540
Author(s): Navjot Rathour, Zeba Khanam, Anita Gehlot, Rajesh Singh, Mamoon Rashid, ...

There is significant interest in facial emotion recognition in the fields of human–computer interaction and the social sciences. With advancements in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially of human emotion, has evolved significantly. The most standard emotion recognition methods currently rely on models deployed on remote servers. We believe that reducing the distance between the input device and the model can lead to better efficiency and effectiveness in real-life applications. Computational methodologies such as edge computing can serve this purpose and can also enable time-critical applications in sensitive fields. In this study, we propose a Raspberry-Pi-based standalone edge device that detects facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article is mainly crafted using a dataset of employees working in organizations. The device has been implemented using the Mini-Xception deep network because of its computational efficiency and shorter runtime compared to other networks. It achieved 100% accuracy in detecting faces in real time and 68% emotion recognition accuracy, which is higher than the accuracy reported in the state-of-the-art on the FER-2013 dataset. Future work will implement a deep network on the Raspberry Pi with an Intel Movidius Neural Compute Stick to reduce processing time and achieve fast real-time facial emotion recognition.
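As an illustration of the kind of pipeline described, the sketch below wires OpenCV face detection to a pre-trained Mini-Xception-style classifier. The model file name, the 64×64 grayscale input size, and the label order are assumptions for illustration, not the authors’ actual code.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Assumed artefacts: a pre-trained Mini-Xception-style model exported as .h5
# and OpenCV's bundled Haar cascade for frontal faces.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("mini_xception_fer2013.h5", compile=False)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                       # Pi camera or USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Assumed model input: 64x64 grayscale, scaled to [0, 1].
        roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64)) / 255.0
        probs = model.predict(roi[np.newaxis, ..., np.newaxis], verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```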


2021 · Vol 2089 (1) · pp. 012014
Author(s): Dr A ViswanathReddy, A Aswini Reddy, C A Bindyashree

Abstract Recognition of facial expressions has many potential applications and has attracted researchers’ attention over the last decade. Feature extraction is an important step in expression analysis and contributes to quick and accurate recognition of expressions such as happiness, surprise, disgust, sadness, anger, and fear. Facial expressions are most frequently used to interpret human emotions, and they fall into two broad categories: positive emotions and non-positive emotions. Face detection, feature extraction, classification, and recognition are the major steps of the proposed system. Several segmentation techniques are applied and compared to determine which method is most appropriate for isolating the mouth region, which is then extracted using contrast stretching and image segmentation. After extraction of the mouth area, facial emotions are classified from the extracted mouth region of the face image based on white pixel values. Supervised learning is widely used in face identification algorithms, but it requires more computation time and effort and may assign incorrect class labels during classification. For this reason, supervised learning is combined with reinforcement learning. Reinforcement learning works in a trial-and-error fashion: during training it attempts to learn and produce the expected results, and it continually tries to improve them.
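A minimal sketch of the contrast-stretching and white-pixel step described above is shown below. The threshold value and file name are illustrative assumptions, and the prior face/mouth segmentation step is omitted.

```python
import cv2
import numpy as np

def contrast_stretch(gray: np.ndarray) -> np.ndarray:
    """Linearly stretch intensities of a grayscale mouth region to the full 0-255 range."""
    lo, hi = float(gray.min()), float(gray.max())
    if hi <= lo:
        return gray.copy()
    return ((gray - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def white_pixel_ratio(mouth_roi: np.ndarray, thresh: int = 200) -> float:
    """Fraction of bright ('white') pixels in the mouth region after stretching."""
    stretched = contrast_stretch(mouth_roi)
    _, binary = cv2.threshold(stretched, thresh, 255, cv2.THRESH_BINARY)
    return float(np.count_nonzero(binary)) / binary.size

# Hypothetical usage: mouth_roi would come from a prior mouth-segmentation step.
mouth_roi = cv2.imread("mouth_region.png", cv2.IMREAD_GRAYSCALE)
if mouth_roi is not None:
    print(f"white-pixel ratio = {white_pixel_ratio(mouth_roi):.3f}")
```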


Author(s): Italo Oliveira, Jacqueline Lopes Silva, Facundo Palomino Quispe, Ana Beatriz Alvarez

2021 · Vol 12
Author(s): Weiwei Cai, Ming Gao, Runmin Liu, Jie Mao

Understanding human emotions and psychology is a critical step toward realizing artificial intelligence, and correct recognition of facial expressions is essential for judging emotions. However, the differences caused by changes in facial expression are very subtle, and different expression features are difficult to distinguish, making it hard for computers to recognize human facial emotions accurately. Therefore, this paper proposes a novel multi-layer interactive feature fusion network model with an angular distance loss. First, a multi-layer, multi-scale module is designed to extract global and local features of facial emotions in order to capture some of the feature relationships between different scales, thereby improving the model's ability to discriminate subtle features of facial emotions. Second, a hierarchical interactive feature fusion module is designed to address the loss of useful feature information caused by the layer-by-layer convolution and pooling of convolutional neural networks. In addition, an attention mechanism is used between convolutional layers at different levels to improve the network's discriminative ability by increasing the saliency of informative features and suppressing irrelevant information. Finally, we use an angular distance loss function to improve the proposed model's inter-class feature separation and intra-class feature clustering capabilities, addressing the issues of large intra-class differences and high inter-class similarity in facial emotion recognition. We conducted comparison and ablation experiments on the FER2013 dataset. The results show that the proposed MIFAD-Net outperforms the compared methods by 1.02–4.53% and is highly competitive.
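The exact angular distance loss used in MIFAD-Net is not specified here; the sketch below shows one common additive-angular-margin formulation (ArcFace-style) that serves the same goal of increasing inter-class separation and intra-class clustering. The embedding size, class count, and hyperparameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def angular_margin_loss(features, weights, labels, s=30.0, m=0.5):
    """ArcFace-style additive angular margin loss (one common 'angular distance' loss).

    features : (batch, dim) embeddings from the backbone
    weights  : (num_classes, dim) class centre vectors
    """
    # Cosine similarity between L2-normalised embeddings and class centres.
    cos = F.linear(F.normalize(features), F.normalize(weights)).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos)
    # Add the angular margin m only to the target-class angle.
    target = F.one_hot(labels, num_classes=weights.shape[0]).bool()
    logits = torch.where(target, torch.cos(theta + m), cos) * s
    return F.cross_entropy(logits, labels)

# Hypothetical shapes: 7 FER2013 emotion classes, 128-d embeddings, batch of 8.
feats = torch.randn(8, 128, requires_grad=True)
centres = torch.randn(7, 128, requires_grad=True)
labels = torch.randint(0, 7, (8,))
print(angular_margin_loss(feats, centres, labels).item())
```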


2021 · Vol 12
Author(s): Anna-Sophie Weil, Vivien Günther, Frank Martin Schmidt, Anette Kersting, Markus Quirin, ...

This study focused on the criterion-related validity of the Implicit Positive and Negative Affect Test (IPANAT). The IPANAT is thought to be a measure of automatic activation of cognitive representations of affects. In this study, it was investigated whether implicit affect scores differentially predict ratings of facial emotions over and above explicit affectivity. Ninety-six young female participants completed the IPANAT, the Positive and Negative Affect Schedule (PANAS) as an explicit measure of state and trait affectivity, and a task for the perception of facial emotions. Implicit negative affect predicted the perception of negative but not positive facial emotions, whereas implicit positive affect predicted the perception of positive but not negative facial emotions. The observed double-dissociation in the correlational pattern strongly supports the validity of the IPANAT as a measure of implicit affectivity and is indicative of the orthogonality and thus functional distinctness of the two affect dimensions of the IPANAT. Moreover, such affect-congruent correlations were absent for explicit affect scales, which additionally supports the incremental validity of the IPANAT.
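“Predicting over and above explicit affectivity” is typically tested with hierarchical regression; the sketch below shows one way such an incremental-validity check could look. The column names and simulated data are entirely hypothetical, not the study’s variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame for 96 participants; column names are illustrative only.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "neg_face_rating": rng.normal(size=96),  # perception of negative facial emotions
    "panas_neg":       rng.normal(size=96),  # explicit negative affect (PANAS)
    "ipanat_neg":      rng.normal(size=96),  # implicit negative affect (IPANAT)
})

# Step 1: explicit affect only; Step 2: add implicit affect and check the R² gain.
base = smf.ols("neg_face_rating ~ panas_neg", data=df).fit()
full = smf.ols("neg_face_rating ~ panas_neg + ipanat_neg", data=df).fit()
print(f"R² gain from adding IPANAT: {full.rsquared - base.rsquared:.3f}")
print(full.summary().tables[1])
```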


Author(s): Abhinav Chaubey

Abstract: Artificial intelligence gives us the capability to detect the emotions of human beings, although variation in individual expression makes it difficult to do so precisely. With AI we can mimic human abilities such as recognizing someone from restricted facial features. This paper identifies facial emotions by detecting face regions such as the eyes, nose, lips, and forehead. We propose to extract facial emotion characteristics by applying two preprocessing methods, histogram-based processing and data augmentation. A two-input architecture is used: the first input accepts the grayscale face image, while the second accepts the histograms. The final stage computes the result using KNN and SVM classifiers. The results indicate that the proposed algorithm detects six fundamental facial emotions, including happiness, anger, fear, and surprise. More precise results are expected when using a trained model on the dataset. Keywords: SVM, KNN, FER, DNN, VGG16, HOG, HSOG.
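A minimal sketch of the classification stage, assuming HOG descriptors feed the SVM and KNN classifiers mentioned in the keywords; the image size, HOG parameters, and training data below are illustrative placeholders only.

```python
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def hog_features(gray_face: np.ndarray) -> np.ndarray:
    """Histogram-of-oriented-gradients descriptor for a 48x48 grayscale face image."""
    return hog(gray_face, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Hypothetical training data: (n_samples, 48, 48) grayscale faces and emotion labels 0-5.
rng = np.random.default_rng(0)
faces = rng.random((60, 48, 48))
labels = rng.integers(0, 6, size=60)
X = np.stack([hog_features(f) for f in faces])

svm = SVC(kernel="rbf").fit(X, labels)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print("SVM prediction:", svm.predict(X[:3]))
print("KNN prediction:", knn.predict(X[:3]))
```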

