The Relationship Between Facial Expression Recognition Accuracy and Sharing Behavior in Childhood

2019 ◽  
Vol 32 (1) ◽  
pp. 69-86
Author(s):  
Sangeun Lee ◽  
Yoonkyung Jeong


2021 ◽  
Vol 12 ◽  
Author(s):  
Ma Ruihua ◽  
Guo Hua ◽  
Zhao Meng ◽  
Chen Nan ◽  
Liu Panqi ◽  
...  

Objective: Considerable evidence has shown that facial expression recognition ability and cognitive function are impaired in patients with depression. We aimed to investigate the relationship between facial expression recognition and cognitive function in patients with depression. Methods: A total of 51 participants (31 patients with depression and 20 healthy control subjects) completed facial expression recognition tests for anger, fear, disgust, sadness, happiness, and surprise. Cognitive function was assessed with the Chinese version of the MATRICS Consensus Cognitive Battery (MCCB), which covers seven cognitive domains. Results: Compared with the control group, the depression group differed in the recognition of sadness (p = 0.036), happiness (p = 0.041), and disgust (p = 0.030). In terms of cognitive function, patients with depression scored significantly lower than controls on the Trail Making Test (TMT; p < 0.001), symbol coding (p < 0.001), spatial span (p < 0.001), mazes (p = 0.007), the Brief Visuospatial Memory Test (BVMT; p = 0.001), category fluency (p = 0.029), and the continuous performance test (p = 0.001). In patients with depression, the accuracy of sadness and disgust expression recognition was significantly positively correlated with cognitive function scores. Deficits in sadness expression recognition were significantly correlated with the TMT (p = 0.001, r = 0.561), symbol coding (p = 0.001, r = 0.596), mazes (p = 0.015, r = 0.439), and the BVMT (p = 0.044, r = 0.370); deficits in disgust expression recognition were significantly correlated with impairments in the TMT (p = 0.005, r = 0.501) and symbol coding (p = 0.001, r = 0.560). Conclusion: As cognitive function is impaired in patients with depression, the ability to recognize negative facial expressions declines, mainly in domains related to processing speed, reasoning and problem-solving, and memory.
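
A minimal sketch (not the authors' code) of this kind of analysis: a between-group comparison of recognition accuracy and Pearson correlations between accuracy and MCCB domain scores within the patient group. The file and column names (expression_cognition.csv, group, sad_acc, tmt) are hypothetical placeholders.

# Hypothetical analysis sketch; column names are illustrative, not from the study.
import pandas as pd
from scipy import stats

df = pd.read_csv("expression_cognition.csv")            # one row per participant
patients = df[df["group"] == "depression"]
controls = df[df["group"] == "control"]

# Between-group difference in sadness-recognition accuracy (independent t-test)
t, p = stats.ttest_ind(patients["sad_acc"], controls["sad_acc"])
print(f"sadness accuracy: t = {t:.2f}, p = {p:.3f}")

# Correlation between sadness-recognition accuracy and Trail Making Test score
# within the depression group
r, p = stats.pearsonr(patients["sad_acc"], patients["tmt"])
print(f"sad_acc vs TMT: r = {r:.3f}, p = {p:.3f}")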


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Olalekan Agbolade ◽  
Azree Nazri ◽  
Razali Yaakob ◽  
Abdul Azim Ghani ◽  
Yoke Kqueen Cheah

Abstract Background Expression in Homo sapiens plays a remarkable role in social communication. Identifying such expressions is relatively easy and accurate for human beings, but achieving the same result in 3D by machine remains a challenge in computer vision. This is due to current difficulties in 3D facial data acquisition, such as the lack of homology and the complex mathematical analysis required for facial point digitization. This study proposes facial expression recognition in humans using Multi-points Warping of 3D facial landmarks, building a template mesh as a reference object. The template mesh is then applied to each target mesh in the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between the template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. Principal Component Analysis (PCA) is used for feature selection, and classification is performed with Linear Discriminant Analysis (LDA). Result The localization error is validated on the two datasets with superior performance over state-of-the-art methods, and variation in expression is visualized using Principal Components (PCs). The deformations show various expression regions in the faces. The results indicate that the sad expression has the lowest recognition accuracy on both datasets. The classifier achieved recognition accuracies of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively. Conclusion The results demonstrate that the method is robust and in agreement with state-of-the-art results.
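
As an illustration only, a PCA-then-LDA classification step like the one described could be sketched with scikit-learn as below; the landmark arrays and file names are placeholders, and the Multi-points Warping itself is not reproduced.

# Illustrative PCA + LDA pipeline on flattened semi-landmark coordinates.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

y = np.load("expression_labels.npy")          # one expression label per mesh (placeholder file)
X = np.load("slid_semilandmarks.npy")         # (x, y, z) semi-landmarks per mesh (placeholder file)
X = X.reshape(len(y), -1)                     # flatten landmarks into one feature vector per mesh

clf = make_pipeline(PCA(n_components=0.95),   # keep components explaining 95% of variance
                    LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated recognition accuracy: {scores.mean():.3f}")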


2019 ◽  
Vol 8 (4) ◽  
pp. 9782-9787

Facial Expression Recognition is an important task for machines: recognizing different expressive changes in an individual. Emotions have a strong relationship with our behavior; human emotions are discrete reactions to internal or external events that carry meaning. Automatic emotion detection is the process of understanding an individual's affective state in order to infer intentions from facial expressions, which are also a significant part of non-verbal communication. In this paper we propose a framework that combines discriminative features discovered using Convolutional Neural Networks (CNNs) to enhance the performance and accuracy of facial expression recognition. We use the pre-trained Inception V3 architecture and concatenate the output of an intermediate layer with that of the final layer, passing the result through a fully connected layer to perform classification. We evaluate the approach on the JAFFE (Japanese Female Facial Expression) dataset, and experimental results show that the proposed method performs better and improves recognition accuracy.
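
A rough Keras sketch of the layer-concatenation idea, assuming (since the abstract does not name it) that "mixed7" serves as the intermediate Inception V3 layer; hyperparameters are illustrative.

# Sketch: fuse pooled intermediate and final Inception V3 features before a
# fully connected classifier (7 JAFFE expression classes assumed).
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
mid = layers.GlobalAveragePooling2D()(base.get_layer("mixed7").output)   # intermediate features
top = layers.GlobalAveragePooling2D()(base.output)                       # final-layer features
merged = layers.Concatenate()([mid, top])
x = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(7, activation="softmax")(x)
model = Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])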


2011 ◽  
Vol 403-408 ◽  
pp. 3199-3202
Author(s):  
Zheng Zhang ◽  
Chao Xu

A distributed facial expression recognition approach based on MB-LGBP features and decision fusion is presented in this paper to accomplish subject-independent facial expression recognition more efficiently. First, Multi-scale Block Local Gabor Binary Patterns (MB-LGBP) are extracted from expression regions to obtain features that are informative both locally and globally. Then a distributed architecture is proposed to accelerate the recognition process, in which the features of each region are used to perform expression classification in parallel. The final decision is made by an artificial neural network (ANN) that fuses the confidence information obtained from the classification of each region. In experiments, we compare the runtime and recognition accuracy of our system with several other popular expression recognition paradigms. The results show that the distributed architecture can markedly improve the efficiency of facial expression recognition while maintaining comparable recognition accuracy.
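
A simplified stand-in sketch of the region-wise classification plus ANN decision fusion: plain uniform LBP (scikit-image) substitutes for MB-LGBP, an SVM per region produces class probabilities, and an MLP fuses the concatenated confidences. All of these substitutions are assumptions for illustration.

# Per-region feature extraction, per-region classifiers, and a fusion ANN.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def region_histogram(region, P=8, R=1):
    """Uniform-LBP histogram for one expression region (e.g. eyes or mouth)."""
    lbp = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# One classifier per region; in the distributed setting each runs in parallel.
region_clfs = {name: SVC(probability=True) for name in ("eyes", "nose", "mouth")}
# ... fit each region classifier on its region's histograms and labels ...

def confidence_vector(regions):
    """Concatenate per-region class-probability outputs for the fusion network."""
    return np.concatenate([region_clfs[name].predict_proba([region_histogram(img)])[0]
                           for name, img in regions.items()])

fusion_ann = MLPClassifier(hidden_layer_sizes=(32,))
# ... fit fusion_ann on stacked confidence vectors; final label = fusion_ann.predict(...)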


2020 ◽  
Vol 13 (4) ◽  
pp. 527-543
Author(s):  
Wenjuan Shen ◽  
Xiaoling Li

Purpose: In recent years, facial expression recognition has been widely used in human-machine interaction, clinical medicine and safe driving. However, conventional recurrent neural networks can only learn the time-series characteristics of expressions from one-way propagation of information. Design/methodology/approach: To address this limitation, this paper proposes a novel model based on bidirectional gated recurrent unit networks (Bi-GRUs) with two-way propagation, and the theory of identity-mapping residuals is adopted to prevent the vanishing-gradient problem caused by network depth. Since the Inception-V3 network used for spatial feature extraction has too many parameters, it is prone to overfitting during training; this paper therefore adds two reduction modules to reduce the parameter count, yielding an Inception-W network with better generalization. Findings: The proposed model is first pretrained to determine the best settings and selections. The pretrained model is then evaluated on two facial expression data sets, CK+ and Oulu-CASIA, and its recognition performance and efficiency are compared with existing methods. The highest recognition rate is 99.6%, which shows that the method achieves good recognition accuracy within a certain range. Originality/value: By using the proposed model in facial expression applications, the high recognition accuracy and robust recognition results with lower time consumption will help to build more sophisticated applications in the real world.
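
A rough PyTorch sketch of the temporal component described above: a bidirectional GRU whose output is added to an identity (residual) mapping of its input. The Inception-W spatial backbone is not reproduced; per-frame feature vectors and all dimensions are assumptions.

# Bi-GRU with an identity-mapping residual connection over per-frame features.
import torch
import torch.nn as nn

class ResidualBiGRU(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, num_classes=7):
        super().__init__()
        self.bigru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, feat_dim)   # project back to the input width
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        out, _ = self.bigru(x)
        out = self.proj(out) + x           # identity-mapping residual connection
        return self.head(out.mean(dim=1))  # average over frames, then classify

logits = ResidualBiGRU()(torch.randn(4, 16, 256))   # 4 clips of 16 frames each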


Electronics ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 385 ◽  
Author(s):  
Ying Chen ◽  
Zhihao Zhang ◽  
Lei Zhong ◽  
Tong Chen ◽  
Juxiang Chen ◽  
...  

Near-infrared (NIR) facial expression recognition is resistant to illumination change. In this paper, we propose a three-stream three-dimensional convolution neural network with a squeeze-and-excitation (SE) block for NIR facial expression recognition. We feed each stream with a different local region, namely the eyes, nose, and mouth. With the SE block, the network automatically allocates weights to the different local features to further improve recognition accuracy. Experimental results on the Oulu-CASIA NIR facial expression database show that the proposed method achieves a higher recognition rate than several state-of-the-art algorithms.
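
A minimal PyTorch sketch of a squeeze-and-excitation block applied to 3D convolutional feature maps, in the spirit of the re-weighting described above; the reduction ratio and tensor sizes are illustrative assumptions.

# SE block for 3D feature maps: squeeze by global pooling, excite by channel weights.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, C, T, H, W)
        w = x.mean(dim=(2, 3, 4))              # squeeze: global average pooling
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)
        return x * w                           # excitation: per-channel re-weighting

feats = torch.randn(2, 64, 8, 28, 28)          # e.g. features from one local-region stream
reweighted = SEBlock3D(64)(feats)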


2018 ◽  
Author(s):  
Jiayin Zhao ◽  
Yifang Wang ◽  
Licong An

Abstract Faces play important roles in the social lives of humans. In addition to real faces, people also encounter numerous cartoon faces in daily life. These cartoon faces convey basic emotional states through facial expressions. Using a behavioral research methodology and event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. This study used face type (real vs. cartoon) and participant gender (male vs. female) as independent variables. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion processing-related ERP components such as N170, vertex positive potential (VPP), and late positive potential (LPP) were used as dependent variables. The ERP results revealed that cartoon faces caused larger N170 and VPP amplitudes as well as a briefer N170 latency than did real faces; that real faces induced larger LPP amplitudes than did cartoon faces; and that angry faces induced larger LPP amplitudes than did happy faces. In addition, the results showed a significant difference in the brain regions associated with face processing as reflected in a right hemispheric advantage. The behavioral results showed that the reaction times for happy faces were shorter than those for angry faces; that females showed a higher facial expression recognition accuracy than did males; and that males showed a higher recognition accuracy for angry faces than happy faces. These results demonstrate differences in facial expression recognition and neurological processing between cartoon faces and real faces among adults. Cartoon faces showed a higher processing intensity and speed than real faces during the early processing stage. However, more attentional resources were allocated for real faces during the late processing stage.


2012 ◽  
Vol 110 (1) ◽  
pp. 338-350 ◽  
Author(s):  
Mariano Chóliz ◽  
Enrique G. Fernández-Abascal

Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for the basic emotions of happiness, anger, fear, sadness, surprise, and disgust: 30 pictures (5 for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before the pictures of facial expressions were presented. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.


2020 ◽  
Vol 10 (11) ◽  
pp. 4002
Author(s):  
Sathya Bursic ◽  
Giuseppe Boccignone ◽  
Alfio Ferrara ◽  
Alessandro D’Amelio ◽  
Raffaella Lanzarotti

When automatic facial expression recognition is applied to video sequences of speaking subjects, the recognition accuracy has been noted to be lower than with video sequences of still subjects. This effect, known as the speaking effect, arises during spontaneous conversations because the speech articulation process influences facial configurations alongside the affective expressions. In this work we ask whether, aside from facial features, other cues relating to the articulation process would increase emotion recognition accuracy when added as input to a deep neural network model. We develop two neural networks that classify facial expressions in speaking subjects from the RAVDESS dataset, a spatio-temporal CNN and a GRU cell RNN. They are first trained on facial features only, and afterwards on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that with these DNNs the addition of articulation-related features increases classification accuracy by up to 12%, with the increase being greater when more consecutive frames are provided as input to the model.
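
A hedged PyTorch sketch of the input-fusion idea: per-frame facial features are concatenated with articulation cues (e.g. embeddings from a lip-reading model) before a GRU classifier. The feature dimensions and the lip-reading extractor are placeholders, not the authors' exact models.

# Concatenate facial and articulation features per frame, then classify with a GRU.
import torch
import torch.nn as nn

class EmotionGRU(nn.Module):
    def __init__(self, face_dim=512, artic_dim=256, hidden=128, num_classes=8):
        super().__init__()
        self.gru = nn.GRU(face_dim + artic_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)   # e.g. 8 RAVDESS emotion classes

    def forward(self, face_feats, artic_feats):      # both: (batch, frames, dim)
        x = torch.cat([face_feats, artic_feats], dim=-1)
        _, h = self.gru(x)
        return self.head(h[-1])                      # last hidden state -> class logits

logits = EmotionGRU()(torch.randn(4, 30, 512), torch.randn(4, 30, 256))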


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Qing Lin ◽  
Ruili He ◽  
Peihe Jiang

State-of-the-art facial expression recognition methods outperform human beings, largely thanks to the success of convolutional neural networks (CNNs). However, most existing work focuses on analyzing adult faces and ignores important questions: how can we recognize facial expressions from a baby's face image, and how difficult is it? In this paper, we first introduce a new face image database, named BabyExp, which contains 12,000 images of babies younger than two years old, each labeled with one of three facial expressions (happy, sad, or normal). To the best of our knowledge, the proposed dataset is the first baby face dataset for analyzing baby face images; it is complementary to existing adult face datasets and can shed light on baby face analysis. We also propose a feature-guided CNN method with a new loss function, called distance loss, to optimize interclass distance. To facilitate further research, we provide a benchmark for expression recognition on the BabyExp dataset. Experimental results show that the proposed network achieves a recognition accuracy of 87.90% on BabyExp.
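
The abstract does not give the exact form of the distance loss, so the following PyTorch sketch is only one plausible reading: cross-entropy plus a penalty that shrinks as class-mean features move farther apart, with an illustrative weight lambda_d.

# Speculative "distance loss": cross-entropy plus an interclass-distance penalty.
import torch
import torch.nn.functional as F

def distance_loss(features, logits, labels, lambda_d=0.1):
    ce = F.cross_entropy(logits, labels)
    centroids = [features[labels == c].mean(dim=0) for c in labels.unique()]
    penalty = 0.0
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            # penalize pairs of class centroids that sit close together
            penalty = penalty + 1.0 / (1.0 + torch.dist(centroids[i], centroids[j]))
    return ce + lambda_d * penalty

# e.g.: distance_loss(torch.randn(8, 64), torch.randn(8, 3), torch.randint(0, 3, (8,)))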

