Facial masks affect emotion recognition in the general population and individuals with autistic traits

PLoS ONE, 2021, Vol 16 (9), pp. e0257740
Author(s): Farid Pazhoohi, Leilani Forby, Alan Kingstone

Facial expressions, and the ability to recognize these expressions, evolved in humans to communicate information to one another. Face masks are worn by health professionals to prevent the transmission of airborne infections, and as part of the social distancing measures related to COVID-19, mask wearing has been practiced globally. Such practice might impair the communication of affective information among humans. Previous research suggests that masks disrupt the recognition of some expressions (e.g., fear, sadness, or neutrality) and lower confidence in their identification. To extend that research, the current study tested a larger and more diverse sample of individuals and also investigated the effect of masks on the perceived intensity of expressions. Moreover, for the first time in the literature, we examined these questions in individuals with autistic traits. Specifically, across three experiments using different populations (college students and the general population) and the 10-item Autism Spectrum Quotient (AQ-10; lower and higher scorers), we tested the effect of face masks on the recognition of anger, disgust, fear, happiness, sadness, and neutrality. Results showed that the ability to identify all facial expressions decreased when faces were masked, a finding observed across all three studies that contradicts previous research on fearful, sad, and neutral expressions. Participants were also less confident in their judgements for all emotions, supporting previous research, and they perceived emotions as less intense in the masked condition than in the unmasked condition, a finding novel to the literature. A further novel finding was that participants with higher AQ-10 scores were less accurate and less confident overall in facial expression recognition, and also perceived expressions as less intense. Our findings reveal that wearing face masks decreases facial expression recognition accuracy, confidence in expression identification, and the perceived intensity of all expressions, affecting high-scoring AQ-10 individuals more than low-scoring individuals.

Emotion recognition remains a challenging problem in machine vision. Humans convey emotions chiefly through facial expressions. In this paper we use a 2D image-processing method to recognize facial expressions by extracting features. The proposed algorithm first applies a few preprocessing steps, and the preprocessed image is then partitioned into two main regions: the eyes and the mouth. Bezier curves are fitted to these regions to identify the emotion. Experimental results show that the proposed technique achieves 80% to 85% accuracy.
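The abstract above describes the curve-fitting step only at a high level. As a rough illustration, the following Python sketch fits a cubic Bezier curve to the ordered 2-D points of one facial region (an eye or mouth contour) by least squares and concatenates the resulting control points into a feature vector. The function names, the chord-length parameterisation, and the use of landmark points are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

def fit_cubic_bezier(points):
    """Least-squares fit of a cubic Bezier curve to ordered 2-D points.

    points : (n, 2) array of coordinates along one facial part
    returns: (4, 2) array of control points P0..P3
    """
    points = np.asarray(points, dtype=float)
    # Chord-length parameterisation in [0, 1]
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]
    # Bernstein basis matrix for a cubic curve
    A = np.column_stack([(1 - t) ** 3,
                         3 * (1 - t) ** 2 * t,
                         3 * (1 - t) * t ** 2,
                         t ** 3])
    ctrl, *_ = np.linalg.lstsq(A, points, rcond=None)
    return ctrl

def curve_features(eye_pts, mouth_pts):
    """Concatenate control points of the eye and mouth curves into one feature vector."""
    return np.concatenate([fit_cubic_bezier(eye_pts).ravel(),
                           fit_cubic_bezier(mouth_pts).ravel()])
```

The control points compactly summarise the curvature of the mouth and eyes, which is the kind of shape cue the abstract attributes to the Bezier-curve step.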


2021, Vol 12 (1)
Author(s): Michael C. W. English, Gilles E. Gignac, Troy A. W. Visser, Andrew J. O. Whitehouse, James T. Enns, ...

Background: Traits and characteristics qualitatively similar to those seen in diagnosed autism spectrum disorder can be found to varying degrees in the general population. To measure these traits and facilitate their use in autism research, several questionnaires have been developed that provide broad measures of autistic traits [e.g. Autism-Spectrum Quotient (AQ), Broad Autism Phenotype Questionnaire (BAPQ)]. However, since their development, our understanding of autism has grown considerably, and it is arguable that existing measures do not provide an ideal representation of the trait dimensions currently associated with autism. Our aim was to create a new measure of autistic traits that reflects our current understanding of autism, the Comprehensive Autism Trait Inventory (CATI).
Methods: In Study 1, 107 pilot items were administered to 1119 individuals in the general population, and exploratory factor analysis of the responses was used to create the 42-item CATI comprising six subscales: Social Interactions, Communication, Social Camouflage, Repetitive Behaviours, Cognitive Rigidity, and Sensory Sensitivity. In Study 2, the CATI was administered to 1068 new individuals, and confirmatory factor analysis was used to verify the factor structure. The AQ and BAPQ were administered to validate the CATI, and additional autistic participants were recruited to compare the predictive ability of the measures. In Study 3, to validate the CATI subscales, the CATI was administered to 195 new individuals along with existing validated measures qualitatively similar to each CATI subscale.
Results: The CATI showed convergent validity at both the total-scale (r ≥ .79) and subscale level (r ≥ .68). The CATI also showed superior internal reliability for total-scale scores (α = .95) relative to the AQ (α = .90) and BAPQ (α = .94), consistently high reliability for subscales (α > .81), greater predictive ability for classifying autism (Youden's Index = .62 vs .56–.59), and measurement invariance for sex.
Limitations: Analyses of predictive ability for classifying autism depended upon self-reported diagnosis or identification of autism. The autistic sample was not large enough to test measurement invariance across autism diagnosis.
Conclusions: The CATI is a reliable and economical new measure that provides observations across a wide range of trait dimensions associated with autism, potentially precluding the need to administer multiple measures. To our knowledge, the CATI is also the first broad measure of autistic traits to have dedicated subscales for social camouflage and sensory sensitivity.
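For readers unfamiliar with the statistics quoted above, the sketch below shows how the two headline figures are conventionally computed: Cronbach's alpha from an item-response matrix and Youden's Index from total scores against a diagnostic grouping. The data layout and function names are illustrative assumptions; the abstract does not describe the authors' actual analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def best_youden_index(scores, is_autistic):
    """Maximum of sensitivity + specificity - 1 over all score cut-offs."""
    scores = np.asarray(scores, dtype=float)
    is_autistic = np.asarray(is_autistic, dtype=bool)
    best = -1.0
    for cut in np.unique(scores):
        pred = scores >= cut            # classify as autistic at this cut-off
        sens = (pred & is_autistic).sum() / is_autistic.sum()
        spec = (~pred & ~is_autistic).sum() / (~is_autistic).sum()
        best = max(best, sens + spec - 1)
    return best
```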


2021, Vol 11 (4), pp. 1428
Author(s): Haopeng Wu, Zhiying Lu, Jianfeng Zhang, Xin Li, Mingyue Zhao, ...

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), attends both to the overall features of the face and to the behaviour of key facial parts. Video data are processed first: an ensemble of regression trees (ERT) is used to obtain the overall contour of the face, and an attention model then picks out the parts of the face that are most affected by expression changes. The combination of these two steps yields an image that can be regarded as a local feature map. The video data are then fed to the MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are captured from the image sequence, the selection of key parts allows the network to better learn the changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
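The abstract describes parallel sub-networks that fuse a global face representation with a local feature map of key parts. The PyTorch sketch below is a minimal, hypothetical rendering of that two-branch idea for a single frame; the layer sizes, branch depth, and fusion by concatenation are assumptions, and the published MC-DCN additionally models the temporal dimension of the video.

```python
import torch
import torch.nn as nn

class TwoStreamFER(nn.Module):
    """Minimal two-branch network: one branch sees the whole face, the other a
    'local feature map' of key parts; their embeddings are fused before classification."""

    def __init__(self, num_classes=7):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.global_branch = branch()
        self.local_branch = branch()
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, face, local_map):
        g = self.global_branch(face)       # overall facial appearance
        l = self.local_branch(local_map)   # expression-sensitive key parts
        return self.classifier(torch.cat([g, l], dim=1))

# Usage with dummy inputs (batch of 4 face images and 4 local feature maps):
# logits = TwoStreamFER()(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```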


Sensors, 2021, Vol 21 (6), pp. 2003
Author(s): Xiaoliang Zhu, Shihao Ye, Liang Zhao, Zhicheng Dai

Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images. The face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are fed to a residual neural network to extract the spatial features of the corresponding facial expressions. The spatial features are passed to the hybrid attention module to obtain fused expression features. Finally, the fused features are fed to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences) and AFEW datasets yield recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves on them by more than 2% on the AFEW dataset, indicating a clear advantage for facial expression recognition in natural environments.
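As a rough sketch of the cascade just described (per-frame spatial features, attention over frames, then temporal modelling), the following PyTorch module chains a ResNet-18 backbone, a simple soft attention over frame embeddings, a GRU, and a linear classifier. The backbone choice, attention form, and layer sizes are assumptions for illustration; the paper's hybrid attention module is more elaborate.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SpatialTemporalFER(nn.Module):
    """Per-frame spatial features from a ResNet backbone, soft attention over
    frames, a GRU for temporal modelling, and a linear classifier."""

    def __init__(self, num_classes=7, hidden=128):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()            # 512-d embedding per frame
        self.backbone = backbone
        self.attn = nn.Linear(512, 1)          # frame-level attention weights
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        weights = torch.softmax(self.attn(feats), dim=1)
        feats = feats * weights                # re-weight informative frames
        _, h = self.gru(feats)
        return self.fc(h[-1])
```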


2021, Vol 2021, pp. 1-17
Author(s): Yusra Khalid Bhatti, Afshan Jamil, Nudrat Nida, Muhammad Haroon Yousaf, Serestina Viriri, ...

Classroom communication involves teachers' behavior and students' responses. Extensive research has been done on the analysis of students' facial expressions, but the impact of instructors' facial expressions is still an unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher's emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery might not only improve the learning environment but could also save the time and resources consumed by manual assessment strategies. To address the issue of manual assessment, we propose an approach for recognizing an instructor's facial expressions in the classroom using a feedforward learning model. First, the face is detected from the acquired lecture videos and key frames are selected, discarding all redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks, with parameter tuning, and fed to a classifier. For fast learning and good generalization, a regularized extreme learning machine (RELM) classifier is employed, which classifies five different expressions of the instructor within the classroom. Experiments are conducted on a newly created dataset of instructors' facial expressions in classroom environments plus three benchmark facial datasets: Cohn–Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. The results indicate significant gains in accuracy, F1-score, and recall.
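The regularized extreme learning machine mentioned above is a standard construction: a fixed random hidden layer followed by a ridge-regression solve for the output weights. The NumPy sketch below assumes the deep CNN features have already been extracted into a matrix X with integer class labels y; the hyperparameters and class handling are placeholders rather than the paper's settings.

```python
import numpy as np

class RegularizedELM:
    """Minimal regularized extreme learning machine: a random hidden layer
    followed by a ridge-regression solve for the output weights."""

    def __init__(self, n_hidden=500, reg=1.0, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(np.asarray(X, dtype=float) @ self.W + self.b)

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=int)
        n_classes = y.max() + 1
        # Random, untrained input weights and biases
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        T = np.eye(n_classes)[y]               # one-hot targets
        # Ridge-regularized least squares for the output weights
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```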


2021, Vol 9 (5), pp. 1141-1152
Author(s): Muazu Abdulwakil Auma, Eric Manzi, Jibril Aminu

Facial recognition is integral and essential in today's society, and the recognition of emotions from facial expressions is becoming increasingly common. This paper provides an analytical overview of databases of facial expression video data and of several approaches to recognizing emotions from facial expressions, covering the three main image-analysis stages: pre-processing, feature extraction, and classification. The paper presents both deep-learning approaches based on deep neural networks and traditional approaches to recognizing human emotions from visual facial features, and reports current results of some existing algorithms. In reviewing the scientific and technical literature, the focus was mainly on sources containing theoretical and experimental information on the methods under consideration and on comparisons of traditional techniques with methods based on deep neural networks. An analysis of the literature describing methods and algorithms for analyzing and recognizing facial expressions, together with worldwide research results, shows that traditional methods of classifying facial expressions are inferior in speed and accuracy to artificial neural networks. This review's main contributions provide a general understanding of modern approaches to facial expression recognition, which will allow new researchers to understand its main components and trends. A comparison of published research results shows that combining traditional approaches with approaches based on deep neural networks yields better classification accuracy; however, the best classification methods remain artificial neural networks.


Sensors, 2021, Vol 21 (19), pp. 6438
Author(s): Chiara Filippini, David Perpetuini, Daniela Cardone, Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect of achieving this goal is the robot's capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor's emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot's awareness of human facial expressions and provides the robot with the capability to detect an interlocutor's arousal level. The model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
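The abstract reports an arousal-detection capability and a per-image inference time, but not how either is computed. The sketch below illustrates one plausible approach, mapping the expression classifier's output probabilities to a scalar arousal estimate with circumplex-style weights and timing a single forward pass; the weights, class list, and the `model` callable are assumptions, and no NAOqi-specific API calls are shown.

```python
import time
import numpy as np

# Assumed arousal weight per expression class (circumplex-style mapping);
# these values are illustrative, not the weights used in the paper.
AROUSAL = {"angry": 0.8, "scared": 0.9, "happy": 0.7,
           "surprised": 0.9, "sad": 0.3, "neutral": 0.1}
CLASSES = list(AROUSAL)

def arousal_from_probs(probs):
    """Expected arousal given class probabilities from the expression model."""
    return float(np.dot(probs, [AROUSAL[c] for c in CLASSES]))

def timed_inference(model, image):
    """Run one forward pass and report the elapsed wall-clock time."""
    start = time.perf_counter()
    probs = model(image)          # model: any callable returning class probabilities
    return arousal_from_probs(probs), time.perf_counter() - start
```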


2020
Author(s): Sayaka Yoshimura, Kei Kobayashi, Tsukasa Ueno, Takashi Miyagi, Naoya Oishi, ...

Background: Previous studies have demonstrated that individuals with autism spectrum disorder (ASD) exhibit dysfunction in the three attention systems (i.e., alerting, orienting, and executive control) as well as atypical relationships among these systems. Additionally, other studies have reported that individuals with subclinical but high levels of autistic traits show attentional tendencies similar to those observed in ASD. Based on these findings, it was hypothesized that autistic traits would affect the functions and relationships of the three attention systems in a general population. Resting-state functional magnetic resonance imaging (fMRI) was performed in 119 healthy adults to investigate relationships between autistic traits and within- and between-system functional connectivity (FC) among the three attention systems. Twenty-six regions of interest, defined as components of the three attention systems by a previous task-based fMRI study, were examined in terms of within- and between-system FC. Autistic traits were assessed using the Autism-Spectrum Quotient.
Results: Correlational analyses revealed that autistic traits were significantly correlated with between-system FC, but not with within-system FC.
Conclusions: Our results imply that a high level of autistic traits, even when subclinical, is associated with the way the three attention systems interact.
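Within- and between-system functional connectivity of the kind analysed above is typically computed from ROI time series; the sketch below shows one minimal way to do it, averaging pairwise Pearson correlations within and across attention-system labels and then correlating the between-system values with AQ scores across participants. The function names, inputs, and the use of simple Pearson correlations are assumptions, not the study's preprocessing or statistical pipeline.

```python
import numpy as np

def fc_matrix(ts):
    """Pearson correlation matrix from an ROI time-series array (time, n_rois)."""
    return np.corrcoef(ts, rowvar=False)

def system_fc(ts, labels):
    """Mean within- and between-system FC, given one system label per ROI."""
    fc = fc_matrix(ts)
    labels = np.asarray(labels)
    iu = np.triu_indices(len(labels), k=1)     # unique ROI pairs
    same = labels[iu[0]] == labels[iu[1]]
    return fc[iu][same].mean(), fc[iu][~same].mean()

def aq_fc_correlation(ts_list, labels, aq):
    """Correlate each subject's mean between-system FC with their AQ score
    (ts_list and aq are hypothetical inputs: one time-series array and one score per subject)."""
    between = [system_fc(ts, labels)[1] for ts in ts_list]
    return np.corrcoef(between, aq)[0, 1]
```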


2021, Vol ahead-of-print (ahead-of-print)
Author(s): Asmita Karmakar, Manisha Bhattacharya, Susmita Chatterjee, Atanu Kumar Dogra

Purpose: The Autism-Spectrum Quotient (AQ) is a widely used tool for quantifying autistic traits in the general population. This study reports the distribution, group differences, and factor structure of autistic traits in an Indian general-population sample. The work also assesses the criterion validity of the AQ across three patient groups: autism spectrum disorder (ASD), obsessive-compulsive disorder, and social anxiety disorder.
Design/methodology/approach: The psychometric properties of the adapted AQ were assessed in 450 age-matched neurotypical university students. Confirmatory factor analysis was conducted to test whether the adapted AQ fits the original factor structure. Test–retest reliability, internal consistency, and criterion validity were estimated, and group differences (gender and field of study) in AQ scores were assessed.
Findings: Autistic traits were found to be continuously distributed in the population, and the patterns of group differences were consistent with previous studies. The adapted AQ had five factors resembling the original factor structure, with a good fit, and 38 items instead of the original 50. Acceptable reliability coefficients were demonstrated, along with criterion validity across the clinical groups.
Originality/value: This work is the first to present the pattern of distribution and factor structure of autistic traits among neurotypical adults from Eastern India, a culturally distinct population, as well as a reliable and valid tool for assessing autistic traits in Bengali, a language with 300 million speakers. The findings add to the growing literature on AQ measurement and on the concept of autism as a quantitative trait examined outside of Western samples.


Author(s): Yi Ji, Khalid Idrissi

This paper proposes an automatic facial expression recognition system that uses new methods for both face detection and feature extraction. Considering that facial expressions involve a small set of muscles and limited ranges of motion, the system recognizes expressions from these changes in video sequences. First, the differences between the neutral and emotional states are detected, so that faces can be located automatically from the changing facial regions. Then, LBP features are extracted and AdaBoost is used to find the most important features for each expression on the essential facial parts. Finally, an SVM with a polynomial kernel is used to classify the expressions. The method is evaluated on the JAFFE and MMI databases, and its performance is better than that of other automatic or manually annotated systems.
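A minimal sketch of the feature pipeline described above (LBP features, AdaBoost-based feature selection, and a polynomial-kernel SVM) using scikit-image and scikit-learn is given below. The patch handling, histogram parameters, and number of retained features are assumptions; the paper selects features per expression, which this simplified global selection only approximates.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def lbp_histogram(gray_patch, P=8, R=1):
    """Uniform LBP histogram for one facial patch (e.g. an eye or mouth region)."""
    codes = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train(X, y, n_keep=50):
    """Rank LBP features with AdaBoost, keep the most important ones,
    then fit a polynomial-kernel SVM on the reduced feature set.

    X : (n_samples, n_features) matrix of concatenated patch histograms
    y : integer expression labels
    """
    ada = AdaBoostClassifier(n_estimators=200).fit(X, y)
    keep = np.argsort(ada.feature_importances_)[::-1][:n_keep]
    svm = SVC(kernel="poly", degree=3).fit(X[:, keep], y)
    return keep, svm
```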

