Affective Facial Expressions Using Auto-Associative Neural Network in Kansei Robot "Ifbot"

Author(s):  
Masayoshi Kanoh ◽  
Tsuyoshi Nakamura ◽  
Shohei Kato ◽  
Hidenori Itoh

The authors propose three methods for enabling a Kansei robot, Ifbot, to convey affective expressions using an emotion space constructed with an auto-associative neural network. First, the authors extract the characteristics of Ifbot's facial expressions by mapping them into this emotion space using the auto-associative neural network, and thereby create its emotion regions. They then propose a method for generating affective facial expressions from these emotion regions. The authors also propose an emotion-transition method that follows the path minimizing the amount of change in the emotion space. Finally, they propose a method for creating personality through the face.
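A minimal Python sketch of the auto-associative (autoencoder) idea behind such an emotion space: the network reconstructs a facial-expression parameter vector through a low-dimensional bottleneck, and the bottleneck coordinates act as positions in the emotion space. The layer sizes, parameter count, and training step are illustrative assumptions, not the authors' implementation.

```python
# Sketch: an auto-associative (autoencoder) network whose bottleneck serves
# as a low-dimensional "emotion space" for facial expression parameters.
# All dimensions and names are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class AutoAssociativeNet(nn.Module):
    def __init__(self, n_params=30, emotion_dim=2):
        super().__init__()
        # Encoder: facial expression parameters -> emotion-space coordinates
        self.encoder = nn.Sequential(
            nn.Linear(n_params, 16), nn.Tanh(),
            nn.Linear(16, emotion_dim),
        )
        # Decoder: emotion-space coordinates -> reconstructed parameters
        self.decoder = nn.Sequential(
            nn.Linear(emotion_dim, 16), nn.Tanh(),
            nn.Linear(16, n_params),
        )

    def forward(self, x):
        z = self.encoder(x)          # position in the emotion space
        return self.decoder(z), z    # reconstruction and embedding

# Training reconstructs each expression from itself; embeddings z of expressions
# labelled with the same emotion would then cluster into "emotion regions".
model = AutoAssociativeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 30)               # dummy batch of expression parameter vectors
opt.zero_grad()
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
opt.step()
```

An emotion transition could then be approximated by interpolating between two bottleneck points and decoding each intermediate point, loosely mirroring the minimum-change path described in the abstract.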

2021 ◽  
Vol 10 (2) ◽  
pp. 182-188
Author(s):  
Ajeng Restu Kusumastuti ◽  
Yosi Kristian ◽  
Endang Setyati

Abstract—The Covid-19 pandemic transformed offline education into online education. To keep the learning process effective, teachers, including kindergarten teachers, were forced to adapt with presentations that hold students' attention. This is a major challenge, since the attention span of young children varies widely and their communication skills are limited. There is therefore a need to identify and classify students' learning interest from facial expressions and gestures during online sessions. In this research, students' learning interest was classified into three teacher-validated classes: Interested, Moderately Interested, and Not Interested. Classification experiments were carried out by training and testing on cropped areas of the center of the face (eyes, mouth, whole face) for facial expression recognition, supported by a gesture area for gesture recognition. Scenarios with four cut areas and with two cut areas were applied to the interest classes using the pretrained weights of transfer learning architectures such as VGG16, ResNet50, and Xception. The learning interest classification tests reached a minimum validation accuracy of 70%: with three interest classes, the four-cut-area scenario using VGG16 achieved 75%, while the two-cut-area scenario using ResNet50 achieved 71%. These results show that the proposed method can be used to determine suitable durations and themes for online kindergarten classes.
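A hedged sketch of the transfer-learning setup the abstract describes: a frozen VGG16 backbone with a new three-class head for the interest classes. The image size, head layer, dataset layout, and hyperparameters are assumptions for illustration, not the paper's configuration.

```python
# Sketch: transfer learning with VGG16 for a three-class learning-interest
# classifier (Interested / Moderately Interested / Not Interested).
# Image size, layer sizes, and dataset paths are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False              # freeze the convolutional backbone
vgg.classifier[6] = nn.Linear(4096, 3)   # new head for the three interest classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Cropped face/gesture areas arranged as class-labelled folders (assumed layout):
# train_set = datasets.ImageFolder("crops/train", transform=preprocess)
# loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(vgg.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# for images, labels in loader:
#     loss = criterion(vgg(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The same head-replacement pattern applies to ResNet50 or Xception backbones; only the final layer and input preprocessing change.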


Author(s):  
Abozar Atya Mohamed Atya ◽  
Khalid Hamid Bilal

The advent of artificial intelligence technology has narrowed the gap between humans and machines, enabling the creation of increasingly lifelike humanoids. Facial expression is an important non-verbal channel for communicating emotion, and this paper gives an overview of emotion recognition using facial expressions. Such techniques have recently improved public security through tracking and recognition, which has drawn strong attention to continued scientific research in the field. Approaches to facial expression recognition include classifiers such as the Support Vector Machine (SVM), Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Active Appearance Models, and other machine learning methods, all of which classify emotions from regions of interest on the face such as the lips, lower jaw, eyebrows, and cheeks. The reviewed studies show that average accuracy for the basic emotions ranged from 51% up to 100%, whereas compound emotions reached only 7% to 13%, indicating that the basic emotions are much easier to recognize.
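As one concrete instance of the classifier families listed above, the following hedged sketch trains an SVM on simple geometric features derived from facial landmarks; the feature choices and landmark indices are toy assumptions, not taken from the review.

```python
# Sketch: an SVM emotion classifier over geometric features from facial
# landmarks (distances around the lips, eyebrows, and eyes), one of the
# classifier families surveyed above. Features and data are toy assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def landmark_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (68, 2) facial landmark coordinates -> simple distance features."""
    mouth_width = np.linalg.norm(landmarks[54] - landmarks[48])
    mouth_open = np.linalg.norm(landmarks[66] - landmarks[62])
    brow_raise = np.linalg.norm(landmarks[19] - landmarks[37])
    return np.array([mouth_width, mouth_open, brow_raise])

# Dummy landmark sets and labels stand in for a real annotated face dataset.
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(200, 68, 2))
X = np.stack([landmark_features(l) for l in landmarks])
y = rng.integers(0, 6, size=200)          # six basic emotions, encoded 0..5

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```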


Perception ◽  
2021 ◽  
pp. 030100662110270
Author(s):  
Kennon M. Sheldon ◽  
Ryan Goffredi ◽  
Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower in “a pleasant social smile” and much higher in “a merely neutral expression,” compared with unmasked SSs. Essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than the unmasked versions.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly in the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and studies that used facial expressions in profile view relied on between-subjects designs or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and profile views using a within-subjects experimental design. Method: The sample comprised 132 Italian university students (88 female, Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, one frontal and one in profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded. Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions which rely mostly on the eye regions.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify whether participants' facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by "happy" and "sad," seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. Processing of the video data is the first stage: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are more susceptible to expression changes. Under the combined effect of these two methods, an image that can be regarded as a local feature map is obtained. After that, the video data are fed into MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained through the image sequence, the selection of key parts better captures the changes in facial expressions brought about by subtle facial movements. By combining local features and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
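The following hedged sketch illustrates the general idea of parallel sub-networks that fuse global (whole-face) and local (key-part) features before classification; the layer sizes, branch design, and fusion are illustrative assumptions rather than the MC-DCN architecture itself.

```python
# Sketch: two parallel sub-networks, one fed the full face frame (global
# features) and one fed an attention-selected local feature map, fused
# before classification. Dimensions and fusion strategy are assumptions.
import torch
import torch.nn as nn

def conv_branch(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 64-dim feature vector
    )

class TwoBranchFER(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.global_branch = conv_branch(3)      # whole-face frame
        self.local_branch = conv_branch(3)       # key-part (local feature) map
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, face, local_map):
        g = self.global_branch(face)
        l = self.local_branch(local_map)
        return self.classifier(torch.cat([g, l], dim=1))  # feature fusion

model = TwoBranchFER()
face = torch.rand(8, 3, 112, 112)        # dummy whole-face frames
local_map = torch.rand(8, 3, 112, 112)   # dummy key-part maps
print(model(face, local_map).shape)      # torch.Size([8, 7])
```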


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, used in a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various constraints, including uneven illumination, head deflection, and varied facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. The face images in each frame are then aligned based on the positions of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions. The spatial features are input to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are input to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets obtained recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves performance on the AFEW dataset by more than 2%, showing strong facial expression recognition in natural environments.
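A hedged sketch of such a cascade: per-frame spatial features from a ResNet backbone, a simple attention-style reweighting, a GRU for temporal features, and a fully connected classifier. Dimensions and the attention form are assumptions for illustration, not the exact network proposed above.

```python
# Sketch: cascade of per-frame spatial feature extraction (ResNet backbone),
# attention-based reweighting, a GRU for temporal features, and an FC
# classifier. Sizes and the attention form are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CascadeFER(nn.Module):
    def __init__(self, n_classes=7, feat_dim=512, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop final FC
        self.attn = nn.Linear(feat_dim, 1)         # per-frame attention weight
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, clip):                       # clip: (B, T, 3, 224, 224)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).flatten(1).view(b, t, -1)
        weights = torch.softmax(self.attn(feats), dim=1)
        feats = feats * weights                    # attention-reweighted features
        _, h = self.gru(feats)                     # temporal features
        return self.fc(h[-1])

model = CascadeFER()
clip = torch.rand(2, 8, 3, 224, 224)               # 2 clips of 8 aligned face frames
print(model(clip).shape)                           # torch.Size([2, 7])
```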


Biology ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 182
Author(s):  
Rodrigo Dalvit Carvalho da Silva ◽  
Thomas Richard Jenkyn ◽  
Victor Alexander Carranza

In reconstructive craniofacial surgery, the bilateral symmetry of the midplane of the facial skeleton plays an important role in surgical planning. By accurately locating the midplane, surgeons can use the intact side of the face as a template for the malformed side when preparing the surgical procedure. However, despite its importance, locating the midline is still a subjective procedure. The aim of this study was to present a 3D technique using a convolutional neural network and geometric moments to automatically calculate the craniofacial midline symmetry of the facial skeleton from CT scans. A total of 195 skull images were assessed to validate the proposed technique. In locating the symmetry planes, the technique was found to be reliable and provided good accuracy. However, further investigations may be carried out to improve the results for asymmetric images.
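A hedged, moment-only sketch of how geometric moments of a binary skull volume could yield a candidate midsagittal plane (centroid plus a principal axis as the plane normal); the convolutional neural network component of the study is omitted, and the choice of axis is an assumption of this sketch.

```python
# Sketch: using geometric moments of a binary skull volume to estimate a
# candidate midsagittal (symmetry) plane. Simplified illustration only;
# the CNN part of the study is not shown.
import numpy as np

def symmetry_plane_from_moments(volume: np.ndarray):
    """volume: 3D binary array (1 = bone voxel). Returns (centroid, normal)."""
    coords = np.argwhere(volume > 0).astype(float)
    centroid = coords.mean(axis=0)                 # zeroth/first-order moments
    centered = coords - centroid
    cov = centered.T @ centered / len(centered)    # second-order central moments
    eigvals, eigvecs = np.linalg.eigh(cov)
    # The principal axis with the smallest variance is taken here as the
    # candidate normal of the midsagittal plane (an assumption of this sketch).
    normal = eigvecs[:, 0]
    return centroid, normal

# Dummy example: a symmetric block, so the plane passes through its middle.
vol = np.zeros((40, 40, 40))
vol[10:30, 5:35, 15:25] = 1
c, n = symmetry_plane_from_moments(vol)
print(c, n)
```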


Facial emotion analysis trains a system to understand the different facial expressions of human beings. The facial expressions are recorded with a camera attached to the user's device. This project can also support online marketing of products, since it detects a person's facial expressions and sentiment. Sentiment analysis is the study of people's sentiments, opinions, and emotions, extracting information from facial expressions across different situations. The main aim is to read human facial expressions using a good-resolution camera so that the machine can identify human sentiments. A convolutional neural network is used in the existing system as an unsupervised approach; this work replaces it with a supervised mechanism, a supervised neural network. The approach can also be applied in the gaming sector, smartphone unlocking, automated facial language translation, and similar areas.
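A hedged sketch of the pipeline the paragraph describes: grabbing a frame from the user's camera with OpenCV, detecting a face, and passing the crop to a small supervised CNN classifier. The network is untrained and the label set is a placeholder assumption, not this project's model.

```python
# Sketch: capture a frame from the user's camera with OpenCV, detect a face,
# and feed the crop to a small supervised CNN emotion classifier.
# Network weights and label set are placeholders, not a trained model.
import cv2
import torch
import torch.nn as nn

LABELS = ["angry", "happy", "neutral", "sad", "surprised"]  # assumed label set

cnn = nn.Sequential(                      # tiny supervised classifier (untrained)
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, len(LABELS)),
)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                 # camera attached to the user's device
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        tensor = torch.from_numpy(face).float().div(255).view(1, 1, 48, 48)
        pred = cnn(tensor).argmax(dim=1).item()
        print("predicted sentiment:", LABELS[pred])
```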

