Facial Expression Database
Recently Published Documents


TOTAL DOCUMENTS: 34 (five years: 13)

H-INDEX: 10 (five years: 2)

2021 ◽  
Vol 2 ◽  
Author(s):  
C. Martin Grewe ◽  
Tuo Liu ◽  
Christoph Kahl ◽  
Andrea Hildebrandt ◽  
Stefan Zachow

High realism of avatars is beneficial for virtual reality experiences such as avatar-mediated communication and embodiment. Previous work, however, suggests that the use of realistic virtual faces can lead to unexpected and undesired effects, including phenomena like the uncanny valley. This work investigates the role of the photographic and behavioral realism of avatars with animated facial expressions in perceived realism and congruence ratings. More specifically, we examine ratings of photographic and behavioral realism, and their mismatch, in differently created avatar faces. Furthermore, we use these avatars to investigate the effect of behavioral realism on the perceived congruence between a video-recorded person's expressions and their imitations by the avatar. We compared two types of avatars, both with four identities created from the same facial photographs. The first type contains expressions designed by an artistic expert; the second contains expressions statistically learned from a 3D facial expression database. Our results show that the avatars with learned facial expressions were rated as more photographically and behaviorally realistic and showed a lower mismatch between the two dimensions. They were also perceived as more congruent with the video-recorded person's expressions. We discuss our findings and the potential benefit of avatars with learned facial expressions for virtual reality experiences and future research on enfacement.


PLoS ONE ◽  
2020 ◽  
Vol 15 (4) ◽  
pp. e0231304
Author(s):  
Tao Yang ◽  
Zeyun Yang ◽  
Guangzheng Xu ◽  
Duoling Gao ◽  
Ziheng Zhang ◽  
...  

2020 ◽  
Vol 38 (4) ◽  
pp. 799-817
Author(s):  
Wang Zhao ◽  
Long Lu

Purpose
Facial expressions provide abundant information for social interaction, and the analysis and use of facial expression data are playing a major driving role across many areas of society. Facial expression data can reflect people's mental state; in health care, its analysis and processing can help improve people's health. This paper introduces several important public facial expression databases and describes the facial expression recognition process. The standard facial expression databases FER2013 and CK+ were used as the main training samples, supplemented by facial expression images collected from 16 Chinese children. Using the VGG19 and ResNet18 deep convolutional neural network models, this paper studies and develops an information system for diagnosing autism from facial expression data.

Design/methodology/approach
The facial expression data for the training samples are based on the standard expression databases FER2013 and CK+, which are widely used facial expression data sets suitable for facial expression recognition research. On this basis, the paper uses the machine learning model support vector machine (SVM) and the deep convolutional neural network models CNN, VGG19, and ResNet18 to perform facial expression recognition.

Findings
In this study, ten typically developing children and ten children with autism were recruited to test the system's accuracy and its diagnostic performance for autism. In testing, facial expression recognition accuracy reached 81.4 percent, and the system readily identified autistic children, verifying the feasibility of recognizing autism through facial expressions.

Research limitations/implications
The CK+ facial expression database contains some adult facial expression images. To improve recognition accuracy for children, more facial expression data from children will be collected as training samples, which should further improve the system's recognition rate.

Originality/value
This research combines facial expression data with current artificial intelligence techniques, and its diagnostic accuracy for autism is higher than that of traditional systems, making the study innovative. The research topic comes from doctors' practical needs, and the contents and methods of the research were discussed with doctors repeatedly. The system can support diagnosing autism as early as possible, promote early treatment and rehabilitation, and reduce patients' economic and psychological burden. The information system therefore offers good social benefit and application value.
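As a minimal sketch of the accuracy metric reported in the Findings, the snippet below computes recognition accuracy as the fraction of matching labels; the expression names and both label lists are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of a recognition-accuracy metric.
# The label lists below are hypothetical stand-ins, not the study's data.
EXPRESSIONS = ["anger", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def recognition_accuracy(predicted, actual):
    """Fraction of samples whose predicted expression matches the ground truth."""
    if len(predicted) != len(actual):
        raise ValueError("label lists must have equal length")
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical ground truth vs. model output for 10 samples.
truth = ["happy", "sad", "happy", "neutral", "fear",
         "anger", "happy", "surprise", "sad", "disgust"]
preds = ["happy", "sad", "neutral", "neutral", "fear",
         "anger", "happy", "surprise", "happy", "disgust"]
print(recognition_accuracy(preds, truth))  # 8 of 10 correct -> 0.8
```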


In many face recognition systems, face detection is an important component. Detecting faces is complex due to the variability across human faces, including color, pose, expression, position, and orientation, so various modeling techniques are used to recognize facial expressions. The proposed system consists of three phases: the facial expression database, pre-processing, and classification. To simulate and assess recognition efficiency based on different variables (network composition, learning patterns, and pre-processing), we use both the Japanese Female Facial Expression Database (JAFFE) and the Extended Cohn-Kanade Dataset (CK+). The data pre-processing approaches compared include face detection, translation, global contrast normalization, and histogram equalization. The best result, 85.52 percent accuracy, was obtained with combined pre-processing, in comparison with single pre-processing phases and raw data. The results indicate that the ANN classifier achieves satisfactory accuracy.
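One of the pre-processing steps the abstract compares, histogram equalization, can be sketched as follows; this is a minimal stdlib-only illustration for an 8-bit grayscale image represented as a plain list of pixel rows, and the tiny "face crop" values are invented.

```python
# Hedged sketch of histogram equalization for an 8-bit grayscale image,
# here a plain list of pixel rows rather than a real image array.
def equalize_histogram(image, levels=256):
    """Spread pixel intensities over the full range via the CDF."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    # Histogram of intensity values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function.
    cdf = []
    total = 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # smallest non-zero CDF value
    # Map each intensity through the normalized CDF.
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A tiny low-contrast "face crop": values crowded into 100..103.
crop = [[100, 100, 101, 101],
        [101, 102, 102, 103],
        [100, 101, 102, 103]]
flat = [p for row in equalize_histogram(crop) for p in row]
print(min(flat), max(flat))  # stretched to 0 255
```

The same operation is what libraries such as OpenCV provide as a single call; the point here is only to show why equalization helps a classifier, by stretching a narrow intensity band across the full range.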


2019 ◽  
Vol 16 (9) ◽  
pp. 3778-3782 ◽  
Author(s):  
Mamta Santosh ◽  
Avinash Sharma

Facial expression recognition has become a prominent research area due to its importance in human-computer interaction. Facial expressions convey a major part of the information in communication, so the field has broad applications. Many techniques have been developed in the literature, but current expression recognition methods still need to be made more efficient. This paper presents a proposed framework for face detection and for recognizing the six universal facial expressions (happiness, anger, disgust, fear, surprise, and sadness) along with the neutral face. The Viola-Jones method and a face landmark detection method are used for face detection. The histogram of oriented gradients is used for feature extraction due to its superiority over other methods. To reduce the dimensionality of the features, principal component analysis is used so that the maximum variation is preserved. A Canberra distance classifier is used to classify the expressions into the different emotions. The proposed method is applied to the Japanese Female Facial Expression Database, and the evaluation shows that it outperforms many state-of-the-art techniques.
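The final classification step described above can be sketched as a nearest-template rule under the Canberra distance; the per-expression template vectors below are invented stand-ins for the PCA-reduced features, not values from the paper.

```python
# Hedged sketch of a nearest-template classifier under the Canberra distance,
# applied to hypothetical PCA-reduced feature vectors (numbers are illustrative).
def canberra(u, v):
    """Canberra distance: sum of |u_i - v_i| / (|u_i| + |v_i|)."""
    total = 0.0
    for a, b in zip(u, v):
        denom = abs(a) + abs(b)
        if denom > 0:  # skip terms where both components are zero
            total += abs(a - b) / denom
    return total

def classify(features, class_templates):
    """Return the expression whose template is Canberra-closest to the features."""
    return min(class_templates, key=lambda c: canberra(features, class_templates[c]))

templates = {  # one reduced feature vector per expression (made up)
    "happy":    [0.9, 0.1, 0.4],
    "sad":      [0.1, 0.8, 0.3],
    "surprise": [0.5, 0.5, 0.9],
}
print(classify([0.85, 0.15, 0.35], templates))  # -> happy
```

Because each term is normalized by the component magnitudes, Canberra distance is sensitive to small differences near zero, which is one reason it can discriminate well on compact PCA features.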


2019 ◽  
Vol 11 (1) ◽  
pp. 1-8
Author(s):  
Malik Abdul Ghani ◽  
Andre Rusli ◽  
Ni Made Satvika Iswari

Facial expressions, in addition to providing important emotional indicators, are an important part of our daily lives. Real-time video processing on mobile devices is a hot topic with very broad applications. Photos with a filter applied are 21% more likely to be viewed and 45% more likely to be commented on by photo consumers. The Fisher-Yates algorithm is used to scramble the filters assigned to each facial expression emotion. The application is built for the iOS operating system in the Swift programming language, using the Core ML and Vision frameworks. Custom Vision is used as the tool for creating and training models. For the model, this study uses datasets from the Cohn-Kanade AU-Coded Facial Expression Database and the Karolinska Directed Emotional Faces. Custom Vision reports training performance, including precision and recall values for the trained data. A facial expression's match with the model is determined by its confidence value. Trials with the Hedonic Motivation System Adoption Model produce a joy score of 79.39%: that proportion of users agree that the application provides joy.
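The Fisher-Yates shuffle mentioned above is a standard unbiased in-place shuffle; a minimal sketch follows, shown here in Python rather than the app's Swift, with invented filter names standing in for the app's filters.

```python
import random

# Hedged sketch of the Fisher-Yates shuffle used to scramble the filters
# assigned to each detected emotion; the filter names are invented.
def fisher_yates_shuffle(items, rng=random):
    """In-place unbiased shuffle: walk backwards, swapping each slot
    with a uniformly chosen slot at or before it."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)  # 0 <= j <= i
        items[i], items[j] = items[j], items[i]
    return items

filters = ["confetti", "sunglasses", "rainbow", "sparkle", "hearts"]
shuffled = fisher_yates_shuffle(filters[:])
print(shuffled)  # same five filters, random order
```

Each of the n! orderings is equally likely, which is why the algorithm is the standard choice for randomizing which filter a given expression triggers.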


2019 ◽  
Vol 55 (3) ◽  
pp. 456-464 ◽  
Author(s):  
Jialin Ma ◽  
Bo Yang ◽  
Ran Luo ◽  
Xiaobin Ding

Electronics ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 385 ◽  
Author(s):  
Ying Chen ◽  
Zhihao Zhang ◽  
Lei Zhong ◽  
Tong Chen ◽  
Juxiang Chen ◽  
...  

Near-infrared (NIR) facial expression recognition is resistant to illumination change. In this paper, we propose a three-stream three-dimensional convolutional neural network with a squeeze-and-excitation (SE) block for NIR facial expression recognition. Each stream is fed a different local region, namely the eyes, nose, or mouth. Using an SE block, the network automatically allocates weights to the different local features to further improve recognition accuracy. Experimental results on the Oulu-CASIA NIR facial expression database show that the proposed method achieves a higher recognition rate than several state-of-the-art algorithms.
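The squeeze-and-excitation idea applied above can be sketched without any deep learning framework: squeeze each channel to a single average, excite through a small bottleneck, and rescale the channels. The feature maps, weight matrices, and bottleneck size below are toy values, not the paper's network.

```python
import math

# Hedged sketch of squeeze-and-excitation channel reweighting;
# all shapes and weights below are toy values.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_reweight(channels, w1, w2):
    """channels: list of feature maps (each a flat list of activations)."""
    # Squeeze: global average pooling, one scalar per channel.
    squeezed = [sum(c) / len(c) for c in channels]
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid gate per channel.
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Scale: reweight every activation in each channel by its gate.
    return [[a * g for a in c] for c, g in zip(channels, gates)]

# Two toy channels and hand-picked weights for a bottleneck of size 1.
maps = [[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]]
w1 = [[0.5, 0.5]]      # 2 channels -> 1 hidden unit
w2 = [[2.0], [-2.0]]   # 1 hidden unit -> 2 channel gates
print(se_reweight(maps, w1, w2))
```

The learned gates let the network emphasize whichever local stream (eyes, nose, or mouth) is most informative for a given input, which is the mechanism the abstract credits for the accuracy gain.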

