EXPLORING ANALYTICAL AND HOLISTIC PROCESSING IN FACIAL EXPRESSION RECOGNITION

2020 ◽  
pp. 103-140
Author(s):  
Yakov A. Bondarenko ◽  
Galina Ya. Menshikova

Background. The study explores two main processes in the perception of facial expression: analytical (perception based on individual facial features) and holistic (integral, non-additive perception of all features). The relative contribution of each process to facial expression recognition is still an open question. Objective. To identify the role of holistic and analytical mechanisms in the process of facial expression recognition. Methods. A method was developed and tested for studying analytical and holistic processes in the task of evaluating subjective differences between expressions, using composite and inverted facial images. A distinctive feature of the work is the use of multidimensional scaling, by which the contribution of holistic and analytical processes to the perception of facial expressions is judged from the subjective space of expression similarity obtained when faces are presented upright and inverted. Results. It was shown, first, that when upright faces are perceived, a characteristic clustering of expressions is observed in the subjective similarity space, which we interpret as a predominance of holistic processes; second, that when faces are inverted, the spatial configuration of expressions changes in a way that may reflect a strengthening of analytical processes. Overall, multidimensional scaling proved effective for addressing the relation between holistic and analytical processes in the recognition of facial expressions. Conclusion. The analysis of subjective similarity spaces of emotional faces is a productive approach to studying the relative contribution of analytical and holistic processes to facial expression recognition.
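To make the multidimensional scaling analysis concrete, here is a minimal Python sketch using scikit-learn. The dissimilarity matrix below is entirely hypothetical, standing in for participants' pairwise similarity ratings of expressions; in the study such a matrix would be built separately for upright and inverted faces and the resulting configurations compared.

```python
# Minimal sketch of MDS over a subjective-dissimilarity matrix (hypothetical data).
import numpy as np
from sklearn.manifold import MDS

expressions = ["anger", "fear", "happiness", "sadness", "surprise", "disgust"]

# Hypothetical symmetric matrix of subjective dissimilarities (0 = identical).
dissimilarity = np.array([
    [0.0, 0.3, 0.9, 0.5, 0.7, 0.4],
    [0.3, 0.0, 0.8, 0.4, 0.6, 0.5],
    [0.9, 0.8, 0.0, 0.7, 0.4, 0.8],
    [0.5, 0.4, 0.7, 0.0, 0.8, 0.6],
    [0.7, 0.6, 0.4, 0.8, 0.0, 0.7],
    [0.4, 0.5, 0.8, 0.6, 0.7, 0.0],
])

# Embed the expressions in a 2-D subjective space; clustering of points
# (or its disruption under face inversion) can then be inspected.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for name, (x, y) in zip(expressions, coords):
    print(f"{name:10s} ({x:+.2f}, {y:+.2f})")
```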

2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often overfit or capture incomplete information because of insufficient data and manual feature selection. Instead, our proposed network, the Multi-features Cooperative Deep Convolutional Network (MC-DCN), attends both to the overall features of the face and to the behavior of its key parts. The first stage processes the video data: an ensemble of regression trees (ERT) is used to obtain the overall contour of the face, and an attention model then picks out the parts of the face most susceptible to expression changes. The combination of these two methods yields what we call a local feature map. The video data are then fed to MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selected key parts better capture the changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
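The paper's exact architecture is not reproduced here, but the following minimal PyTorch sketch illustrates the parallel-subnetwork idea: one branch encodes the global face sequence, another encodes the local key-part maps, and their features are fused for classification. All layer sizes and class names (Branch, TwoStreamFER) are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small CNN encoder applied frame-by-frame to one input stream."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):          # x: (batch, frames, 1, H, W)
        b, t = x.shape[:2]
        feats = self.conv(x.flatten(0, 1)).flatten(1)   # (b*t, 32)
        return self.fc(feats).view(b, t, -1).mean(1)    # average over frames

class TwoStreamFER(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.global_branch = Branch()   # whole-face sequence
        self.local_branch = Branch()    # attention-selected key-part maps
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, global_seq, local_seq):
        fused = torch.cat([self.global_branch(global_seq),
                           self.local_branch(local_seq)], dim=1)
        return self.classifier(fused)

model = TwoStreamFER()
logits = model(torch.randn(2, 8, 1, 64, 64), torch.randn(2, 8, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 7])
```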


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

How to improve performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark problem for emotion recognition under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images, which are then aligned based on the positions of the facial feature points. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions; the spatial features are passed through the hybrid attention module to obtain fused expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of facial expressions, and the temporal features are fed to a fully connected layer to classify the expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences) and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves accuracy on the AFEW dataset by more than 2%, a significant gain for facial expression recognition in natural environments.
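A minimal PyTorch sketch of such a cascade follows, assuming a ResNet-18 backbone, a simplified per-frame attention module in place of the paper's hybrid attention, and a GRU for temporal aggregation; the dimensions and module choices are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CascadeFER(nn.Module):
    def __init__(self, n_classes=7, feat_dim=512, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop fc head
        self.attn = nn.Linear(feat_dim, 1)                         # per-frame score
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, frames):               # frames: (batch, T, 3, 224, 224)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        weights = torch.softmax(self.attn(feats), dim=1)           # (b, T, 1)
        feats = feats * weights                                    # attention-weighted fusion
        _, h = self.gru(feats)                                     # h: (1, b, hidden)
        return self.fc(h.squeeze(0))

model = CascadeFER()
print(model(torch.randn(2, 8, 3, 224, 224)).shape)  # torch.Size([2, 7])
```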


2011 ◽  
Vol 268-270 ◽  
pp. 471-475
Author(s):  
Sungmo Jung ◽  
Seoksoo Kim

Many 3D films use facial expression recognition technologies. Existing approaches require a large number of markers to be attached to the face; a camera is fixed in front of the face, and the movements of the markers are calculated. However, the markers capture only the changes in the regions where they are attached, which makes realistic recognition of facial expressions difficult. Therefore, this study extracted a preliminary eye region from 320×240 images by defining specific location values for the eye, and the final eye region was then selected from the preliminary region. This study suggests an improved method of detecting an eye region that reduces errors arising from noise.
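The abstract does not give the actual location values, so the sketch below only illustrates the fixed-proportion idea: crop a preliminary eye band from a 320×240 face image at assumed proportional coordinates, from which a final eye region would then be refined. The proportions are illustrative assumptions.

```python
import numpy as np

def preliminary_eye_region(face_img: np.ndarray) -> np.ndarray:
    """Crop an assumed eye band from a 320x240 face image (H=240, W=320)."""
    h, w = face_img.shape[:2]                    # expected (240, 320)
    top, bottom = int(0.25 * h), int(0.50 * h)   # eyes roughly in this band
    left, right = int(0.15 * w), int(0.85 * w)
    return face_img[top:bottom, left:right]

face = np.zeros((240, 320), dtype=np.uint8)      # stand-in grayscale image
band = preliminary_eye_region(face)
print(band.shape)                                # (60, 224)
```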


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important role in communication and in reading what a person implies, especially in the field of health, so research in this area advances human-robot communication. This topic has been discussed extensively, and the progress of deep learning, together with the widely proven efficiency of the Convolutional Neural Network (CNN) in image processing, has led to the use of CNNs for the recognition of facial expressions. An automatic system for Facial Expression Recognition (FER) must perform detection and localization of faces in a cluttered scene, feature extraction, and classification. In this research, a CNN performs the FER process. The goal is to label each facial image with one of the seven emotion categories in the JAFFE database, whose expression labels are sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs of different depths using grayscale images from the JAFFE database. The accuracy of the proposed system was 100%.
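As a rough illustration of this kind of classifier, here is a minimal PyTorch sketch of a CNN mapping grayscale face crops to the seven JAFFE expression labels; the depth, input size (48×48), and layer widths are assumptions, since the paper trains networks of several depths.

```python
import torch
import torch.nn as nn

LABELS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24x24 -> 12x12
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
    nn.Linear(128, len(LABELS)),
)

x = torch.randn(4, 1, 48, 48)      # a batch of 48x48 grayscale face crops
print(model(x).shape)              # torch.Size([4, 7])
```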


Algorithms ◽  
2019 ◽  
Vol 12 (11) ◽  
pp. 227 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

In recent years, with the development of artificial intelligence and human–computer interaction, more attention has been paid to the recognition and analysis of facial expressions. Despite considerable success, many problems remain, because facial expressions are subtle and complex; hence, facial expression recognition is still a challenging problem. In most papers, the entire face image is chosen as the input. In daily life, however, people can perceive others' current emotions from just a few facial components (such as the eyes, mouth, and nose), while other areas of the face (such as hair, skin tone, and ears) play a smaller role in determining emotion. If the entire face image is used as the only input, the system will produce unnecessary information and miss important information during feature extraction. To solve this problem, this paper proposes a method that combines multiple sub-regions and the entire face image by weighting, capturing more of the feature information that improves recognition accuracy. Our proposed method was evaluated on four well-known publicly available facial expression databases: JAFFE, CK+, FER2013 and SFEW. The new method showed better performance than most state-of-the-art methods.
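One plausible reading of this weighted combination, sketched below in PyTorch, fuses features from assumed sub-regions (eyes, nose, mouth) with whole-face features using learnable softmax weights; the weighting scheme, region set, and encoders are illustrative assumptions, not necessarily the paper's.

```python
import torch
import torch.nn as nn

class WeightedRegionFER(nn.Module):
    def __init__(self, n_classes=7, feat_dim=64):
        super().__init__()
        def encoder():  # tiny shared-architecture feature extractor
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.face_enc = encoder()
        self.region_enc = nn.ModuleDict(
            {r: encoder() for r in ("eyes", "nose", "mouth")})
        # Learnable fusion weights, one per input stream.
        self.weights = nn.Parameter(torch.ones(4))
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, face, regions):
        w = torch.softmax(self.weights, dim=0)
        fused = w[0] * self.face_enc(face)
        for i, (name, enc) in enumerate(self.region_enc.items(), start=1):
            fused = fused + w[i] * enc(regions[name])
        return self.fc(fused)

model = WeightedRegionFER()
face = torch.randn(2, 1, 64, 64)
regions = {r: torch.randn(2, 1, 32, 32) for r in ("eyes", "nose", "mouth")}
print(model(face, regions).shape)   # torch.Size([2, 7])
```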


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Gilles Vannuscorps ◽  
Michael Andres ◽  
Alfonso Caramazza

What mechanisms underlie facial expression recognition? A popular hypothesis holds that efficient facial expression recognition cannot be achieved by visual analysis alone but additionally requires a mechanism of motor simulation: an unconscious, covert imitation of the observed facial postures and movements. Here, we first discuss why this hypothesis does not necessarily follow from extant empirical evidence. Next, we report experimental evidence against the central premise of this view: we demonstrate that individuals can achieve normotypically efficient facial expression recognition despite a congenital absence of relevant facial motor representations and, therefore, unaided by motor simulation. This underscores the need to reconsider the role of motor simulation in facial expression recognition.


2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Shota Uono ◽  
Wataru Sato ◽  
Reiko Sawada ◽  
Sayaka Kawakami ◽  
Sayaka Yoshimura ◽  
...  

People with schizophrenia or subclinical schizotypal traits exhibit impaired recognition of facial expressions. However, it remains unclear whether the detection of emotional facial expressions is impaired in people with schizophrenia or high levels of schizotypy. The present study examined whether the detection of emotional facial expressions would be associated with schizotypy in a non-clinical population after controlling for the effects of IQ, age, and sex. Participants were asked to respond, as quickly and as accurately as possible, to whether all the faces presented were the same, where displays contained angry or happy faces or their anti-expressions among crowds of neutral faces. Anti-expressions contain a degree of visual change equivalent to that of the normal emotional expressions relative to neutral expressions, but are recognized as neutral. Normal expressions of anger and happiness were detected more rapidly and accurately than their anti-expressions. Additionally, the degree of overall schizotypy was negatively correlated with the effectiveness of detecting normal expressions versus anti-expressions. An emotion-recognition task revealed that the degree of positive schizotypy was negatively correlated with the accuracy of facial expression recognition. These results suggest that people with high levels of schizotypy experienced difficulties detecting and recognizing emotional facial expressions.


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Junhuan Wang

Recognizing facial expressions accurately and efficiently is of great significance to medicine and other fields. To address the low accuracy of traditional face recognition methods, an improved facial expression recognition method is proposed. The proposed method conducts continuous adversarial training between the discriminator and generator of a generative adversarial network (GAN) to enhance the extraction of image features from the dataset under study, enabling high-accuracy recognition of facial expressions. To reduce the amount of computation, the GAN generator is improved using the idea of residual networks: the image is first reduced in dimension and then processed, which preserves the high accuracy of the recognition method while improving real-time performance. The experiments use the JAFFE, CK+, and FER2013 datasets for simulation and verification. The proposed recognition method shows clear advantages on datasets of different sizes, with average recognition accuracies of 96.6%, 95.6%, and 72.8%, respectively, demonstrating that the method generalizes well.
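The abstract leaves the architecture unspecified, so the following is only a skeletal PyTorch sketch of adversarial training with a residual-block generator; the layer sizes, losses, and the single illustrative training step are all assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection

generator = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    ResidualBlock(), ResidualBlock(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

discriminator = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
real = torch.randn(4, 1, 48, 48)              # stand-in face images
fake = generator(torch.randn(4, 1, 48, 48))

# One illustrative adversarial step: the discriminator separates real from
# generated images; the generator is updated to fool it.
d_loss = bce(discriminator(real), torch.ones(4, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(4, 1))
g_loss = bce(discriminator(fake), torch.ones(4, 1))
print(float(d_loss), float(g_loss))
```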


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 324 ◽  
Author(s):  
Ridha Bendjillali ◽  
Mohammed Beladgham ◽  
Khaled Merit ◽  
Abdelmalik Taleb-Ahmed

Facial expression recognition (FER) has become one of the most important fields of research in pattern recognition. In this paper, we propose a method for identifying people's emotions through their facial expressions. Robust against illumination changes, this method combines four steps: the Viola–Jones face detection algorithm, facial image enhancement using the contrast-limited adaptive histogram equalization (CLAHE) algorithm, the discrete wavelet transform (DWT), and a deep convolutional neural network (CNN). We use Viola–Jones to locate the face and facial parts; the facial image is enhanced using CLAHE; facial feature extraction is then done using the DWT; and finally, the extracted features are used directly to train the CNN to classify the facial expressions. Our experiments were performed on the CK+ and JAFFE face databases, with recognition accuracies of 96.46% and 98.43%, respectively.
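A minimal sketch of this preprocessing chain, using OpenCV and PyWavelets, is shown below: Viola–Jones face detection, CLAHE enhancement, and a 2-D DWT whose coefficients would feed the CNN. The parameter values (clip limit, tile grid, wavelet, crop size) are assumptions, not the paper's settings.

```python
import cv2
import numpy as np
import pywt

def preprocess(gray: np.ndarray) -> np.ndarray:
    # 1. Viola-Jones face detection (OpenCV's bundled Haar cascade).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    x, y, w, h = faces[0]                       # take the first detected face
    face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))

    # 2. Contrast-limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    face = clahe.apply(face)

    # 3. One-level 2-D DWT; the approximation coefficients serve as features.
    cA, (cH, cV, cD) = pywt.dwt2(face.astype(np.float32), "haar")
    return cA                                   # 64x64 feature map for the CNN
```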


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Lingdan Wu ◽  
Jie Pu ◽  
John J. B. Allen ◽  
Paul Pauli

Previous studies have consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality reflects an enhanced or an impaired ability to recognize facial expressions, or which underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls in facial expression recognition, and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftward attention bias, which has been argued to be an indicator of hyperactivation of the right hemisphere during facial expression recognition.

