Nonverbal Communication through Facial Expression in Diverse Conditions

2021 ◽  
Author(s):  
Mahesh Goyani

In this chapter, we investigate computer vision techniques for facial expression recognition that increase both the recognition rate and the computational efficiency. Local and global appearance-based features are combined in order to incorporate precise local texture and global shape. We propose a Multi-Level Haar (MLH) feature-based system that is simple and fast to compute. The driving factors behind using the Haar transform are two of its useful properties: signal compression and energy preservation. To exploit facial geometry, we first segment the facial components (eyebrows, eyes, and mouth) and then apply feature extraction to these components only. Experiments are conducted on three well-known publicly available expression datasets (CK, JAFFE, and TFEID) and the in-house WESFED dataset. Performance is measured against various template-matching and machine-learning classifiers; the highest recognition rate is achieved by the proposed operator with a discriminant analysis classifier. We also study the performance of the proposed approach in several scenarios, such as recognition from low-resolution images, recognition from a small training sample space, and recognition in the presence of noise.
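As an illustration only (not the authors' implementation), the sketch below applies a multi-level 2D Haar decomposition to a facial-component patch and keeps the low-frequency approximation as a compact feature vector; the patch size, level count, and averaging form of the transform are assumptions.

```python
# Minimal sketch of a multi-level Haar (MLH) style feature, assuming a
# grayscale patch whose sides are divisible by 2**levels. Illustrative only.
import numpy as np

def haar_level(img):
    """One averaging-form level of the 2D Haar transform: approximation + 3 detail bands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (low-low): compressed, energy-preserving
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def mlh_feature(patch, levels=3):
    """Recursively decompose the patch and flatten the final approximation band."""
    ll = patch.astype(np.float64)
    for _ in range(levels):
        ll, _details = haar_level(ll)
    return ll.ravel()

# Example: a 64x64 eye/mouth patch reduced to an 8x8 = 64-dimensional feature,
# which could then be passed to a discriminant analysis classifier.
patch = np.random.rand(64, 64)
print(mlh_feature(patch, levels=3).shape)   # (64,)
```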

Author(s):  
ZHENGYOU ZHANG

In this paper, we report our experiments on feature-based facial expression recognition within an architecture based on a two-layer perceptron. We investigate the use of two types of features extracted from face images: the geometric positions of a set of fiducial points on a face, and a set of multiscale and multiorientation Gabor wavelet coefficients at these points. They can be used either independently or jointly. The recognition performance with different types of features has been compared, which shows that Gabor wavelet coefficients are much more powerful than geometric positions. Furthermore, since the first layer of the perceptron actually performs a nonlinear reduction of the dimensionality of the feature space, we have also studied the desired number of hidden units, i.e. the appropriate dimension to represent a facial expression in order to achieve a good recognition rate. It turns out that five to seven hidden units are probably enough to represent the space of facial expressions. Then, we have investigated the importance of each individual fiducial point to facial expression recognition. Sensitivity analysis reveals that points on the cheeks and forehead carry little useful information. After discarding them, not only does the computational efficiency increase, but the generalization performance also improves slightly. Finally, we have studied the significance of image scales. Experiments show that facial expression recognition is mainly a low-frequency process, and a spatial resolution of 64 pixels × 64 pixels is probably enough.
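The sketch below is illustrative only, not Zhang's implementation: it samples multi-scale, multi-orientation Gabor magnitude responses at a few hand-picked fiducial points using scikit-image and SciPy; the image size, point coordinates, frequencies, and orientation count are assumptions.

```python
# Minimal sketch: Gabor wavelet magnitudes sampled at fiducial points.
import numpy as np
from scipy.signal import convolve2d
from skimage.filters import gabor_kernel

def gabor_features(image, points, frequencies=(0.1, 0.2, 0.3), n_orient=6):
    """Concatenate Gabor magnitude responses at each (row, col) fiducial point."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            kernel = gabor_kernel(frequency=f, theta=k * np.pi / n_orient)
            resp_re = convolve2d(image, np.real(kernel), mode='same')
            resp_im = convolve2d(image, np.imag(kernel), mode='same')
            magnitude = np.hypot(resp_re, resp_im)
            feats.extend(magnitude[r, c] for r, c in points)
    return np.asarray(feats)

face = np.random.rand(64, 64)                    # stand-in for a 64x64 face image
fiducials = [(20, 20), (20, 44), (44, 32)]       # e.g. eye corners and mouth centre
print(gabor_features(face, fiducials).shape)     # 3 freqs * 6 orientations * 3 points = (54,)
```

A feature vector of this kind would then feed the two-layer perceptron described above.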


2014 ◽  
Vol 543-547 ◽  
pp. 2350-2353
Author(s):  
Xiao Yan Wan

In order to extract the expression features of critically ill patients and enable computer-assisted intelligent nursing, an improved facial expression recognition method based on the active appearance model (AAM) is proposed, with a support vector machine (SVM) used for expression recognition. An AAM-based face model structure is designed, a rough-set attribute reduction algorithm combined with affine transformation is introduced, and invalid and redundant feature points are removed. The expressions of critically ill patients are then classified and recognized by the SVM. Face image poses are adjusted, which improves the self-adaptive performance of expression recognition across patient poses; the new method thereby reduces, to a certain extent, the effect of patient pose on the recognition rate, raising the highest average recognition rate by about 7%. Intelligent monitoring and nursing of critically ill patients is realized through computer vision, enhancing nursing quality and supporting timely treatment and rescue.
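As a minimal sketch of the classification stage only, with synthetic data standing in for the reduced AAM feature-point vectors (the AAM fitting, pose adjustment, and rough-set attribute reduction are assumed to have already produced the feature matrix), a multi-class SVM can be trained as follows.

```python
# Minimal sketch: reduced feature-point vectors -> multi-class SVM (scikit-learn).
# The data here are random placeholders, not patient images.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))      # 40 retained feature-point coordinates per sample
y = rng.integers(0, 6, size=300)    # six expression classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")   # near chance on random data
```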


Author(s):  
Gopal Krishan Prajapat ◽  
Rakesh Kumar

Facial feature extraction and recognition play a prominent role in human non-verbal interaction, and facial expression is one of the crucial cues, alongside pose, speech, behaviour, and actions, used to convey information about a person's intentions and emotions. In this article, an extended local binary pattern (ELBP) is used for feature extraction and principal component analysis (PCA) for dimensionality reduction. The projections of the sample and model images are calculated and compared using Euclidean distance. The combination of extended local binary patterns and PCA (ELBP+PCA) improves the recognition rate and reduces the evaluation complexity. The evaluation of the proposed facial expression recognition approach focuses on the recognition rate. A series of tests is performed to validate the algorithms and to compare their accuracy on the JAFFE and Extended Cohn-Kanade databases.
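A minimal sketch of this pipeline follows, using scikit-image's standard uniform LBP as a stand-in for the paper's extended LBP variant, followed by PCA projection and Euclidean nearest-neighbour matching; image sizes, bin counts, and component counts are illustrative assumptions.

```python
# Minimal sketch: LBP histogram -> PCA projection -> Euclidean matching.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_histogram(image, P=8, R=1):
    """Normalized histogram of uniform LBP codes (P + 2 bins)."""
    codes = local_binary_pattern(image, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
gallery = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(20)]  # model images
probe = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)                         # sample image

pca = PCA(n_components=5)
gallery_proj = pca.fit_transform(np.stack([lbp_histogram(g) for g in gallery]))
probe_proj = pca.transform(lbp_histogram(probe)[None, :])

distances = np.linalg.norm(gallery_proj - probe_proj, axis=1)   # Euclidean comparison
print("closest model image:", int(np.argmin(distances)))
```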


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Aiming at the problem of facial expression recognition under unconstrained conditions, a facial expression recognition method based on an improved capsule network model is proposed. First, the expression image is illumination-normalized using an improved Weber face, and the facial key points are detected by a Gaussian process regression tree. Then, a 3D morphable model (3DMM) is introduced: a 3D face shape consistent with the face in the image is obtained by iterative estimation, further improving the quality of face-pose normalization. We consider that the convolutional features used for facial expression recognition need to be trained from scratch, with as many different samples as possible included in the training process. Finally, this paper combines conventional deep learning techniques with a capsule configuration, adds an attention layer after the primary capsule layer of the capsule network, and proposes an improved capsule structure model suitable for expression recognition. Experimental results on the JAFFE and BU-3DFE datasets show that the recognition rate reaches 96.66% and 80.64%, respectively.
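The sketch below illustrates only the general idea of an attention layer placed after the primary capsule layer, reweighting each capsule's pose vector by a learned scalar score; the layer sizes are hypothetical and this is not the authors' architecture.

```python
# Minimal sketch (PyTorch): scalar attention over primary capsule outputs.
import torch
import torch.nn as nn

class CapsuleAttention(nn.Module):
    """Scores each primary capsule and rescales its pose vector accordingly."""
    def __init__(self, capsule_dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(capsule_dim, 1), nn.Sigmoid())

    def forward(self, capsules):           # capsules: (batch, n_capsules, capsule_dim)
        weights = self.score(capsules)     # (batch, n_capsules, 1), one score per capsule
        return capsules * weights          # attended capsule poses

primary = torch.randn(4, 32, 8)            # e.g. 32 primary capsules with 8-D poses
attended = CapsuleAttention(capsule_dim=8)(primary)
print(attended.shape)                       # torch.Size([4, 32, 8])
```

In a full capsule model, the attended capsules would then be routed to the expression-class capsules.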


Author(s):  
YU-YI LIAO ◽  
JZAU-SHENG LIN ◽  
SHEN-CHUAN TAI

In this paper, a facial expression recognition system based on a cerebellar model articulation controller with a clustering memory (CMAC-CM) is presented. First, facial expression features are automatically preprocessed and extracted from still images in the JAFFE database, which contains frontal-view faces. Next, a block of low-frequency DCT coefficients is obtained by subtracting a neutral image from a given expression image and is rearranged into input vectors fed to the CMAC-CM, which rapidly produces outputs through nonlinear mapping with a look-up table in both the training and recognition phases. Finally, the experimental results report recognition rates for various block sizes of low-frequency coefficients and various cluster sizes of the weight memory. A mean recognition rate of 92.86% is achieved on the test images, and the CMAC-CM takes 0.028 seconds per test image in the testing phase.
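A minimal sketch of the described feature extraction, with synthetic images and an arbitrary block size: the neutral face is subtracted from the expression face, a 2D DCT is taken, and the top-left (low-frequency) block is kept as the input vector for the CMAC-CM.

```python
# Minimal sketch: low-frequency DCT coefficients of an expression-minus-neutral image.
import numpy as np
from scipy.fft import dctn

def low_freq_dct_features(expression_img, neutral_img, block=8):
    diff = expression_img.astype(np.float64) - neutral_img.astype(np.float64)
    coeffs = dctn(diff, norm='ortho')        # 2D DCT of the difference image
    return coeffs[:block, :block].ravel()    # top-left block holds the lowest frequencies

neutral = np.random.rand(64, 64)             # stand-ins for aligned grayscale face images
expression = np.random.rand(64, 64)
print(low_freq_dct_features(expression, neutral, block=8).shape)   # (64,)
```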


2011 ◽  
Vol 121-126 ◽  
pp. 617-621 ◽  
Author(s):  
Chang Yi Kao ◽  
Chin Shyurng Fahn

During the development of the facial expression classification procedure, we evaluate three machine learning methods. We combine ABAs with CARTs, which automatically selects weak classifiers and integrates them into a strong classifier. We present a highly automatic facial expression recognition system in which a face detection procedure first detects and locates human faces in image sequences acquired in real environments; characteristic blocks need not be labeled or chosen in advance. In the face detection procedure, some geometrical properties are applied to eliminate skin-color regions that do not belong to human faces. In the facial feature extraction procedure, binarization and edge detection are performed only on the regions of the eyes, mouth, and eyebrows to obtain 16 landmarks of a human face, from which 16 characteristic distances representing an expression are produced. The facial expression classification procedure employs an ABA to recognize six kinds of expressions. The performance of the system is very satisfactory, with a recognition rate of more than 90%.
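As a rough stand-in for the ABA-with-CART combination (the landmark pairs and data below are synthetic and purely illustrative), the sketch converts 16 landmarks into 16 characteristic distances and classifies them with scikit-learn's AdaBoost over shallow decision trees.

```python
# Minimal sketch: 16 landmarks -> 16 characteristic distances -> boosted trees.
# Requires scikit-learn >= 1.2 for the 'estimator' keyword.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def characteristic_distances(landmarks, pairs):
    """Euclidean distances between selected landmark pairs."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

rng = np.random.default_rng(2)
pairs = [(i, (i + 1) % 16) for i in range(16)]        # 16 illustrative landmark pairs
X = np.stack([characteristic_distances(rng.random((16, 2)), pairs) for _ in range(200)])
y = rng.integers(0, 6, size=200)                      # six expression classes

clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1), n_estimators=50)
clf.fit(X, y)
print(f"training accuracy on random data: {clf.score(X, y):.2f}")
```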


2015 ◽  
Vol 140 ◽  
pp. 83-92 ◽  
Author(s):  
Huibin Li ◽  
Huaxiong Ding ◽  
Di Huang ◽  
Yunhong Wang ◽  
Xi Zhao ◽  
...  
