Facial Expression Recognition Based on Features Fusion

2010 ◽  
Vol 20-23 ◽  
pp. 1253-1259
Author(s):  
Chang Jun Zhou ◽  
Xiao Peng Wei ◽  
Qiang Zhang

In this paper, we propose a novel algorithm for facial expression recognition based on feature fusion with a support vector machine (SVM). First, local and global features are obtained from pre-processed face images. The global features are obtained using singular value decomposition (SVD), while the local features are obtained by applying principal component analysis (PCA) to extract the principal Gabor features. Finally, feature vectors fusing the global and local features are used to train the SVM for facial expression recognition, and computer simulation on the JAFFE database illustrates the effectiveness of this method.
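The fusion pipeline this abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' configuration: the Gabor bank parameters, feature dimensions, and synthetic data below are all assumptions.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def gabor_kernel(size=9, theta=0.0, freq=0.25, sigma=3.0):
    # Real part of a Gabor filter; parameter choices are illustrative
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def global_features(img, k=8):
    # Global descriptor: top-k singular values of the image (SVD)
    return np.linalg.svd(img, compute_uv=False)[:k]

def gabor_responses(img, kernels):
    # Local descriptor: responses of a small Gabor bank, flattened
    return np.concatenate([convolve2d(img, k, mode='valid').ravel() for k in kernels])

# Synthetic stand-in for pre-processed face crops (real work would use JAFFE images)
X_imgs = rng.normal(size=(40, 16, 16))
y = rng.integers(0, 2, size=40)
kernels = [gabor_kernel(theta=t) for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]

gabor_feats = np.stack([gabor_responses(im, kernels) for im in X_imgs])
local = PCA(n_components=10).fit_transform(gabor_feats)   # principal Gabor features
glob = np.stack([global_features(im) for im in X_imgs])   # SVD global features

fused = np.hstack([glob, local])            # feature-level fusion
clf = SVC(kernel='rbf').fit(fused, y)       # SVM on the fused vectors
print(fused.shape)  # (40, 18)
```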

2014 ◽  
Vol 926-930 ◽  
pp. 3598-3603
Author(s):  
Xiao Xiong ◽  
Guo Fa Hao ◽  
Peng Zhong

Face recognition is an important branch of biometric identification and a key research topic in image processing and pattern recognition. Facial recognition technology can effectively overcome the defects of traditional authentication. Research on face recognition under ideal conditions has made some achievements, but many problems remain under interference factors such as changes in illumination, occlusion, expression, and pose. To address this, we propose research on face recognition that integrates global and local features. Practice has shown that by effectively fusing global and local features and building a face recognition system based on their fusion, the recognition rate can be improved, benefiting face recognition applications.


2021 ◽  
Vol 260 ◽  
pp. 03013
Author(s):  
Yuqing Xie ◽  
Haichao Huang ◽  
Jianguang Hong ◽  
Xianke Zhou ◽  
Shilong Wu ◽  
...  

Facial expression recognition (FER) is an important means for machines to perceive human emotions and interact with human beings. Most existing facial expression recognition methods use only a single convolutional neural network to extract the global features of the face; insignificant details and low-frequency features are easily ignored, and part of the facial information is lost. This paper proposes a facial expression recognition method based on a multi-branch structure, which extracts the global and detailed features of the face from the global and local aspects respectively, so as to represent the facial expression in more detail and further improve recognition accuracy. Specifically, we first design a multi-branch network with ResNet-50 as the backbone. The network after Conv Block3 is divided into three branches: the first branch extracts the global features of the face, while the second and third branches cut the face into two and three parts after Conv Block5 to extract detailed features. Finally, the global and detail features are fused in the fully connected layer and fed into the classifier. The experimental results show that the accuracy of this method is 73.7%, 4% higher than that of the traditional ResNet-50, which fully verifies its effectiveness.


2012 ◽  
Vol 12 (01) ◽  
pp. 1250005 ◽  
Author(s):  
J. SHEEBA RANI

In this paper, a hybrid feature extraction technique using 2D principal component analysis (2DPCA) and discrete orthogonal Krawtchouk moments (KM) is used to extract the global and local features of the face. An ensemble of RBF classifiers is used to classify the image, and decision-level fusion using the fuzzy integral generates a more accurate classification than each of the constituent classifiers. The proposed system is evaluated on the ORL and YALE databases. Experimental results show that the combination of global and local features improves system performance, and that the fusion of multiple RBFs using the fuzzy integral performs better than conventional aggregation rules.
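The 2DPCA half of the pipeline is simple enough to sketch. Unlike classical PCA, 2DPCA builds an image covariance matrix directly from 2D image matrices and projects each image onto its leading eigenvectors, yielding a feature matrix per image. The image size, component count, and synthetic data below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
imgs = rng.normal(size=(30, 20, 20))    # synthetic stand-in for face images

# Image covariance matrix: average of (A_i - mean)^T (A_i - mean) over the set
mean = imgs.mean(axis=0)
G = sum((a - mean).T @ (a - mean) for a in imgs) / len(imgs)

# Top-5 eigenvectors of G are the 2DPCA projection axes
vals, vecs = np.linalg.eigh(G)          # eigh: G is symmetric
X = vecs[:, ::-1][:, :5]

Y = imgs @ X                            # each image -> a 20 x 5 feature matrix
print(Y.shape)  # (30, 20, 5)
```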


Optical Character Recognition (OCR) is the automatic reading of optically sensed text components to translate human-readable characters into machine-readable codes. In handwriting, the style of writing varies from person to person, so segmenting and recognizing the characters is a very challenging task. In this paper we propose segmentation and feature extraction techniques to recognize camera-captured, handwritten Kannada documents. Segmentation is done using the projection profile technique and Connected Component Analysis (CCA). As a pre-processing technique to detect the edges of Kannada characters, we propose our own method combining Sobel and Canny edge detection. Feature selection and extraction are done at two levels: global and local. Global features are extracted from the entire image. For local feature extraction, we divide an input character image into four quadrants based on the centroid of the character and extract local features from each quadrant rather than from the whole image. We use a Support Vector Machine (SVM) to classify the handwritten Kannada characters. To evaluate the efficiency of the proposed system we used the KHDD dataset along with our own document and character datasets. The experimental results show that our proposed feature selection and extraction achieved an accuracy of 96.31%; the results are encouraging.
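Projection-profile line segmentation, the first step mentioned above, can be sketched in a few lines: sum the ink per row, and treat runs of non-blank rows as text lines. The toy page below is a synthetic stand-in for a binarized document image.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic binary "document": two text lines separated by blank rows
page = np.zeros((12, 30), dtype=int)
page[2:4, 5:25] = 1
page[7:10, 3:28] = 1

# Horizontal projection profile: ink count per row; blank rows split the lines
profile = page.sum(axis=1)
rows = profile > 0

# Find (start, end) row indices of each run of non-blank rows
edges = np.diff(np.r_[0, rows.astype(int), 0])
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
lines = [(s, e) for s, e in zip(starts, ends)]
print(lines)  # [(2, 4), (7, 10)]
```

The same idea applied to the vertical profile within a line gives character or word boundaries.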


2014 ◽  
Vol 519-520 ◽  
pp. 644-650
Author(s):  
Mian Shui Yu ◽  
Yu Xie ◽  
Xiao Meng Xie

Age classification based on facial images is attracting wide attention for its broad applications in human-computer interaction (HCI). Since human senescence is a tremendously complex process, age classification is still a highly challenging issue. In our study, Local Directional Pattern (LDP) and the Gabor wavelet transform were used to extract global and local facial features, respectively, which were fused based on information fusion theory. The Principal Component Analysis (PCA) method was used for dimensionality reduction of the fused features to obtain a lower-dimensional age characteristic vector. A Support Vector Machine (SVM) multi-class classifier with Error-Correcting Output Codes (ECOC), aimed at multi-class problems such as age classification, was proposed in the paper. Experiments on the public FG-NET age database proved the efficiency of our method.
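The fusion-then-ECOC stage maps directly onto scikit-learn, where `OutputCodeClassifier` wraps a base estimator with error-correcting output codes. The descriptors below are random stand-ins for the LDP and Gabor features, and the class count and dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Synthetic stand-ins for global (LDP-like) and local (Gabor-like) descriptors
global_f = rng.normal(size=(80, 50))
local_f = rng.normal(size=(80, 30))
y = rng.integers(0, 4, size=80)                  # four age classes (assumed)

fused = np.hstack([global_f, local_f])           # information fusion at feature level
reduced = PCA(n_components=15).fit_transform(fused)  # lower-dimensional age vector

# SVM multi-class classifier with Error-Correcting Output Codes
clf = OutputCodeClassifier(SVC(kernel='rbf'), code_size=2.0, random_state=0)
pred = clf.fit(reduced, y).predict(reduced)
print(pred.shape)  # (80,)
```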


2014 ◽  
Vol 543-547 ◽  
pp. 2350-2353
Author(s):  
Xiao Yan Wan

In order to extract the expression features of critically ill patients and realize intelligent computer-assisted nursing, an improved facial expression recognition method based on the active appearance model (AAM) is proposed. A support vector machine (SVM) is adopted for facial expression recognition, the AAM-based face recognition model structure is designed, and an attribute reduction algorithm based on rough set and affine transformation theory is introduced to remove invalid and redundant feature points. The expressions of critically ill patients are then classified and recognized by the SVM. Face image poses are adjusted, improving the self-adaptive performance of facial expression recognition across patient poses. The new method overcomes, to a certain extent, the effect of patient pose on the recognition rate, raising the highest average recognition rate by about 7%. Intelligent monitoring and nursing of critically ill patients are thus realized through computer vision, enhancing nursing quality and ensuring timely treatment.
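The reduce-then-classify shape of this pipeline can be illustrated with a very loose stand-in: here low-variance landmark coordinates play the role of "invalid and redundant feature points" and a variance threshold substitutes for the paper's rough-set attribute reduction, which is considerably more sophisticated. All data and dimensions are synthetic assumptions.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Synthetic AAM-style landmark features: 30 points -> 60 coordinates,
# the last 20 nearly constant (the "invalid/redundant" feature points)
X = rng.normal(size=(50, 60))
X[:, 40:] *= 0.001
y = rng.integers(0, 3, size=50)        # three expression classes (assumed)

# Crude stand-in for rough-set attribute reduction: drop near-constant features
reducer = VarianceThreshold(threshold=1e-4)
X_red = reducer.fit_transform(X)

clf = SVC(kernel='rbf').fit(X_red, y)  # SVM on the reduced feature points
print(X_red.shape)  # (50, 40)
```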


Author(s):  
Zhixian Chen ◽  
Jialin Tang ◽  
Xueyuan Gong ◽  
Qinglang Su

In order to improve the low accuracy of face recognition methods in the e-health setting, this paper proposes a novel face recognition approach based on a convolutional neural network (CNN). In detail, by tuning the convolutional kernels, the rectified linear unit (ReLU) activation function, dropout, and batch normalization, this approach reduces the number of parameters of the CNN model, improves its non-linearity, and alleviates overfitting. In these ways, the accuracy of face recognition is increased. In the experiments, the proposed approach is compared with principal component analysis (PCA) and support vector machine (SVM) methods on the ORL, Cohn-Kanade, and extended Yale-B face recognition datasets, and the results show that this approach is promising.
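A minimal CNN combining the ingredients the abstract names (small convolutional kernels, ReLU, batch normalization, dropout) might look as follows in PyTorch. The layer sizes, input resolution, and class count are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Small kernels keep the parameter count down; BatchNorm stabilizes training;
# ReLU adds non-linearity; Dropout before the classifier alleviates overfitting.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(0.5),
    nn.Linear(32 * 8 * 8, 40),   # e.g. 40 identities, as in ORL
)

logits = model(torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 40])
```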


2020 ◽  
Vol 20 (19) ◽  
pp. 11412-11420 ◽  
Author(s):  
Duo Xu ◽  
Pengfei Jia ◽  
Huaisheng Cao ◽  
Wen Cao ◽  
Guocheng Wu

2006 ◽  
Vol 18 (6) ◽  
pp. 744-750
Author(s):  
Ryouta Nakano ◽  
Kazuhiro Hotta ◽  
Haruhisa Takahashi

This paper presents an object detection method using independent local feature extractors. Since objects are composed of a combination of characteristic parts, a good object detector could be developed if local parts specialized for a detection target were derived automatically from training samples. To do this, we use Independent Component Analysis (ICA), which decomposes a signal into independent elementary signals. We then use the basis vectors derived by ICA as independent local feature extractors specialized for a detection target. These feature extractors are applied to a candidate area, and their outputs are used in classification. However, the dimensionality of the extracted independent local features is very high. To reduce it efficiently, we use Higher-order Local AutoCorrelation (HLAC) features to capture the relations between neighboring features, which may be more effective for object detection than the raw independent local features. To classify detection targets and non-targets, we use a Support Vector Machine (SVM). The proposed method is applied to a car detection problem, and superior performance is obtained in comparison with Principal Component Analysis (PCA).
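The ICA-as-filter-bank idea can be sketched with scikit-learn's `FastICA`. Note the "autocorrelation" terms below are only a crude stand-in for proper HLAC masks, and the patch sizes, component count, and labels are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Synthetic 8x8 patches standing in for training windows of the target class
patches = rng.normal(size=(200, 64))

# ICA basis vectors act as independent local feature extractors
ica = FastICA(n_components=8, random_state=0)
ica.fit(patches)
B = ica.components_                       # 8 x 64 filter bank

def features(window):
    r = B @ window                        # independent local responses
    # Pair neighboring responses: a crude stand-in for HLAC-style terms
    return np.concatenate([r, r * np.roll(r, 1)])

X = np.stack([features(p) for p in patches])
y = rng.integers(0, 2, size=200)          # target vs. non-target labels (synthetic)
clf = SVC().fit(X, y)                     # SVM separates targets from non-targets
print(X.shape)  # (200, 16)
```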


2020 ◽  
Author(s):  
ASHUTOSH DHAMIJA ◽  
R.B DUBEY

Face recognition is one of the most demanding research challenges, since aging affects the shape and structure of the face. Age invariant face recognition (AIFR) is a relatively new area of face recognition studies, which has recently gained considerable interest for real-world implementations due to its huge potential and relevance. The field, however, is still evolving, providing substantial room for further study and progress in accuracy. Major issues with AIFR involve large variations in appearance, texture, and facial features, as well as discrepancies in pose and illumination. These problems restrict the AIFR systems developed and intensify identity recognition tasks. To address them, a new technique, Quadratic Support Vector Machine-Principal Component Analysis (QSVM-PCA), is introduced. Experimental results suggest that our QSVM-PCA achieved better results than other existing techniques on the FGNET face-aging dataset, especially when the age range is large. The maximum accuracy achieved by the demonstrated methodology is 98.87%.
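Reading "QSVM" as an SVM with a quadratic (degree-2 polynomial) kernel downstream of PCA, the pipeline can be sketched with scikit-learn. The component count, kernel settings, and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
# Synthetic face vectors; real experiments would use FGNET images
X = rng.normal(size=(60, 100))
y = rng.integers(0, 3, size=60)          # identity labels (synthetic)

# QSVM-PCA: PCA dimensionality reduction feeding a quadratic-kernel SVM
qsvm_pca = make_pipeline(PCA(n_components=20),
                         SVC(kernel='poly', degree=2, coef0=1.0))
pred = qsvm_pca.fit(X, y).predict(X)
print(pred.shape)  # (60,)
```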

