POSE-EXPRESSION NORMALIZATION FOR FACE RECOGNITION USING CONNECTED COMPONENTS ANALYSIS

Author(s):  
Jae-Young Choi
Taeg-Keun Whangbo
Young-Gyu Yang
Murlikrishna Viswanathan
Nak-Bin Kim

Accurate measurement of pose and expression can increase the efficiency of recognition systems by avoiding the recognition of spurious faces. This paper presents a novel and robust pose- and expression-invariant face recognition method that improves on existing face recognition techniques. First, the TSL color model is applied to detect the facial region, and the X-Y-Z pose vector of the face is estimated using connected components analysis. Second, the input face is mapped onto a deformable 3D facial model. Third, using the estimated pose vector and the expression's action units, the mapped face is transformed into a frontal face suitable for recognition. Finally, regions damaged during normalization are reconstructed using PCA. Several empirical tests validate the face detection model and the method for estimating facial pose and expression. In addition, the tests suggest that the recognition rate is greatly boosted by normalizing pose and expression.
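A minimal sketch of the detection step described above (skin-color segmentation followed by connected components analysis to isolate the face blob) is given below. It is an illustration only: the paper uses the TSL color model, whereas this sketch substitutes a common YCrCb skin threshold, so all numeric ranges are assumptions rather than the authors' parameters.

```python
# Sketch: skin-color segmentation + connected components analysis.
# YCrCb thresholds stand in for the paper's TSL model (assumption).
import cv2
import numpy as np

def detect_face_region(bgr_image):
    """Return (x, y, w, h) of the largest skin-colored connected component."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Widely used Cr/Cb skin range (assumption, not the authors' parameters).
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Connected components analysis: label the blobs and keep the largest one.
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(skin_mask)
    if num_labels < 2:  # only the background component was found
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h, _ = stats[largest]
    return int(x), int(y), int(w), int(h)
```

The bounding box of the retained component would then feed the 3D model mapping and pose estimation stages.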

Author(s):  
Yallamandaiah S.
Purnachand N.

In the area of computer vision, face recognition is a challenging task because of pose, facial expression, and illumination variations, and the performance of face recognition systems degrades in unconstrained environments. In this work, a new face recognition approach is proposed using a guided image filter and a convolutional neural network (CNN). The guided image filter is a smoothing operator that performs well near edges. Initially, the Viola-Jones algorithm is used to detect the face region, which is then smoothed by a guided image filter. The proposed CNN is then used to extract features and recognize the faces. The experiments were performed on face databases such as ORL, JAFFE, and YALE and attained recognition rates of 98.33%, 99.53%, and 98.65%, respectively. The experimental results show that the suggested face recognition method attains better results than some state-of-the-art techniques.
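The pipeline described above can be sketched as follows: Viola-Jones detection, guided image filtering, then a small CNN. The filter parameters, input size, and network layout are placeholders rather than the authors' configuration; OpenCV (with opencv-contrib for cv2.ximgproc) and Keras are assumed available.

```python
# Sketch: Viola-Jones detection -> guided filter smoothing -> CNN classifier.
import cv2
import numpy as np
from tensorflow import keras

def preprocess(gray_image):
    """Detect the face, smooth it with a guided filter, return a 64x64 patch."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray_image[y:y + h, x:x + w].astype("float32") / 255.0
    # The face itself guides the filter, so edges are preserved while flat
    # regions are smoothed (radius and eps are illustrative values).
    face = cv2.ximgproc.guidedFilter(guide=face, src=face, radius=4, eps=1e-3)
    return cv2.resize(face, (64, 64))[..., np.newaxis]

def build_cnn(num_classes):
    """Placeholder CNN: two conv blocks and a dense softmax classifier."""
    model = keras.Sequential([
        keras.layers.Input((64, 64, 1)),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```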


2013, Vol. 10 (2), pp. 1330-1338
Author(s):  
Vasudha S
Neelamma K. Patil
Dr. Lokesh R. Boregowda

Face recognition is one of the important applications of image processing, and it has gained significant attention in a wide range of law-enforcement areas in which security is of prime concern. Although existing automated machine recognition systems have reached a certain level of maturity, their accomplishments are limited by real-time challenges. Face recognition systems are highly sensitive to appearance variations due to lighting, expression, and aging. The major metric for modeling the performance of a face recognition system is its recognition accuracy. This paper proposes a novel method that improves recognition accuracy and also prevents face datasets from being tampered with through image splicing techniques. The proposed method uses a non-statistical procedure that avoids a training step for face samples, thereby avoiding the generalizability problem caused by statistical learning procedures. Because the face is divided into several local patches, the method performs well on images with partial occlusion and lighting variations. Considerable improvements in recognition rate and storage space are achieved by storing training images in the compressed domain and selecting significant features from the superset of feature vectors for actual recognition.
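In the spirit of the training-free, patch-based matching described above, the sketch below splits a face into local patches and lets each patch vote for its nearest gallery identity. The patch grid, distance metric, and voting rule are assumptions, and the compressed-domain storage and feature-selection steps from the abstract are omitted; images are assumed pre-aligned to a common size.

```python
# Sketch: training-free, patch-wise nearest-neighbor matching with voting.
import numpy as np

def split_patches(face, grid=4):
    """Split a 2D face image into grid x grid non-overlapping, flattened patches."""
    h, w = face.shape
    ph, pw = h // grid, w // grid
    patches = [face[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw].ravel()
               for i in range(grid) for j in range(grid)]
    return np.asarray(patches, dtype=np.float32)

def identify(probe, gallery):
    """gallery maps identity name -> face image; returns the winning identity."""
    probe_patches = split_patches(probe)
    names = list(gallery)
    # Per-patch Euclidean distances to every gallery face
    # (an occluded patch only spoils its own vote, not the whole match).
    dists = np.stack([
        np.linalg.norm(probe_patches - split_patches(gallery[name]), axis=1)
        for name in names
    ])                                  # shape: (identities, patches)
    winners = np.argmin(dists, axis=0)  # closest identity per patch
    votes = np.bincount(winners, minlength=len(names))
    return names[int(np.argmax(votes))]
```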


Author(s):  
Leonel Ramírez-Valdez
Rogelio Hasimoto-Beltran

One of the main problems in face recognition systems is recognizing an input face with an expression different from those available in the training database. In this work, we propose a new 3D-face expression synthesis approach for expression-independent face recognition systems (FRS). Unlike current schemes in the literature, all the steps involved in our approach (face denoising, registration, and expression synthesis) are performed in the 3D domain. Our final goal is to increase the flexibility of 3D-FRS by allowing them to artificially generate multiple face expressions from a neutral-expression face. A generic 3D range image is modeled by the Finite Element Method with three simplified layers representing the skin, fatty tissue, and the cranium. The facial muscular anatomy is superimposed on the 3D model for the synthesis of expressions. Our approach can be divided into three main steps: a denoising algorithm, applied to remove long peaks present in the original 3D-face samples; automatic control-point detection, to locate facial landmarks such as eye and mouth corners and the nose tip, which are helpful in the recognition process; and face registration of a 3D-face model with each neutral-expression sample face in the training database in order to augment the training set (with 18 predefined expressions). Additional expressions can be learned from input faces, or an unknown expression can be transformed to the closest known expression. Our results show that the 3D-face model faithfully reproduces the neutral-expression faces in the training database while providing a natural change of expression. Moreover, the inclusion of our expression synthesis approach in a simple 3D-FRS based on Fisherfaces significantly increased the recognition rate without requiring complex 3D-face recognition schemes.
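A minimal sketch of the Fisherfaces recognizer mentioned above (PCA for dimensionality reduction, then Fisher's linear discriminant, then a nearest-neighbor decision) is shown below; the synthesized expressions would simply be added as extra training rows per subject. Component counts and the 1-NN classifier choice are assumptions, not the paper's exact configuration.

```python
# Sketch: Fisherfaces = PCA -> LDA -> nearest-neighbor classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def train_fisherfaces(train_vectors, labels):
    """train_vectors: (n_samples, n_pixels) flattened range images."""
    n_classes = len(np.unique(labels))
    model = make_pipeline(
        # Classic Fisherfaces keeps at most N - c principal components ...
        PCA(n_components=min(100, len(train_vectors) - n_classes)),
        # ... then projects onto the c - 1 Fisher discriminant directions.
        LinearDiscriminantAnalysis(n_components=n_classes - 1),
        KNeighborsClassifier(n_neighbors=1),
    )
    model.fit(train_vectors, labels)
    return model

# Usage (hypothetical arrays): augment the gallery with synthesized
# expressions, then classify probes with
#   train_fisherfaces(gallery_vectors, gallery_labels).predict(probe_vectors)
```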

