Robust Face Recognition System Based on a Multi-Views Face Database

Author(s):  
Dominique Ginhac ◽  
Fan Yang ◽  
Xiaojuan Liu ◽  
Jianwu Dang ◽  
Michel Paindavoine


Author(s):  
Widodo Budiharto

Variation in illumination is one of the main challenges in face recognition. It has been shown that differences caused by illumination variation can be more significant than differences between individuals. Recognizing faces reliably across changes in pose and illumination with PCA has proved to be a much harder problem, because the eigenfaces method compares raw pixel intensities. To address this, this research proposes an online face recognition system using improved PCA for a service robot operating in an indoor environment, based on stereo vision. Test images are augmented by generating random values that vary the intensity of the face images. A program for online training is also developed, in which the test images are captured in real time from the camera. Varying the illumination of the test images increases accuracy: 95.5% on the ITS face database, compared to 95.4% on the AT&T face database and 72% on the Indian face database. These results are still being evaluated and will be improved in future work.
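The intensity-variation idea can be illustrated with a short sketch. The snippet below is only a minimal illustration, not the authors' implementation: it assumes grayscale faces stored as NumPy arrays, uses scikit-learn's PCA as the eigenfaces projection, and picks arbitrary scaling factors, image sizes, and component counts in place of the parameters used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def vary_intensity(face, low=0.7, high=1.3):
    """Simulate an illumination change by scaling pixel intensities with a random factor."""
    factor = rng.uniform(low, high)
    return np.clip(face * factor, 0, 255)

def augment(faces, copies=3):
    """Return the original faces plus `copies` intensity-varied versions of each."""
    out = [faces]
    for _ in range(copies):
        out.append(np.stack([vary_intensity(f) for f in faces]))
    return np.concatenate(out, axis=0)

# Toy data: 20 grayscale 64x64 faces, 5 identities with 4 images each.
faces = rng.integers(0, 256, size=(20, 64, 64)).astype(np.float64)
labels = np.repeat(np.arange(5), 4)

X = augment(faces).reshape(-1, 64 * 64)        # flatten the augmented set to vectors
y = np.tile(labels, 4)                         # labels repeat for each augmented copy

pca = PCA(n_components=30).fit(X)              # eigenfaces-style subspace
train_proj = pca.transform(X)

def recognize(probe):
    """Project a probe face and return the label of the nearest training sample."""
    p = pca.transform(probe.reshape(1, -1))
    return y[np.argmin(np.linalg.norm(train_proj - p, axis=1))]

print(recognize(vary_intensity(faces[7])))     # same identity despite an intensity change
```

Augmenting each training face with several randomly rescaled copies is one simple way to make the pixel-intensity comparison less sensitive to lighting, which is the effect the abstract describes.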


Author(s):  
Ting Shan ◽  
Abbas Bigdeli ◽  
Brian C. Lovell ◽  
Shaokang Chen

In this chapter, we propose a pose-variability compensation technique that synthesizes realistic frontal face images from non-frontal views. It is based on modeling the face via active appearance models and estimating the pose through a correlation model. The proposed technique is coupled with adaptive principal component analysis (APCA), which was previously shown to perform well in the presence of both lighting and expression variations. The proposed recognition techniques, though advanced, are not computationally intensive, so they are well suited to embedded system environments. Indeed, the authors have implemented an early prototype of a face recognition module on a mobile camera phone, so the camera can be used to identify the person holding the phone.
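The pipeline described above (frontal-view synthesis followed by subspace matching) can be outlined structurally. In the sketch below, `synthesize_frontal` is a hypothetical placeholder for the AAM-based pose compensation and correlation-model pose estimate, and plain PCA with nearest-neighbour matching stands in for APCA; none of this is the authors' code.

```python
import numpy as np

class PoseCompensatedRecognizer:
    """Structural sketch: frontal-view synthesis followed by a PCA-style projection."""

    def __init__(self, synthesize_frontal, n_components=8):
        self.synthesize_frontal = synthesize_frontal   # injected stand-in for AAM frontalization
        self.n_components = n_components

    def fit(self, faces, labels):
        X = np.stack([self.synthesize_frontal(f) for f in faces]).reshape(len(faces), -1)
        self.mean = X.mean(axis=0)
        # Plain PCA via SVD, used here as a simplified stand-in for adaptive PCA (APCA).
        U, _, _ = np.linalg.svd((X - self.mean).T, full_matrices=False)
        self.basis = U[:, : self.n_components]
        self.train = (X - self.mean) @ self.basis
        self.labels = np.asarray(labels)
        return self

    def predict(self, face):
        x = self.synthesize_frontal(face).reshape(1, -1) - self.mean
        p = x @ self.basis
        return self.labels[np.argmin(np.linalg.norm(self.train - p, axis=1))]

# Dummy "frontalization" (identity mapping), just to keep the example executable.
def identity_frontal(img):
    return img

rng = np.random.default_rng(1)
faces = rng.random((10, 32, 32))
model = PoseCompensatedRecognizer(identity_frontal).fit(faces, labels=np.arange(10))
print(model.predict(faces[3]))   # -> 3
```

The point of the structure is that pose handling and subspace recognition stay decoupled, which keeps the per-query cost low, consistent with the embedded-system motivation in the chapter.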


2022 ◽  
Vol 12 (1) ◽  
pp. 497
Author(s):  
Vicente Pavez ◽  
Gabriel Hermosilla ◽  
Francisco Pizarro ◽  
Sebastián Fingerhuth ◽  
Daniel Yunge

This article shows how to create a robust thermal face recognition system based on the FaceNet architecture. We propose a method for generating thermal images to create a thermal face database with six different attributes (frown, glasses, rotation, normal, vocal, and smile) based on various deep learning models. First, we use StyleCLIP, which manipulates the latent space of the input visible image to add the desired attributes to the visible face. Second, we use the GANs N' Roses (GNR) model, a multimodal image-to-image framework that uses style and content maps to generate thermal images from visible images with a generative adversarial approach. Using the proposed generator system, we create a database of synthetic thermal faces composed of more than 100k images corresponding to 3227 individuals. When trained and tested on the synthetic database, the Thermal-FaceNet model obtained 99.98% accuracy. Furthermore, when tested on a real database, the accuracy exceeded 98%, validating the proposed thermal image generator system.
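As a rough illustration of how a FaceNet-style recognizer consumes such face crops, the sketch below embeds images with a small placeholder convolutional encoder and decides identity by cosine similarity of L2-normalized embeddings. The encoder, input size, and threshold are assumptions for the example only; they are not the Thermal-FaceNet model or the StyleCLIP/GNR generators from the article.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmbedder(nn.Module):
    """Placeholder encoder producing L2-normalized embeddings, FaceNet-style."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, emb_dim)

    def forward(self, x):
        z = self.head(self.features(x).flatten(1))
        return F.normalize(z, dim=1)              # unit-length embeddings

def same_identity(model, img_a, img_b, threshold=0.7):
    """Verify two thermal face crops by cosine similarity of their embeddings."""
    with torch.no_grad():
        ea, eb = model(img_a), model(img_b)
    return (ea * eb).sum(dim=1).item() > threshold

model = TinyEmbedder().eval()
a = torch.rand(1, 1, 112, 112)                    # single-channel thermal crops
b = torch.rand(1, 1, 112, 112)
print(same_identity(model, a, b))
```

In a FaceNet-style setup the encoder would be trained with a metric-learning objective (e.g., triplet loss) so that embeddings of the same identity cluster together; here the untrained toy network only demonstrates the embedding-and-threshold matching step.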

