Face Frontalization

Author(s):  
Cherukupally Sarika

Face recognition is widely used in computer vision; most embedded and electronic devices use face verification for security. Face recognition (FR) identifies a person in a video or digital image. To implement it, a database must hold many images of each person covering different face poses and expressions, and storing multiple images of a single individual consumes significant memory. The input probe image must match an image in the database, or the face will not be recognized. Our proposed model reduces the need to store multiple images of a single person: if the input is a non-frontal image, the model converts it into a frontal image. The input image passes through several image-processing steps. An image is analog in nature, representing a continuous range of position and intensity values.

Author(s):  
A. F. M. Saifuddin Saif ◽  
Anton Satria Prabuwono ◽  
Zainal Rasyid Mahayuddin ◽  
Teddy Mantoro

Face recognition has been used in various applications where personal identification is required. Other methods of identification and verification, such as iris scans and fingerprint scans, require high-quality and costly equipment. The objective of this research is to present an extended principal component analysis model that recognizes a person by comparing the characteristics of the face with those of new individuals, across different dimensions of face images. The main focus is on frontal two-dimensional images taken in a controlled environment, i.e., with constant illumination and background. The approach requires only an ordinary camera providing a 2-D frontal image of the person for the face recognition process. An Extended Principal Component Analysis (EPCA) technique is used in the proposed face recognition model. Based on the experimental results, the proposed EPCA is expected to perform well across different face images, even when a large number of training images increases the computational complexity of the database.
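The core of a PCA-based recognizer like the one described is straightforward to sketch. The minimal numpy example below is not the authors' EPCA extension (whose details the abstract does not give); it shows only the standard eigenface pipeline that EPCA builds on: fit a principal-component basis to flattened face images, project faces into the low-dimensional space, and identify a probe by nearest neighbor. The function names and the nearest-neighbor matcher are illustrative assumptions.

```python
import numpy as np

def pca_fit(faces, k):
    """Fit a PCA (eigenface) basis to flattened face images.

    faces: (n_samples, n_pixels) array; k: number of components.
    Returns the mean face and the top-k eigenvectors, shape (k, n_pixels).
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data matrix gives the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def pca_project(face, mean, components):
    """Project one flattened face into the low-dimensional eigenface space."""
    return components @ (face - mean)

def nearest_identity(probe_code, gallery_codes):
    """Identify a probe by its nearest neighbor among projected gallery faces."""
    dists = np.linalg.norm(gallery_codes - probe_code, axis=1)
    return int(np.argmin(dists))
```

In a real system each row of `faces` would be a grayscale face image flattened to a vector; here the pipeline works the same on any float matrix.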


Author(s):  
Edy Winarno ◽  
Agus Harjoko ◽  
Aniati Murni Arymurthy ◽  
Edi Winarko

<p>The main problem in face recognition systems based on half-face patterns is how to anticipate pose and illuminance variations to improve the recognition rate. To solve this problem, we can use the two lenses of a stereo vision camera in the face recognition system. A stereo vision camera has left and right lenses, each producing its own 2D image, so it can capture two 2D face images at slightly different angles. The two angles yield a more detailed image of the face and better lighting levels on each of the left and right lenses. In this study, we propose a face recognition technique using the two lenses of a stereo vision camera, named symmetrical half-join. Symmetrical half-join normalizes the detected face image from each of the left and right lenses, then crops and merges the two images. Tests of the face recognition rate under a variety of poses and illumination variations show that the symmetrical half-join method provides high face recognition accuracy and can cope with the given pose and illumination variations. The proposed model achieves an 86% to 97% recognition rate across poses with angle variations between 0° and 22.5°. For illuminance variations measured with a lux meter, it achieves a 90% to 100% recognition rate for lighting levels of at least dim (above 10 lux).</p>
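The abstract does not spell out the exact cropping and merging rule, so the following numpy sketch is only a guess at the "crop and merge" step: it joins the left half of the (already normalized and aligned) left-lens face with the right half of the right-lens face. The function name and the half-split rule are assumptions.

```python
import numpy as np

def symmetrical_half_join(left_img, right_img):
    """Merge the left half of the left-lens face image with the right half
    of the right-lens face image. Both inputs are assumed to be the same
    HxW face crop, already normalized and aligned."""
    h, w = left_img.shape[:2]
    half = w // 2
    return np.concatenate([left_img[:, :half], right_img[:, half:]], axis=1)
```

The merged image then feeds whatever recognizer follows, exactly as a single-camera face crop would.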


Author(s):  
Jayanthi Raghavan ◽  
Majid Ahmadi

In this work, a deep CNN-based model is proposed for face recognition. The CNN is employed to extract unique facial features, and a softmax classifier in the fully connected layer of the CNN classifies the facial images. Experiments conducted on the Extended Yale B and FERET databases, with small batch sizes and a low learning rate, show that the proposed model improves face recognition accuracy. Accuracy of up to 96.2% is achieved with the proposed model on the Extended Yale B database. To improve the accuracy further, preprocessing techniques such as SQI, HE, LTISN, GIC, and DoG are applied before the CNN model. After applying these preprocessing techniques, an improved accuracy of 99.8% is achieved with the deep CNN model on the Extended Yale B database. On the FERET database with frontal faces, the CNN model yields a maximum accuracy of 71.4% before preprocessing; after applying the above-mentioned preprocessing techniques, the accuracy improves to 76.3%.
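Two of the named preprocessing techniques, histogram equalization (HE) and gamma intensity correction (GIC), can be sketched in a few lines of numpy. The gamma value, bin count, and the [0, 1] float image convention below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gamma_intensity_correction(img, gamma=0.4):
    """GIC: remap intensities with a power law to lift shadowed regions.
    img: float array with values in [0, 1]."""
    return np.power(img, gamma)

def histogram_equalization(img, bins=256):
    """HE: spread the intensity histogram over the full [0, 1] range by
    mapping each pixel through the normalized cumulative histogram."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize so the mapping ends at 1.0
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)
```

In the described pipeline, transforms like these are applied to each face image before it is fed to the CNN, flattening illumination differences between training and test conditions.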


Author(s):  
Yu-Sheng Lin ◽  
Zhe-Yu Liu ◽  
Yu-An Chen ◽  
Yu-Siang Wang ◽  
Ya-Liang Chang ◽  
...  

We study XAI (explainable AI) on the face recognition task, particularly face verification. Face verification has become a crucial task in recent years and has been deployed in plenty of applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate. In this article, we propose a novel similarity metric, called explainable cosine (xCos), that comes with a learnable module that can be plugged into most verification models to provide meaningful explanations. With the help of xCos, we can see which parts of the two input faces are similar, where the model pays attention, and how the local similarities are weighted to form the output xCos score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, not only providing novel and desirable model interpretability for face verification but also preserving accuracy when plugged into existing face recognition models.
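The aggregation idea behind xCos (local patch similarities weighted by an attention map) can be illustrated with a small numpy sketch. In the real xCos the weights come from a learnable module; here a fixed weight vector stands in, and the grid size and function names are assumptions:

```python
import numpy as np

def cosine(u, v):
    """Plain cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def weighted_local_similarity(feat_a, feat_b, weights):
    """Aggregate per-patch cosine similarities with an attention map.

    feat_a, feat_b: (n_patches, d) grids of local features for two faces.
    weights: (n_patches,) nonnegative attention weights summing to 1.
    Returns the weighted score and the per-patch similarity map, which is
    what makes the decision inspectable: one can see which patches agree.
    """
    local = np.array([cosine(a, b) for a, b in zip(feat_a, feat_b)])
    return float(local @ weights), local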


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 578 ◽  
Author(s):  
Moisés Márquez-Olivera ◽  
Antonio-Gustavo Juárez-Gracia ◽  
Viridiana Hernández-Herrera ◽  
Amadeo-José Argüelles-Cruz ◽  
Itzamá López-Yáñez

Face recognition is a natural skill that a child performs from the first days of life; unfortunately, some people have visual or neurological problems that prevent them from performing the process visually. This work describes a system that integrates artificial intelligence to learn the faces of the people with whom the user interacts daily. We propose a new hybrid model of Alpha-Beta associative memories (Amαβ) with a correlation matrix (CM) and K-nearest neighbors (KNN), where the Amαβ-CMKNN was trained with biometric feature vectors generated from face images of people showing different facial expressions such as happiness, surprise, anger, and sadness. To test the performance of the hybrid model, two experiments that differ in the selection of the parameters characterizing the face were conducted. The performance of the proposed model was tested on the CK+, CAS-PEAL-R1, and Face-MECS (own) databases, which test the Amαβ-CMKNN with faces of subjects of both sexes and of different races, facial expressions, poses, and environmental conditions. The hybrid model was able to recall 100% of the faces learned during training, while in tests presenting faces that vary from those learned, results range from 95.05% in controlled environments to 86.48% in real environments using the proposed integrated system.
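The KNN stage of the hybrid model can be sketched independently of the associative-memory components, whose details are not given in the abstract. Below is a minimal majority-vote classifier over biometric feature vectors; Euclidean distance and k=3 are assumed defaults, not the paper's parameters:

```python
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify a biometric feature vector by majority vote among its
    k nearest training vectors under Euclidean distance.

    train_x: (n, d) training feature vectors; train_y: (n,) identity labels.
    """
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest vectors
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]        # most frequent identity wins
```

In the described system, the feature vectors fed to this stage would be the characteristic biometric vectors extracted from the face images, not raw pixels.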


The objective of face recognition is, given an image of a human face, to identify the class to which the face belongs. Face classification is a useful task that can serve as a base for many real-time applications such as authentication, tracking, and fraud detection. Given a photo of a person, we humans can easily identify who the person is without any effort, but manual systems are biased, labor-intensive, and expensive. Automatic face recognition has therefore been an important research topic due to its importance in real-time applications. Recent advances in GPUs have taken many applications, such as image classification, handwritten digit recognition, and object recognition, to the next level. According to the literature, deep CNN (convolutional neural network) features can effectively represent an image. In this paper we propose to use deep CNN-based features for the face recognition task, and we also investigate the effectiveness of different deep CNN models for this task. Facial features are first extracted from pretrained CNN models such as VGG16, VGG19, ResNet50, and Inception V3; a deep neural network is then used for the classification task. To show the effectiveness of the proposed model, the ORL dataset is used for our experimental studies. Based on the experimental results, we claim that deep CNN-based features give better performance than existing hand-crafted features. We also observe that, among all the pretrained CNN models used, ResNet scores the highest performance.
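The described pipeline, frozen pretrained CNN features followed by a trainable classifier, can be sketched with the classifier stage alone. The numpy example below trains a simple softmax classifier by gradient descent on feature vectors that stand in for VGG/ResNet embeddings; the learning rate, epoch count, and function names are illustrative assumptions:

```python
import numpy as np

def train_softmax(features, labels, n_classes, lr=0.1, epochs=200):
    """Train a linear softmax classifier on precomputed feature vectors.

    In the described pipeline, `features` would be embeddings from a frozen
    pretrained backbone (VGG16/VGG19/ResNet50/Inception V3); here they are
    just float vectors. Returns the learned weight matrix (d, n_classes).
    """
    n, d = features.shape
    w = np.zeros((d, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ w
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # Gradient of average cross-entropy loss w.r.t. the weights.
        w -= lr * features.T @ (probs - onehot) / n
    return w

def predict(features, w):
    """Return the most likely class for each feature vector."""
    return np.argmax(features @ w, axis=1)
```

Swapping this linear head for a small multi-layer network gives the "deep neural network classifier" variant the abstract mentions; the feature-extraction side is unchanged either way.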






2019 ◽  
Vol 8 (3) ◽  
pp. 1204-1208

In the recent era, the importance of surveillance-related applications has been increasing rapidly, and in such applications face recognition has become an emerging, fast-growing research field in security authentication systems. Face recognition is a biometric technique for identifying an individual's face in digital or stored images, with applications in biometrics, the military, video surveillance, and so on. In earlier times, face recognition techniques were implemented using traditional approaches: holistic, hybrid, and feature-based. Traditional systems face a number of issues, such as light illumination, occlusion, differing facial expressions, and varying poses of the individual, and these factors affect the accuracy and efficiency of the face recognition system. Nowadays, the technology and methods used in face recognition systems have advanced, with new methods and techniques devised using deep learning. This research focuses on a proposed model developed with deep learning methods and frameworks for face recognition. The model plays an important role in authenticating individuals in the online examination systems of educational institutes: multi-level authentication is used to authenticate an individual and to cross-check identity throughout the examination period. The deep learning methods and frameworks overcome the issues that traditional methods raise in face recognition.


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have only obtained the FIE with one specific ratio of configural alteration, and it remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces were entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has frequently been adopted as an index to examine the underlying mechanism of face processing, its emergence is not robust to any configural alteration but depends on the ratio of configural alteration.

