FaceFilter: Face Identification with Deep Learning and Filter Algorithm

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Mohammed Alghaili ◽  
Zhiyong Li ◽  
Hamdi A. R. Ali

Although significant advances have been made recently in the field of face recognition, existing methods still have limitations, especially when faces appear in different poses, under different levels of illumination, or when the face is blurred. In this study, we present a system that can directly identify an individual under all of these conditions by extracting the most important features and using them to identify the person. Our method uses a deep convolutional network trained to extract the most important features. A filter then selects the most significant of these features by finding the features greater than zero, storing their indices, and comparing the features of other identities at the same indices as those of the original image. Finally, the selected features of each identity in the dataset are subtracted from the features of the original image to find the minimum value, which indicates that identity. This method gives good results because only the most important features, selected by the filter, are used to recognize the face in different poses. We achieve state-of-the-art face recognition performance using only half of the 128 bytes per face. The system attains an accuracy of 99.7% on the Labeled Faces in the Wild dataset and 94.02% on YouTube Faces DB.
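A minimal sketch of the filter-and-compare step described above, assuming the embeddings have already been extracted by the convolutional network; the 64-dimensional vectors, array names, and NumPy implementation are illustrative, not the authors' code:

```python
import numpy as np

def filter_match(query_embedding, gallery_embeddings):
    """Keep the query features greater than zero, store their indices,
    and compare every gallery identity on those indices only.
    Returns the gallery index with the smallest summed difference."""
    # Indices of the most significant (positive) query features
    idx = np.flatnonzero(query_embedding > 0)
    query_sel = query_embedding[idx]

    best_id, best_score = None, np.inf
    for identity, emb in enumerate(gallery_embeddings):
        # Compare this identity's features at the same indices as the query
        diff = np.abs(emb[idx] - query_sel).sum()
        if diff < best_score:
            best_id, best_score = identity, diff
    return best_id, best_score

# Illustrative usage with random 64-D embeddings (half of 128 bytes per face)
rng = np.random.default_rng(0)
gallery = rng.standard_normal((10, 64)).astype(np.float32)
query = gallery[3] + 0.05 * rng.standard_normal(64).astype(np.float32)
print(filter_match(query, gallery))  # expected to match identity 3
```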

Author(s):  
Dat Chu ◽  
Shishir Shah ◽  
Ioannis A. Kakadiaris

Performing face recognition under extreme poses and lighting conditions remains a challenging task for current state-of-the-art biometric algorithms. The recognition task is even more challenging when there is insufficient training data available in the gallery, or when the gallery dataset originates from one side of the face while the probe dataset originates from the other. The authors present a new method for computing the distance between two biometric signatures acquired under such challenging conditions. This method improves upon an existing Semi-Coupled Dictionary Learning method by computing a jointly optimized solution that incorporates the reconstruction cost, the discrimination cost, and the semi-coupling cost. The use of a semi-coupling term allows the method to handle partial 3D face meshes where, for example, only the left side of the face is available for the gallery and the right side is available for the probe. The method also extends to 2D signatures under varying poses and lighting changes by using 3D signatures as a coupling term. The experiments show that this method can improve the recognition performance of existing state-of-the-art wavelet signatures used in 3D face recognition and provide excellent recognition results in the 3D-2D face recognition application.
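A hedged sketch of the kind of jointly optimized objective the abstract refers to, combining reconstruction, discrimination, and semi-coupling terms; the symbols, weights, and exact form below are illustrative and may differ from the paper's formulation:

```latex
\min_{D_g,\, D_p,\, W,\, A_g,\, A_p}\;
\underbrace{\lVert X_g - D_g A_g \rVert_F^2 + \lVert X_p - D_p A_p \rVert_F^2}_{\text{reconstruction cost}}
\;+\; \lambda\,\underbrace{\mathcal{L}_{\mathrm{disc}}(A_g, A_p, y)}_{\text{discrimination cost}}
\;+\; \gamma\,\underbrace{\lVert A_p - W A_g \rVert_F^2}_{\text{semi-coupling cost}}
```

Here X_g and X_p stand for gallery and probe signatures (for instance, wavelet signatures from the left and right halves of a 3D face mesh), D_g and D_p are their dictionaries, A_g and A_p the corresponding sparse codes, W the semi-coupling mapping between the two code spaces, and the discrimination term is a label-based penalty with weights lambda and gamma.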


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Radhey Shyam ◽  
Yogendra Narain Singh

This paper presents a critical evaluation of multialgorithmic face recognition systems for human authentication in unconstrained environments. We propose different frameworks of a multialgorithmic face recognition system combining holistic and texture methods. Our aim is to combine uncorrelated face recognition methods that supplement each other, producing a comprehensive representation of the biometric cue and achieving optimum recognition performance. The multialgorithmic frameworks are designed to combine different face recognition methods: (i) Eigenfaces and local binary pattern (LBP), (ii) Fisherfaces and LBP, (iii) Eigenfaces and augmented local binary pattern (A-LBP), and (iv) Fisherfaces and A-LBP. The matching scores of these multialgorithmic frameworks are processed using different normalization techniques, while their performance is evaluated using different fusion strategies. The robustness of the proposed multialgorithmic frameworks is tested on publicly available databases, for example, AT&T (ORL) and Labeled Faces in the Wild (LFW). The experimental results show a significant improvement in the recognition accuracies of the proposed frameworks in comparison to their individual methods. In particular, the frameworks that combine standard face recognition methods with the devised A-LBP method improve significantly.
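A minimal sketch of the score-level combination step: scores from a holistic matcher and a texture matcher are normalized and fused. Min-max normalization and a weighted-sum rule are shown as one possible choice among the normalization techniques and fusion strategies the paper evaluates; the scores and weights are illustrative.

```python
import numpy as np

def min_max_normalize(scores):
    """Map matching scores to [0, 1] (one of several normalization options)."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def weighted_sum_fusion(scores_a, scores_b, w_a=0.5, w_b=0.5):
    """Fuse two score vectors with a weighted-sum rule after normalization."""
    return w_a * min_max_normalize(scores_a) + w_b * min_max_normalize(scores_b)

# Illustrative matching scores for 5 gallery identities from two matchers,
# e.g. a holistic method (Fisherfaces) and a texture method (A-LBP).
holistic_scores = [0.82, 0.40, 0.65, 0.91, 0.30]        # similarity: higher is better
texture_scores  = [120.0, 310.0, 150.0, 95.0, 400.0]    # distance: lower is better

# Convert the distance-like scores to similarities before fusing
texture_sim = 1.0 - min_max_normalize(texture_scores)
fused = weighted_sum_fusion(holistic_scores, texture_sim)
print("claimed identity:", int(np.argmax(fused)))
```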


2019 ◽  
Vol 35 (05) ◽  
pp. 525-533
Author(s):  
Evrim Gülbetekin ◽  
Seda Bayraktar ◽  
Özlenen Özkan ◽  
Hilmi Uysal ◽  
Ömer Özkan

The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery due to a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed old, new, and implicit faces and asked whether the participants recognized them or not. In Experiment 3, they showed them original objects and morphed forms of those objects and asked them to rate the similarity between the two. In Experiment 4, they showed old, new, and implicit objects and asked whether the participants recognized them or not. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. Therefore, the authors concluded that the structure of the face might affect face processing.


2018 ◽  
Vol 9 (1) ◽  
pp. 60-77 ◽  
Author(s):  
Souhir Sghaier ◽  
Wajdi Farhat ◽  
Chokri Souani

This manuscript presents an improved system that can detect and recognize a person in 3D space automatically and without requiring the person's interaction. The system is based not only on quantum computation and measurements to extract the feature vectors in the characterization phase, but also on a learning algorithm (SVM) to classify and recognize the person. This research presents an improved technique for automatic 3D face recognition that uses anthropometric proportions and measurements to detect and extract the area of interest, which is unaffected by facial expression. The approach is able to treat incomplete and noisy images and to reject non-facial areas automatically. Moreover, it can deal with the presence of holes in the meshed and textured 3D image, and it is stable against small translations and rotations of the face. All experimental tests were carried out on two 3D face datasets, FRAV 3D and GAVAB. The test results of the proposed approach are promising: they show that it is competitive with similar approaches in terms of accuracy, robustness, and flexibility. It achieves a high recognition rate of 95.35% for faces with neutral and non-neutral expressions in identification, 98.36% in authentication on GAVAB, and 100% on some galleries of the FRAV 3D dataset.
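A minimal sketch of the classification stage: measurement vectors extracted from the expression-insensitive 3D face region are fed to an SVM. The synthetic feature values and the scikit-learn pipeline are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row stands in for a vector of anthropometric proportions/measurements
# taken from the expression-insensitive region of a 3D face (synthetic values).
rng = np.random.default_rng(42)
n_subjects, n_scans, n_features = 5, 6, 12
X = np.vstack([rng.normal(loc=s, scale=0.1, size=(n_scans, n_features))
               for s in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), n_scans)

# RBF-kernel SVM on standardized measurements
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print("predicted subject:", clf.predict(X[:1]))
```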


Nowadays, one of the critical factors that affects the recognition performance of any face recognition system is partial occlusion. This paper addresses face recognition in the presence of sunglasses and scarf occlusion. The proposed face recognition approach detects the face region that is not occluded and then uses this region for recognition. To segment the occluded and non-occluded parts, adaptive Fuzzy C-Means clustering is used, and for recognition the Minimum Cost Sub-Block Matching Distance (MCSBMD) is used. The input face image is divided into a number of sub-blocks, each block is checked for the presence of occlusion, and MWLBP features are extracted only from the non-occluded blocks and used for classification. Experimental results show that our method gives promising results compared to other conventional techniques.
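A minimal sketch of the block-wise idea: the image is divided into sub-blocks, occluded blocks are skipped, and the probe is matched to the gallery identity with the minimum total distance over the remaining blocks. The occlusion mask and the plain intensity histograms below stand in for the adaptive Fuzzy C-Means segmentation and the MWLBP features, which are not reproduced here.

```python
import numpy as np

def block_features(image, grid=(4, 4), bins=16):
    """Split a grayscale image into a grid of sub-blocks and return a
    simple intensity histogram per block (a stand-in for MWLBP)."""
    h, w = image.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = image[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, 255), density=True)
            feats.append(hist)
    return np.array(feats)               # shape: (n_blocks, bins)

def match(probe_feats, gallery_feats_list, occluded):
    """Sum block distances over non-occluded blocks only and return the
    gallery index with the minimum total cost."""
    keep = ~np.asarray(occluded)
    costs = [np.abs(g[keep] - probe_feats[keep]).sum() for g in gallery_feats_list]
    return int(np.argmin(costs))

# Illustrative usage with random images and a mask marking occluded blocks
rng = np.random.default_rng(1)
gallery = [block_features(rng.integers(0, 256, (64, 64))) for _ in range(3)]
probe = block_features(rng.integers(0, 256, (64, 64)))
occluded = [False] * 16
occluded[:4] = [True] * 4                # e.g. sunglasses covering the top row
print("matched gallery index:", match(probe, gallery, occluded))
```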


Author(s):  
M. Parisa Beham ◽  
S. M. Mansoor Roomi ◽  
J. Alageshan ◽  
V. Kapileshwaran

Face recognition and authentication are two significant and dynamic research issues in computer vision applications. There are many factors that must be accounted for in face recognition; among them, pose variation is a major challenge that severely influences the performance of face recognition. In order to improve performance, several methods have been developed to perform face recognition under pose-invariant conditions in constrained and unconstrained environments. In this paper, the authors analyze the performance of popular texture descriptors, viz., Local Binary Pattern, Local Derivative Pattern, and Histograms of Oriented Gradients, for the pose-invariant problem. State-of-the-art preprocessing techniques such as the Discrete Cosine Transform, Difference of Gaussian, Multi Scale Retinex, and Gradient face have also been applied before feature extraction. In the recognition phase, a K-nearest neighbor classifier is used to accomplish the classification task. To evaluate the efficiency of the pose-invariant face recognition algorithms, three publicly available databases, viz., UMIST, ORL, and LFW, have been used. These databases have very wide pose variations, and the results show that the state-of-the-art methods are efficient only in constrained situations.
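A minimal sketch of one descriptor/classifier pair from the study: Histograms of Oriented Gradients followed by a K-nearest-neighbour classifier. The scikit-image and scikit-learn calls and the random face crops are an illustrative implementation; the preprocessing steps (DCT, DoG, Multi Scale Retinex, Gradient face) are omitted.

```python
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def hog_features(images):
    """Compute a HOG descriptor for each grayscale face image."""
    return np.array([hog(img, orientations=8, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Synthetic stand-ins for aligned face crops from two subjects
rng = np.random.default_rng(7)
faces = [rng.random((64, 64)) for _ in range(8)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

X = hog_features(faces)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("predicted subject:", knn.predict(hog_features([faces[0]])))
```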


2018 ◽  
Vol 7 (3.34) ◽  
pp. 237
Author(s):  
R Aswini Priyanka ◽  
C Ashwitha ◽  
R Arun Chakravarthi ◽  
R Prakash

Face recognition has become an important research topic in the scientific world. A face identification system is an application capable of verifying a human face from live video or digital images, typically by comparing particular facial attributes of a person with images in a database. It is widely used in biometrics and security systems. Face identification was long a challenging problem because of variations in viewpoint and facial expression; with the arrival of deep learning neural networks in the technology stack, detecting and recognizing faces has become much easier and efficiency has increased dramatically. In this paper, the ORL database, which contains ten images of each of forty people, is used to evaluate our methodology. We use a Back Propagation Neural Network (BPNN) as the deep learning model to recognize faces and to increase the efficiency of the model compared with previously existing face recognition models.
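A minimal sketch of a back-propagation neural network for the setup described (40 subjects, 10 images each, as in ORL). The layer sizes, the synthetic data, and the use of scikit-learn's MLPClassifier are illustrative assumptions, not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ORL data: 40 subjects x 10 images,
# each image flattened to a 92*112 = 10304-dimensional vector.
rng = np.random.default_rng(0)
n_subjects, n_images, dim = 40, 10, 92 * 112
X = np.vstack([rng.normal(loc=s, scale=1.0, size=(n_images, dim))
               for s in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), n_images)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# A small multilayer perceptron trained with back-propagation
bpnn = MLPClassifier(hidden_layer_sizes=(128, 64), activation="relu",
                     max_iter=300, random_state=0)
bpnn.fit(X_train, y_train)
print("test accuracy:", bpnn.score(X_test, y_test))
```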


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, face acquisition data are often seriously distorted, and many of the collected face images are blurred or even have missing regions. Traditional image inpainting methods were based on structure, while the currently popular image inpainting methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected, and the edge fuzzy inpainting achieves a better visual matching effect. As a result, face recognition performance is dramatically boosted.
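A hedged PyTorch sketch of the general GAN-based inpainting setup the abstract refers to: the generator fills only the masked region, and training combines a reconstruction loss on the hole with an adversarial loss. The tiny networks, mask shape, and loss weights are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Placeholder networks: a generator that fills the masked region and a
# discriminator that scores real vs. inpainted faces.
G = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 32 * 32, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(4, 3, 64, 64)           # stand-in for face images
mask = torch.zeros(4, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 1.0            # 1 = missing region
corrupted = real * (1 - mask)

# Generator pass: predict from the corrupted image plus the mask,
# keep the known pixels, and fill only the hole.
fake = G(torch.cat([corrupted, mask], dim=1))
completed = corrupted + fake * mask

# Discriminator update: real faces vs. completed (inpainted) faces
d_loss = bce(D(real), torch.ones(4, 1)) + bce(D(completed.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: adversarial loss plus a reconstruction loss on the hole
g_adv = bce(D(completed), torch.ones(4, 1))
g_rec = ((fake - real) * mask).abs().mean()
g_loss = g_adv + 10.0 * g_rec
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```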


GEOMATICA ◽  
2019 ◽  
Vol 73 (2) ◽  
pp. 29-44
Author(s):  
Won Mo Jung ◽  
Faizaan Naveed ◽  
Baoxin Hu ◽  
Jianguo Wang ◽  
Ningyuan Li

With the advance of deep learning networks, their applications in the assessment of pavement conditions are gaining more attention. A convolutional neural network (CNN) is the most commonly used network in image classification. In terms of pavement assessment, most existing CNNs are designed only to distinguish between cracks and non-cracks; few networks classify cracks by severity level. Information on the severity of pavement cracks is critical for pavement repair services. In this study, a state-of-the-art CNN used in the detection of pavement cracks was improved to localize the cracks and identify their distress levels based on three categories (low, medium, and high). In addition, a fully convolutional network (FCN) was, for the first time, utilized in the detection of pavement cracks. The designed architectures were validated using data acquired on four highways in Ontario, Canada, and compared with the ground truth provided by the Ministry of Transportation of Ontario (MTO). The results showed that with the improved CNN, the prediction precision on a series of test image patches was 72.9%, 73.9%, and 73.1% for cracks with low, medium, and high severity levels, respectively. The precision of the FCN, tested on whole pavement images, was 62.8%, 63.3%, and 66.4% for cracks with low, medium, and high severity levels, respectively. It is worth mentioning that the ground truth contained some uncertainties, which partially contributed to the relatively low precision.
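A minimal sketch of a patch-level CNN classifier with three severity classes (low, medium, high), in the spirit of the improved CNN described above; the layer sizes, input resolution, and synthetic batch are illustrative assumptions, not the network used in the study.

```python
import torch
import torch.nn as nn

# Three distress levels: 0 = low, 1 = medium, 2 = high
NUM_CLASSES = 3

crack_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, NUM_CLASSES),
)

# One training step on a synthetic batch of grayscale pavement patches
patches = torch.rand(8, 1, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (8,))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(crack_cnn.parameters(), lr=1e-3)

logits = crack_cnn(patches)
loss = criterion(logits, labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("severity predictions:", logits.argmax(dim=1).tolist())
```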

