Kernelized Heterogeneity-Aware Cross-View Face Recognition

2021 ◽  
Vol 4 ◽  
Author(s):  
Tejas I. Dhamecha ◽  
Soumyadeep Ghosh ◽  
Mayank Vatsa ◽  
Richa Singh

Cross-view or heterogeneous face matching involves comparing two different views of the face modality, such as two different spectra or resolutions. In this research, we present two heterogeneity-aware subspace techniques, heterogeneous discriminant analysis (HDA) and its kernel version (KHDA), that encode heterogeneity in the objective function and yield a suitable projection space for improved performance. They can be applied to any feature to make it heterogeneity invariant. We next propose a face recognition framework that uses existing facial features along with HDA/KHDA for matching. The effectiveness of HDA and KHDA is demonstrated using both handcrafted and learned representations on three challenging heterogeneous cross-view face recognition scenarios: (i) visible to near-infrared matching, (ii) cross-resolution matching, and (iii) digital photo to composite sketch matching. It is observed that, consistently across all the case studies, HDA and KHDA help to reduce the heterogeneity variance, as clearly evidenced in the improved results. Comparison with recent heterogeneous matching algorithms shows that HDA- and KHDA-based matching yields state-of-the-art or comparable results on all three case studies. The proposed algorithms yield the best rank-1 accuracy of 99.4% on the CASIA NIR-VIS 2.0 database, up to 100% on CMU Multi-PIE for different resolutions, and a rank-10 accuracy of 95.2% on the e-PRIP database for digital photo to composite sketch matching.
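
The pipeline the abstract describes (pre-extracted features, a discriminant subspace projection, and matching in that space) can be illustrated with the minimal sketch below. It uses scikit-learn's LinearDiscriminantAnalysis as a generic stand-in for the heterogeneity-aware HDA/KHDA objective, which is not reproduced here; the feature dimensions and data are assumptions.

```python
# Sketch of the matching pipeline: project features into a discriminant
# subspace, then compare probe and gallery faces by cosine score.
# LinearDiscriminantAnalysis is only a generic stand-in for HDA/KHDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_subjects, feat_dim = 20, 256

# Hypothetical pre-extracted features (handcrafted or CNN descriptors)
# for two heterogeneous views (e.g., VIS gallery vs. NIR probes).
gallery_feats = rng.normal(size=(n_subjects * 4, feat_dim))
gallery_ids = np.repeat(np.arange(n_subjects), 4)
probe_feats = rng.normal(size=(n_subjects, feat_dim))

# Learn a discriminant subspace on labeled training features.
lda = LinearDiscriminantAnalysis(n_components=min(n_subjects - 1, feat_dim))
lda.fit(gallery_feats, gallery_ids)

# Project both sets and match by cosine similarity (rank-1 identification).
scores = cosine_similarity(lda.transform(probe_feats),
                           lda.transform(gallery_feats))
rank1_ids = gallery_ids[np.argmax(scores, axis=1)]
print("predicted identities:", rank1_ids[:5])
```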

Author(s):  
Amal Seralkhatem Osman Ali ◽  
Vijanth Sagayan Asirvadam ◽  
Aamir Saeed Malik ◽  
Mohamed Meselhy Eltoukhy ◽  
Azrina Aziz

Whilst facial recognition systems are vulnerable to different acquisition conditions, most notably lighting effects and pose variations, their level of sensitivity to facial aging effects is yet to be researched. The Face Recognition Vendor Test (FRVT) 2012 annual report estimated the deterioration in the performance of face recognition systems due to facial aging: there was about 5% degradation in accuracy for each single year of age difference between a test image and a probe image. Consequently, developing an age-invariant platform continues to be a significant requirement for building an effective facial recognition system. The main objective of this work is to address the challenge of facial aging, which affects the performance of facial recognition systems. Accordingly, this work presents a geometrical model based on extracting a number of triangular facial features. The proposed model comprises a total of six triangular areas connecting and surrounding the main facial features (i.e. eyes, nose, and mouth). Furthermore, a set of thirty mathematical relationships is developed and used to build a feature vector for each sample image. The areas and perimeters of the extracted triangles are calculated and used as inputs to the developed mathematical relationships. The performance of the system is evaluated on the publicly available Face and Gesture Recognition Research Network (FG-NET) face aging database and compared with that of state-of-the-art face recognition methods and state-of-the-art age-invariant face recognition systems. Our proposed system yielded good performance, with a classification accuracy of more than 94%.
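
A minimal sketch of how such triangular features could be computed from 2D facial landmarks is shown below. The landmark coordinates, the three example triangles, and the simple area/perimeter ratio are illustrative assumptions; the paper's six triangles and thirty mathematical relationships are not reproduced.

```python
# Sketch: compute areas and perimeters of triangles connecting facial
# landmarks, then form simple ratio features. Landmark positions and the
# chosen triangles are illustrative only.
import numpy as np

def triangle_area(p1, p2, p3):
    # Shoelace formula for the area of a 2D triangle.
    return 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))

def triangle_perimeter(p1, p2, p3):
    pts = np.array([p1, p2, p3, p1], dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# Hypothetical landmark coordinates (x, y) in pixels.
landmarks = {
    "left_eye": (120, 140), "right_eye": (200, 140), "nose_tip": (160, 200),
    "mouth_left": (130, 250), "mouth_right": (190, 250),
}

triangles = [
    ("left_eye", "right_eye", "nose_tip"),
    ("left_eye", "nose_tip", "mouth_left"),
    ("right_eye", "nose_tip", "mouth_right"),
]

features = []
for a, b, c in triangles:
    area = triangle_area(landmarks[a], landmarks[b], landmarks[c])
    perim = triangle_perimeter(landmarks[a], landmarks[b], landmarks[c])
    features.extend([area, perim, area / perim])  # one simple ratio per triangle

print(np.round(features, 2))
```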


Author(s):  
M. Parisa Beham ◽  
S. M. Mansoor Roomi ◽  
J. Alageshan ◽  
V. Kapileshwaran

Face recognition and authentication are two significant and dynamic research issues in computer vision applications. Many factors affect face recognition; among them, pose variation is a major challenge that severely influences performance. To improve performance, several methods have been developed to perform face recognition under pose-invariant conditions in constrained and unconstrained environments. In this paper, the authors analyze the performance of popular texture descriptors, viz. Local Binary Pattern, Local Derivative Pattern, and Histograms of Oriented Gradients, for the pose-invariance problem. State-of-the-art preprocessing techniques such as the Discrete Cosine Transform, Difference of Gaussian, Multi-Scale Retinex, and Gradientface have also been applied before feature extraction. In the recognition phase, a K-nearest neighbor classifier is used to accomplish the classification task. To evaluate the efficiency of the pose-invariant face recognition algorithms, three publicly available databases, viz. UMIST, ORL, and LFW, have been used. These databases exhibit very wide pose variations, and the results show that the state-of-the-art methods are effective only in constrained situations.
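
The sketch below illustrates one instance of this kind of pipeline: Difference-of-Gaussian preprocessing, LBP and HOG descriptors, and a 1-nearest-neighbor classifier. All parameter values, image sizes, and data are assumptions, not the paper's exact settings.

```python
# Sketch: Difference-of-Gaussian preprocessing, LBP and HOG descriptors,
# and a 1-nearest-neighbor classifier. Parameter values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern, hog
from sklearn.neighbors import KNeighborsClassifier

def dog_preprocess(img, s1=1.0, s2=2.0):
    # Difference of Gaussian reduces illumination effects to some extent.
    return gaussian_filter(img, s1) - gaussian_filter(img, s2)

def describe(img):
    img = dog_preprocess(img.astype(float))
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

# Hypothetical grayscale face crops (64x64) with identity labels.
rng = np.random.default_rng(0)
train_imgs = rng.random((30, 64, 64))
train_ids = np.repeat(np.arange(10), 3)
test_imgs = rng.random((5, 64, 64))

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit([describe(im) for im in train_imgs], train_ids)
print(knn.predict([describe(im) for im in test_imgs]))
```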


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Ahmed Jawad A. AlBdairi ◽  
Zhu Xiao ◽  
Mohammed Alghaili

The interest in face recognition studies has grown rapidly in the last decade. One of the most important problems in face recognition is identifying a person's ethnicity. In this study, a new deep learning convolutional neural network is designed to create a model that can recognize a person's ethnicity from their facial features. The new ethnicity dataset consists of 3141 images collected from three different nationalities. To the best of our knowledge, this is the first image dataset collected for ethnicity recognition, and it will be made available to the research community. The new model was compared with two state-of-the-art models, VGG and Inception V3, and the validation accuracy was calculated for each convolutional neural network. The generated models were tested on several images of people, and the results show that the best performance was achieved by our model, with a verification accuracy of 96.9%.
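
A minimal Keras sketch of a three-class classifier of the kind described is given below. The layer sizes, input shape, training settings, and dataset path are assumptions and do not reproduce the paper's architecture.

```python
# Minimal sketch of a small CNN for three-class ethnicity classification.
# Layer sizes, input shape, and training settings are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),  # three nationalities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would use a directory of face crops labeled by class, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "ethnicity_dataset/", image_size=(128, 128))  # hypothetical path
# model.fit(train_ds, epochs=20)
```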


2020 ◽  
Author(s):  
ASHUTOSH DHAMIJA ◽  
R.B DUBEY

Face recognition is one of the most demanding challenges in the field, since aging affects the shape and structure of the face. Age-invariant face recognition (AIFR) is a relatively new area of face recognition research that has recently gained considerable interest for real-world deployments due to its huge potential and relevance. AIFR, however, is still evolving, providing substantial scope for further study and progress in accuracy. Major issues in AIFR involve large variations in appearance, texture, and facial features, as well as discrepancies in pose and illumination. These problems limit the AIFR systems developed so far and make identity recognition harder. To address this problem, a new technique, Quadratic Support Vector Machine-Principal Component Analysis (QSVM-PCA), is introduced. Experimental results on the FG-NET face-aging dataset suggest that QSVM-PCA achieves better results than existing techniques, especially when the age range is large. The maximum accuracy achieved by the demonstrated methodology is 98.87%.
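
A minimal scikit-learn sketch of the QSVM-PCA idea as described (PCA for dimensionality reduction followed by an SVM with a quadratic, i.e. degree-2 polynomial, kernel) is shown below. The number of components, the scaler, and the synthetic data are assumptions.

```python
# Sketch of QSVM-PCA: PCA features fed to an SVM with a quadratic
# (degree-2 polynomial) kernel. Parameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical flattened face images (64x64 grayscale) with identity labels.
rng = np.random.default_rng(0)
X = rng.random((100, 64 * 64))
y = np.repeat(np.arange(10), 10)

qsvm_pca = make_pipeline(
    StandardScaler(),
    PCA(n_components=50),          # keep the leading principal components
    SVC(kernel="poly", degree=2),  # quadratic SVM
)
qsvm_pca.fit(X, y)
print(qsvm_pca.predict(X[:5]))
```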


Author(s):  
Ramkumar Govindaraj ◽  
E. Logashanmugam

In recent times, face tracking and face recognition have become increasingly active research fields in image processing. This work proposes a framework based on DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR) for face tracking; in this algorithm, the background is modeled by a low-rank representation and foreground objects are detected as outliers. This is suitable for non-rigid foreground motion and a moving camera. The face of a foreground person is captured from the frame and then compared with the suspect images stored in the dataset. The Viola-Jones algorithm is used for face detection. This approach outperforms traditional algorithms on multimodal video and works adequately for a wide variety of security and surveillance purposes. Results on continuous video demonstrate that the proposed algorithm can correctly obtain facial feature points. The algorithm is evaluated on live camera input and under real-world environmental conditions.
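
The DECOLOR optimization itself is not reproduced here; the sketch below uses OpenCV's MOG2 background subtractor as a simple stand-in for foreground extraction, followed by Viola-Jones (Haar cascade) face detection on each frame. The video path and the foreground-overlap threshold are assumptions.

```python
# Sketch: foreground extraction followed by Viola-Jones face detection on
# video frames. MOG2 is only a simple stand-in for the DECOLOR low-rank
# background model; "input.mp4" is a hypothetical path.
import cv2

cap = cv2.VideoCapture("input.mp4")
bg_subtractor = cv2.createBackgroundSubtractorMOG2()
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_subtractor.apply(frame)            # foreground as outliers
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    for (x, y, w, h) in faces:
        # Keep only detections that overlap the foreground mask.
        if fg_mask[y:y + h, x:x + w].mean() > 20:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```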


2019 ◽  
Vol 8 (4) ◽  
pp. 6670-6674

Face recognition is the most important part of identifying people in a biometric system, and it is among the most widely used biometric modalities. This paper focuses on human face recognition by computing the facial features present in an image and recognizing the person using those features. Every face recognition system follows preprocessing and face detection steps. This paper focuses mainly on face detection and gender classification, performed in two stages: the first stage is face detection using an enhanced Viola-Jones algorithm, and the second stage is gender classification. The input is a surveillance video, which is converted into frames. A few of the best frames are selected from the video for detecting the face, with each candidate image assessed using PSNR before preprocessing. After preprocessing, face detection is performed, and a comparative analysis of gender classification is carried out using a neural network classifier and an LBP-based classifier.
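
One way to read the frame-selection step is sketched below: the video is split into frames and PSNR against the previous frame is used as a simple change/quality score to decide which frames to keep for face detection. The PSNR threshold, the reference choice, and the video path are assumptions, not the paper's exact procedure.

```python
# Sketch of frame selection: convert a surveillance video to frames and use
# PSNR between consecutive frames as a simple change score. The threshold
# and "video.avi" path are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("video.avi")
selected_frames = []
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is None:
        selected_frames.append(frame)
    elif cv2.PSNR(prev, gray) < 30.0:
        # Low PSNR against the previous frame means the scene changed a lot,
        # so the frame carries new information worth keeping.
        selected_frames.append(frame)
    prev = gray
cap.release()
print(f"kept {len(selected_frames)} frames for face detection")
```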


Author(s):  
T. Ravindra Babu ◽  
Chethan S.A. Danivas ◽  
S.V. Subrahmanya

Face Recognition is an active research area. In many practical scenarios, when faces are acquired without the cooperation or knowledge of the subject, they are likely to get occluded. Apart from image background, pose, illumination, and orientation of the faces, occlusion forms an additional challenge for face recognition. Recognizing faces that are partially visible is a challenging task. Most of the solutions to the problem focus on reconstruction or restoration of the occluded part before attempting to recognize the face. In the current chapter, the authors discuss various approaches to face recognition, challenges in face recognition of occluded images, and approaches to solve the problem. The authors propose an adaptive system that accepts the localized region of occlusion and recognizes the face adaptively. The chapter demonstrates through case studies that the proposed scheme recognizes the partially occluded faces as accurately as the un-occluded faces and in some cases outperforms the recognition using un-occluded face images.
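
The chapter's adaptive scheme is not specified in detail in this abstract; the sketch below illustrates the general idea of matching only the non-occluded part of a face: the image is split into blocks, blocks covered by the supplied occlusion region are dropped, and the remaining blocks are compared by cosine similarity. The block size and the raw-pixel block features are assumptions.

```python
# Generic sketch of occlusion-aware matching: split faces into blocks, drop
# blocks covered by a known occlusion region, and compare the rest by cosine
# similarity. Raw-pixel block features and the block size are assumptions.
import numpy as np

def block_features(img, mask, block=16):
    """Return per-block features and a flag marking which blocks are visible."""
    feats, visible = [], []
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            occ = mask[y:y + block, x:x + block]
            feats.append(patch.ravel().astype(float))
            visible.append(occ.mean() < 0.5)  # keep block if mostly unoccluded
    return np.array(feats), np.array(visible)

def occlusion_aware_score(probe, probe_mask, gallery):
    no_occ = np.zeros_like(probe_mask)
    pf, pvis = block_features(probe, probe_mask)
    gf, _ = block_features(gallery, no_occ)
    p = pf[pvis].ravel()
    g = gf[pvis].ravel()               # use only blocks visible in the probe
    return float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-8))

rng = np.random.default_rng(0)
face_a = rng.random((64, 64))
occlusion = np.zeros((64, 64))
occlusion[40:, :] = 1.0                 # lower half occluded (e.g., a scarf)
print(occlusion_aware_score(face_a, occlusion, face_a))  # ~1.0 for same face
```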


Author(s):  
Daniel J. Carragher ◽  
Peter J. B. Hancock

In response to the COVID-19 pandemic, many governments around the world now recommend, or require, that their citizens cover the lower half of their face in public. Consequently, many people now wear surgical face masks in public. We investigated whether surgical face masks affected the performance of human observers, and a state-of-the-art face recognition system, on tasks of perceptual face matching. Participants judged whether two simultaneously presented face photographs showed the same person or two different people. We superimposed images of surgical masks over the faces, creating three different mask conditions: control (no masks), mixed (one face wearing a mask), and masked (both faces wearing masks). We found that surgical face masks have a large detrimental effect on human face matching performance, and that the degree of impairment is the same regardless of whether one or both faces in each pair are masked. Surprisingly, this impairment is similar in size for both familiar and unfamiliar faces. When matching masked faces, human observers are biased to reject unfamiliar faces as “mismatches” and to accept familiar faces as “matches”. Finally, the face recognition system showed very high classification accuracy for control and masked stimuli, even though it had not been trained to recognise masked faces. However, accuracy fell markedly when one face was masked and the other was not. Our findings demonstrate that surgical face masks impair the ability of humans, and naïve face recognition systems, to perform perceptual face matching tasks. Identification decisions for masked faces should be treated with caution.
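
A minimal Pillow sketch of the stimulus-construction step described above (superimposing a mask image over a face photograph) is given below. The file names, the transparent mask asset, and the lower-half placement are assumptions, not the authors' exact procedure.

```python
# Sketch of stimulus construction: paste a transparent surgical-mask image
# over the lower half of a face photograph. File names and the placement
# heuristic are illustrative assumptions.
from PIL import Image

face = Image.open("face.jpg").convert("RGBA")            # hypothetical file
mask_img = Image.open("surgical_mask.png").convert("RGBA")

# Scale the mask to the face width and place it over the lower half.
w, h = face.size
mask_img = mask_img.resize((w, h // 2))
masked_face = face.copy()
masked_face.paste(mask_img, (0, h // 2), mask_img)       # alpha used as mask
masked_face.convert("RGB").save("face_masked.jpg")
```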


2020 ◽  
Vol 10 (24) ◽  
pp. 8940
Author(s):  
Wanshun Gao ◽  
Xi Zhao ◽  
Jianhua Zou

Face recognition performance drops rapidly under drastic pose variation due to the limited samples available during model training. In this paper, we propose a pose-autoaugment face recognition framework (PAFR) based on training a Convolutional Neural Network (CNN) with multi-view face augmentation. The proposed framework consists of three parts: face augmentation, CNN training, and face matching. The face augmentation part is composed of pose autoaugment and background appending to increase the pose variations of each subject. In the second part, we train a CNN model with the generated facial images to enhance pose-invariant feature extraction. In the third part, we concatenate the feature vectors of each face and its horizontally flipped face from the trained CNN model to obtain a robust feature. The correlation score between two faces is computed as the cosine similarity of their robust features. Comparative experiments are conducted on the Bosphorus and CASIA-3D databases.
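
The matching step described in the third part can be sketched as follows: concatenate the embedding of a face and its horizontally flipped version, then score two faces by cosine similarity. The `embed` function below is a hypothetical placeholder standing in for the trained CNN's feature extractor.

```python
# Sketch of the matching step: concatenate the CNN features of a face and its
# horizontal flip, then compare two faces by cosine similarity. `embed` is a
# hypothetical stand-in for the trained CNN's feature extractor.
import numpy as np

def embed(img):
    # Placeholder embedding; in the paper this is the trained CNN's output.
    rng = np.random.default_rng(hash(img.tobytes()) % (2**32))
    return rng.normal(size=256)

def robust_feature(img):
    flipped = img[:, ::-1]                       # horizontal flip
    return np.concatenate([embed(img), embed(flipped)])

def correlation_score(img_a, img_b):
    fa, fb = robust_feature(img_a), robust_feature(img_b)
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))

rng = np.random.default_rng(0)
face1, face2 = rng.random((112, 112)), rng.random((112, 112))
print(correlation_score(face1, face2))
```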


2021 ◽  
Vol 37 (5) ◽  
pp. 292-297
Author(s):  
Winney Eva

In the past two decades, many face recognition methods have been proposed. Most researchers use the entire face as the basis for recognition; the basic technical route is to extract and compare general features of the whole face. In actual scenes, however, a face may be partially blocked by obstacles, which raises the question of how to perform face recognition using only the facial features that remain visible. Much of this partial face recognition technology is based on acquiring facial key points and using them to recognize the whole face. This review summarizes full-face and partial face recognition methods based on facial key points.

