Deep face recognition using full and partial face images

2022 ◽  
pp. 221-241
Author(s):  
Hassan Ugail


Entropy ◽
2021 ◽  
Vol 23 (7) ◽  
pp. 878
Author(s):  
C. T. J. Dodson ◽  
John Soldera ◽  
Jacob Scharcanski

Secure user access to devices and datasets is widely enabled by fingerprint or face recognition. Organizing the necessarily large datasets of secure digital objects, whose content may consist of images, text, video, or audio, involves efficient classification and feature retrieval. This typically requires multidimensional methods applicable to data represented by a family of probability distributions. Information geometry provides an appropriate context for such analytic work, whether with maximum likelihood fitted distributions or empirical frequency distributions. Its important contribution is a natural geometric measure structure on families of probability distributions, obtained by representing them as Riemannian manifolds. The distributions then become points in this geometrical manifold, so that features can be identified, dissimilarities computed, and neighbourhoods of objects near a given example object constructed. This can reveal clustering and projections onto smaller eigen-subspaces that make comparisons easier to interpret. Geodesic distances can be used as a natural dissimilarity metric over data described by probability distributions. Exploiting this property, we propose a new face recognition method that scores dissimilarities between face images by multiplying geodesic distance approximations between trivariate RGB Gaussians representing the colour face images, and by also obtaining joint probabilities. The experimental results show that this new method achieves higher recognition rates than published comparative state-of-the-art methods.
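As an illustration of the kind of computation involved, the sketch below fits a trivariate Gaussian to the RGB pixels of a colour face image and scores dissimilarity with a commonly used geodesic-distance surrogate (a Mahalanobis-style term for the means plus the affine-invariant distance between covariances). This surrogate is an assumed stand-in, not the specific approximation or joint-probability weighting used in the paper.

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def rgb_gaussian(image):
    """Fit a trivariate Gaussian (mean, covariance) to the RGB pixels of an
    (H, W, 3) colour face image."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

def geodesic_dissimilarity(mu1, cov1, mu2, cov2):
    """Surrogate geodesic dissimilarity between two trivariate Gaussians:
    a Mahalanobis-style term for the means under the pooled covariance plus
    the affine-invariant Riemannian distance between the covariances."""
    pooled = 0.5 * (cov1 + cov2)
    mean_term = float(np.sqrt((mu1 - mu2) @ np.linalg.solve(pooled, mu1 - mu2)))
    inv_sqrt = np.linalg.inv(np.real(sqrtm(cov1)))
    cov_term = np.linalg.norm(np.real(logm(inv_sqrt @ cov2 @ inv_sqrt)), "fro")
    return mean_term + cov_term

# Toy usage: two random "images" as stand-ins for colour face crops.
rng = np.random.default_rng(0)
img1, img2 = rng.integers(0, 256, size=(2, 64, 64, 3))
print(geodesic_dissimilarity(*rgb_gaussian(img1), *rgb_gaussian(img2)))
```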


2021 ◽  
pp. 1-11
Author(s):  
Suphawimon Phawinee ◽  
Jing-Fang Cai ◽  
Zhe-Yu Guo ◽  
Hao-Ze Zheng ◽  
Guan-Chen Chen

The Internet of Things is considerably increasing the level of convenience in homes, and the smart door lock is an entry-level product for smart homes. This work used a Raspberry Pi, chosen for its low cost, as the main control board to apply face recognition technology to a door lock. Installing a control sensing module through the GPIO expansion interface of the Raspberry Pi also improved the antitheft mechanism of the door lock. For ease of use, a mobile application (hereafter, app) was developed for users to upload their face images for processing. The app sends the images to Firebase; the program then downloads the images and crops the faces to form a training set. The face detection system was based on machine learning and used OpenCV's built-in Haar cascade detector. The system used four training methods: a convolutional neural network, VGG-16, VGG-19, and ResNet50. After training, the program could recognize the user's face and open the door lock. A prototype was constructed that could control the door lock and the antitheft system and stream real-time images from the camera to the app.
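The face-capture step described above can be sketched with OpenCV's bundled Haar cascade. The crop size and detector parameters below are illustrative assumptions rather than the settings used in this work.

```python
import cv2

# Face-capture sketch: detect faces with OpenCV's bundled Haar cascade and
# crop/resize them as training samples.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(image_path, size=(224, 224)):
    """Return a list of resized face crops found in the image at image_path."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(image[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]
```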


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1013
Author(s):  
Sayan Maity ◽  
Mohamed Abdel-Mottaleb ◽  
Shihab S. Asfour

Biometric identification from surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that have not received appropriate attention in the past. First, it consolidates model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using low-resolution face information alone. This eliminates the need to obtain high-resolution face images to create the gallery, which is required by the majority of low-resolution face recognition techniques; moreover, the classification accuracy on high-resolution face images is considerably higher. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle, whereas we quantify the gait cycle precisely for each subject using only the frontal gait information. The approaches available in the literature use high-resolution images obtained in a controlled environment to train the recognition system; in contrast, our proposed system trains the recognition algorithm using low-resolution face images captured in an unconstrained environment. The proposed system has two components, one responsible for frontal gait recognition and one for low-resolution face recognition; score-level fusion is then performed to fuse their results. Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in 93.5% Rank-1 accuracy for frontal gait recognition and 82.92% Rank-1 accuracy for low-resolution face recognition. The score-level multimodal fusion resulted in 95.9% Rank-1 recognition, which demonstrates the superiority and robustness of the proposed approach.
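A minimal sketch of a score-level fusion step is given below. Min-max normalisation with a weighted sum is a common choice; the weight and normalisation here are assumptions rather than the paper's exact fusion rule.

```python
import numpy as np

def score_level_fusion(gait_scores, face_scores, weight=0.5):
    """Fuse per-identity gait and face match scores with min-max normalisation
    and a weighted sum, then return the Rank-1 (best-matching) identity index.
    The 0.5 weight is an illustrative assumption."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    fused = weight * minmax(gait_scores) + (1 - weight) * minmax(face_scores)
    return int(np.argmax(fused))

# Toy usage: similarity scores of one probe against a 4-identity gallery.
print(score_level_fusion([0.2, 0.9, 0.4, 0.1], [0.3, 0.7, 0.8, 0.2]))  # -> 1
```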


2021 ◽  
pp. 1-15
Author(s):  
Yongjie Chu ◽  
Touqeer Ahmad ◽  
Lindu Zhao

One-shot low-resolution face recognition is a prevalent problem in law enforcement, where low-resolution face images captured by surveillance cameras generally have to be recognized against only one high-resolution profile face image per subject in the database. The problem is very challenging because the available samples are few and the quality of the unknown images is low. To address this issue effectively, this paper proposes the Adapted Discriminative Coupled Mappings (AdaDCM) approach, which integrates domain adaptation and discriminative learning. To achieve good domain adaptation performance on small datasets, a new domain adaptation technique called Bidirectional Locality Matching-based Domain Adaptation (BLM-DA) is first developed. The proposed AdaDCM is then formulated by unifying BLM-DA and discriminative coupled mappings into a single framework. AdaDCM is extensively evaluated on the FERET, LFW, and SCface databases, which include LR face images obtained in constrained, unconstrained, and real-world environments. The promising results on these datasets demonstrate the effectiveness of AdaDCM for one-shot LR face recognition.
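For orientation, the sketch below shows the generic coupled-mappings matching step: project the LR probe and the HR gallery into a shared space and take the nearest neighbour. The projection matrices and dimensions are random stand-ins; the AdaDCM/BLM-DA objective that learns the mappings is not reproduced here.

```python
import numpy as np

def match_coupled(lr_probe, hr_gallery, W_l, W_h):
    """Project an LR probe and the HR gallery into a shared space with coupled
    mappings W_l and W_h, then return the nearest gallery index."""
    p = W_l @ lr_probe                       # projected probe, shape (d,)
    G = hr_gallery @ W_h.T                   # projected gallery, shape (n, d)
    return int(np.argmin(np.linalg.norm(G - p, axis=1)))

# Toy usage with random stand-ins (dimensions are illustrative):
rng = np.random.default_rng(0)
W_l = rng.normal(size=(32, 12 * 12))         # maps 12x12 LR faces to 32-D
W_h = rng.normal(size=(32, 48 * 48))         # maps 48x48 HR faces to 32-D
gallery = rng.normal(size=(10, 48 * 48))     # 10 vectorised HR gallery faces
probe = rng.normal(size=12 * 12)             # one vectorised LR probe face
print(match_coupled(probe, gallery, W_l, W_h))
```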


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Tai-Xiang Jiang ◽  
Ting-Zhu Huang ◽  
Xi-Le Zhao ◽  
Tian-Hui Ma

We propose a patch-based principal component analysis (PCA) method for face recognition. Many PCA-based face recognition methods exploit the correlation between pixels, columns, or rows, but local spatial information is not utilized, or not fully utilized, in these methods. We believe that patches are more meaningful basic units for face recognition than pixels, columns, or rows, since faces are discerned by patches containing the eyes and nose. To calculate the correlation between patches, face images are divided into patches, and these patches are converted into column vectors that are combined into a new "image matrix." By replacing the images with this new "image matrix" in the two-dimensional PCA framework, we directly calculate the correlation of the divided patches by computing the total scatter. By optimizing the total scatter of the projected samples, we obtain the projection matrix for feature extraction; finally, we use a nearest-neighbour classifier. Extensive experiments on the ORL and FERET face databases are reported to illustrate the performance of the patch-based PCA. Our method improves accuracy compared to one-dimensional PCA, two-dimensional PCA, and two-directional two-dimensional PCA.
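The patch-to-"image matrix" rearrangement and the 2D-PCA style projection can be sketched as below. The patch size, image size, and number of components are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def to_patch_matrix(image, patch=(8, 8)):
    """Rearrange an image into the 'image matrix' described above: each
    non-overlapping patch becomes one column vector."""
    H, W = image.shape
    ph, pw = patch
    cols = [image[i:i + ph, j:j + pw].reshape(-1)
            for i in range(0, H - ph + 1, ph)
            for j in range(0, W - pw + 1, pw)]
    return np.stack(cols, axis=1)

def patch_pca_projection(patch_matrices, n_components=20):
    """2D-PCA style projection: maximise the total scatter of the projected
    samples and keep the leading eigenvectors as the projection matrix."""
    mean = np.mean(patch_matrices, axis=0)
    scatter = sum((M - mean).T @ (M - mean) for M in patch_matrices)
    eigvals, eigvecs = np.linalg.eigh(scatter)
    return eigvecs[:, ::-1][:, :n_components]    # top eigenvectors as columns

# Toy usage with random stand-ins for ORL-sized (112 x 92) face images:
rng = np.random.default_rng(0)
faces = [rng.normal(size=(112, 92)) for _ in range(5)]
mats = [to_patch_matrix(f) for f in faces]
P = patch_pca_projection(mats)
features = [M @ P for M in mats]                 # features fed to a 1-NN classifier
```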


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing photos of celebrities with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference re-ranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colours, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounded as makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
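As a hedged illustration of this kind of training setup, the snippet below fine-tunes a pretrained backbone for identity classification on an augmented dataset. The ResNet-50 backbone, identity count, and optimiser settings are assumptions; the paper's dCNN architecture is not specified in the abstract.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Fine-tuning a pretrained backbone on an augmented face dataset
# (originals plus synthetic-makeup variants). Backbone and hyperparameters
# below are illustrative assumptions.
num_identities = 100                                    # hypothetical identity count
model = models.resnet50(weights="IMAGENET1K_V1")        # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, num_identities)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# Training would then iterate over (augmented image, identity label) batches.
```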


Author(s):  
Amal A. Moustafa ◽  
Ahmed Elnakib ◽  
Nihal F. F. Areed

This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features from unprocessed face images using transfer learning. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from his or her facial images across different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated, i.e., Correlation, Euclidean, Cosine, and Manhattan distances. Experimentally, a KNN classifier with the Manhattan distance achieves the best Rank-1 recognition rates of 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Compared to state-of-the-art methods, our proposed method requires no preprocessing stages. In addition, the experiments show its advantage over other related methods.
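The classification stage can be sketched with scikit-learn's KNN classifier using the Manhattan (city-block) metric reported to work best. The random arrays below are only stand-ins for the GA-selected deep features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# 1-NN over deep features with the Manhattan (city-block) distance.
# The random arrays stand in for the GA-selected deep features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 128))            # 50 gallery feature vectors
y_train = np.repeat(np.arange(10), 5)           # 10 identities, 5 samples each
X_probe = rng.normal(size=(5, 128))             # 5 probe feature vectors

knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan")
knn.fit(X_train, y_train)
print(knn.predict(X_probe))                     # predicted identities (Rank-1)
```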


Author(s):  
Prasad A. Jagdale ◽  
Sudeep D. Thepade

Nowadays, systems that hold private and confidential data are protected with biometric authentication such as fingerprint recognition, voice recognition, iris recognition, and face recognition. Face recognition matches the current user's face with the faces stored in the security system's database, but it has one major drawback: it does not work well without liveness detection. Face recognition systems can be spoofed using various traits; spoofing means gaining access to a system, its software, or its data by defeating the biometric recognition security mechanism. Such biometric systems can easily be attacked with spoofs like face photographs, masks, and videos, which are readily available from social media. The proposed work focuses mainly on detecting spoofing attacks by training the system, so that spoofing attempts based on photos, masks, or video images can be identified. This paper proposes a fusion technique in which different features of an image are combined to give the best accuracy in distinguishing between spoof and live faces. A comparative study of machine learning classifiers is also carried out to find which classifier gives the best accuracy.
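A hedged sketch of feature-level fusion followed by a classifier comparison is shown below. The feature types, classifier list, and random stand-in data are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Feature-level fusion (concatenation) of two feature sets per image, followed
# by a cross-validated comparison of classifiers on spoof-vs-live labels.
rng = np.random.default_rng(1)
texture_feats = rng.normal(size=(200, 59))      # e.g. texture histograms (stand-in)
colour_feats = rng.normal(size=(200, 32))       # e.g. colour histograms (stand-in)
X = np.hstack([texture_feats, colour_feats])    # fused feature vector per image
y = rng.integers(0, 2, size=200)                # 0 = spoof, 1 = live (random labels)

for clf in (SVC(), RandomForestClassifier(), KNeighborsClassifier()):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(float(acc), 3))
```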


Author(s):  
Ayan Seal ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri ◽  
Dipak Kumar Basu

Automatic face recognition has been studied comprehensively for more than four decades, since recognition of individuals has many applications, particularly in human-machine interaction and security. Although face recognition systems have achieved a significant level of maturity, with some realistic achievements, face recognition still remains a challenging problem due to the large variation in face images. Face recognition techniques can generally be divided into three categories based on the face image acquisition methodology: methods that work on intensity images, those that deal with video sequences, and those that require other sensory data (such as 3D or infrared imagery). Researchers are using thermal infrared images for face recognition, since thermal infrared images have some advantages over visible-light 2D images. In this chapter, an overview of some well-known techniques for face recognition using thermal infrared images is given, and some of the drawbacks and benefits of each of these methods are discussed. The chapter covers some of the most recent algorithms developed for this purpose and tries to give a brief idea of the state of the art in face recognition technology. The authors propose one approach for evaluating the performance of face recognition algorithms using thermal infrared images, and they also report the results of several classifiers on a benchmark dataset (the Terravic Facial Infrared Database).

