The Long Distance Face Recognition using Multiple Distance Face Images Acquired from a Zoom Camera

Author(s):  
Hae-Min Moon ◽  
Sung Bum Pan
Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4575
Author(s):  
Fitri Arnia ◽  
Maulisa Oktiana ◽  
Khairun Saddami ◽  
Khairul Munadi ◽  
Roslidar Roslidar ◽  
...  

Facial recognition has significant applications in security, especially in surveillance technologies. In surveillance systems, recognizing faces captured far from the camera under various lighting conditions, such as in the daytime and nighttime, is a challenging task. A system capable of recognizing face images in both daytime and nighttime and at various distances is called Cross-Spectral Cross-Distance (CSCD) face recognition. In this paper, we propose a phase-based CSCD face recognition approach. We employ homomorphic filtering as photometric normalization and Band-Limited Phase-Only Correlation (BLPOC) for image matching. Unlike state-of-the-art methods, we directly utilize the phase component of an image, without the need for a feature extraction process. The experiment was conducted using the Long-Distance Heterogeneous Face Database (LDHF-DB). The proposed method was evaluated in three scenarios: (i) cross-spectral face verification at 1 m, (ii) cross-spectral face verification at 60 m, and (iii) cross-spectral cross-distance face verification, where the probe images (near-infrared (NIR) face images) were captured at 1 m and the gallery images were captured at 60 m. The proposed CSCD method achieved the best recognition performance among the CSCD baseline approaches, with an Equal Error Rate (EER) of 5.34% and a Genuine Acceptance Rate (GAR) of 93%.
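For illustration, the following is a minimal BLPOC matcher in Python/NumPy, assuming pre-aligned, equal-size grayscale face crops that have already been photometrically normalized (e.g., by homomorphic filtering). The band fraction is an assumed parameter, not a value from the paper.

```python
import numpy as np

def blpoc_score(img1, img2, band=0.5):
    """Band-Limited Phase-Only Correlation between two equal-size
    grayscale images; returns the correlation peak height."""
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12              # keep phase only
    h, w = cross.shape
    bh, bw = int(h * band / 2), int(w * band / 2)
    cy, cx = h // 2, w // 2
    limited = cross[cy - bh:cy + bh, cx - bw:cx + bw]  # low-frequency band
    poc = np.fft.ifft2(np.fft.ifftshift(limited))
    return np.abs(poc).max()
```

Band limiting discards the high-frequency phase components, which are dominated by noise in cross-spectral image pairs; a higher peak indicates a better match.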


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5229
Author(s):  
Ja Hyung Koo ◽  
Se Woon Cho ◽  
Na Rae Baek ◽  
Kang Ryoung Park

Long-distance recognition methods in indoor environments are commonly divided into two categories: face recognition, and combined face and body recognition. Cameras are typically installed on ceilings for face recognition, which makes it difficult to obtain a frontal image of an individual. Therefore, in many studies, the face and body information of an individual are combined. However, the distance between the camera and an individual is shorter in indoor environments than in outdoor environments, so face information is distorted by motion blur. Several studies have examined deblurring of face images, but there is a paucity of studies on deblurring of body images. To tackle the blur problem, a recognition method is proposed wherein the blur of body and face images is restored using a generative adversarial network (GAN), and the face and body features obtained with a deep convolutional neural network (CNN) are combined by matching-score fusion. The database developed by us, Dongguk face and body dataset version 2 (DFB-DB2), and the open ChokePoint dataset were used in this study. The equal error rate (EER) of human recognition was 7.694% on DFB-DB2 and 5.069% on the ChokePoint dataset. The proposed method exhibited better results than state-of-the-art methods.
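A minimal sketch of the score-level fusion step, assuming each modality has already produced a vector of matching scores against the gallery; the min-max normalization and the equal weighting are assumptions, as the abstract does not state the exact fusion rule.

```python
import numpy as np

def fuse_scores(face_scores, body_scores, w_face=0.5):
    """Weighted-sum score-level fusion after min-max normalization.
    w_face is an assumed weight, not a value from the paper."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w_face * minmax(face_scores) + (1 - w_face) * minmax(body_scores)

# identity decision: gallery entry with the highest fused score
# best = np.argmax(fuse_scores(face_scores, body_scores))
```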


2014 ◽  
Vol 11 (3) ◽  
pp. 961-974 ◽  
Author(s):  
Hae-Min Moon ◽  
Sung Bum Pan

As many objects in the human ambient environment become intelligent and networked, research on IoT technology has increased to improve the quality of human life. This paper proposes an LDA-based long-distance face recognition algorithm to enhance the intelligent IoT interface. While existing face recognition algorithms use images captured at a single distance as training images, the proposed algorithm uses face images captured at distances from 1 m to 5 m as training images. In the proposed LDA-based long-distance face recognition algorithm, bilinear interpolation is used to normalize the size of the face image, and the Euclidean distance is used as the similarity measure. As a result, the performance of the proposed face recognition algorithm is improved by 6.1% at short distance and 31.0% at long distance, so it is expected to be applicable to USN's robots and surveillance security systems.
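A minimal sketch of the described pipeline in Python, assuming face crops are already detected; the 40x40 normalized size is an assumption, not a value from the paper.

```python
import numpy as np
from PIL import Image
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def normalize(crop, size=(40, 40)):
    """Resize a face crop to a fixed size with bilinear interpolation."""
    img = Image.fromarray(crop).resize(size, Image.BILINEAR)
    return np.asarray(img, dtype=float).ravel()

def train_lda(crops, labels):
    """Fit LDA on face crops taken at multiple distances (1 m to 5 m)."""
    X = np.stack([normalize(c) for c in crops])
    return LinearDiscriminantAnalysis().fit(X, labels)

def identify(lda, gallery, gallery_labels, probe):
    """Nearest gallery subject by Euclidean distance in LDA space."""
    G = lda.transform(np.stack([normalize(g) for g in gallery]))
    p = lda.transform(normalize(probe)[None, :])
    return gallery_labels[int(np.argmin(np.linalg.norm(G - p, axis=1)))]
```

Training on crops from the whole 1 m to 5 m range, rather than a single distance, is what lets the LDA subspace absorb the scale and blur variation of long-distance capture.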


Author(s):  
Yidong Li ◽  
Wenhua Liu ◽  
Yi Jin ◽  
Yuanzhouhan Cao

Current face spoof detection schemes mainly rely on physiological cues such as eye blinking, mouth movements, and micro-expression changes, or on textural attributes of face images [9]. However, none of these methods offers a viable mechanism against makeup-induced spoofing, especially since makeup is so widely used. Compared with face alteration techniques such as plastic surgery, makeup is non-permanent and cost-efficient, which makes makeup-induced spoofing a realistic threat to the integrity of a face recognition system. To address this problem, we propose a generative model that constructs spoofing face images (confusing face images) to improve the accuracy and robustness of automatic face recognition. Our network consists of two separate parts: one uses an inter-attention mechanism to locate the face regions of interest, and the other uses intra-attention to transfer the imitation style while preserving style-excluding details. These two attention mechanisms precisely learn the imitation style: inter-attention focuses on the imitated regions of the image, while intra-attention learns face attributes with long-range dependencies across the image. To effectively discriminate generated images, we introduce an imitation-style discriminator. Our model (SPGAN) generates face images that transfer the imitation style from the target to the subject image while preserving the imitation-excluding features. Experimental results demonstrate the ability of our model to improve the quality of imitated face images.
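The abstract does not specify SPGAN's attention modules in detail; as a hedged illustration of the long-range "intra-attention" idea, here is a standard spatial self-attention block (SAGAN-style) in PyTorch, which relates distant spatial positions in a feature map. It is a generic stand-in, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Minimal spatial self-attention over a feature map; each position
    attends to every other position, capturing long-range dependencies."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.k(x).flatten(2)                   # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw)
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                # residual connection
```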


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 878
Author(s):  
C. T. J. Dodson ◽  
John Soldera ◽  
Jacob Scharcanski

Secure user access to devices and datasets is widely enabled by fingerprint or face recognition. Organizing the necessarily large secure digital object datasets, whose objects may contain images, text, video, or audio, involves efficient classification and feature-retrieval processing. This usually requires multidimensional methods applicable to data represented through a family of probability distributions. Information geometry is then an appropriate context for such analytic work, whether with maximum-likelihood-fitted distributions or empirical frequency distributions. The important provision is a natural geometric measure structure on families of probability distributions, obtained by representing them as Riemannian manifolds. The distributions are then points lying in this geometrical manifold; different features can be identified and dissimilarities computed, so that neighbourhoods of objects near a given example object can be constructed. This can reveal clustering, and projections onto smaller eigen-subspaces can make comparisons easier to interpret. Geodesic distances can be used as a natural dissimilarity metric over data described by probability distributions. Exploiting this property, we propose a new face recognition method that scores dissimilarities between face images by multiplying geodesic distance approximations between 3-variate RGB Gaussians representing colour face images, thereby also obtaining joint probabilities. The experimental results show that this new method achieves higher recognition rates than published comparative state-of-the-art methods.
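A minimal sketch of the ingredients: fitting a 3-variate Gaussian to the RGB pixels of a face image, and computing one commonly used geodesic-style dissimilarity between two such Gaussians (the affine-invariant Riemannian distance between covariances plus a Mahalanobis-style mean term). This is an illustrative stand-in, not the authors' exact approximation.

```python
import numpy as np
from scipy.linalg import eigvalsh

def fit_rgb_gaussian(img):
    """Fit a 3-variate Gaussian to the RGB pixels of an H x W x 3 image.
    Returns the mean (3,) and covariance (3, 3)."""
    pix = img.reshape(-1, 3).astype(float)
    return pix.mean(axis=0), np.cov(pix, rowvar=False)

def gaussian_dissimilarity(mu1, S1, mu2, S2):
    """Geodesic-style dissimilarity between two Gaussians: sum of
    squared log generalized eigenvalues of the covariances, plus a
    Mahalanobis term for the means (an assumed combination)."""
    d_cov2 = np.sum(np.log(eigvalsh(S1, S2)) ** 2)
    Sm = 0.5 * (S1 + S2)
    d_mean2 = (mu1 - mu2) @ np.linalg.solve(Sm, mu1 - mu2)
    return np.sqrt(d_cov2 + d_mean2)
```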


2021 ◽  
pp. 1-11
Author(s):  
Suphawimon Phawinee ◽  
Jing-Fang Cai ◽  
Zhe-Yu Guo ◽  
Hao-Ze Zheng ◽  
Guan-Chen Chen

The Internet of Things is considerably increasing the level of convenience at home, and the smart door lock is an entry product for smart homes. This work used a Raspberry Pi, because of its low cost, as the main control board to apply face recognition technology to a door lock. Installing a control sensing module through the GPIO expansion header of the Raspberry Pi also improved the antitheft mechanism of the door lock. For ease of use, a mobile application (hereafter, app) was developed for users to upload their face images for processing. The app sends the images to Firebase; the program then downloads the images and crops the faces to build a training set. The face detection system was designed on the basis of machine learning, using the Haar-cascade face detector built into OpenCV. The system used four training methods: a convolutional neural network, VGG-16, VGG-19, and ResNet50. After the training process, the program could recognize the user's face and open the door lock. A prototype was constructed that could control the door lock and the antitheft system and stream real-time images from the camera to the app.
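A minimal sketch of the Haar-cascade detection step with OpenCV, which crops faces for the training set; the Firebase transfer, the recognition CNNs, and the GPIO lock control are omitted, and the detection parameters shown are assumed defaults.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_face(frame):
    """Detect the largest face in a BGR frame and return the cropped
    grayscale face, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest box
    return gray[y:y + h, x:x + w]
```

On a Raspberry Pi, a loop would feed camera frames through `capture_face` and pass each crop to the trained recognizer before driving the lock via GPIO.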


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1013
Author(s):  
Sayan Maity ◽  
Mohamed Abdel-Mottaleb ◽  
Shihab S. Asfour

Biometric identification from surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal-walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that have not received appropriate attention in the past. First, it consolidates model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using only low-resolution face information. This eliminates the need for high-resolution face images to create the gallery, which the majority of low-resolution face recognition techniques require; moreover, such techniques achieve considerably higher classification accuracy on high-resolution face images. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle, whereas we quantify the gait cycle precisely for each subject using only the frontal gait information. The approaches available in the literature train the recognition system on high-resolution images obtained in a controlled environment; in contrast, our proposed system trains the recognition algorithm on low-resolution face images captured in an unconstrained environment. The proposed system has two components, one responsible for frontal gait recognition and the other for low-resolution face recognition. Score-level fusion is then performed to combine the results of the two. Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset yielded Rank-1 recognition rates of 93.5% for frontal gait recognition and 82.92% for low-resolution face recognition. Score-level multimodal fusion resulted in 95.9% Rank-1 recognition, which demonstrates the superiority and robustness of the proposed approach.
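A minimal sketch of how the reported Rank-1 rates would be computed from fused probe-versus-gallery score matrices; the z-score normalization and equal weights are assumptions, as the abstract does not give the fusion details.

```python
import numpy as np

def znorm(s):
    """Z-score normalization so gait and face scores are comparable."""
    s = np.asarray(s, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-12)

def rank1(scores, probe_labels, gallery_labels):
    """Rank-1 identification rate from a probe x gallery score matrix
    (higher score = better match)."""
    preds = np.asarray(gallery_labels)[np.argmax(scores, axis=1)]
    return float(np.mean(preds == np.asarray(probe_labels)))

# fused = 0.5 * znorm(gait_scores) + 0.5 * znorm(face_scores)
# rank1(fused, probe_labels, gallery_labels)
```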


2021 ◽  
pp. 1-15
Author(s):  
Yongjie Chu ◽  
Touqeer Ahmad ◽  
Lindu Zhao

Low-resolution face recognition with one shot is a prevalent problem in law enforcement, where low-resolution face images captured by surveillance cameras generally must be recognized against only one high-resolution profile face image per subject in the database. The problem is very tough because the available samples are quite few and the quality of the unknown images is quite low. To effectively address this issue, this paper proposes the Adapted Discriminative Coupled Mappings (AdaDCM) approach, which integrates domain adaptation and discriminative learning. To achieve good domain adaptation performance on small datasets, a new domain adaptation technique called Bidirectional Locality Matching-based Domain Adaptation (BLM-DA) is first developed. The proposed AdaDCM is then formulated by unifying BLM-DA and discriminative coupled mappings into a single framework. AdaDCM is extensively evaluated on the FERET, LFW, and SCface databases, which include LR face images obtained in constrained, unconstrained, and real-world environments. The promising results on these datasets demonstrate the effectiveness of AdaDCM for LR face recognition with one shot.
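To make the coupled-mapping idea concrete, here is a heavily simplified sketch: a ridge-regularized linear map learned from paired LR/HR training features, so that LR probes can be matched in the HR feature space. AdaDCM's discriminative and domain-adaptation terms (BLM-DA) are omitted; this is a stand-in for the general technique, not the paper's method.

```python
import numpy as np

def learn_mapping(X_lr, X_hr, lam=1e-2):
    """Least-squares map from LR features to HR features.
    X_lr (n, d_lr) and X_hr (n, d_hr) are paired rows of the same
    subjects; lam is an assumed regularization weight."""
    d = X_lr.shape[1]
    A = np.linalg.solve(X_lr.T @ X_lr + lam * np.eye(d), X_lr.T @ X_hr)
    return A  # shape (d_lr, d_hr)

def identify(A, lr_probe, hr_gallery, gallery_labels):
    """Map the LR probe into the HR feature space and return the
    label of the nearest gallery face (one HR image per subject)."""
    p = lr_probe @ A
    dists = np.linalg.norm(hr_gallery - p, axis=1)
    return gallery_labels[int(np.argmin(dists))]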


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Tai-Xiang Jiang ◽  
Ting-Zhu Huang ◽  
Xi-Le Zhao ◽  
Tian-Hui Ma

We propose a patch-based principal component analysis (PCA) method for face recognition. Many PCA-based face recognition methods utilize the correlation between pixels, columns, or rows, but local spatial information is not, or not fully, utilized in these methods. We believe that patches are more meaningful basic units for face recognition than pixels, columns, or rows, since faces are discerned by patches containing the eyes and nose. To calculate the correlation between patches, face images are divided into patches, and these patches are converted into column vectors that are combined into a new "image matrix." By replacing the images with this new "image matrix" in the two-dimensional PCA framework, we directly calculate the correlation of the divided patches by computing the total scatter. By optimizing the total scatter of the projected samples, we obtain the projection matrix for feature extraction. Finally, we use a nearest-neighbor classifier. Extensive experiments on the ORL and FERET face databases illustrate the performance of the patch-based PCA. Our method improves accuracy compared with one-dimensional PCA, two-dimensional PCA, and two-directional two-dimensional PCA.
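A minimal NumPy sketch of the construction the abstract describes: rebuild each image as a matrix whose columns are vectorized patches, then apply the 2DPCA total-scatter criterion to that matrix. The patch size and number of retained eigenvectors are assumed parameters.

```python
import numpy as np

def to_patch_matrix(img, p=8):
    """Stack each vectorized non-overlapping p x p patch as a column
    of a new 'image matrix' of shape (p*p, n_patches)."""
    h, w = img.shape
    cols = [img[i:i + p, j:j + p].ravel()
            for i in range(0, h - h % p, p)
            for j in range(0, w - w % p, p)]
    return np.stack(cols, axis=1)

def fit_patch_pca(images, p=8, k=10):
    """Projection matrix from the total scatter of the patch matrices,
    as in the 2DPCA framework; features are M @ W per image."""
    M = np.stack([to_patch_matrix(im, p) for im in images])
    Mbar = M.mean(axis=0)
    G = sum((Mi - Mbar).T @ (Mi - Mbar) for Mi in M) / len(M)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, -k:]  # top-k eigenvectors of the total scatter
```

Each face is then represented by `to_patch_matrix(img, p) @ W` and classified by nearest neighbor between these feature matrices.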


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing photos of celebrities with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference re-ranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounded as makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN), trained on an augmented face dataset, to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and images with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial and full makeup in face images in improving recognition performance. The experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
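A hedged sketch of the augment-then-train pattern in PyTorch/torchvision. Color and contrast jitter is only a crude stand-in for the paper's synthetic makeup variations, which alter specific face regions, and the ResNet-18 backbone is an assumption; the abstract does not name the architecture.

```python
import torch.nn as nn
from torchvision import transforms, models

num_identities = 100  # placeholder; set to the number of subjects

# Stand-in augmentation: the paper instead synthesizes makeup variations
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3,
                           saturation=0.4, hue=0.1),
    transforms.ToTensor(),
])

# Fine-tune a standard CNN as the dCNN feature extractor
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, num_identities)
```

Training on both the original and the augmented images exposes the network to makeup-like appearance shifts, so the learned features stay discriminative across them.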

