Fusion of Appearance and Depth Information for Face Recognition

Author(s):  
Jian-Gang Wang ◽  
Kar-Ann Toh ◽  
Ronda Venkateswarlu


Author(s):  
Manjunatha Hiremath ◽  
P. S. Hiremath

Human face images are the basis not only for person recognition, but also for identifying other attributes such as gender, age, ethnicity, and emotional state. The face is therefore an important biometric identifier in law enforcement and human–computer interaction (HCI) systems. 3D face recognition is emerging as a significant biometric technology, and research interest in it has increased in recent years due to the availability of improved 3D acquisition devices and processing algorithms. A 3D face image is represented by 3D meshes or by range images, which contain depth information. In this paper, the objective is to propose a new 3D face recognition method based on the Radon transform and symbolic factorial discriminant analysis, using KNN and SVM classifiers with similarity and dissimilarity measures, applied to 3D facial range images. The experiments are conducted on three publicly available databases, namely the Bosphorus, Texas and CASIA 3D face databases. The experimental results demonstrate the effectiveness of the proposed method.
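The classification step described above pairs a nearest-neighbour rule with similarity and dissimilarity measures. A minimal sketch of that idea (not the paper's actual pipeline; the feature vectors, gallery, and labels here are hypothetical placeholders for the Radon/discriminant features):

```python
import numpy as np

def cosine_similarity(a, b):
    # similarity measure: 1.0 means identical direction
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def knn_classify(query, gallery, labels, k=1):
    # dissimilarity = 1 - cosine similarity; the k smallest win
    dissim = [1.0 - cosine_similarity(query, g) for g in gallery]
    order = np.argsort(dissim)[:k]
    votes = [labels[i] for i in order]
    return max(set(votes), key=votes.count)

# toy gallery of feature vectors (placeholders for real face features)
gallery = [np.array([1.0, 0.0, 0.0]),
           np.array([0.9, 0.1, 0.0]),
           np.array([0.0, 1.0, 0.0])]
labels = ["A", "A", "B"]
print(knn_classify(np.array([0.95, 0.05, 0.0]), gallery, labels, k=1))  # "A"
```

An SVM would replace the nearest-neighbour vote with a learned decision boundary over the same feature vectors.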


2018 ◽  
Vol 2 (2) ◽  
pp. 39-54 ◽  
Author(s):  
João C. Monteiro ◽  
Tiago Freitas ◽  
Jaime S. Cardoso

Facial recognition under uncontrolled acquisition environments faces major challenges that limit the deployment of real-life systems. 2.5D information can improve the discriminative power of such systems in conditions where RGB information alone would fail. In this paper we propose a multimodal extension of a previous work, based on SIFT descriptors of RGB images, integrated with LBP information obtained from depth scans and modeled by a hierarchical framework motivated by principles of human cognition. The framework was tested on the EURECOM dataset and showed that the inclusion of depth information significantly improved the results in all tested conditions, compared to independent unimodal approaches.
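The LBP descriptor mentioned above encodes each pixel by thresholding its neighbours against it; histograms of these codes form the depth feature. A minimal sketch of the basic 8-neighbour operator (a generic LBP, not necessarily the exact variant used in the paper):

```python
import numpy as np

def lbp_codes(img):
    # basic 8-neighbour LBP for the interior pixels of a 2D array:
    # each neighbour >= centre contributes one bit to an 8-bit code
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

# centre deeper than all neighbours -> all 8 bits set -> code 255
depth_patch = np.array([[5, 5, 5], [5, 1, 5], [5, 5, 5]], dtype=float)
print(lbp_codes(depth_patch))  # [[255]]
```

In the multimodal setting, a histogram of such codes computed over the depth scan would be fused with SIFT descriptors extracted from the RGB image.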


2015 ◽  
Vol 27 (6) ◽  
pp. 691-697 ◽  
Author(s):  
Michio Tanaka ◽  
Hiroki Matsubara ◽  
Takashi Morie

Home service robots must possess the ability to communicate with humans, for which human detection and recognition methods are particularly important. This paper proposes methods for human detection and face recognition that are based on image processing and are suitable for home service robots. For human detection, we combine the head-shape-based method proposed by Xia et al. with the results of region segmentation based on depth information, and use the positional relations of the detected points. We obtained a detection rate of 98.1% when the method was evaluated over various postures and facing directions, and we demonstrate its robustness against postural changes such as stretching the arms, resting the chin on one's hands, and drinking beverages. For face recognition, we combine the elastic bunch graph matching method proposed by Wiskott et al. with the Face Tracking SDK to extract facial feature points, and use 3D information in the deformation computation; we obtained a recognition rate of 93.6% during evaluation.
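Region segmentation based on depth information, as used in the detection step above, typically splits the scene at depth discontinuities. A toy 1-D sketch of that idea (the threshold and sample values are illustrative, not from the paper):

```python
import numpy as np

def split_regions_by_depth(depths, gap=0.25):
    # 1-D illustration: cluster depth samples into regions wherever
    # consecutive sorted values differ by more than `gap` metres
    order = np.sort(np.asarray(depths, dtype=float))
    regions, current = [], [order[0]]
    for d in order[1:]:
        if d - current[-1] > gap:
            regions.append(current)
            current = [d]
        else:
            current.append(d)
    regions.append(current)
    return regions

# person at ~1.0 m, wall at ~3.0 m -> two depth regions
print(len(split_regions_by_depth([1.0, 1.1, 1.05, 3.0, 3.1])))  # 2
```

In the full method, a head-shape detector then runs only on the foreground region, and the positional relations of the detected points are checked for consistency.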


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4733 ◽  
Author(s):  
Shuhua Liu ◽  
Yu Song ◽  
Mengyu Zhang ◽  
Jianwei Zhao ◽  
Shihao Yang ◽  
...  

In this study, an advanced Kinect sensor was adopted to acquire infrared radiation (IR) images for liveness detection. The proposed IR-based liveness detection method can deal with face spoofing. Face pictures were acquired by a Kinect camera and converted into IR images; feature extraction and classification were then carried out by a deep neural network to distinguish between real individuals and face spoofs. IR images collected by the Kinect camera carry depth information, so the IR pixels of live images show an evident hierarchical structure, while those of photos or videos do not. Accordingly, the two types of IR images were learned by the deep network to determine whether an image came from a live individual. In a cross-database comparison with other liveness detection methods, our recognition accuracy of 99.8% was better than that of the other algorithms. FaceNet is a face recognition model that is robust to occlusion, blur, illumination, and pose. We combined the liveness detection and the FaceNet model for identity authentication, and to improve the applicability of this authentication approach, we proposed two improved ways to run the FaceNet model. Experimental results showed that the combination of the proposed liveness detection and the improved face recognition had a good recognition effect and can be used for identity authentication.
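The key observation above is that live faces produce layered (hierarchical) IR/depth values while a flat photo held up to the camera yields near-constant depth. A minimal sketch of a hand-crafted score exploiting this cue (the paper uses a learned deep network instead; the threshold and synthetic surfaces here are hypothetical):

```python
import numpy as np

def depth_variation_score(ir_depth):
    # mean gradient magnitude: a live face's relief produces nonzero
    # gradients, a flat printed photo produces (nearly) none
    gy, gx = np.gradient(ir_depth.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def is_live(ir_depth, threshold=0.01):
    return depth_variation_score(ir_depth) > threshold

flat_photo = np.full((8, 8), 1.2)  # constant depth, like a held-up print
live_face = 1.2 + 0.1 * np.add.outer(np.sin(np.linspace(0, 3, 8)),
                                     np.cos(np.linspace(0, 3, 8)))
print(is_live(flat_photo), is_live(live_face))  # False True
```

A deep network, as used in the study, learns richer hierarchical features than this single statistic and generalizes better across databases.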


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Gaoli Sang ◽  
Jing Li ◽  
Qijun Zhao

Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions.
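The symmetric filling step mentioned above exploits facial symmetry: depth values occluded on one side of a frontalized face can be borrowed from the mirrored visible side. A minimal sketch, assuming missing depth is marked as NaN and the face is centred on the vertical midline (both simplifications of the paper's actual pipeline):

```python
import numpy as np

def symmetric_fill(depth):
    # fill missing (NaN) depth values by mirroring the visible
    # opposite side of the frontalized face about the vertical midline
    filled = depth.copy()
    mirrored = depth[:, ::-1]
    missing = np.isnan(filled)
    filled[missing] = mirrored[missing]
    return filled

# right column occluded; its values are recovered from the left column
d = np.array([[1.0, 2.0, np.nan],
              [1.5, 2.5, np.nan]])
print(symmetric_fill(d))
```

After filling, the completed depth map can be compared against gallery depth maps with an ordinary similarity measure.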


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2539
Author(s):  
Hongyan Zou ◽  
Xinyan Sun

Face recognition is one of the essential applications of computer vision, but current face recognition technology is mainly based on 2D images without depth information, which are easily affected by illumination and facial expressions. This paper presents a fast face recognition algorithm that combines 3D point cloud face data with deep learning, focusing on key parts of the face via an attention mechanism and reducing the coding space with a sparse loss function. First, an attention-based convolutional neural network was constructed to extract facial features while avoiding interference from expressions and illumination. Second, a Siamese network was trained with a sparse loss function to minimize the face coding space and enhance the separability of the face features. On the FRGC face dataset, the experimental results show that the proposed method achieves a recognition accuracy of 95.33%.
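A Siamese network with a sparsity term, as described above, is commonly trained with a contrastive loss plus an L1 penalty on the codes. A minimal sketch of that combination (the paper's exact loss formulation may differ; the margin, weight, and vectors here are illustrative):

```python
import numpy as np

def siamese_sparse_loss(a, b, same, margin=1.0, lam=0.01):
    # contrastive loss on a pair of face codes: pull same-identity pairs
    # together, push different-identity pairs at least `margin` apart;
    # the L1 term shrinks the coding space toward sparse codes
    d = np.linalg.norm(a - b)
    contrastive = d ** 2 if same else max(0.0, margin - d) ** 2
    sparsity = lam * (np.abs(a).sum() + np.abs(b).sum())
    return float(contrastive + sparsity)

a = np.array([0.5, 0.0, 0.0])
b = np.array([0.5, 0.0, 0.1])
print(siamese_sparse_loss(a, b, same=True))   # small: codes are close
print(siamese_sparse_loss(a, b, same=False))  # large: margin violated
```

Minimizing this over pairs both separates identities and keeps the learned codes compact.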


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Kyoungmin Ko ◽  
Hyunmin Gwak ◽  
Nalinh Thoummala ◽  
Hyun Kwon ◽  
SungHwan Kim

In this paper, we propose a robust and reliable face recognition model that incorporates depth information, such as data from point clouds and depth maps, into RGB image data to avoid false facial verification caused by face spoofing attacks while increasing the model’s performance. The proposed model is driven by the spatially adaptive convolution (SAC) block of SqueezeSegV3; this attention block enables the model to weight features according to their importance at each spatial location. We also use large-margin loss instead of softmax loss as the supervision signal for the proposed method, to enforce high discriminative power. In the experiments, the proposed model, which incorporates depth information, achieved 99.88% accuracy and an F1 score of 93.45%, outperforming the baseline models, which used RGB data alone.
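Large-margin losses of the kind mentioned above typically add an angular margin to the target-class logit, making the target class harder to satisfy during training and thereby enforcing more discriminative embeddings. A minimal ArcFace-style sketch of the target logit (the paper does not specify this exact variant; margin and scale values are illustrative):

```python
import numpy as np

def large_margin_logit(embedding, class_weight, margin=0.3, scale=16.0):
    # additive angular margin on the target-class logit: normalize both
    # vectors, add `margin` to the angle between them, then rescale
    e = embedding / np.linalg.norm(embedding)
    w = class_weight / np.linalg.norm(class_weight)
    theta = np.arccos(np.clip(np.dot(e, w), -1.0, 1.0))
    return float(scale * np.cos(theta + margin))

e = np.array([1.0, 0.0])
w = np.array([1.0, 0.1])
plain = 16.0 * float(np.dot(e / np.linalg.norm(e), w / np.linalg.norm(w)))
print(large_margin_logit(e, w) < plain)  # True: margin lowers the target logit
```

Because the margin penalizes only the true class, the network must produce embeddings whose angle to the correct class weight is smaller than the plain softmax would require.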

