Cross-Modality 2D-3D Face Recognition via Multiview Smooth Discriminant Analysis Based on ELM

2014, Vol 2014, pp. 1-9
Author(s): Yi Jin, Jiuwen Cao, Qiuqi Ruan, Xueqiao Wang

In recent years, 3D face recognition has attracted increasing attention from researchers worldwide. Many applications now require flexible input face data rather than a single homogeneous modality. In this paper, we propose a new approach for cross-modality 2D-3D face recognition (FR), called Multiview Smooth Discriminant Analysis (MSDA) based on Extreme Learning Machines (ELM). MSDA adds a Laplacian penalty constraint to multiview feature learning and is used to extract cross-modality 2D-3D face features. It seeks a common discriminative feature space learned across views, so that the underlying relationships among features from different views can be fully exploited. To speed up the learning phase of the classifier, the Extreme Learning Machine (ELM) algorithm is adopted to train single hidden layer feedforward neural networks (SLFNs). To evaluate the effectiveness of the proposed FR framework, experimental results on a benchmark face recognition dataset are presented. The simulations show that the proposed method generally outperforms several recent approaches while training quickly.
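For context, the classifier stage can be pictured with the standard ELM recipe: hidden-layer weights are drawn at random and only the output weights are solved in closed form, which is what makes training fast. The sketch below is a minimal NumPy illustration of that standard procedure, assuming the MSDA-projected features and one-hot labels are already available; it is not the authors' implementation.

```python
import numpy as np

def train_elm(X, T, n_hidden=500, seed=0):
    """Minimal ELM sketch: random hidden layer, closed-form output weights.

    X: (n_samples, n_features) training features (e.g. MSDA projections).
    T: (n_samples, n_classes) one-hot target matrix.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)               # predicted class indices
```

Because only `beta` is computed (a single pseudoinverse), training cost is dominated by one linear solve rather than iterative backpropagation.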

In this paper we propose a compact CNN model for expression-insensitive 3D face recognition. 3D face recognition is an active area of computer vision research, used in a range of real-time applications, with considerable ongoing work in both industry and academia. Traditional machine learning approaches to 3D face recognition have largely been superseded by deep neural networks trained on large amounts of data. We apply a region-based 3D face recognition approach together with a fusion CNN. Fifteen sub-regions are generated from the frontal face region and features are extracted from each of them. Because these facial features carry rich identification information that no single feature can represent, the features extracted from the 15 regions are fused by the fusion CNN. The feature values obtained after preprocessing are given as input to the network, which takes features from different layers and fuses them for prediction, as shown in Figure 1: lower-layer and higher-layer features are combined. The computation time of the proposed system is 3.24 s for preprocessing and 0.09 s for matching, for an overall computation time of 3.33 s. The running times of our previous region-based approaches [13][14] are around 6.48 s and 12 s, so the computation time of the proposed approach compares favorably and is suitable for time-critical security applications. The three major steps are preprocessing, deep feature learning, and deep feature classification.
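The kind of fusion described above, concatenating lower-layer and higher-layer features across regions before classification, can be sketched in PyTorch as below. The layer sizes, pooling choices, and the way the 15 regions are batched are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Hedged sketch of a fusion CNN: shallow and deep features of each
    face region are pooled, concatenated, and classified jointly."""

    def __init__(self, n_regions=15, n_classes=100):
        super().__init__()
        self.n_regions = n_regions
        self.low = nn.Sequential(                    # lower-layer (shallow) features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.high = nn.Sequential(                   # higher-layer (deeper) features
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.low_pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(n_regions * (16 + 64), n_classes)

    def forward(self, regions):                      # regions: (B, n_regions, 1, H, W)
        fused = []
        for r in range(self.n_regions):
            low = self.low(regions[:, r])            # shallow feature map of region r
            high = self.high(low)                    # deep feature map built on top of it
            fused.append(torch.cat([self.low_pool(low).flatten(1),
                                    high.flatten(1)], dim=1))
        return self.classifier(torch.cat(fused, dim=1))

# example usage: a batch of 2 faces, each split into 15 single-channel 64x64 regions
# logits = FusionCNN()(torch.randn(2, 15, 1, 64, 64))   # -> shape (2, 100)
```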


2019, Vol 2019, pp. 1-21
Author(s): Naeem Ratyal, Imtiaz Ahmad Taj, Muhammad Sajid, Anzar Mahmood, Sohail Razzaq, ...

Face recognition aims to establish the identity of a person from facial characteristics and is a challenging problem due to the complex nature of the facial manifold. A wide range of face recognition applications are based on classification techniques, in which a class label is assigned to a test image whose class is unknown. In this paper, a pose-invariant, deeply learned multiview 3D face recognition approach is proposed that addresses two problems: face alignment and face recognition under both identification and verification setups. The proposed alignment algorithm can handle frontal as well as profile face images. It employs a nose-tip-heuristic-based pose learning approach to estimate the acquisition pose of the face, followed by coarse-to-fine nose tip alignment using L2 norm minimization. The whole face is then aligned through a transformation using the knowledge learned from nose tip alignment. Inspired by the intrinsic symmetry of the Left Half Face (LHF) and Right Half Face (RHF), Deeply learned (d) Multi-View Average Half Face (d-MVAHF) features are employed for face identification using a deep convolutional neural network (dCNN). For face verification, a d-MVAHF-Support Vector Machine (d-MVAHF-SVM) approach is employed. The performance of the proposed methodology is demonstrated through extensive experiments on four databases: GavabDB, Bosphorus, UMB-DB, and FRGC v2.0. The results show that the proposed approach yields superior performance compared to existing state-of-the-art methods.
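The coarse-to-fine L2 alignment step can be pictured as a two-stage grid search over candidate rotations of the nose region, keeping the angle that minimizes the L2 norm of the residual against a reference. The sketch below assumes in-plane rotation only and pre-established point correspondences with a hypothetical nose template `template_pts`; both are simplifying assumptions rather than details taken from the paper.

```python
import numpy as np

def rotate_z(points, angle_deg):
    """Rotate a 3D point cloud about the z-axis (in-plane rotation)."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T

def align_coarse_to_fine(nose_pts, template_pts, coarse_step=5.0, fine_step=0.5):
    """Coarse-to-fine search for the rotation minimizing the L2 residual.

    nose_pts, template_pts: (N, 3) arrays of corresponding nose-region points
    (the resampling/correspondence step is assumed to be done already).
    """
    best_angle, best_err = 0.0, np.inf
    # coarse pass over a wide angle range
    for angle in np.arange(-45.0, 45.0 + coarse_step, coarse_step):
        err = np.linalg.norm(rotate_z(nose_pts, angle) - template_pts)
        if err < best_err:
            best_angle, best_err = angle, err
    # fine pass around the coarse minimum
    for angle in np.arange(best_angle - coarse_step, best_angle + coarse_step, fine_step):
        err = np.linalg.norm(rotate_z(nose_pts, angle) - template_pts)
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle  # estimated in-plane rotation angle in degrees
```

The recovered angle would then drive the transformation applied to the whole face, mirroring the paper's idea of aligning the full scan with knowledge learned from the nose tip.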

