Integrated multilevel image fusion and match score fusion of visible and infrared face images for robust face recognition

2008 ◽  
Vol 41 (3) ◽  
pp. 880-893 ◽  
Author(s):  
Richa Singh ◽  
Mayank Vatsa ◽  
Afzel Noore
2018 ◽  
Vol 7 (2) ◽  
pp. 626
Author(s):  
A. Mallikarjuna Reddy ◽  
V. Venkata Krishna ◽  
L. Sumalatha

Face recognition (FR) is a challenging and active research field in image processing, computer vision and biometrics, with numerous proposed systems. We present a feature extraction method named "stable uniform local pattern (SULP)", a refined variant of the ULBP operator, for robust face recognition. SULP is applied directly to the gradient images (in the x and y directions) of a single face image to capture significant fundamental local texture patterns. The histogram sequences of the SULP images of the two gradient images are then concatenated to form the "stable uniform local pattern gradient (SULPG)" vector for the given image. The SULPG approach is evaluated on the Yale, ATT-ORL, FERET, CAS-PEAL and LFW face databases, and the results are compared with the LBP model and various variants of the LBP descriptor. The results indicate that the present descriptor is more robust against a wide range of challenges, such as illumination, expression and pose variations, and outperforms state-of-the-art LBP-based methods.
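The gradient-plus-uniform-LBP pipeline described above can be sketched as follows. This is an illustrative stand-in, not the paper's exact SULP coding: it assumes an 8-neighbour, radius-1 uniform LBP with the standard 59-bin histogram (58 uniform patterns plus one catch-all), computed on simple finite-difference gradient magnitudes.

```python
import numpy as np

def uniform_lbp_hist(img):
    """Uniform LBP histogram (8 neighbours, radius 1): 58 uniform bins + 1 catch-all."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit

    def transitions(v):  # circular 0/1 transitions in the 8-bit pattern
        bits = [(v >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    lut = np.zeros(256, dtype=np.int32)
    nxt = 0
    for v in range(256):
        if transitions(v) <= 2:      # uniform pattern: its own bin
            lut[v] = nxt
            nxt += 1
        else:                        # non-uniform: shared catch-all bin 58
            lut[v] = 58
    hist = np.bincount(lut[code].ravel(), minlength=59)
    return hist / hist.sum()

def sulpg_descriptor(face):
    """Illustrative SULPG-style vector: LBP histograms of the x- and y-gradient
    magnitude images, concatenated (59 + 59 = 118 dimensions)."""
    gx = np.abs(np.diff(face.astype(float), axis=1))
    gy = np.abs(np.diff(face.astype(float), axis=0))
    return np.concatenate([uniform_lbp_hist(gx), uniform_lbp_hist(gy)])
```

The resulting 118-dimensional histogram vector can then be compared with chi-square or histogram-intersection distance, as is usual for LBP-family descriptors.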


2021 ◽  
Vol 18 (5) ◽  
pp. 6638-6651
Author(s):  
Huilin Ge ◽  
Yuewei Dai ◽  
Zhiyu Zhu ◽  
Biao Wang

<abstract> <sec><title>Purpose</title><p>Due to the lack of prior knowledge of face images, large illumination changes, and complex backgrounds, the accuracy of face recognition is low. To address this issue, we propose a face detection and recognition algorithm based on a multi-task convolutional neural network (MTCNN).</p> </sec> <sec><title>Methods</title><p>In our paper, MTCNN uses three cascaded networks and adopts the idea of a candidate box plus a classifier to perform fast and efficient face recognition. The model is trained on a database of 50 faces that we collected, and the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and receiver operating characteristic (ROC) curve are used to compare MTCNN with Region-CNN (R-CNN) and Faster R-CNN.</p> </sec> <sec><title>Results</title><p>The average PSNR of this technique is 1.24 dB higher than that of R-CNN and 0.94 dB higher than that of Faster R-CNN. The average SSIM value of MTCNN is 10.3% higher than that of R-CNN and 8.7% higher than that of Faster R-CNN. The Area Under Curve (AUC) of MTCNN is 97.56%, the AUC of R-CNN is 91.24%, and the AUC of Faster R-CNN is 92.01%. MTCNN has the best overall performance in face recognition, and for face images with defective features it still performs best.</p> </sec> <sec><title>Conclusions</title><p>This algorithm can effectively improve face recognition to a certain extent. Raising the accuracy rate and reducing the false detection rate of face detection not only makes the method better suited to security-critical locations, helping to protect people and property, but also reduces wasted human resources and improves efficiency.</p> </sec> </abstract>
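The PSNR and SSIM figures reported above are standard image-quality measures. A minimal NumPy sketch of both (using a single global SSIM window rather than the usual sliding-window variant, so the numbers will differ slightly from windowed implementations):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=255.0):
    """SSIM computed over the whole image as one window (Wang et al. constants)."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, two 8-bit images that differ by exactly one gray level everywhere have MSE = 1 and therefore a PSNR of about 48.13 dB.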


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Gabriel Hermosilla ◽  
José Luis Verdugo ◽  
Gonzalo Farias ◽  
Esteban Vera ◽  
Francisco Pizarro ◽  
...  

The aim of this study is to propose a system capable of recognising the identity of a person and indicating whether that person is drunk, using only information extracted from thermal face images. The proposed system is divided into two stages, face recognition and classification. In the face recognition stage, test images are recognised using robust face recognition algorithms: the Weber local descriptor (WLD) and local binary pattern (LBP). The classification stage uses the Fisher linear discriminant to reduce the dimensionality of the features, which are then classified using a Gaussian-mixture-model-based classifier that creates a classification space for each person, extending the state-of-the-art concept of a "DrunkSpace classifier." The system was validated using a new drunk-person database specially designed for this work. The main results show that the face recognition stage achieved 100% accuracy with both algorithms, while drunk identification achieved 86.96%, a very promising result considering the 46 individuals in our database compared with others found in the literature.
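The Fisher-linear-discriminant reduction used in the classification stage can be illustrated with a minimal two-class sketch in pure NumPy (the paper's multi-class, GMM-based DrunkSpace pipeline is not reproduced here): the discriminant direction is w = Sw⁻¹(m₁ − m₂), where Sw is the within-class scatter matrix.

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher linear discriminant direction w = Sw^{-1} (m1 - m2).

    X1, X2: (n_samples, n_features) arrays for the two classes.
    Returns a unit-norm projection vector maximising between-class separation
    relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter = sum of per-class scatter matrices
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) + \
         np.cov(X2, rowvar=False) * (len(X2) - 1)
    # Small ridge term keeps the solve stable if Sw is near-singular
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting samples onto w yields a one-dimensional space in which the two classes (here, e.g., drunk vs. sober feature vectors) are maximally separated in the Fisher sense.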


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 304
Author(s):  
Xianglong Chen ◽  
Haipeng Wang ◽  
Yaohui Liang ◽  
Ying Meng ◽  
Shifeng Wang

The presence of fake pictures affects the reliability of visible face images under certain circumstances. This paper presents a novel adversarial neural network named FTSGAN for infrared and visible image fusion; we use the FTSGAN model to fuse the features of infrared and visible face images to improve face recognition. The FTSGAN design employs the Frobenius norm (F), total variation norm (TV), and structural similarity index measure (SSIM). The F and TV terms constrain the gray level and the gradient of the image, while the SSIM term constrains the image structure. The FTSGAN fuses infrared and visible face images that contain bio-information for heterogeneous face recognition tasks. Experiments with the FTSGAN on hundreds of face images demonstrate its excellent performance. Principal component analysis (PCA) and linear discriminant analysis (LDA) are used for face recognition. Face recognition performance after fusion improved by 1.9% compared with that before fusion, and the final face recognition rate was 94.4%. The proposed method has better quality, runs faster, and is more robust than methods that use only visible images for face recognition.
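As a rough illustration of how the three named norms could combine into a fusion content loss: the sketch below ties the fused intensities and gradients to the infrared input via the Frobenius and TV terms and the fused structure to the visible input via SSIM. The weights and the exact pairing of terms with the two inputs are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def global_ssim(x, y, max_val=1.0):
    """SSIM over the whole image as one window (no sliding window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def tv_norm(img):
    """Anisotropic total variation: sum of absolute neighbour differences."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def fusion_content_loss(fused, ir, vis, lam_tv=0.5, lam_ssim=1.0):
    """Hypothetical content loss combining the three norms named in the abstract.
    lam_tv and lam_ssim are illustrative weights, not values from the paper."""
    f_term = np.linalg.norm(fused - ir, "fro")   # Frobenius: gray-level agreement
    tv_term = tv_norm(fused - ir)                # TV: gradient agreement
    ssim_term = 1.0 - global_ssim(fused, vis)    # SSIM: structural agreement
    return f_term + lam_tv * tv_term + lam_ssim * ssim_term
```

In a GAN setting such a content loss would be added to the adversarial loss of the generator; when the fused image matches both inputs perfectly, every term vanishes.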


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1643 ◽  
Author(s):  
Ali Khalili Mobarakeh ◽  
Juan Antonio Cabrera Carrillo ◽  
Juan Jesús Castillo Aguilar

Face recognition is one of the most popular techniques for determining the identity of a person. This study develops a new nonlinear subspace learning method, named "supervised kernel locality-based discriminant neighborhood embedding," which performs data classification by learning an optimal embedded subspace from a principal high-dimensional space. In this approach, not only is the nonlinear and complex variation of face images effectively represented using a nonlinear kernel mapping, but local structure information from same-class data and discriminant information across distinct classes are also simultaneously preserved to further improve the final classification performance. Moreover, to evaluate the robustness of the proposed method, it was compared with several well-known pattern recognition methods through comprehensive experiments on six publicly accessible datasets. The experimental results reveal that our method consistently outperforms its competitors, demonstrating strong potential for implementation in many real-world systems.
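The nonlinear kernel mapping at the heart of such methods is implicit: only the Gram (kernel) matrix between samples is ever computed, never the high-dimensional feature vectors themselves. A minimal sketch of an RBF Gram matrix, plus the feature-space centering commonly used in kernel subspace learning (both standard constructions, not this paper's specific algorithm):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2) of the implicit
    RBF feature map, computed without forming the feature vectors."""
    sq = np.sum(X ** 2, axis=1)[:, None] + np.sum(Y ** 2, axis=1)[None, :] \
         - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp tiny negative round-off

def center_kernel(K):
    """Center the Gram matrix so the mapped data has zero mean in feature space:
    Kc = K - 1K - K1 + 1K1, with 1 the n x n matrix of 1/n entries."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    return K - one @ K - K @ one + one @ K @ one
```

Subspace methods such as kernel discriminant embeddings then solve their eigenproblems directly on the (centered) Gram matrix, which is what lets them capture nonlinear variation in face images at the cost of only an n × n matrix.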


2014 ◽  
Vol 573 ◽  
pp. 442-446
Author(s):  
D. Venkatakrishnan Ragu ◽  
C. Hariram ◽  
N. Anantharaj ◽  
A. Muthulakshmi

In recent years, the 3-D face has become a popular biometric modality for security applications. Occlusions covering the facial surface are difficult to handle: occlusion means the blocking of parts of the face image by objects such as sunglasses, kerchiefs, hands, or hair, and it can also arise from facial expressions and pose. Two problems are considered: (i) occlusion handling for surface registration and (ii) missing-data handling for classification. For registration, an adaptively-selected-model-based registration scheme is used; after registration, occlusions are detected and removed. To handle the missing data, a masking strategy is used: a masked projection technique based on Fisherfaces. Registration based on the adaptively selected model, together with the masked analysis, offers an occlusion-robust face recognition system.
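The masked-projection idea can be sketched as a least-squares projection restricted to visible pixels: instead of projecting the full face vector onto the subspace basis, the subspace coefficients are estimated from the unoccluded entries only. `masked_projection` below is a hypothetical illustration of this principle, not the paper's Fisherfaces formulation.

```python
import numpy as np

def masked_projection(W, mean, x, visible):
    """Estimate subspace coefficients of a partially occluded sample x.

    W:       (d, k) subspace basis (e.g. Fisherfaces or eigenfaces)
    mean:    (d,) training mean
    x:       (d,) sample with some entries occluded/unreliable
    visible: (d,) boolean mask, True where x is trustworthy

    Solves the least-squares problem restricted to the visible rows:
        W[visible] a  ~=  (x - mean)[visible]
    """
    a, *_ = np.linalg.lstsq(W[visible], (x - mean)[visible], rcond=None)
    return a
```

If the visible part of the face still spans the subspace, the recovered coefficients match those of the unoccluded sample, which is what makes the masked analysis robust to occlusion.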

