Recognizing Face Images with Disguise Variations

Author(s):  
Neslihan Kose ◽  
Jean-Luc Dugelay ◽  
Richa Singh ◽  
Mayank Vatsa

Challenges in automatic face recognition can be classified into several categories, such as illumination, image quality, expression, pose, aging, and disguise. In this chapter, the authors focus on recognizing face images with disguise variations. Even though disguise is a major challenge for face recognition, research on this topic remains limited. In this study, disguise variations are first defined, followed by an overview of the existing databases used for disguise analysis. Next, studies dedicated to the impact of disguise variations on existing face recognition techniques are introduced. Finally, a collection of state-of-the-art techniques that are robust to disguise variations is presented. This study shows that disguise variations have a significant impact on face recognition; hence, more robust approaches are required to address this important challenge.

Author(s):  
Guojun Lin ◽  
Meng Yang ◽  
Linlin Shen ◽  
Mingzhong Yang ◽  
Mei Xie

For face recognition, conventional dictionary learning (DL) methods have some disadvantages. First, face images of the same person vary with facial expression, pose, illumination, and disguise, so it is hard to obtain a robust dictionary for face recognition. Second, they do not completely cover important components (e.g., particularity and disturbance), which limits their performance. In this paper, we propose a novel robust and discriminative DL (RDDL) model. The proposed model uses the sample diversity within each face class to learn a robust dictionary, which includes class-specific dictionary atoms and disturbance dictionary atoms. These atoms can represent the data from different classes well. Discriminative regularizations on the dictionary and the representation coefficients are used to exploit discriminative information, which effectively improves the classification capability of the dictionary. The proposed RDDL is extensively evaluated on benchmark face image databases and shows superior performance to many state-of-the-art dictionary learning methods for face recognition.
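Class-specific atoms support classification by reconstruction residual: a test sample is assigned to the class whose sub-dictionary reconstructs it best. The following is a minimal sketch of that residual-based classification stage, using plain least-squares coding in place of the paper's regularized coding; the RDDL training procedure (learning class-specific and disturbance atoms) is not reproduced here.

```python
import numpy as np

def classify_by_residual(y, class_dicts):
    """Assign a test sample y to the class whose sub-dictionary
    reconstructs it with the smallest residual."""
    best_label, best_err = None, np.inf
    for label, D in class_dicts.items():
        # Least-squares coding (a stand-in for regularized coding).
        x, *_ = np.linalg.lstsq(D, y, rcond=None)
        err = np.linalg.norm(y - D @ x)
        if err < best_err:
            best_label, best_err = label, err
    return best_label

rng = np.random.default_rng(0)
dicts = {c: rng.standard_normal((64, 8)) for c in ("A", "B")}
sample = dicts["A"] @ rng.standard_normal(8)  # lies in class A's span
print(classify_by_residual(sample, dicts))    # -> "A"
```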


2020 ◽  
Vol 8 (5) ◽  
pp. 3220-3229

This article presents a template-based pose- and illumination-invariant face recognition method. Pose and illumination are important sources of variation that can prevent matching a query image to the correct face images, and previous methods in the literature also do not handle the pose and illumination variations of a face image accurately, so we concentrate on these two factors. Our system first estimates the inclination, or head pose, of a face using several mathematical methods. It then removes illumination effects using a Gabor-phase-based illumination-invariant extraction strategy. In this strategy, the system normalizes the varying lighting on face images, which reduces the impact of fluctuating illumination to some extent. A set of 2D real Gabor wavelets with different orientations is then used for image transformation, and the many Gabor coefficients are combined into a single whole that accounts for both spectrum and phase. The illumination invariant is obtained by separating the phase feature from the combined coefficients. The resulting pose- and illumination-invariant images are convolved with Gabor filters to obtain Gabor images, templates are extracted from these Gabor images, and an average template is generated. A similarity measure is then computed between the query image's average template and the average templates of the database images, and the most similar images are displayed to the user. Experimental results on the PubFig, Yale B, and CMU PIE face databases show that our technique achieves a significant improvement over related methods for face recognition under large pose and illumination variations.
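The Gabor-phase step can be sketched as follows: build real Gabor kernels at several orientations, pool the filter responses over orientations, and keep only the phase of the pooled coefficients. This is a rough illustration under assumed parameter values (orientation count, frequency, kernel size), not the authors' exact pipeline.

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Real 2D Gabor kernel (cosine carrier) at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def illumination_invariant(img, n_orient=4, freq=0.25, sigma=2.0, size=9):
    """Pool Gabor responses over orientations (circular convolution via
    FFT) and keep the phase of the pooled complex coefficients."""
    H, W = img.shape
    pooled = np.zeros((H, W), dtype=complex)
    for k in range(n_orient):
        kern = gabor_kernel(size, k * np.pi / n_orient, freq, sigma)
        pad = np.zeros((H, W), dtype=float)
        pad[:size, :size] = kern          # zero-pad kernel to image size
        pooled += np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad))
    return np.angle(pooled)               # phase feature in (-pi, pi]
```

The phase map can then be fed to the later Gabor-template stage in place of the raw intensities.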


Water ◽  
2021 ◽  
Vol 13 (23) ◽  
pp. 3470
Author(s):  
Fayadh Alenezi ◽  
Ammar Armghan ◽  
Sachi Nandan Mohanty ◽  
Rutvij H. Jhaveri ◽  
Prayag Tiwari

A lack of adequate consideration of underwater image enhancement leaves room for more research in the field. In particular, the global background light has not been adequately addressed in the presence of backscattering. This paper presents a technique based on pixel differences between global and local patches for scene depth estimation. The pixel variance is based on the green-red, green-blue, and red-blue channel pairs, in addition to absolute mean intensity functions. The global background light is extracted based on a moving average of the impact of suspended light and the brightest pixels within the image color channels. We introduce a block-greedy algorithm in a novel Convolutional Neural Network (CNN) proposed to normalize the attenuation ratios of the different color channels and select regions with the lowest variance. We address the discontinuity associated with underwater images by transforming both local and global pixel values. We minimize energy in the proposed CNN via a novel Markov random field to smooth edges and improve the final underwater image features. A comparison of the proposed technique against existing state-of-the-art algorithms using entropy, Underwater Color Image Quality Evaluation (UCIQE), Underwater Image Quality Measure (UIQM), Underwater Image Colorfulness Measure (UICM), and Underwater Image Sharpness Measure (UISM) indicates better performance of the proposed approach in terms of both average scores and consistency. On average, the proposed technique yields higher UICM values than the reference methods, which explains its better color balance. The mean (μ) values of UCIQE, UISM, and UICM for the proposed method exceed those of the existing techniques. The proposed method achieves improvements of 0.4%, 4.8%, 9.7%, 5.1%, and 7.2% in entropy, UCIQE, UIQM, UICM, and UISM, respectively, over the best existing techniques.
Consequently, the dehazed images have sharp, colorful, and clear features in most cases when compared with those produced by existing state-of-the-art methods. Stable σ values indicate consistency, in terms of sharpness of color and clarity of features, in most of the proposed method's results when compared with the reference methods. Our own assessment shows that the only weakness of the proposed technique is that it applies only to underwater images. Future research could seek to establish edge strengthening without color saturation enhancement.
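Two of the building blocks above, the per-channel background-light estimate and the pairwise channel-difference depth cue, can be roughly sketched as follows. The `top_frac` parameter and the simple top-k averaging are assumptions standing in for the paper's moving-average scheme; the CNN and MRF stages are not reproduced.

```python
import numpy as np

def background_light(img, top_frac=0.001):
    """Estimate the global background light per color channel from the
    brightest pixels of an (H, W, 3) image in [0, 1]."""
    flat = img.reshape(-1, 3)
    k = max(1, int(top_frac * flat.shape[0]))
    idx = np.argsort(flat.sum(axis=1))[-k:]   # k brightest pixels overall
    return flat[idx].mean(axis=0)             # per-channel average

def channel_difference_cue(img):
    """Depth cue from pairwise channel differences (G-R, G-B, R-B).
    Red attenuates fastest underwater, so large gaps between the red
    channel and the others suggest greater scene depth."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (np.abs(g - r) + np.abs(g - b) + np.abs(r - b)) / 3.0
```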


2012 ◽  
Vol 460 ◽  
pp. 30-34
Author(s):  
Peng Xu ◽  
Yuan Men Zhou

This paper introduces a face pose detection method based on stereoscopic vision that approximates the head's deflection as three planar rotations; computing the deflection angle in each of the three directions determines the face pose. The method obtains face images from the left and right video channels and first analyzes the similarity of the two channels' images to recover three-dimensional information for key facial feature points. It then calculates the three deflection angles from this information, so that the original image can be correspondingly adjusted and warped into a standard frontal face image, providing a corrected image for subsequent face recognition. This method markedly reduces the impact of pose change on face recognition at an early stage, effectively improving the system's overall recognition accuracy.
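Once 3D positions of key facial points are recovered from the two channels, the three rotation angles can be approximated from simple geometry. The sketch below uses the two eyes and the nose tip as illustrative landmarks; the landmark choice and formulas are assumptions, not the paper's exact method.

```python
import numpy as np

def head_angles(left_eye, right_eye, nose):
    """Approximate (yaw, pitch, roll) in degrees from triangulated 3D
    landmarks. Coordinates: x right, y down, z toward the camera."""
    left_eye, right_eye, nose = map(np.asarray, (left_eye, right_eye, nose))
    eye_vec = right_eye - left_eye
    # Roll: in-plane tilt of the inter-ocular line.
    roll = np.degrees(np.arctan2(eye_vec[1], eye_vec[0]))
    # Yaw: depth difference between the two eyes.
    yaw = np.degrees(np.arctan2(eye_vec[2], eye_vec[0]))
    # Pitch: depth offset of the nose relative to the eye midpoint
    # (measured against a neutral frontal pose in a real system).
    nose_vec = nose - (left_eye + right_eye) / 2.0
    pitch = np.degrees(np.arctan2(nose_vec[2], nose_vec[1]))
    return yaw, pitch, roll
```

For a frontal face with level eyes at equal depth, yaw and roll both come out as zero, which is the cue the correction stage relies on.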


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Naeem Iqbal Ratyal ◽  
Nouman Ali ◽  
Bushra Zafar ◽  
Saadat Hanif Dar ◽  
...  

Aging affects the left and right halves of the face differently owing to numerous factors such as sleeping habits, exposure to sunlight, and weaker facial muscles on one side of the face. In computer vision, the age of a given face image is estimated using features that are correlated with age, such as moles, scars, and wrinkles. In this study, we report on the asymmetric aging of the left and right sides of face images and its impact on accurate age estimation. Left-symmetric faces were perceived as younger, while right-symmetric faces were perceived as older, when presented to a state-of-the-art age estimator. These findings show that facial aging is an asymmetric process that plays a role in accurate facial age estimation. Experimental results on two large datasets verify that using the asymmetric right face image estimates the age of a query face image more accurately than the corresponding original or left asymmetric face image.
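The left- and right-symmetric stimuli used in such studies can be constructed by mirroring each half of a roughly centered, even-width face image across the vertical midline, e.g.:

```python
import numpy as np

def symmetric_halves(face):
    """Return (left_symmetric, right_symmetric) versions of a face image
    of shape (H, W) or (H, W, C), assuming the face is centered and W is
    even. Each output keeps one half and mirrors it onto the other."""
    w = face.shape[1]
    left, right = face[:, : w // 2], face[:, w // 2 :]
    left_sym = np.concatenate([left, left[:, ::-1]], axis=1)
    right_sym = np.concatenate([right[:, ::-1], right], axis=1)
    return left_sym, right_sym
```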


2021 ◽  
Vol 2021 ◽  
pp. 1-21
Author(s):  
Ke Li ◽  
Hu Chen ◽  
Faxiu Huang ◽  
Shenggui Ling ◽  
Zhisheng You

Face image quality has an important effect on recognition performance. Recognition-oriented face image quality assessment is particularly necessary for screening or applying face images of varying quality. In this work, sharpness and brightness were mainly assessed by a classification model. We selected very high-quality images of each subject and established nine quality labels related to recognition performance by utilizing a combination of face recognition algorithms, the human visual system, and a traditional brightness calculation method. Experiments were conducted on a custom dataset and the CMU Multi-PIE face database for training and testing, and on Labeled Faces in the Wild for cross-validation. The experimental results show that the proposed method can effectively reduce the false non-match rate by removing the low-quality face images identified by the classification model, and vice versa. The method is effective even for face recognition algorithms that were not involved in label creation and whose training data are nonhomologous to the training set of our quality assessment model. The results show that the proposed method can distinguish images of different qualities with reasonable accuracy and is consistent with subjective human evaluation. The quality labels established in this paper are closely related to recognition performance and exhibit good generalization to other recognition algorithms. Our method can be used to reject low-quality images to improve the recognition rate and to screen high-quality images for subsequent processing.
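The screening step described above reduces to filtering a gallery or probe set by the predicted quality label before matching. A minimal sketch, assuming ordinal integer labels (e.g., 0 = worst to 8 = best, matching the nine classes mentioned; the classifier itself is not reproduced):

```python
def filter_by_quality(images, quality_labels, min_quality):
    """Keep only the images whose predicted quality label is at or above
    min_quality; the rest are rejected before the matching stage."""
    return [img for img, q in zip(images, quality_labels) if q >= min_quality]

# Hypothetical example: three gallery images with predicted labels.
gallery = ["img_a", "img_b", "img_c"]
labels = [2, 8, 5]
print(filter_by_quality(gallery, labels, min_quality=5))  # -> ['img_b', 'img_c']
```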


2013 ◽  
Vol 22 (2) ◽  
pp. 197-212 ◽  
Author(s):  
Khitikun Meethongjan ◽  
Mohamad Dzulkifli ◽  
Amjad Rehman ◽  
Ayman Altameem ◽  
Tanzila Saba

Face detection plays an important role in many applications such as human–computer interaction, security and surveillance, and face recognition. This article presents an intelligent enhanced fused approach to face recognition based on the Voronoi diagram (VD) and wavelet moment invariants. The discrete wavelet transform and moment invariants are used for feature extraction from the face. Finally, the VD and its dual tessellation (the Delaunay triangulation, DT) are used to locate and detect faces in the original images. Face recognition results based on this new fusion approach are promising relative to the state of the art.
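As an illustration of the geometric stage, the Voronoi diagram of a set of facial landmark points and its dual Delaunay triangulation can be computed with `scipy.spatial`. The landmark coordinates below are hypothetical; in the paper they would come from the wavelet/moment feature stage.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Hypothetical 2D landmarks: two eyes, nose tip, two mouth corners.
landmarks = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 60.0],
                      [35.0, 80.0], [65.0, 80.0]])

vor = Voronoi(landmarks)    # Voronoi diagram of the landmark points
tri = Delaunay(landmarks)   # its dual tessellation (DT)

# Each row of tri.simplices indexes the three landmarks of one triangle
# covering the face region.
print(tri.simplices.shape[0])
```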


2018 ◽  
Author(s):  
Benjamin Balas ◽  
Amanda Auen

Though artificial faces of various kinds are rapidly becoming more and more life-like due to advances in graphics technology (Suwajanakorn et al., 2015; Booth et al., 2017), observers can typically distinguish real faces from artificial faces. In general, face recognition is tuned to experience such that expert-level processing is most evident for faces that we encounter frequently in our visual world, but the extent to which face animacy perception is also tuned to in-group vs. out-group categories remains an open question. In the current study, we chose to examine how the perception of animacy in human faces and dog faces was affected by face inversion and the duration of face images presented to adult observers. We hypothesized that the impact of these manipulations may differ as a function of species category, indicating that face animacy perception is tuned for in-group faces. Briefly, we found evidence of such a differential impact, suggesting either that distinct mechanisms are used to evaluate the “life” in a face for in-group and out-group faces, or that the efficiency of a common mechanism varies substantially as a function of visual expertise.


Author(s):  
Hui Fang ◽  
Nicolas Costen ◽  
Phil Grant ◽  
Min Chen

This chapter describes approaches to extracting features via the motion subspace to improve face recognition from moving face sequences. Although identity subspace analysis has achieved reasonable recognition performance on static face images, more recently there has been interest in motion-based face recognition. This chapter reviews several state-of-the-art techniques that exploit motion information for recognition and investigates the permuted distinctive motion similarity in the motion subspace. The motion features extracted from the motion subspaces are used to test performance within a verification experimental framework. Experimental results show that the correlations between motion eigen-patterns significantly improve recognition performance.


Author(s):  
Bing Cao ◽  
Nannan Wang ◽  
Xinbo Gao ◽  
Jie Li ◽  
Zhifeng Li

Heterogeneous face recognition (HFR) refers to matching face images acquired from different domains and has wide applications in security scenarios. However, HFR remains a challenging problem due to the significant cross-domain discrepancy and the lack of sufficient training data in the different domains. This paper presents a deep neural network approach, Multi-Margin based Decorrelation Learning (MMDL), to extract decorrelated representations in a hyperspherical space for cross-domain face images. The proposed framework can be divided into two components: a heterogeneous representation network and decorrelation representation learning. First, we employ a large set of accessible visual face images to train the heterogeneous representation network. A decorrelation layer then projects the output of the first component into a decorrelated latent subspace to obtain the decorrelated representation. In addition, we design a multi-margin loss (MML), which consists of a tetrad margin loss (TML) and a heterogeneous angular margin loss (HAML), to constrain the proposed framework. Experimental results on two challenging heterogeneous face databases show that our approach achieves superior performance on both verification and recognition tasks compared with state-of-the-art methods.
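An angular margin loss of the kind HAML builds on penalizes the target class by adding a margin to the angle between the feature and its class weight on the unit hypersphere. A minimal single-sample sketch (the margin and scale values are illustrative, and this is not the full MML):

```python
import numpy as np

def angular_margin_loss(feat, weight, label, margin=0.3, scale=16.0):
    """Additive angular-margin softmax loss on L2-normalized features.
    feat: (d,) feature vector; weight: (n_classes, d); label: int."""
    f = feat / np.linalg.norm(feat)
    w = weight / np.linalg.norm(weight, axis=1, keepdims=True)
    cos = w @ f                              # cosine to each class center
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    logits = scale * cos
    # Penalize the target class by widening its angle by `margin`.
    logits[label] = scale * np.cos(theta[label] + margin)
    logits -= logits.max()                   # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label])
```

A larger margin forces the target feature to sit closer to its class weight before the loss drops, which tightens the intra-class angular spread.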

