Effects of Geometric Distortions on Face-Recognition Performance

Perception ◽  
10.1068/p3252 ◽  
2002 ◽  
Vol 31 (10) ◽  
pp. 1221-1240 ◽  
Author(s):  
Graham J Hole ◽  
Patricia A George ◽  
Karen Eaves ◽  
Ayman Rasek

The importance of ‘configural’ processing for face recognition is now well established, but it remains unclear precisely what it entails. Through four experiments we attempted to clarify the nature of configural processing by investigating the effects of various affine transformations on the recognition of familiar faces. Experiment 1 showed that recognition was markedly impaired by inversion of faces, somewhat impaired by shearing or horizontally stretching them, but unaffected by vertical stretching of faces to twice their normal height. In experiment 2 we investigated vertical and horizontal stretching in more detail, and found no effects of either transformation. Two further experiments were performed to determine whether participants were recognising stretched faces by using configural information. Experiment 3 showed that nonglobal vertical stretching of faces (stretching either the top or the bottom half while leaving the remainder undistorted) impaired recognition, implying that configural information from the stretched part of the face was influencing the process of recognition, i.e. that configural processing involves global facial properties. In experiment 4 we examined the effects of Gaussian blurring on recognition of undistorted and vertically stretched faces. Faces remained recognisable even when they were both stretched and blurred, implying that participants were basing their judgments on configural information from these stimuli, rather than resorting to some strategy based on local featural details. The tolerance of spatial distortions in human face recognition suggests that the configural information used as a basis for face recognition is unlikely to involve information about the absolute position of facial features relative to each other, at least not in any simple way.
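The affine distortions and blurring described above are standard image operations. The following is a minimal sketch (not the authors' stimulus-generation code) of how such transformations can be produced with OpenCV; the image file, shear factor, and blur sigma are placeholders.

```python
# Illustrative only: vertical stretch, horizontal shear, inversion, and
# Gaussian blur of a face image. "face.png" is a placeholder filename.
import cv2
import numpy as np

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape

# Vertical stretch to twice the normal height (scale y by 2, keep x).
stretch = np.float32([[1, 0, 0],
                      [0, 2, 0]])
stretched = cv2.warpAffine(img, stretch, (w, 2 * h))

# Horizontal shear: each row is shifted in proportion to its height.
shear_factor = 0.3  # illustrative value only
shear = np.float32([[1, shear_factor, 0],
                    [0, 1, 0]])
sheared = cv2.warpAffine(img, shear, (w + int(shear_factor * h), h))

# Inversion (180-degree rotation in the picture plane).
inverted = cv2.rotate(img, cv2.ROTATE_180)

# Gaussian blur of the stretched face, as in experiment 4; sigma is arbitrary here.
blurred_stretched = cv2.GaussianBlur(stretched, (0, 0), 5)
```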


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration, so it remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has been frequently adopted as an index for examining the underlying mechanism of face processing, its emergence is not robust across all configural alterations but depends on the ratio of configural alteration.
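The abstract does not specify which facial distances were altered. Purely as an illustration of a parametric configural alteration, the sketch below scales the interocular distance of hypothetical 2D landmark coordinates by each of the six ratios used in the study; the landmark names and values are placeholders, and the study's actual manipulation may differ.

```python
import numpy as np

def alter_interocular_distance(landmarks, ratio):
    """Scale the distance between the two eye landmarks by (1 + ratio).

    `landmarks` is assumed to be a dict of (x, y) points; this is an
    illustrative stand-in, not the manipulation used in the study.
    """
    left = np.array(landmarks["left_eye"])
    right = np.array(landmarks["right_eye"])
    centre = (left + right) / 2.0
    altered = dict(landmarks)
    altered["left_eye"] = tuple(centre + (left - centre) * (1.0 + ratio))
    altered["right_eye"] = tuple(centre + (right - centre) * (1.0 + ratio))
    return altered

# Six alteration ratios from 4% to 24%, as in the study.
ratios = [0.04, 0.08, 0.12, 0.16, 0.20, 0.24]
base = {"left_eye": (90.0, 120.0), "right_eye": (150.0, 120.0), "mouth": (120.0, 200.0)}
variants = [alter_interocular_distance(base, r) for r in ratios]
```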



2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Radhey Shyam ◽  
Yogendra Narain Singh

This paper presents a critical evaluation of multialgorithmic face recognition systems for human authentication in unconstrained environments. We propose different frameworks of a multialgorithmic face recognition system combining holistic and texture methods. Our aim is to combine uncorrelated face recognition methods that supplement each other, in order to produce a comprehensive representation of the biometric cue and achieve optimum recognition performance. The multialgorithmic frameworks are designed to combine different face recognition methods such as (i) Eigenfaces and local binary pattern (LBP), (ii) Fisherfaces and LBP, (iii) Eigenfaces and augmented local binary pattern (A-LBP), and (iv) Fisherfaces and A-LBP. The matching scores of these multialgorithmic frameworks are processed using different normalization techniques, and their performance is evaluated using different fusion strategies. The robustness of the proposed multialgorithmic frameworks is tested on publicly available databases, for example, AT&T (ORL) and Labeled Faces in the Wild (LFW). The experimental results show a significant improvement in the recognition accuracies of the proposed frameworks in comparison with their individual methods. In particular, the frameworks that combine standard face recognition methods with the devised A-LBP method improve significantly.
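The specific normalization techniques and fusion strategies are not listed in the abstract. The sketch below shows one common combination, min-max normalization followed by a weighted sum-rule fusion of two matchers' scores; the scores and weights are illustrative, and distance-based scores (e.g., Eigenface distances) would first have to be converted into similarities.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matching scores to the [0, 1] range."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

# Hypothetical similarity scores from two matchers (e.g., a holistic method
# and LBP) for the same set of probe-gallery comparisons.
holistic_scores = [0.41, 0.38, 0.55, 0.47]
lbp_scores = [0.62, 0.58, 0.71, 0.64]

# Simple sum-rule fusion of the normalized scores; equal weights are illustrative.
fused = 0.5 * min_max_normalize(holistic_scores) + 0.5 * min_max_normalize(lbp_scores)
best_match = int(np.argmax(fused))
```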



2019 ◽  
Vol 35 (05) ◽  
pp. 525-533
Author(s):  
Evrim Gülbetekin ◽  
Seda Bayraktar ◽  
Özlenen Özkan ◽  
Hilmi Uysal ◽  
Ömer Özkan

The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had facial surgery for a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed old, new, and implicit faces and asked whether the participants recognized them or not. In Experiment 3, they showed them original objects and morphed forms of those objects and asked them to rate the similarity between the two. In Experiment 4, they showed old, new, and implicit objects and asked whether the participants recognized them or not. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. Therefore, the authors concluded that the structure of the face might affect face processing.



2018 ◽  
Vol 9 (1) ◽  
pp. 60-77 ◽  
Author(s):  
Souhir Sghaier ◽  
Wajdi Farhat ◽  
Chokri Souani

This manuscript presents an improved system that can detect and recognize a person in 3D space automatically and without requiring any interaction from the person. The system is based not only on quantum computation and measurements to extract the feature vectors in the characterization phase but also on a learning algorithm (SVM) to classify and recognize the person. The research presents an improved technique for automatic 3D face recognition that uses anthropometric proportions and measurements to detect and extract the area of interest, which is unaffected by facial expression. The approach is able to handle incomplete and noisy images and to reject non-facial areas automatically. Moreover, it can deal with the presence of holes in the meshed and textured 3D image, and it is stable against small translations and rotations of the face. All experimental tests were done with two 3D face datasets, FRAV 3D and GAVAB. The test results of the proposed approach are promising, showing that it is competitive with similar approaches in terms of accuracy, robustness, and flexibility. It achieves a high recognition rate of 95.35% for identification of faces with neutral and non-neutral expressions, 98.36% for authentication with GAVAB, and 100% with some galleries of the FRAV 3D dataset.
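The characterization stage (feature extraction from the anthropometrically selected region) is specific to the paper and is not reproduced here. The sketch below shows only the SVM classification stage on precomputed per-scan feature vectors, using scikit-learn; the feature matrix and labels are random placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one feature vector per 3D face scan (random placeholder data here);
# y: the identity label of each scan.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 10, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# RBF-kernel SVM on standardized features; hyperparameters are illustrative.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("identification accuracy:", clf.score(X_test, y_test))
```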



Nowadays, one of the critical factors that affects the recognition performance of any face recognition system is partial occlusion. This paper addresses face recognition in the presence of sunglasses and scarf occlusion. The face recognition approach that we propose detects the face region that is not occluded and then uses this region for recognition. To segment the occluded and non-occluded parts, adaptive Fuzzy C-Means clustering is used, and for recognition the Minimum Cost Sub-Block Matching Distance (MCSBMD) is used. The input face image is divided into a number of sub-blocks, each block is checked for the presence of occlusion, and MWLBP features are extracted only from the non-occluded blocks and used for classification. Experimental results show that our method gives promising results compared with other conventional techniques.
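MWLBP features and the MCSBMD matching rule are the paper's own constructs and are not reproduced here. As an illustration of the block-wise idea only, the sketch below extracts a uniform-LBP histogram per sub-block with scikit-image and skips blocks flagged as occluded; the grid size and occlusion threshold are assumptions, and the occlusion mask itself (obtained in the paper with adaptive Fuzzy C-Means) is taken as given.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_histograms(image, occlusion_mask, grid=(4, 4), P=8, R=1):
    """Uniform-LBP histogram per sub-block, skipping mostly occluded blocks.

    `occlusion_mask` is a boolean array the same size as `image`
    (True = occluded pixel); how it is produced is outside this sketch.
    """
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                      # uniform LBP codes range over 0..P+1
    rows, cols = grid
    bh, bw = image.shape[0] // rows, image.shape[1] // cols
    features = []
    for r in range(rows):
        for c in range(cols):
            block = lbp[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            mask = occlusion_mask[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            if mask.mean() > 0.5:       # block mostly occluded: skip it
                continue
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            features.append(hist)
    return np.concatenate(features) if features else np.array([])
```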



Author(s):  
Ayan Seal ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri ◽  
Dipak Kumar Basu

Automatic face recognition has been studied comprehensively for more than four decades, since recognition of individuals has many applications, particularly in human-machine interaction and security. Although face recognition systems have achieved a significant level of maturity with some realistic achievements, face recognition still remains a challenging problem because of the large variation in face images. Face recognition techniques can generally be divided into three categories based on the face image acquisition methodology: methods that work on intensity images, those that deal with video sequences, and those that require other sensory data (such as 3D or infrared imagery). Researchers are also using thermal infrared images for face recognition, since thermal infrared images have some advantages over 2D intensity images. In this chapter, an overview of some of the well-known techniques of face recognition using thermal infrared faces is given, along with some of the drawbacks and benefits of each of these methods. The chapter covers some of the most recent algorithms developed for this purpose and tries to give a brief idea of the state of the art of face recognition technology. The authors propose one approach for evaluating the performance of face recognition algorithms using thermal infrared images. They also report the results of several classifiers on a benchmark dataset (Terravic Facial Infrared Database).
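The chapter's evaluation protocol is not described in the abstract. The sketch below shows one generic way to compare several off-the-shelf classifiers by cross-validated accuracy on precomputed feature vectors; the feature matrix is a random placeholder standing in for features extracted from thermal infrared face images such as those in the Terravic database.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB

# Placeholder features/labels standing in for vectors extracted from
# thermal infrared face images.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 15, size=300)

classifiers = {
    "1-NN": KNeighborsClassifier(n_neighbors=1),
    "linear SVM": LinearSVC(max_iter=5000),
    "naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```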



2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, the acquired face data is often seriously distorted: many collected face images are blurred or have missing regions. Traditional image inpainting was based on structure, whereas the currently popular methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous methods, the edge information of the damaged image is detected, and the edge fuzzy inpainting achieves a better visual match. As a result, face recognition performance is dramatically boosted.
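The paper's GAN-based inpainting network is not reproduced here. As a minimal, classical stand-in that illustrates two ingredients named in the abstract, a mask over the missing region and edge information from the surviving pixels, the sketch below uses OpenCV's Canny edge detector and Telea inpainting; the filenames are placeholders.

```python
import cv2

# "face.png" is a placeholder for a corrupted face image; "mask.png" marks
# the missing region (non-zero = missing pixel).
img = cv2.imread("face.png")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Edge map of the surviving pixels, analogous to the edge-detection step the
# paper describes (the generative model itself is not shown here).
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)

# Classical inpainting as a stand-in for the GAN-based restoration.
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("restored.png", restored)
```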



Perception ◽  
10.1068/p5637 ◽  
2007 ◽  
Vol 36 (9) ◽  
pp. 1334-1352 ◽  
Author(s):  
Simone K Favelle ◽  
Stephen Palmisano ◽  
Ryan T Maloney

Previous research into the effects of viewpoint change on face recognition has typically dealt with rotations around the head's vertical axis (yaw). Another common, although less studied, source of viewpoint variation in faces is rotation around the head's horizontal pitch axis (pitch). In the current study we used both a sequential matching task and an old/new recognition task to examine the effect of viewpoint change following rotation about both pitch and yaw axes on human face recognition. The results of both tasks showed that recognition performance was better for faces rotated about yaw compared to pitch. Further, recognition performance for faces rotated upwards on the pitch axis was better than for faces rotated downwards. Thus, equivalent angular rotations about pitch and yaw do not produce equivalent viewpoint-dependent declines in recognition performance.
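The distinction between pitch and yaw can be made concrete with rotation matrices. The sketch below applies the same angular rotation about the horizontal (pitch) and vertical (yaw) axes to hypothetical 3D landmark coordinates, showing that the two transformations displace facial features in very different ways; the landmark values and the 30-degree angle are illustrative only.

```python
import numpy as np

def pitch_matrix(theta):  # rotation about the horizontal (x) axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def yaw_matrix(theta):    # rotation about the vertical (y) axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s],
                     [0, 1, 0],
                     [-s, 0, c]])

# Hypothetical 3D landmark (x, y, z) coordinates of a face, arbitrary units.
landmarks = np.array([[-30.0, 40.0, 10.0],   # left eye
                      [ 30.0, 40.0, 10.0],   # right eye
                      [  0.0,  0.0, 30.0],   # nose tip
                      [  0.0, -40.0, 15.0]]) # mouth

angle = np.deg2rad(30)                        # same angular rotation about each axis
pitched = landmarks @ pitch_matrix(angle).T   # head tilted up/down
yawed = landmarks @ yaw_matrix(angle).T       # head turned left/right
```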




