The Bandwidth of Diagnostic Horizontal Structure for Face Identification

Perception, 2018, Vol. 47 (4), pp. 397-413
Author(s): Matthew V. Pachai, Patrick J. Bennett, Allison B. Sekuler

Horizontally oriented spatial frequency components are a diagnostic source of face identity information, and sensitivity to this information predicts upright identification accuracy and the magnitude of the face-inversion effect. However, the bandwidth at which this information is conveyed, and the extent to which human tuning matches this distribution of information, have yet to be characterized. We designed a 10-alternative forced-choice face identification task in which upright or inverted faces were filtered to retain horizontal or vertical structure. We systematically varied the bandwidth of these filters in 10° steps and replaced the orientation components removed from the target face with the corresponding components from the average of all possible faces. This manipulation created patterns that looked like faces but contained diagnostic information in orientation bands unknown to the observer on any given trial. Further, we quantified human performance relative to the actual information content of our face stimuli using an ideal observer with perfect knowledge of the diagnostic band. We found that the most diagnostic information for face identification is conveyed by a narrow band of orientations along the horizontal meridian, whereas human observers use information from a wide range of orientations.
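
Below is a minimal sketch of the kind of orientation filtering and ideal-observer comparison described above, assuming grayscale face images as NumPy arrays; the filter convention, band edges, and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def keep_orientation_band(img, center_deg, half_band_deg):
    """Retain Fourier components within +/- half_band_deg of center_deg.
    center_deg = 0 keeps horizontal image structure (whose energy lies
    along the vertical axis of the amplitude spectrum); 90 keeps vertical."""
    h, w = img.shape
    f = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    theta = np.degrees(np.arctan2(yy, xx)) % 180.0   # component orientation
    target = (90.0 - center_deg) % 180.0             # spectrum angle to keep
    d = np.abs(theta - target)
    d = np.minimum(d, 180.0 - d)                     # orientation is circular
    mask = d <= half_band_deg
    mask[h // 2, w // 2] = True                      # always keep DC
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def ideal_observer_choice(stimulus, templates):
    """Pick the identity whose template is closest in L2 distance;
    this is optimal for identification in additive white Gaussian noise."""
    return int(np.argmin([np.sum((stimulus - t) ** 2) for t in templates]))

# A hybrid stimulus in the spirit of the paper: diagnostic band from the
# target face, remaining orientations from the average face.
# hybrid = keep_orientation_band(target, 0, bw) + \
#          (average - keep_orientation_band(average, 0, bw))
```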

2020, Vol. 2020, pp. 1-16
Author(s): Zhi Zhang, Xin Xu, Jiuzhen Liang, Bingyu Sun

Face identification aims to assign a label to an unknown face with respect to a training set. Unconstrained face identification is a challenging problem because of possible variations in face pose, illumination, occlusion, and facial expression. This paper presents an unconstrained face identification method based on face frontalization and learning-based data representation. First, frontal views of unconstrained face images are generated automatically using a single, unchanged 3D face model. Then, the face-relevant regions of the frontal views are cropped to segment faces from the backgrounds. Finally, to enhance the discriminative capability of the coding vectors, a support vector-guided dictionary learning (SVGDL) model is applied to adaptively assign different weights to different pairs of coding vectors. The performance of the proposed method, FSVGDL (frontalization-based support vector guided dictionary learning), is evaluated on the Labeled Faces in the Wild (LFW) database. After decision fusion, identification accuracy reaches 97.17% using 7 images per individual for training and 3 images per individual for testing, with 158 classes in total.
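
As a simplified stand-in for the coding-plus-classification stage, assuming faces are already frontalized, cropped, and vectorized, the sketch below uses plain dictionary learning followed by a linear SVM on the coding vectors. The full SVGDL model additionally weights pairs of coding vectors using the SVM support vectors, which is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

def fit_coding_classifier(X_train, y_train, n_atoms=256):
    """X_train: (n_samples, n_pixels) frontalized, cropped face vectors."""
    dico = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=20,
                              transform_algorithm='lasso_lars')
    codes = dico.fit_transform(X_train)      # sparse coding vectors
    clf = LinearSVC().fit(codes, y_train)    # identity classifier
    return dico, clf

def predict_identity(dico, clf, X_test):
    return clf.predict(dico.transform(X_test))
```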


2019
Author(s): Yasmin Allen-Davidian, Manuela Russo, Naohide Yamamoto, Jordy Kaufman, Alan J. Pegna, et al.

Face Inversion Effects (FIEs) – differences in responses to upside-down faces compared with upright faces – occur in both behavioural and electrophysiological measures when people view face stimuli. In EEG, the inversion of a face is often reported to evoke an enhanced amplitude and delayed latency of the N170 event-related potential. This response has historically been attributed to the indexing of specialised face processing mechanisms within the brain. However, inspection of the literature reveals that while the N170 is consistently delayed for photographed, schematic, Mooney, and line-drawn face stimuli, only naturally photographed faces show an enhanced amplitude upon inversion. This raises the possibility that the increased N170 amplitudes to inverted faces have origins other than the inversion of the face's structural components. In line with previous research establishing the N170 as a prediction error signal, we hypothesise that the unique N170 amplitude response to inverted photographed faces stems from multiple expectation violations, over and above structural inversion. For instance, rotating an image of a face upside down not only violates the expectation that faces appear upright, but also lifelong priors that illumination comes from above and gravity pulls from below. To test this hypothesis, we recorded EEG whilst participants viewed face stimuli (upright versus inverted), where the faces were illuminated from above versus below, and where the models were photographed upright versus hanging upside down. The N170 amplitudes were modulated by a complex interaction between orientation, lighting, and gravity factors, being largest when faces violated all three expectations and smallest when all three factors concurred with expectations. These results confirm our hypothesis that FIEs on N170 amplitudes are driven by violations of the viewer's expectations across several parameters that characterise faces, rather than by a disruption in the configurational disposition of their features.


2021, Vol. 33 (2), pp. 303-314
Author(s): Yasmin Allen-Davidian, Manuela Russo, Naohide Yamamoto, Jordy Kaufman, Alan J. Pegna, et al.

Face inversion effects occur for both behavioral and electrophysiological responses when people view faces. In EEG, inverted faces are often reported to evoke an enhanced amplitude and delayed latency of the N170 ERP. This response has been attributed to the indexing of specialized face processing mechanisms within the brain. However, inspection of the literature revealed that, although the N170 is consistently delayed to a variety of face representations, only photographed faces evoke enhanced N170 amplitudes upon inversion. This suggests that the increased N170 amplitudes to inverted faces may have origins other than the inversion of the face's structure. We hypothesize that the unique N170 amplitude response to inverted photographed faces stems from multiple expectation violations, over and above structural inversion. For instance, rotating an image of a face upside down not only violates the expectation that faces appear upright but also lifelong priors about illumination and gravity. We recorded EEG while participants viewed face stimuli (upright vs. inverted), where the faces were illuminated from above versus below, and where the models were photographed upright versus hanging upside down. The N170 amplitudes were found to be modulated by a complex interaction between orientation, lighting, and gravity factors, with the amplitudes largest when faces consistently violated all three expectations. These results confirm our hypothesis that face inversion effects on N170 amplitudes are driven by a violation of the viewer's expectations across several parameters that characterize faces, rather than a disruption in the configurational disposition of their features.
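
As a concrete illustration of the dependent measure, here is a minimal sketch of extracting per-trial N170 mean amplitudes for the 2 x 2 x 2 design; the channel, time window, and array layout are assumptions about a typical pipeline, not the authors' analysis code.

```python
import numpy as np

def n170_mean_amplitude(epochs, times, ch_idx, tmin=0.13, tmax=0.20):
    """epochs: (n_trials, n_channels, n_times) array of baseline-corrected
    EEG; times: (n_times,) in seconds. Returns the per-trial mean amplitude
    in the N170 window at one occipito-temporal channel (e.g., P8)."""
    win = (times >= tmin) & (times <= tmax)
    return epochs[:, ch_idx, :][:, win].mean(axis=1)

# Cell means per (orientation, lighting, gravity) condition would then
# feed a repeated-measures ANOVA to test the three-way interaction.
```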


2021, Vol. 8 (11)
Author(s): Yuri Kawaguchi, Koyo Nakamura, Masaki Tomonaga, Ikuma Adachi

Impaired face recognition for certain face categories, such as faces of other species or of other age classes, is known in both humans and non-human primates. A previous study found that chimpanzees have more difficulty differentiating infant faces than adult faces. Infant chimpanzee faces differ from adult faces in both shape and colour, and colour is an especially salient cue for chimpanzees. Impaired differentiation of infant faces may therefore be due to their specific coloration. In the present study, we investigated which feature of infant faces has the greater effect on face identification difficulty. Adult chimpanzees were tested in a matching-to-sample task with four types of face stimuli in which shape and colour were independently manipulated to be either infant-like or adult-like. Chimpanzees' discrimination performance decreased whenever they matched faces with infant coloration, regardless of shape. This study is the first to demonstrate the impairing effect of infantile coloration on face recognition in non-human primates, suggesting that the face recognition strategies of humans and chimpanzees overlap, as both species show proficient face recognition for certain face colours.
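
The shape/colour factorization could be emulated with a generic colour-statistics transfer that recolours one face with another's per-channel statistics while leaving shape untouched; this is a hypothetical illustration, not the stimulus-generation procedure used in the study.

```python
import numpy as np

def transfer_colour_stats(target, donor):
    """target, donor: float RGB images (H, W, 3) scaled to [0, 1].
    Gives `target` the per-channel mean/std of `donor`, changing
    coloration (e.g., adult shape with infant colour) but not shape."""
    out = np.empty_like(target)
    for c in range(3):
        t, d = target[..., c], donor[..., c]
        out[..., c] = (t - t.mean()) / (t.std() + 1e-8) * d.std() + d.mean()
    return np.clip(out, 0.0, 1.0)
```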


2010, Vol. 69 (3), pp. 161-167
Author(s): Jisien Yang, Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration. It remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six ratios, ranging from 4% to 24%. Participants judged whether a pair of faces was entirely identical or different. The paired faces were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that, even though the FIE has frequently been adopted as an index of the mechanisms underlying face processing, it does not emerge robustly with any configural alteration but depends on the ratio of the alteration.
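
A parametric configural alteration of the kind described can be sketched as a landmark displacement scaled by a ratio of the inter-eye distance; the landmark names and the final warping step are assumptions for illustration.

```python
import numpy as np

def alter_eye_distance(landmarks, ratio):
    """landmarks: dict of (x, y) points with 'left_eye' and 'right_eye'.
    Moves each eye outward along the inter-eye axis so that the inter-eye
    distance grows by `ratio` (0.04-0.24 for the 4%-24% range above)."""
    le = np.asarray(landmarks['left_eye'], dtype=float)
    re = np.asarray(landmarks['right_eye'], dtype=float)
    gap = re - le
    axis = gap / np.linalg.norm(gap)
    shift = np.linalg.norm(gap) * ratio / 2.0
    out = dict(landmarks)
    out['left_eye'] = tuple(le - axis * shift)
    out['right_eye'] = tuple(re + axis * shift)
    return out  # the face image is then warped to these new positions
```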


2019, Vol. 4 (91), pp. 21-29
Author(s): Yaroslav Trofimenko, Lyudmila Vinogradova, Evgeniy Ershov

2021, Article 003329412110184
Author(s): Paola Surcinelli, Federica Andrei, Ornella Montebarocci, Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and studies that used profile views relied on between-subjects designs or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and profile views using a within-subjects experimental design.
Method: The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were recorded.
Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than the same emotions in profile, while no differences were found for the other emotions. Viewing times were also longer when faces expressing fear and anger were presented in profile. Thus, in the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
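
A minimal sketch of the within-subjects comparison: per-emotion accuracy for frontal versus profile views compared with a paired t-test. The column names describe one plausible trial-level data layout, not the authors' actual dataset.

```python
import pandas as pd
from scipy.stats import ttest_rel

def compare_views(df):
    """df columns: 'subject', 'emotion', 'view' ('frontal' or 'profile'),
    'correct' (0/1 per trial). Returns {emotion: (t, p)} across subjects."""
    acc = (df.groupby(['subject', 'emotion', 'view'])['correct']
             .mean()
             .unstack('view'))                  # rows: (subject, emotion)
    results = {}
    for emotion, sub in acc.groupby(level='emotion'):
        t, p = ttest_rel(sub['frontal'], sub['profile'])
        results[emotion] = (t, p)
    return results
```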


2021, Vol. 11 (5), Article 2074
Author(s): Bohan Yoon, Hyeonji So, Jongtae Rhee

Recent improvements in the performance of human face recognition models have led to the development of related products and services. However, research in the similar field of animal face identification has remained relatively limited, owing to the greater diversity and complexity of shapes and the lack of relevant data for animal faces such as dogs'. In face identification models trained with triplet loss, the embedding vector is typically normalized by an L2-normalization (L2-norm) layer so that learning is based on cosine similarity. As a result, object identification depends only on the angle between embeddings, and the distribution of the embedding vectors is limited to the surface of a sphere with a radius of 1. This study proposes removing the L2-norm layer and training the model with triplet loss to utilize the wider vector space beyond that spherical surface, introducing a novel loss function and a two-stage learning method for this purpose. The proposed method classifies the embedding vectors within a space rather than on a surface, and the model's performance is also increased. The accuracy, one-shot identification performance, and distribution of the embedding vectors are compared between the existing and proposed learning methods for verification, which was conducted using an open set. The resulting accuracy of 97.33% for the proposed learning method is approximately 4% greater than that of the existing learning method.
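
The core manipulation can be sketched as follows: a toy embedding network trained with the standard Euclidean triplet loss, with the usual L2-normalization layer made optional and switched off. The architecture and margin are placeholders, not the paper's model or its novel loss function.

```python
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, dim=128, normalize=False):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.normalize = normalize  # True reproduces the usual L2-norm setup

    def forward(self, x):
        z = self.backbone(x)
        return nn.functional.normalize(z, dim=1) if self.normalize else z

model = Embedder(normalize=False)                 # L2-norm layer removed
loss_fn = nn.TripletMarginLoss(margin=1.0, p=2)   # Euclidean triplet loss
anchor, pos, neg = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = loss_fn(model(anchor), model(pos), model(neg))
# Without normalization, embeddings can spread through the whole space
# rather than being confined to the unit sphere.
```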


2019, Vol. 2019, pp. 1-21
Author(s): Naeem Ratyal, Imtiaz Ahmad Taj, Muhammad Sajid, Anzar Mahmood, Sohail Razzaq, et al.

Face recognition aims to establish the identity of a person based on facial characteristics and is a challenging problem due to the complex nature of the facial manifold. A wide range of face recognition applications are based on classification techniques in which a class label is assigned to a test image of unknown class. In this paper, a pose-invariant, deeply learned, multiview 3D face recognition approach is proposed that addresses two problems: face alignment and face recognition in both identification and verification setups. The proposed alignment algorithm is capable of handling frontal as well as profile face images. It employs a nose tip heuristic-based pose learning approach to estimate the acquisition pose of the face, followed by coarse-to-fine nose tip alignment using L2-norm minimization. The whole face is then aligned through a transformation using knowledge learned from the nose tip alignment. Inspired by the intrinsic facial symmetry of the Left Half Face (LHF) and Right Half Face (RHF), Deeply learned (d) Multi-View Average Half Face (d-MVAHF) features are employed for face identification using a deep convolutional neural network (dCNN). For face verification, a d-MVAHF-Support Vector Machine (d-MVAHF-SVM) approach is employed. The performance of the proposed methodology is demonstrated through extensive experiments on four databases: GavabDB, Bosphorus, UMB-DB, and FRGC v2.0. The results show that the proposed approach yields superior performance compared with existing state-of-the-art methods.
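
The "average half face" idea can be illustrated with a short sketch that mirrors one half of an aligned face onto the other and averages them; this simplifies the paper's d-MVAHF features and assumes the nose tip alignment has already been performed.

```python
import numpy as np

def average_half_face(aligned, nose_col=None):
    """aligned: (H, W) grayscale aligned face; nose_col: column index of
    the nose tip (defaults to the image midline). Returns the average of
    the left half and the mirrored right half, exploiting facial symmetry."""
    h, w = aligned.shape
    c = w // 2 if nose_col is None else nose_col
    half = min(c, w - c)                       # widest symmetric strip
    left = aligned[:, c - half:c]
    right_mirrored = aligned[:, c:c + half][:, ::-1]
    return (left + right_mirrored) / 2.0
```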

