Face Similarity
Recently Published Documents


TOTAL DOCUMENTS: 36 (FIVE YEARS: 9)
H-INDEX: 8 (FIVE YEARS: 1)

2021, Vol. 21 (9), pp. 2149
Author(s): Rosyl S. Somai, Peter J.B. Hancock

2021
Author(s): Isabelle Bülthoff, Mintao Zhao

Many face recognition studies use average faces as a theoretical concept (e.g., a face norm) and/or a research tool (e.g., for face morphing); nonetheless, how the averaging process (using an increasing number of faces to create an average face) changes the resulting averaged faces, and how our visual system perceives these faces, remains unclear. Here we aimed to address these questions by combining 3D face averaging, eye-movement tracking, and the computation of image-based face similarity. Our results show that average faces created with an increasing number of “parent” faces become increasingly similar to each other. Participants’ ability to discriminate between two average faces dropped from near-ceiling level (when each of the two compared faces was created from two parent faces) to chance level (when the faces to compare were each created from 80 faces). The non-linear relation between face similarity and participants’ face discrimination performance was captured nearly perfectly by an exponential function. This finding suggests that the relationship between physical and perceived face similarity follows a Fechner law. Eye tracking revealed that when the comparison task became more challenging, participants made more fixations on the faces. Nonetheless, the distribution of fixations across core facial features (eyes, nose, mouth, and the center area of a face) remained unchanged, irrespective of task difficulty. These results not only provide a long-needed benchmark for the theoretical characterization and empirical use of average faces, but also set new constraints on the understanding of how faces are encoded, stored, categorized, and identified within a modernized face-space metaphor.
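The exponential link between physical and perceived similarity reported above can be illustrated with a short curve fit. The sketch below is not the authors' analysis; the dissimilarity and accuracy values are hypothetical placeholders, and only the form of the fit (accuracy rising exponentially from chance toward ceiling) follows the abstract.

```python
# Minimal sketch (hypothetical data, not the study's): fit an exponential function
# linking image-based dissimilarity between two average faces to discrimination accuracy.
import numpy as np
from scipy.optimize import curve_fit

# Placeholder physical dissimilarity between pairs of average faces (larger = more different)
dissimilarity = np.array([0.80, 0.55, 0.35, 0.20, 0.10, 0.05])
# Placeholder discrimination accuracy, dropping from near ceiling toward chance (0.5)
accuracy = np.array([0.97, 0.93, 0.84, 0.72, 0.58, 0.52])

def exp_link(d, a, k):
    """Accuracy rises exponentially from chance (0.5) with physical dissimilarity d."""
    return 0.5 + a * (1.0 - np.exp(-k * d))

params, _ = curve_fit(exp_link, dissimilarity, accuracy, p0=[0.5, 3.0])
print("fitted a = %.2f, k = %.2f" % tuple(params))
```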


Author(s): Dan Zhang, Zhongke Wu, Xingce Wang, Chenlei Lv, Na Liu

2020, Vol. 37 (6)
Author(s): Sarah Bernadette Hackett, David Keatley, Brendan Chapman

Author(s): Maria Tsantani, Nikolaus Kriegeskorte, Katherine Storrs, Adrian Lloyd Williams, Carolyn McGettigan, ...

Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (a deep neural network trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
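A bare-bones illustration of the representational similarity analysis logic used in this study: build a representational dissimilarity matrix (RDM) from per-identity response patterns and correlate it with a model RDM. The data below are random placeholders, not the study's fMRI measurements or candidate models.

```python
# Minimal RSA sketch (placeholder data): correlate a neural RDM with a model RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_identities, n_voxels = 12, 100
patterns = rng.normal(size=(n_identities, n_voxels))   # placeholder fMRI patterns per identity
model_features = rng.normal(size=(n_identities, 5))    # placeholder model (e.g., rated traits)

neural_rdm = pdist(patterns, metric="correlation")     # 1 - Pearson r between identity patterns
model_rdm = pdist(model_features, metric="euclidean")  # model-predicted identity differences

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3f}")
```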


2019
Author(s): Catrina Hacker, Irving Biederman

Tests of face recognition implicitly assume it to be an undifferentiated ability. However, several possible independent components could comprise face recognition proficiency, such as those for the perceptual discrimination of faces, face memory, and the ability to generalize across viewpoints. We assessed two possible components of face recognition ability: the proficiency for the perceptual discrimination of faces using a quantitative measure of face similarity and the proficiency for remembering those faces using a minimal delayed match-to-sample task. We document a strong cost of delay on error rates and mean correct reaction times in face matching. We further demonstrate that the slope of performance over increased perceptual difficulty is parallel for each of three levels of delay. These results provide evidence, by additive factors logic, that face memory and face perception are independent processes within subjects. Finally, we document an independence of the proficiencies for face memory and face perception across subjects.
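The additive-factors argument can be made concrete with a small sketch: if face memory (delay) and face perception (perceptual difficulty) are independent stages, reaction-time slopes over difficulty should be roughly parallel across delay levels. The RT values below are invented for illustration only.

```python
# Minimal sketch of an additive-factors check with hypothetical reaction times.
import numpy as np

difficulty = np.array([1, 2, 3, 4])             # increasing perceptual difficulty
rt_by_delay = {                                  # placeholder mean correct RTs (ms)
    "no delay": np.array([620, 660, 705, 745]),
    "short":    np.array([680, 722, 765, 801]),
    "long":     np.array([730, 771, 812, 855]),
}

for label, rts in rt_by_delay.items():
    slope, intercept = np.polyfit(difficulty, rts, 1)
    print(f"{label:>8}: slope = {slope:.1f} ms per difficulty step")
# Roughly equal slopes (parallel lines) are consistent with additive, independent stages.
```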


2019
Author(s): Zarrar Shehzad, Eunjoo Byeon, Gregory McCarthy

We are highly accurate at recognizing familiar faces even with large variation in visual presentation due to pose, lighting, hairstyle, etc. The neural basis of such within-person face variation has been largely unexplored. Building on prior behavioral work, we hypothesized that learning a person’s average face helps link the different instances of that person’s face into a coherent identity within face-selective regions of ventral occipitotemporal cortex (VOTC). To test this hypothesis, we measured brain activity using fMRI for eight well-known celebrities, with 18 naturalistic photos per identity. Each photo was mapped into a face space using a neural network in which the Euclidean distance between photos corresponded to face similarity. We confirmed in a behavioral study that photos closer to a person’s average face in this face space were judged to look more like that person. fMRI results revealed hemispheric differences in identity processing. The right fusiform face area (FFA) encoded face-likeness, with the brain signal increasing the closer a photo was to the average of all faces. This suggests that the right FFA pattern-matches to an average-face template. In contrast, the left FFA and left anterior fusiform gyrus (aFus) encoded person-likeness: the brain signal increased the further a photo was from the person’s average face, weighted by the features most relevant for face identification. This suggests that the left FFA and aFus process an identity error signal. Our results encourage a new consideration of the left fusiform in face processing, specifically for within-person processing of face identity.
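The face-space logic described above (the distance of each photo to a person's average face and to the average of all faces) can be sketched with hypothetical embeddings. The eight identities and 18 photos per identity follow the abstract, but the vectors below are random stand-ins rather than real network outputs.

```python
# Minimal face-space sketch with placeholder embeddings: distance of each photo to the
# person's own average face vs. to the grand average of all faces.
import numpy as np

rng = np.random.default_rng(1)
n_identities, n_photos, dim = 8, 18, 128
# Placeholder embeddings: a shared per-identity component plus photo-specific variation
embeddings = rng.normal(size=(n_identities, n_photos, dim)) + \
             rng.normal(size=(n_identities, 1, dim)) * 2.0

person_average = embeddings.mean(axis=1, keepdims=True)   # one average face per person
grand_average = embeddings.mean(axis=(0, 1))              # average of all faces

dist_to_person_avg = np.linalg.norm(embeddings - person_average, axis=-1)
dist_to_grand_avg = np.linalg.norm(embeddings - grand_average, axis=-1)

print("mean distance to own average:", dist_to_person_avg.mean().round(2))
print("mean distance to grand average:", dist_to_grand_avg.mean().round(2))
```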


2019
Author(s): Catrina Hacker, Irving Biederman, Tianyi Zhu, Miles Nelken, Emily X. Meschke

Attempting to match unfamiliar faces across moderate differences in orientation in depth is surprisingly difficult, and no general account of these costs has been offered. We assessed the effects of orientation disparity in a match-to-sample paradigm with a triangular display of three faces. The two lower test faces, a matching face and a foil, were always at the same orientation and differed by 0° to 20° from the sample on top. The similarity of the images was scaled by a model based on simple-cell tuning that correlates almost perfectly with psychophysical similarity. Two measures of face similarity accounted for matching performance: (a) the decrease in similarity between the images of the matching and sample faces produced by increases in their orientation disparity, and (b) the similarity between the matching face and the selection of a particular foil. The two images of the same face at a 20° difference in orientation revealed a previously unappreciated and marked increase in dissimilarity, so large that it could equal the image dissimilarity between two faces at the same orientation that differ in race, sex, and expression. The 20° orientation disparity was thus sufficient to yield a sizeable 301 ms increase in reaction time.
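The image-similarity model referenced above is based on simple-cell (Gabor-like) tuning; a greatly simplified stand-in is sketched below. The kernel parameters and the random "face" images are illustrative assumptions, not the authors' scaled model.

```python
# Simplified Gabor-jet-style image similarity sketch (placeholder images, not the
# authors' model): filter two images with a small Gabor bank and correlate responses.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, size=21, sigma=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_responses(img):
    bank = [gabor_kernel(f, t)
            for f in (0.1, 0.2)
            for t in np.linspace(0, np.pi, 4, endpoint=False)]
    return np.concatenate([np.abs(convolve(img, k)).ravel() for k in bank])

rng = np.random.default_rng(2)
face_a, face_b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
r = np.corrcoef(gabor_responses(face_a), gabor_responses(face_b))[0, 1]
print(f"Gabor-jet similarity (correlation): {r:.2f}")
```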


2018
Author(s): Catrina Hacker, Irving Biederman

Stretching (or compressing) a face by a factor of two has no effect on its recognition, as assessed by the speed and accuracy of judging whether the face is that of a celebrity (Hole, 2002). This invariance has stood as a challenge to all contemporary accounts of the relation between neurocomputational measures of face similarity and face recognition. We extend the documentation of strong invariance over compression to a factor of four and show that the deformation so produced is sufficiently great that the resultant image is as similar to markedly different faces (even those differing in race, sex, and expression) as it is to the original face. The invariance to face compression is readily witnessed with less familiar celebrities and unfamiliar faces, ruling out a role for exposure to transformed images of particular faces through depth rotation or viewing pictures at varied angles. We additionally discount the possibility that faces are “un-stretched” by warping them to an average face. Instead, we suggest that the percept of an elongated face provides a signal for the shrink-wrapping of receptive fields to conform to an attended object, a phenomenon observed in single-unit activity in the macaque by Moran and Desimone (1985), which may serve, more generally, as the underlying neural mechanism for object-based attention.
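To see why a fourfold compression is such a severe image-level deformation, one can compress an image and compare its pixel-level similarity to the original and to a different face. The sketch below uses random arrays in place of face photographs and plain pixel correlation in place of the similarity model mentioned above; it is an illustrative assumption, not the study's measure.

```python
# Minimal sketch: compress an image 4x vertically, re-expand for comparison, and compare
# pixel correlation with the original and with a different (placeholder) face.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(3)
face_a, face_b = rng.normal(size=(128, 128)), rng.normal(size=(128, 128))

# Compress face_a vertically by a factor of four, then stretch back so shapes match.
compressed = zoom(zoom(face_a, (0.25, 1.0)), (4.0, 1.0))

def pixel_similarity(x, y):
    return np.corrcoef(x.ravel(), y.ravel())[0, 1]

print("compressed vs. original:", round(pixel_similarity(compressed, face_a), 2))
print("original vs. different face:", round(pixel_similarity(face_a, face_b), 2))
```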

