Familiar and unfamiliar face recognition in crested macaques (Macaca nigra)

2015
Vol 2 (5)
pp. 150109
Author(s):  
Jérôme Micheletta ◽  
Jamie Whitehouse ◽  
Lisa A. Parr ◽  
Paul Marshman ◽  
Antje Engelhardt ◽  
...  

Many species use facial features to identify conspecifics, which is necessary to navigate a complex social environment. The fundamental mechanisms underlying face processing are starting to be well understood in a variety of primate species. However, most studies focus on a limited subset of species tested with unfamiliar faces. As well as limiting our understanding of how widely distributed across species these skills are, this also limits our understanding of how primates process faces of individuals they know, and whether social factors (e.g. dominance and social bonds) influence how readily they recognize others. In this study, socially housed crested macaques voluntarily participated in a series of computerized matching-to-sample tasks investigating their ability to discriminate (i) unfamiliar individuals and (ii) members of their own social group. The macaques performed above chance on all tasks. Familiar faces were not easier to discriminate than unfamiliar faces. However, the subjects were better at discriminating higher-ranking familiar individuals, but not unfamiliar ones. This suggests that our subjects applied their knowledge of their dominance hierarchies to the pictorial representation of their group mates. Faces of high-ranking individuals garner more social attention, and therefore might be more deeply encoded than those of other individuals. Our results extend the study of face recognition to a novel species, and consequently provide valuable data for future comparative studies.

2018
Author(s):  
Naphtali Abudarham ◽  
Lior Shkiller ◽  
Galit Yovel

Face recognition is a computationally challenging task that humans perform effortlessly. Nonetheless, this remarkable ability is limited to familiar faces and does not generalize to unfamiliar faces. To account for humans’ superior ability to recognize familiar faces, current theories suggest that familiar and unfamiliar faces have different perceptual representations. In the current study, we applied a reverse engineering approach to reveal which facial features are critical for familiar face recognition. In contrast to current views, we discovered that the same subset of features that is used for matching unfamiliar faces is also used for matching and recognizing familiar faces. We further show that these features are also used by a deep neural network face recognition algorithm. We therefore propose a new framework that assumes a similar perceptual representation for all faces and integrates cognition and perception to account for humans’ superior recognition of familiar faces.
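The reverse engineering logic described above can be illustrated with a small sketch: compare the embeddings a pretrained face recognition network produces for an original image and for images in which specific facial features have been altered. This is only a minimal illustration of the general idea; the face_recognition library, the file names and the notion of "feature-altered" images are assumptions, not the authors' pipeline.

```python
# Sketch of probing which facial-feature changes a deep face recognition network
# is sensitive to: compare embedding distances between an original image and
# images with specific features altered. The face_recognition library and the
# file names are assumptions, not the authors' pipeline.
import numpy as np
import face_recognition

def embedding(path):
    """Return the 128-d face encoding of the first face found in the image, or None."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

original = embedding("face_original.jpg")  # placeholder file names
for variant in ["face_eyebrows_changed.jpg", "face_mouth_changed.jpg"]:
    altered = embedding(variant)
    if original is not None and altered is not None:
        # Larger distances suggest the network treats the altered feature as critical.
        print(variant, np.linalg.norm(original - altered))
```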


2018
Vol 5 (5)
pp. 170634
Author(s):  
Angus F. Chapman ◽  
Hannah Hawkins-Elder ◽  
Tirta Susilo

Recent theories suggest that familiar faces have a robust representation in memory because they have been encountered over a wide variety of contexts and image changes (e.g. lighting, viewpoint and expression). By contrast, unfamiliar faces are encountered only once, and so they do not benefit from such richness of experience and are represented based on image-specific details. In this registered report, we used a repeat detection task to test whether familiar faces are recognized better than unfamiliar faces across image changes. Participants viewed a stream of more than 1000 celebrity face images for 0.5 s each, any of which might be repeated at a later point and had to be detected. Some participants saw the same image on repeats, while others saw a different image of the same face. A post-experimental familiarity check allowed us to determine which celebrities were and were not familiar to each participant. We had three predictions: (i) detection would be better for familiar than unfamiliar faces, (ii) detection would be better across same rather than different images, and (iii) detection of familiar faces would be comparable across same and different images, but detection of unfamiliar faces would be poorer across different images. We obtained support for the first two predictions but not the last. Instead, we found that repeat detection of faces, regardless of familiarity, was poorer across different images. Our study suggests that the robustness of familiar face recognition may have limits, and that under some conditions, familiar face recognition can be just as influenced by image changes as unfamiliar face recognition.


2020
Author(s):  
Ashok Jansari ◽  
E. Green ◽  
Francesco Innocenti ◽  
Diego Nardi ◽  
Elena Belanova ◽  
...  

Unfamiliar face identification ability varies widely in the population. Those at the extreme top and bottom ends of the continuum have been labelled super-recognisers and prosopagnosics, respectively. Here we describe the development of two new tests, the Goldsmiths Unfamiliar Face Memory Test (GUFMT) and the Before They Were Adult Test (BTWA), designed to measure different aspects of face identity ability across the spectrum. The GUFMT is a test of face memory; the BTWA is a test of simultaneous adult-to-child face matching. Their designs draw on theories suggesting face identification is achieved by the recognition of facial features, the consistency across time of configurations between those features, and holistic processing of faces as a Gestalt. In four phases, participants (n = 16737), recruited using different methods, allowed evaluations to drive GUFMT development, the creation of likely population norms, and correlations with established face recognition tests. Recommendations for criteria for classification of super-recognition ability are also made.


2006
Vol 65 (4)
pp. 245-252
Author(s):  
Claus-Christian Carbon ◽  
Helmut Leder

When faces are viewed from different angles, the appearance of facial features undergoes dramatic changes. We investigated two types of 3D-head models in frontal and three-quarter views, varying either in componential information, such as different eyes, mouths and noses, or in relational information. Variations of the latter can only be investigated using 3D-head versions. Experiment 1 revealed high transfer costs in recognition performance when views changed, and these costs were similar for componentially and relationally altered faces. In Experiment 2, whole-to-part superiority was investigated by presenting isolated parts of critical features in addition to the whole face. Recognition of the whole face was only superior when views were identical. The results support the hypothesis of picture-based and view-dependent processing. Thus, there seems to be no efficient view-independent representation, at least for relatively unfamiliar faces.


Perception
2021
pp. 030100662110140
Author(s):  
Xingchen Zhou ◽  
A. M. Burton ◽  
Rob Jenkins

One of the best-known phenomena in face recognition is the other-race effect, the observation that own-race faces are better remembered than other-race faces. However, previous studies have not put the magnitude of the other-race effect in the context of other influences on face recognition. Here, we compared the effects of (a) a race manipulation (own-race/other-race face) and (b) a familiarity manipulation (familiar/unfamiliar face) in a 2 × 2 factorial design. We found that the familiarity effect was several times larger than the race effect in all performance measures. However, participants expected race to have a larger effect on others than it actually did. Face recognition accuracy depends much more on whether you know the person’s face than on whether you share the same race.
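As a minimal sketch of how the two manipulations in a 2 × 2 design can be compared, the snippet below computes the familiarity and race main effects from four condition means. The accuracy values are placeholders for illustration only, not the study's data.

```python
# Sketch of comparing the two main effects in a 2x2 (familiarity x race) design
# from condition means. The accuracy values below are hypothetical placeholders,
# not results from the study.
def main_effects(acc):
    """acc[(familiarity, race)] -> mean accuracy; returns the two main effects."""
    familiarity_effect = (
        (acc[("familiar", "own")] + acc[("familiar", "other")]) / 2
        - (acc[("unfamiliar", "own")] + acc[("unfamiliar", "other")]) / 2)
    race_effect = (
        (acc[("familiar", "own")] + acc[("unfamiliar", "own")]) / 2
        - (acc[("familiar", "other")] + acc[("unfamiliar", "other")]) / 2)
    return familiarity_effect, race_effect

acc = {("familiar", "own"): 0.92, ("familiar", "other"): 0.88,      # hypothetical means
       ("unfamiliar", "own"): 0.70, ("unfamiliar", "other"): 0.66}
fam, race = main_effects(acc)
print(f"familiarity effect = {fam:.2f}, race effect = {race:.2f}, ratio = {fam / race:.1f}")
```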


2021
pp. 174702182110097
Author(s):  
Niamh Hunnisett ◽  
Simone Favelle

Unfamiliar face identification is concerningly error prone, especially across changes in viewing conditions. Within-person variability has been shown to improve matching performance for unfamiliar faces, but this has only been demonstrated using images of a front view. In this study, we test whether the advantage of within-person variability from front views extends to matching to target images of a face rotated in view. Participants completed either a simultaneous matching task (Experiment 1) or a sequential matching task (Experiment 2) in which they were tested on their ability to match the identity of a face shown in an array of either one or three ambient front-view images, with a target image shown in front, three-quarter, or profile view. While the effect was stronger in Experiment 2, we found a consistent pattern in match trials across both experiments in that there was a multiple image matching benefit for front, three-quarter, and profile-view targets. We found multiple image effects for match trials only, indicating that providing observers with multiple ambient images confers an advantage for recognising different images of the same identity but not for discriminating between images of different identities. Signal detection measures also indicate a multiple image advantage despite a more liberal response bias for multiple image trials. Our results show that within-person variability information for unfamiliar faces can be generalised across views and can provide insights into the initial processes involved in the representation of familiar faces.
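The signal detection measures mentioned above are typically derived from hit and false-alarm rates. The sketch below computes d' (sensitivity) and the criterion (response bias) with a common log-linear correction; the correction choice and the example counts are illustrative, not taken from the study.

```python
# Sketch of standard signal detection measures (d-prime and criterion) of the kind
# referred to above. The log-linear correction and the example counts are
# illustrative, not data from the study.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)  # negative values indicate a liberal bias
    return d_prime, criterion

# Hypothetical counts for one condition (e.g. multiple-image trials):
print(sdt_measures(hits=42, misses=8, false_alarms=15, correct_rejections=35))
```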


Author(s):  
CHING-WEN CHEN ◽  
CHUNG-LIN HUANG

This paper presents a face recognition system that can effectively identify unknown individuals using front-view facial features. In front-view facial feature extraction, the contours of the eyes and mouth are captured with a deformable template model because their shapes can be described analytically. However, the shapes of the eyebrows, nostrils and face outline are difficult to model with a deformable template, so we extract them using the active contour model (snake). After the contours of all facial features have been captured, we calculate effective feature values from the extracted contours and construct a database for classifying unknown identities. In the database generation phase, 12 models are photographed and a feature vector is calculated for each portrait. In the identification phase, if any one of these 12 persons has their picture taken again, the system can recognize their identity.
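As a rough illustration of the snake-based stage described above, the sketch below fits an active contour to a feature region and matches a crude geometric feature vector against stored portraits by nearest neighbour. It uses scikit-image's active_contour; the image path, initialisation, parameters and feature choices are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): extract a facial-feature
# contour with an active contour model ("snake") and match a simple feature vector
# against a small database by nearest neighbour. Image path, snake parameters and
# stored vectors are placeholders.
import numpy as np
from skimage import io, color
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def extract_contour(gray, center, radius, n_points=200):
    """Fit a snake starting from a circle around an approximate feature location."""
    s = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(s),   # rows
                            center[1] + radius * np.cos(s)])  # cols
    return active_contour(gaussian(gray, sigma=3),
                          init, alpha=0.015, beta=10, gamma=0.001)

def feature_vector(snake):
    """Crude geometric descriptors of the fitted contour (width, height, area proxy)."""
    height = snake[:, 0].max() - snake[:, 0].min()
    width = snake[:, 1].max() - snake[:, 1].min()
    return np.array([width, height, width * height])

def identify(probe_vec, database):
    """Nearest-neighbour match of a probe feature vector against stored portraits."""
    names, vecs = zip(*database.items())
    dists = [np.linalg.norm(probe_vec - v) for v in vecs]
    return names[int(np.argmin(dists))]

if __name__ == "__main__":
    gray = color.rgb2gray(io.imread("portrait.jpg"))              # placeholder image
    snake = extract_contour(gray, center=(220, 160), radius=40)   # approx. mouth region
    probe = feature_vector(snake)
    database = {"person_01": np.array([78.0, 35.0, 2730.0])}      # hypothetical stored vectors
    print(identify(probe, database))
```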


2017
Vol 7 (1.1)
pp. 213
Author(s):  
Sheela Rani ◽  
Vuyyuru Tejaswi ◽  
Bonthu Rohitha ◽  
Bhimavarapu Akhil

Face recognition has become one of the most important and interesting areas of research. A face recognition framework is a computer application capable of recognizing or verifying a human face in a digital image or in video frames. One way to do this is to match selected facial features against the images in a database. Such systems are commonly used in security applications and can be combined with other biometrics, for example fingerprint or iris recognition. An image is a combination of edges: the curved line portions where the brightness of the image changes sharply are known as edges. We use a similar idea in face detection, where the intensity of facial colours is used as a consistent value. Face recognition involves comparing an image with a database of stored faces in order to identify the individual in the given input image. The whole procedure covers three phases, namely face detection, feature extraction and recognition, and different strategies are required according to the specified requirements.
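The three phases named above can be made concrete with a minimal sketch: a Haar cascade for face detection, a simple edge-based descriptor for feature extraction, and nearest-neighbour matching against enrolled vectors for recognition. The specific detector and descriptor are illustrative choices, not the method used in the paper.

```python
# Minimal sketch of the three phases named above: face detection, feature
# extraction and recognition, using OpenCV. The Haar cascade detector and the
# edge-based descriptor are illustrative choices, not the authors' method.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(img_gray):
    """Phase 1: return the first detected face region, or None."""
    faces = detector.detectMultiScale(img_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return img_gray[y:y + h, x:x + w]

def extract_features(face, size=(64, 64)):
    """Phase 2: resize, detect edges (sharp intensity changes), flatten to a vector."""
    face = cv2.resize(face, size)
    edges = cv2.Canny(face, 100, 200)
    return edges.flatten().astype(np.float32) / 255.0

def recognize(probe, database):
    """Phase 3: nearest-neighbour match of the probe vector against enrolled vectors."""
    best_name, best_dist = None, np.inf
    for name, vec in database.items():
        d = np.linalg.norm(probe - vec)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, best_dist

if __name__ == "__main__":
    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)      # placeholder path
    face = detect_face(img)
    if face is not None:
        probe = extract_features(face)
        db = {"person_01": np.zeros(64 * 64, dtype=np.float32)}  # hypothetical enrolment
        print(recognize(probe, db))
```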


2016
Vol 33 (S1)
pp. S367-S368
Author(s):  
N. Deltort ◽  
J.R. Cazalets ◽  
A. Amestoy ◽  
M. Bouvard

Studies on individuals without developmental disorder show that the mental representation of the self-face is subject to a multimodal process in the same way that the representation of the self-body is. People with autistic spectrum disorder (ASD) have a particular pattern of face processing and a multimodal integration deficit. The objectives of our study were to evaluate self-face recognition and the effect of interpersonal multisensory stimulation (IMS) in individuals with ASD. We aimed to show a self-face recognition deficit and a lack of multimodal integration in this population. IMS consisted of the presentation of a movie displaying an unfamiliar face being touched intermittently, while the examiner applied the same stimulation synchronously or asynchronously to the participant. The effect of IMS was measured in two groups, with or without ASD, using a self-face recognition task on morphing movies made from self-face and unfamiliar-face pictures. There was a significant difference between groups on self-recognition before stimulation. This result shows a self-face recognition deficit in individuals with ASD. Results for the control group showed a significant effect of IMS on self-face recognition in the synchronous condition. This suggests the existence of an update of the self-face mental representation through a multimodal process. In contrast, there was no significant effect of IMS in the ASD group, suggesting a multimodal integration deficit for the constitution of self-representation in this population. Our results show the existence of a self-face recognition deficit in individuals with ASD, which may be linked to a lack of multimodal integration in the development of the self-face representation.
Disclosure of interest: The authors have not supplied their declaration of competing interest.

