Local and Relational Aspects of Face Distinctiveness

1998 ◽  
Vol 51 (3) ◽  
pp. 449-473 ◽  
Author(s):  
Helmut Leder ◽  
Vicki Bruce

Distinctiveness contributes strongly to the recognition and rejection of faces in memory tasks. In four experiments we examine the role played by local and relational information in the distinctiveness of upright and inverted faces. In all experiments subjects saw one of three versions of a face: original faces, which had been rated as average in distinctiveness in a previous study (Hancock, Burton, & Bruce, 1996); a more distinctive version in which local features had been changed (D-local); and a more distinctive version in which relational features had been changed (D-rel). An increase in distinctiveness was found for D-local and D-rel faces in Experiments 1 (complete faces), 3, and 4 (face internals only) when the faces were rated in upright presentation, but the distinctiveness of the D-rel faces was reduced much more than that of the D-local versions when the ratings were given to faces presented upside-down (Experiments 1 and 3). Recognition performance showed a similar pattern: presented upright, both D-local and D-rel faces yielded higher performance than the originals, but in upside-down presentation the D-local versions showed a much stronger distinctiveness advantage. When only internal features of faces were used (Experiments 3 and 4), the D-rel faces lost their advantage over the original versions in inverted presentation. The results suggest that at least two dimensions of facial information contribute to a face's apparent distinctiveness, but that these sources of information are differentially affected by turning the face upside-down. These findings accord with a face processing model in which face inversion effects occur because a specific type of information processing is disrupted, rather than because of a general disruption of performance.

2019 ◽  
Vol 35 (05) ◽  
pp. 525-533
Author(s):  
Evrim Gülbetekin ◽  
Seda Bayraktar ◽  
Özlenen Özkan ◽  
Hilmi Uysal ◽  
Ömer Özkan

The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery for a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed them old, new, and implicit faces and asked whether or not they recognized them. In Experiment 3, they showed them original objects and morphed forms of those objects and asked them to rate the similarity between the two. In Experiment 4, they showed them old, new, and implicit objects and asked whether or not they recognized them. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. The authors therefore concluded that the structure of the face might affect face processing.


2005 ◽  
Vol 17 (10) ◽  
pp. 1652-1666 ◽  
Author(s):  
Roberto Caldara ◽  
Philippe Schyns ◽  
Eugène Mayer ◽  
Marie L. Smith ◽  
Frédéric Gosselin ◽  
...  

One of the most impressive disorders following brain damage to the ventral occipitotemporal cortex is prosopagnosia, or the inability to recognize faces. Although acquired prosopagnosia with preserved general visual and memory functions is rare, several cases have been described in the neuropsychological literature and studied at the functional and neural level over the last decades. Here we tested a brain-damaged patient (PS) presenting a deficit restricted to the category of faces to clarify the nature of the missing and preserved components of the face processing system when it is selectively damaged. After the patient learned through extensive training to identify 10 neutral and happy faces, we investigated her recognition of faces using Bubbles, a response classification technique that samples facial information across the faces in different bandwidths of spatial frequencies [Gosselin, F., & Schyns, P. G., Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261-2271, 2001]. Although PS gradually used less information (i.e., fewer bubbles) to identify faces over testing, the total information she required was much larger than for normal controls and decreased less steeply with practice. Most importantly, the facial information used to identify individual faces differed between PS and controls. Specifically, in marked contrast to controls, PS did not use the optimal eye information to identify familiar faces, but instead the lower part of the face, including the mouth and the external contours, as normal observers typically do when processing unfamiliar faces. Together, the findings reported here suggest that damage to the face processing system is characterized by an inability to use the information that is optimal to judge identity, focusing instead on suboptimal information.
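The Bubbles procedure described above can be sketched as follows: the face is decomposed into spatial-frequency bands, and each band is revealed only through randomly placed Gaussian apertures ("bubbles") whose size scales with the band's spatial scale. The NumPy sketch below is a minimal illustration; the band decomposition, bubble counts, and size scaling are assumptions for clarity, not the parameters of the original study.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian low-pass filter applied in the Fourier domain."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    g = np.exp(-2.0 * np.pi**2 * sigma**2 * (fy**2 + fx**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * g))

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of Gaussian apertures at random locations, normalized to [0, 1]."""
    mask = np.zeros(shape)
    mask[rng.integers(0, shape[0], n_bubbles),
         rng.integers(0, shape[1], n_bubbles)] = 1.0
    mask = gaussian_blur(mask, sigma)
    return np.clip(mask / mask.max(), 0.0, 1.0) if mask.max() > 0 else mask

def bubbles_stimulus(face, n_bubbles_per_band, rng, sigmas=(1, 2, 4, 8, 16)):
    """Reveal face information only through bubbles in several SF bands.
    The coarsest residual is discarded (replaced by a mid-grey background)."""
    stim = np.full_like(face, face.mean())   # mid-grey background
    prev = face
    for s in sigmas:
        blurred = gaussian_blur(face, s)
        band = prev - blurred                # band-pass residual
        prev = blurred
        # Coarser bands are sampled with proportionally larger bubbles.
        stim += band * bubbles_mask(face.shape, n_bubbles_per_band, 2 * s, rng)
    return stim
```

Summing responses over many such randomly sampled trials is what lets the technique map which facial regions, at which spatial scales, drive correct identification.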


2014 ◽  
Vol 10 (4) ◽  
pp. 20140063 ◽  
Author(s):  
Alexandra Alvergne ◽  
Fanny Perreau ◽  
Allan Mazur ◽  
Ulrich Mueller ◽  
Michel Raymond

Understanding how individuals identify their relatives has implications for the evolution of social behaviour. Kinship cues might be based on familiarity, but in the face of paternity uncertainty and costly paternal investment, other mechanisms such as phenotypic matching may have evolved. In humans, paternal recognition of offspring and subsequent discriminative paternal investment have been linked to father-offspring facial phenotypic similarities. However, the extent to which paternity detection is impaired by environmentally induced facial information is unclear. We used 27 portraits of fathers and their adult sons to quantify the level of paternity detection under experimental treatments that manipulate the location, type and quantity of visible facial information. We found that (i) the lower part of the face, which changes most with development, does not contain paternity cues, (ii) paternity can be detected even if relational information within the face is disrupted and (iii) the signal depends on the presence of specific kinds of information rather than on their quantity. Taken together, the results support the view that environmental effects have little influence on the detection of paternity from facial similarities. This suggests that the cognitive dispositions enabling the facial detection of kinship relationships ignore genetically irrelevant facial information.


2021 ◽  
Author(s):  
Zuzanna Laudańska ◽  
Aleksandra Dopierała ◽  
Magdalena Szmytke ◽  
Dianna Ilyka ◽  
Anna Malinowska-Korczak ◽  
...  

Configural processing is a specialised perceptual mechanism that allows adult humans to quickly process facial information. It emerges before the first birthday and can be disrupted by upside-down presentation of the face (inversion). To date, little is known about the relationship of configural face processing to the emerging knowledge of audiovisual (AV) speech in infancy. Using eye-tracking, we measured attention to the speaking mouth in upright and inverted faces that were either congruent or incongruent with the speech sound. Face inversion affected looking at AV speech only in older infants (9- to 11- and 12- to 14-month-olds). The youngest group of infants (5- to 7-month-olds) did not show any differences in looking durations between upright and inverted faces, while in both older groups face inversion led to reduced looking at the articulating mouth. We also observed a stronger interest in the eyes in the youngest infants, followed by an increase in looking time to the mouth in both older groups. Our findings suggest that configural face processing is involved in AV speech processing already in infancy, indicating early integration of face and speech processing mechanisms in cognitive development.


2003 ◽  
Vol 15 (3) ◽  
pp. 462-474 ◽  
Author(s):  
Galit Yovel ◽  
Jerre Levy ◽  
Marcia Grabowecky ◽  
Ken A. Paller

Studies in healthy individuals and split-brain patients have shown that the representation of facial information from the left visual field (LVF) is better than the representation of facial information from the right visual field (RVF). To investigate the neurophysiological basis of this LVF superiority in face perception, we recorded event-related potentials (ERPs) to centrally presented face stimuli in which relevant facial information is present bilaterally (B faces) or only in the left (L faces) or the right (R faces) visual field. Behavioral findings showed best performance for B faces and, in line with the LVF superiority, better performance for L than R faces. Evoked potentials to B, L, and R faces at 100-150 msec poststimulus showed no evidence of asymmetric transfer of information between the hemispheres at early stages of visual processing, suggesting that this factor is not responsible for the LVF superiority. Neural correlates of the LVF superiority, however, were manifested in a shorter latency of the face-specific N170 component to L than R faces and in a larger amplitude to L than R faces at 220-280 and 400-600 msec over both hemispheres. These ERP amplitude differences between L and R faces covaried across subjects with the extent to which the face-specific N170 component was larger over the right than the left hemisphere. We conclude that the two hemispheres exchange information symmetrically at early stages of face processing and together generate a shared facial representation, which is better when facial information is directly presented to the right hemisphere (RH; L faces) than to the left hemisphere (LH; R faces) and best when both hemispheres receive facial information (B faces).


2020 ◽  
Author(s):  
Noa Simhi ◽  
Galit Yovel

While most studies on person recognition examine the face alone, recent studies have shown evidence for the contribution of the body and gait to person recognition beyond the face. Nevertheless, little is known about whether person recognition can be performed based on the body alone. In this study, we examined two sources of information that may enhance body-based person recognition: body motion and whole-person context. Body motion has been shown to contribute to person recognition especially when facial information is unclear. Additionally, generating whole-person context, by attaching faceless heads to bodies, has been shown to activate face processing mechanisms and may therefore enhance body-based person recognition. To assess body-based person recognition, participants performed a sequential matching task in which they studied a video of a person walking followed by a headless image of the same or a different identity. The role of body motion was examined by comparing recognition from dynamic vs. static headless bodies. The role of whole-person context was examined by comparing bodies with and without faceless heads. Our findings show that body motion contributed to body-based person recognition: person recognition from the body alone was better in dynamic than static displays. In addition, whole-person context contributed to body-based person recognition when recognition was performed in static displays. Overall, these findings show that recognizing people based on their body alone is challenging but can be performed under certain circumstances that enhance the processing of the body when seeing the whole person.


2014 ◽  
Vol 687-691 ◽  
pp. 3714-3717
Author(s):  
Lin Zhang

In this paper, we propose a face gender recognition method based on local features and SVM. First, we divide the face image into five parts, which are used instead of the whole face for better recognition performance. Second, we use CS to extract local features from these five parts. Then, we train five separate SVM classifiers, one per part, to recognize each set of local features. Finally, decision-level information fusion produces the final classification. Because SVMs have been used successfully to solve numerous pattern recognition problems and are designed primarily for two-class problems, SVM is a natural choice for gender recognition. Extensive experiments show that the proposed method is stable and effective, greatly improving the efficiency of face gender recognition.
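The pipeline above (partition the face, extract a local descriptor per part, train one SVM per part, fuse the decisions) can be sketched with scikit-learn. This is a minimal illustration under stated assumptions: the five-way horizontal partition, the intensity-histogram descriptor (a stand-in for the paper's "CS" features), and the majority-vote fusion rule are all hypothetical choices, not the paper's exact implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical five facial regions, given as fractions of image height.
REGIONS = [(0.0, 0.3), (0.2, 0.5), (0.4, 0.7), (0.6, 0.85), (0.75, 1.0)]

def region_features(img, top, bottom):
    """Stand-in local descriptor: an intensity histogram of one region."""
    h = img.shape[0]
    patch = img[int(top * h):int(bottom * h), :]
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0), density=True)
    return hist

def train_part_classifiers(images, labels):
    """Train one SVM per facial region, as in the part-based scheme."""
    clfs = []
    for top, bottom in REGIONS:
        X = np.array([region_features(im, top, bottom) for im in images])
        clfs.append(SVC(kernel="rbf").fit(X, labels))
    return clfs

def predict_fused(clfs, img):
    """Decision-level fusion: majority vote over the five regional SVMs."""
    votes = [clf.predict(region_features(img, t, b)[None, :])[0]
             for clf, (t, b) in zip(clfs, REGIONS)]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```

The fusion step is where the part-based design pays off: a region degraded by occlusion or noise can be outvoted by the remaining regional classifiers.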


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have only obtained the FIE with one specific ratio of configural alteration. It remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has been frequently adopted as an index for examining the underlying mechanism of face processing, the emergence of the FIE is not robust across all configural alterations but depends on the ratio of configural alteration.
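A parametric configural alteration of this kind can be illustrated by rescaling one second-order relation, such as the inter-eye distance, by a given ratio while leaving each feature's local appearance untouched. A minimal sketch; the landmark coordinates and the choice of inter-eye distance as the manipulated relation are hypothetical, not the study's stimuli.

```python
import numpy as np

# Hypothetical landmark layout: (x, y) pixel positions of three features.
BASE = {"left_eye": np.array([24.0, 30.0]),
        "right_eye": np.array([40.0, 30.0]),
        "mouth": np.array([32.0, 52.0])}

def alter_configuration(landmarks, ratio):
    """Scale the inter-eye distance by `ratio` (e.g. 0.04 to 0.24)
    around its midpoint, leaving all other landmarks unchanged."""
    le, re = landmarks["left_eye"], landmarks["right_eye"]
    mid = (le + re) / 2.0
    out = dict(landmarks)
    out["left_eye"] = mid + (le - mid) * (1.0 + ratio)
    out["right_eye"] = mid + (re - mid) * (1.0 + ratio)
    return out
```

Sweeping `ratio` over a set of values (here, six steps from 0.04 to 0.24) yields the graded stimulus set needed to test whether the FIE appears only at intermediate alteration levels.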

