Configurational Information in Face Perception

Perception ◽  
1987 ◽  
Vol 16 (6) ◽  
pp. 747-759 ◽  
Author(s):  
Andrew W Young ◽  
Deborah Hellawell ◽  
Dennis C Hay

A new facial composites technique is demonstrated, in which photographs of the top and bottom halves of different familiar faces fuse to form unfamiliar faces when aligned with each other. The perception of a novel configuration in such composite stimuli is sufficiently convincing to interfere with identification of the constituent parts (experiment 1), but this effect disappears when stimuli are inverted (experiment 2). Difficulty in identifying the parts of upright composites is found even for stimuli made from parts of unfamiliar faces that have only ever been encountered as face fragments (experiment 3). An equivalent effect is found for composites made from internal and external facial features of well-known people (experiment 4). These findings demonstrate the importance of configurational information in face perception and show that configurations are only properly perceived in upright faces.

2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Matteo Visconti di Oleggio Castello ◽  
Yaroslav O. Halchenko ◽  
J. Swaroop Guntupalli ◽  
Jason D. Gors ◽  
M. Ida Gobbini

2015 ◽  
Vol 2 (5) ◽  
pp. 150109 ◽  
Author(s):  
Jérôme Micheletta ◽  
Jamie Whitehouse ◽  
Lisa A. Parr ◽  
Paul Marshman ◽  
Antje Engelhardt ◽  
...  

Many species use facial features to identify conspecifics, which is necessary to navigate a complex social environment. The fundamental mechanisms underlying face processing are starting to be well understood in a variety of primate species. However, most studies focus on a limited subset of species tested with unfamiliar faces. As well as limiting our understanding of how widely distributed across species these skills are, this also limits our understanding of how primates process faces of individuals they know, and whether social factors (e.g. dominance and social bonds) influence how readily they recognize others. In this study, socially housed crested macaques voluntarily participated in a series of computerized matching-to-sample tasks investigating their ability to discriminate (i) unfamiliar individuals and (ii) members of their own social group. The macaques performed above chance on all tasks. Familiar faces were not easier to discriminate than unfamiliar faces. The subjects were, however, better at discriminating higher-ranking familiar individuals, but not higher-ranking unfamiliar ones. This suggests that our subjects applied their knowledge of their dominance hierarchies to the pictorial representations of their group mates. Faces of high-ranking individuals garner more social attention, and therefore might be more deeply encoded than those of other individuals. Our results extend the study of face recognition to a novel species, and consequently provide valuable data for future comparative studies.


Perception ◽  
1979 ◽  
Vol 8 (4) ◽  
pp. 431-439 ◽  
Author(s):  
Hadyn D Ellis ◽  
John W Shepherd ◽  
Graham M Davies

Three experiments are reported in which recognition of faces from whole faces or internal or external features was compared. In the first experiment, where the faces were of famous people, an advantage was found for identification from internal features. In the second experiment, involving unfamiliar faces, however, no difference was found in recognition rates when subjects were given the internal or the external features. In a third experiment famous faces were presented and mixed with other famous faces for a recognition test. As in experiment 1, better recognition occurred from internal as compared with external features. It is argued that the internal representation for familiar faces may be qualitatively different from that for faces seen just once. In particular, some advantage in feature saliency may accrue to the internal or ‘expressive’ features of familiar faces. The implications of these results are considered in relation to general theories of face perception and recognition.


2021 ◽  
Vol 12 ◽  
Author(s):  
Simon Faghel-Soubeyrand ◽  
Juliane A. Kloess ◽  
Frédéric Gosselin ◽  
Ian Charest ◽  
Jessica Woodhams

Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet, how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features which are known to be important in face perception – position, spatial frequency (SF), and orientation – are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only facial information of vertical orientation was linked to adult-face categorisation, while features of horizontal and, to a lesser extent, oblique orientations were more diagnostic of a child face. Finally, we found that SF diagnosticity showed a U-shaped pattern for face-age categorisation, with information in low and high SFs being diagnostic of child faces, and mid SFs being diagnostic of adult faces. Through this first characterisation of the facial features of face-age categorisation, we show that important information found in psychophysical studies of face perception in general (i.e., the eye area, horizontals, and mid-level SFs) is crucial to the practical context of face-age categorisation, and we present data-driven procedures through which face-age classification training could be implemented for real-world challenges.


2018 ◽  
Author(s):  
Naphtali Abudarham ◽  
Lior Shkiller ◽  
Galit Yovel

Face recognition is a computationally challenging task that humans perform effortlessly. Nonetheless, this remarkable ability is limited to familiar faces and does not generalize to unfamiliar faces. To account for humans’ superior ability to recognize familiar faces, current theories suggest that familiar and unfamiliar faces have different perceptual representations. In the current study, we applied a reverse engineering approach to reveal which facial features are critical for familiar face recognition. In contrast to current views, we discovered that the same subset of features that are used for matching unfamiliar faces are also used for matching as well as recognition of familiar faces. We further show that these features are also used by a deep neural network face recognition algorithm. We therefore propose a new framework that assumes a similar perceptual representation for all faces and integrates cognition and perception to account for humans’ superior recognition of familiar faces.


Perception ◽  
1998 ◽  
Vol 27 (3) ◽  
pp. 295-312 ◽  
Author(s):  
Patricia A George ◽  
Graham J Hole

The influence of the internal features (eyes, nose, and mouth) in the age processing of unfamiliar faces was examined. Younger and older versions of the faces of six individuals (covering three different age ranges, from infancy to maturity) were used as donor stimuli. For each individual in turn, the effects on age estimates of placing older features in the younger face version (or vice versa) were investigated. Age estimates were heavily influenced by the age of the internal facial features. Experiment 2 replicated these effects with a larger number of faces within a narrower age range (after growth is complete and before major skin changes have occurred). Taken together, these two experiments show that the internal facial features may be influential in conveying age information to the perceiver. However, the mechanisms by which features exert their influence remain difficult to determine: although age estimates might be based on local information from the features themselves, an alternative possibility is that featural changes indirectly influence age estimates by altering the global three-dimensional shape of the head.


2021 ◽  
Author(s):  
Alice GOMEZ ◽  
Guillaume Lio ◽  
Manuela Costa ◽  
Angela Sirigu ◽  
Caroline Demily

Autism Spectrum Disorder (ASD) and Williams syndrome (WS) are characterized by atypical social interaction and face processing, with opposite profiles. Evoked activity in the superior temporal sulcus (STS) occurring at 260 ms is an eye-sensitive neuronal marker. Here, we tested whether a double dissociation between these two syndromes is also observed at the cerebral level within the face-processing network. Forty-two participants divided into three groups (diagnosed with WS, ASD, or without neurodevelopmental disorders) performed a face perception task during a high-density EEG recording. In each trial, a particular area of the face was flashed under the participant's fovea. First, using the predefined STS filter at 260 ms, we found that the cortical sensitivity of the STS to facial features, observed in neurotypical participants, extended to patients with WS but not to patients with ASD. Again, this activity was eye sensitive. Next, we used the spatial regressor of the face cue map and computed, at each time point (without spatial a priori), to what extent the multichannel EEG signal could decode this regressor. We showed that, in contrast to patients with ASD, WS patients and young adults produce a late (260 ms) STS evoked response that is sensitive to socially relevant facial features (such as the eyes and mouth). We also showed that the early (170 ms) evoked response can be predicted from socially relevant facial features in patients with ASD and in controls, but not in WS patients. In patients with WS, the decoding peak occurred only in the late component (at 260 ms) of the EEG, while, for patients with ASD, it occurred preferentially in the early component (at 170 ms). The present study confirms the previous results reported by Lio et al. and extends these findings by dissociating the time course of the neural processes involved in face perception in WS and ASD patients.
It reveals a dissociation between these two patient groups and suggests that, although both syndromes are associated with social disturbances, their impairments may have different neural origins. These results provide essential evidence for understanding what neurocomputations are performed in our brain, and where and when they occur.


2021 ◽  
pp. 196-219
Author(s):  
Galina Ya. Menshikova ◽  
Anna O. Pichugina

Background. The article is devoted to the study of the mechanisms of face perception using eye-tracking technology. In the scientific literature, two processes are distinguished: analytical (perception of individual facial features) and holistic (perception of the overall configuration of facial features). It is assumed that each mechanism manifests itself in specific patterns of eye movements during face perception. However, authors disagree about which eye movement patterns reflect the dominance of holistic or analytical processing. We hypothesized that the contradictions in the interpretation of eye movement indicators in studies of face perception may be associated with features of eye-tracker data processing, namely with the specifics of identifying areas of interest (eyes, nose, bridge of the nose, lips), as well as with individual eye movement strategies. Objective. To reveal the features of eye movement analysis in the process of face perception. Method. A method for studying analytical and holistic processing in the task of assessing the attractiveness of upright and inverted faces using eye-tracking technology was developed and tested. The eye-tracking data were analyzed for the entire sample using three types of processing, differing in the marking of the areas of interest (AOIs), and separately for two groups differing in eye movement strategies. Strategies were distinguished on the basis of differences in mean fixation duration and saccade amplitude. Results. The presence of statistically significant differences in dwell time in the AOIs between the upright and inverted face conditions depended on the method of identifying these AOIs. The distribution of dwell time across zones was closely related to individual eye movement strategies.
Analysis of the data separately by group showed significant differences in the distribution of dwell time in the AOIs. Conclusion. When processing eye-tracking data obtained in studies of face perception, it is necessary to consider individual eye movement strategies, as well as the features associated with identifying AOIs. The absence of a single standard for identifying these areas may explain the inconsistent findings about the dominance of holistic or analytical processing. According to our data, the most effective approach for the analysis of holistic processing is a more detailed marking of the AOIs, in which not only the main features (eyes, nose, mouth) but also the bridge-of-the-nose area are distinguished.


Author(s):  
David Anaki ◽  
Elena I. Nica ◽  
Morris Moscovitch

We examined the perceptual dependency of local facial information on the whole facial context. In Experiment 1, participants matched a predetermined facial feature that appeared in two sequentially presented faces, judging whether it was identical or not, while ignoring an irrelevant dimension in the faces. This irrelevant dimension was either (a) compatible or incompatible with the target’s response and (b) same or different in either featural characteristics or metric distance between facial features in the two faces. A compatibility effect was observed for upright but not inverted faces, regardless of the type of change that differentiated between the faces in the irrelevant dimension. Even when the target was presented upright in the inverted faces, to attenuate perceptual load, no compatibility effect was found (Experiment 2). Finally, no compatibility effects were found for either upright or inverted houses (Experiment 3). These findings suggest that holistic face perception is mandatory.
