Diagnostic Features for Human Categorisation of Adult and Child Faces

2021 ◽  
Vol 12 ◽  
Author(s):  
Simon Faghel-Soubeyrand ◽  
Juliane A. Kloess ◽  
Frédéric Gosselin ◽  
Ian Charest ◽  
Jessica Woodhams

Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet, how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features known to be important in face perception (position, spatial frequency (SF), and orientation) are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only facial information of vertical orientation was linked to adult-face categorisation, while features of horizontal and, to a lesser extent, oblique orientations were more diagnostic of a child face. Finally, we found that SF diagnosticity showed a U-shaped pattern for face-age categorisation, with information in low and high SFs being diagnostic of child faces, and mid SFs being diagnostic of adult faces. Through this first characterisation of the facial features of face-age categorisation, we show that important information found in psychophysical studies of face perception in general (i.e., the eye area, horizontals, and mid-level SFs) is crucial to the practical context of face-age categorisation, and present data-driven procedures through which face-age classification training could be implemented for real-world challenges.
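The reverse-correlation logic behind such classification images can be sketched in a few lines. The sketch below is a generic illustration, not the authors' stimuli or pipeline: the masks, responses, and "diagnostic region" are all simulated. Random sampling masks are correlated, pixel by pixel, with response accuracy, so regions whose visibility drives correct answers emerge as peaks.

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_image(masks, responses):
    """Correlate each pixel of the random masks with the observer's
    responses, revealing which regions drive accurate categorisation.

    masks:     (n_trials, h, w) array of random sampling masks
    responses: (n_trials,) array of 1 (correct) / 0 (incorrect)
    """
    r = responses - responses.mean()   # centre responses across trials
    m = masks - masks.mean(axis=0)     # centre each pixel across trials
    # Per-pixel covariance between mask value and response
    return (m * r[:, None, None]).sum(axis=0) / len(responses)

# Toy simulation: a hidden diagnostic region (think "eye area")
# drives correct responses whenever it is well sampled.
h, w = 16, 16
diagnostic = np.zeros((h, w))
diagnostic[4:7, 3:13] = 1.0
masks = rng.random((5000, h, w))
drive = (masks * diagnostic).sum(axis=(1, 2))
responses = (drive > np.median(drive)).astype(float)

# The recovered image peaks inside the simulated diagnostic region
# (e.g. ci[5, 8]) and stays near zero elsewhere (e.g. ci[12, 12]).
ci = classification_image(masks, responses)
```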



Author(s):  
David Anaki ◽  
Elena I. Nica ◽  
Morris Moscovitch

We examined the perceptual dependency of local facial information on the whole facial context. In Experiment 1, participants matched a predetermined facial feature that appeared in two sequentially presented faces, judging whether it was identical or not, while ignoring an irrelevant dimension in the faces. This irrelevant dimension was either (a) compatible or incompatible with the target’s response and (b) same or different in either featural characteristics or metric distance between facial features in the two faces. A compatibility effect was observed for upright but not inverted faces, regardless of the type of change that differentiated the faces in the irrelevant dimension. Even when the target was presented upright in the inverted faces, to attenuate perceptual load, no compatibility effect was found (Experiment 2). Finally, no compatibility effects were found for either upright or inverted houses (Experiment 3). These findings suggest that holistic face perception is mandatory.


2016 ◽  
Vol 75 (3) ◽  
pp. 133-140
Author(s):  
Robert Busching ◽  
Johannes Lutz

Abstract. Legally irrelevant information like facial features is used to form judgments about rape cases. Using a reverse-correlation technique, it is possible to visualize criminal stereotypes and test whether these representations influence judgments. In the first step, images of the stereotypical faces of a rapist, a thief, and a lifesaver were generated. These images showed a clear distinction between the lifesaver and the two criminal representations, but the criminal representations were rather similar. In the next step, the images were presented together with rape scenarios, and participants (N = 153) indicated the defendant’s level of liability. Participants with high rape myth acceptance scores attributed a lower level of liability to a defendant who resembled a stereotypical lifesaver. However, no specific effects of the image of the stereotypical rapist compared to the stereotypical thief were found. We discuss the findings with respect to the influence of visual stereotypes on legal judgments and the nature of these mental representations.


2019 ◽  
Author(s):  
Bastian Jaeger ◽  
Alexander Todorov ◽  
Anthony M Evans ◽  
Ilja van Beest

Trait impressions from faces influence many consequential decisions even in situations in which decisions should not be based on a person’s appearance. Here, we test (a) whether people rely on trait impressions when making legal sentencing decisions and (b) whether two types of interventions—educating decision-makers and changing the accessibility of facial information—reduce the influence of facial stereotypes. We first introduced a novel legal decision-making paradigm. Results of a pretest (n = 320) showed that defendants with an untrustworthy (vs. trustworthy) facial appearance were found guilty more often. We then tested the effectiveness of different interventions in reducing the influence of facial stereotypes. Educating participants about the biasing effects of facial stereotypes reduced explicit beliefs that personality is reflected in facial features, but did not reduce the influence of facial stereotypes on verdicts (Study 1, n = 979). In Study 2 (n = 975), we presented information sequentially to disrupt the intuitive accessibility of trait impressions. Participants indicated an initial verdict based on case-relevant information and a final verdict based on all information (including facial photographs). The majority of initial sentences were not revised and therefore unbiased. However, most revised sentences were in line with facial stereotypes (e.g., a guilty verdict for an untrustworthy-looking defendant). On average, this actually increased facial bias in verdicts. Together, our findings highlight the persistent influence of trait impressions from faces on legal sentencing decisions.


2021 ◽  
Author(s):  
Haiyang Jin ◽  
Matt Oxner ◽  
Paul Michael Corballis ◽  
William Hayward

Holistic face processing has been widely implicated in conscious face perception. Yet, little is known about whether holistic face processing occurs when faces are processed unconsciously. The present study used the composite face task and continuous flash suppression (CFS) to inspect whether the processing of target facial information (the top half of a face) is influenced by irrelevant information (the bottom half) that is presented unconsciously. Results of multiple experiments showed that the composite effect was observed in both the monocular and CFS conditions, providing the first evidence that the processing of top facial halves is influenced by the aligned bottom halves no matter whether they are presented consciously or unconsciously. However, much of the composite effect for faces without masking was disrupted when bottom facial parts were rendered with CFS. These results suggest that holistic face processing can occur unconsciously, but also highlight the significance of holistic processing of consciously presented faces.


Author(s):  
Carlos M. Travieso ◽  
Marcos del Pozo-Baños ◽  
Jaime R. Ticay-Rivas ◽  
Jesús B. Alonso

This chapter presents a comprehensive study on the influence of intra-modal facial information for an identification approach. A biometric identification system was developed and implemented by merging different intra-modal facial features: mouth, eyes, and nose. Principal Component Analysis, Independent Component Analysis, and the Discrete Cosine Transform were used as feature extractors, and Support Vector Machines were implemented as classifiers. The recognition rates obtained by multimodal fusion of the three facial features reached values above 97% in each of the databases used, confirming that the system adapts to images from different sources, sizes, lighting conditions, etc. While the best performance was obtained when the three facial traits were merged, acceptable performance was also achieved when merging only two facial features; the system is therefore robust against the failure of an isolated sensor or the occlusion of any single biometric trait. In that case, the success rate achieved was over 92%.
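A minimal sketch of this intra-modal fusion idea, using synthetic data: each facial trait is reduced separately (PCA only; the chapter's ICA and DCT extractors are omitted) and the reduced features are concatenated before classification. A nearest-centroid classifier stands in for the chapter's SVMs so the sketch stays dependency-free; all dimensions and class counts below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_reduce(Xtr, Xte, k):
    """Project train/test data onto the top-k principal components
    fitted on the training data only."""
    mu = Xtr.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
    W = Vt[:k].T
    return (Xtr - mu) @ W, (Xte - mu) @ W

def nearest_centroid(Xtr, ytr, Xte):
    """Simple classifier standing in for the chapter's SVMs."""
    classes = np.unique(ytr)
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = ((Xte[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic feature vectors for three "sensors": mouth, eyes, nose
n_ids, n_per_id, dim = 5, 40, 30
y = np.repeat(np.arange(n_ids), n_per_id)
traits = []
for _ in range(3):
    centres = rng.normal(0.0, 2.0, (n_ids, dim))
    traits.append(np.vstack([c + rng.normal(0.0, 1.0, (n_per_id, dim))
                             for c in centres]))

idx = rng.permutation(len(y))
tr, te = idx[:140], idx[140:]

# Feature-level fusion: reduce each trait separately, then concatenate
fused_tr, fused_te = [], []
for X in traits:
    a, b = pca_reduce(X[tr], X[te], k=10)
    fused_tr.append(a)
    fused_te.append(b)

pred = nearest_centroid(np.hstack(fused_tr), y[tr], np.hstack(fused_te))
acc = (pred == y[te]).mean()
```

Because each trait is reduced independently, dropping one trait (a failed sensor or occluded region) only removes its columns from the concatenation; the remaining traits still carry identity information.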


2020 ◽  
Vol 37 (4) ◽  
pp. 413-428
Author(s):  
Igor Loboda ◽  
Luis Angel Miró Zárate ◽  
Sergiy Yepifanov ◽  
Cristhian Maravilla Herrera ◽  
Juan Luis Pérez Ruiz

Abstract. One of the main functions of gas turbine monitoring is to estimate important unmeasured variables, for instance, thrust and power. Existing methods are too complex for an online monitoring system. Moreover, they do not extract diagnostic features from the estimated variables, making them unusable for diagnostics. Two of our previous studies began to address the problem of “light” algorithms for online estimation of unmeasured variables. The first study deals with models for unmeasured thermal boundary conditions of a turbine blade. These models allow an enhanced prediction of blade lifetime and are sufficiently simple to be used online. The second study introduces unmeasured variable deviations and proves their applicability. However, the algorithms developed were dependent on a specific engine and a specific variable. The present paper proposes a universal algorithm to estimate and monitor any unmeasured gas turbine variables. This algorithm is based on simple data-driven models and can be used in online monitoring systems. It is evaluated on real data of two different engines affected by compressor fouling. The results prove that the estimates of unmeasured variables are sufficiently accurate, and the deviations of these variables are good diagnostic features. Thus, the algorithm is ready for practical implementation.
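The deviation idea can be illustrated with a toy data-driven baseline; the engine law, regime parameters, and 3% fouling loss below are all invented for illustration and are not the paper's models or data. A regression fitted on healthy-engine data predicts the unmeasured variable from measured regime parameters, and the relative residual then serves as the diagnostic feature.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical healthy-engine data: two measured regime parameters
# (relative spool speed, inlet temperature) and a "true" unmeasured
# variable (power) generated by an invented engine law.
n = 400
regime = rng.uniform([0.8, 280.0], [1.0, 310.0], (n, 2))

def toy_power(r):
    return (50.0 * r[:, 0] ** 2
            - 0.05 * (r[:, 1] - 295.0)
            + rng.normal(0.0, 0.1, len(r)))   # measurement noise

power_healthy = toy_power(regime)

# Simple data-driven baseline: 2nd-order polynomial in the regime parameters
def design(r):
    x1, x2 = r[:, 0], r[:, 1]
    return np.column_stack([np.ones(len(r)), x1, x2, x1**2, x2**2, x1*x2])

coef, *_ = np.linalg.lstsq(design(regime), power_healthy, rcond=None)

# Deviation = relative difference between actual and baseline value
def deviations(r, power):
    baseline = design(r) @ coef
    return (power - baseline) / baseline

# Healthy deviations scatter around zero ...
healthy_dev = deviations(regime, power_healthy)

# ... while simulated compressor fouling (here a 3% power loss) leaves
# a clear negative deviation signature usable as a diagnostic feature.
new_regime = rng.uniform([0.8, 280.0], [1.0, 310.0], (100, 2))
fouled_dev = deviations(new_regime, toy_power(new_regime) * 0.97)
```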


Perception ◽  
1987 ◽  
Vol 16 (6) ◽  
pp. 747-759 ◽  
Author(s):  
Andrew W Young ◽  
Deborah Hellawell ◽  
Dennis C Hay

A new facial composites technique is demonstrated, in which photographs of the top and bottom halves of different familiar faces fuse to form unfamiliar faces when aligned with each other. The perception of a novel configuration in such composite stimuli is sufficiently convincing to interfere with identification of the constituent parts (experiment 1), but this effect disappears when stimuli are inverted (experiment 2). Difficulty in identifying the parts of upright composites is found even for stimuli made from parts of unfamiliar faces that have only ever been encountered as face fragments (experiment 3). An equivalent effect is found for composites made from internal and external facial features of well-known people (experiment 4). These findings demonstrate the importance of configurational information in face perception, and show that configurations are only properly perceived in upright faces.


2013 ◽  
Vol 37 (2) ◽  
pp. 111-117 ◽  
Author(s):  
Jennifer L. Rennels ◽  
Andrew J. Cummings

When face processing studies find sex differences, male infants appear better at face recognition than female infants, whereas female adults appear better at face recognition than male adults. Both female infants and adults, however, discriminate emotional expressions better than males. To investigate whether sex and age differences in facial scanning might account for these processing discrepancies, 3- to 4-month-olds, 9- to 10-month-olds, and adults viewed individually presented faces while an eye tracker recorded eye movements. Regardless of age, males shifted fixations between internal and external facial features more than females, suggesting more holistic processing. Females shifted fixations between internal facial features more than males, suggesting more second-order relational processing, which may explain females’ emotion discrimination advantage. Older male infants made more fixations than older female infants. Female adults made more fixations for shorter fixation durations than male adults. Male infants’ and female adults’ greater encoding of facial information may explain their face recognition advantage.
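The two scanning measures contrasted here can be computed from an AOI-labelled fixation sequence. The sketch below uses a made-up sequence and region labels (not the study's eye-tracking data): consecutive fixation pairs are tallied as internal-external shifts (the more holistic pattern) or internal-internal shifts (the more second-order relational pattern).

```python
from collections import Counter

# Hypothetical fixation sequence: each fixation tagged with the facial
# region it landed on; "internal" features vs external ones (hairline, chin).
fixations = ["eyes", "nose", "hairline", "mouth", "eyes", "chin", "eyes", "nose"]
INTERNAL = {"eyes", "nose", "mouth"}

def count_shifts(seq):
    """Tally internal<->external shifts and internal->internal shifts
    between consecutive fixations."""
    shifts = Counter()
    for a, b in zip(seq, seq[1:]):
        a_in, b_in = a in INTERNAL, b in INTERNAL
        if a_in != b_in:
            shifts["internal-external"] += 1
        elif a_in and b_in and a != b:
            shifts["internal-internal"] += 1
    return shifts

shifts = count_shifts(fixations)
```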

