A facial features detector integrating holistic facial information and part-based model

Author(s):  
Eslam Mostafa ◽  
Asem A. Ali ◽  
Ahmed Shalaby ◽  
Aly Farag


2019 ◽  
Author(s):  
Bastian Jaeger ◽  
Alexander Todorov ◽  
Anthony M Evans ◽  
Ilja van Beest

Trait impressions from faces influence many consequential decisions, even in situations in which decisions should not be based on a person’s appearance. Here, we test (a) whether people rely on trait impressions when making legal sentencing decisions and (b) whether two types of interventions—educating decision-makers and changing the accessibility of facial information—reduce the influence of facial stereotypes. We first introduced a novel legal decision-making paradigm. Results of a pretest (n = 320) showed that defendants with an untrustworthy (vs. trustworthy) facial appearance were found guilty more often. We then tested the effectiveness of different interventions in reducing the influence of facial stereotypes. Educating participants about the biasing effects of facial stereotypes reduced explicit beliefs that personality is reflected in facial features, but did not reduce the influence of facial stereotypes on verdicts (Study 1, n = 979). In Study 2 (n = 975), we presented information sequentially to disrupt the intuitive accessibility of trait impressions. Participants indicated an initial verdict based on case-relevant information and a final verdict based on all information (including facial photographs). The majority of initial verdicts were not revised and were therefore unbiased. However, most revised verdicts were in line with facial stereotypes (e.g., a guilty verdict for an untrustworthy-looking defendant). On average, this actually increased facial bias in verdicts. Together, our findings highlight the persistent influence of trait impressions from faces on legal sentencing decisions.


2021 ◽  
Vol 12 ◽  
Author(s):  
Simon Faghel-Soubeyrand ◽  
Juliane A. Kloess ◽  
Frédéric Gosselin ◽  
Ian Charest ◽  
Jessica Woodhams

Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet, how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features that are known to be important in face perception – position, spatial frequency (SF), and orientation – are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only vertically oriented facial information was linked to accurate adult categorisation, while horizontal and, to a lesser extent, oblique orientations were more diagnostic of a child face. Finally, we found that SF diagnosticity showed a U-shaped pattern for face-age categorisation, with information in low and high SFs being diagnostic of child faces, and mid SFs being diagnostic of adult faces. Through this first characterisation of the facial features of face-age categorisation, we show that important information found in psychophysical studies of face perception in general (i.e., the eye area, horizontals, and mid-level SFs) is crucial to the practical context of face-age categorisation, and present data-driven procedures through which face-age classification training could be implemented for real-world challenges.
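
The classification-image logic behind such reverse correlation analyses can be summarised in a few lines. The sketch below is a generic illustration, not the authors' pipeline: it assumes Bubbles-style trials in which random apertures reveal parts of a face, and estimates which pixel locations predict correct categorisation; the array names, densities, and permutation test are all illustrative assumptions.

```python
# Generic reverse-correlation sketch (not the authors' exact pipeline).
# Assumes each trial reveals the face through a random binary mask and
# records whether the observer categorised the face's age correctly.
import numpy as np

rng = np.random.default_rng(0)
n_trials, size = 35_000, 128

masks = rng.random((n_trials, size, size)) < 0.05   # True = pixel revealed
correct = rng.random(n_trials) < 0.75               # stand-in observer responses

# Classification image: revealed-pixel frequency on correct trials minus
# that on incorrect trials; large positive values mark diagnostic locations.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)

# Permutation null: shuffle the accuracy labels to z-score the image.
def perm_ci(labels):
    return masks[labels].mean(axis=0) - masks[~labels].mean(axis=0)

null = np.stack([perm_ci(rng.permutation(correct)) for _ in range(100)])
z = (ci - null.mean(axis=0)) / null.std(axis=0)     # e.g. threshold at |z| > 3
```

The same logic extends to the spatial-frequency and orientation results by randomising the sampling in those domains instead of pixel position.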


Author(s):  
Carlos M. Travieso ◽  
Marcos del Pozo-Baños ◽  
Jaime R. Ticay-Rivas ◽  
Jesús B. Alonso

This chapter presents a comprehensive study on the influence of intra-modal facial information in an identification approach. A biometric identification system was developed and implemented by merging different intra-multimodal facial features: mouth, eyes, and nose. Principal Component Analysis, Independent Component Analysis, and the Discrete Cosine Transform were used as feature extractors, and Support Vector Machines were implemented as classifiers. The recognition rates obtained by multimodal fusion of the three facial features reached values above 97% on each of the databases used, confirming that the system adapts to images from different sources, sizes, lighting conditions, etc. Although the best results were obtained when all three facial traits were merged, acceptable performance was also achieved when merging only two facial features; the system is therefore robust against failure of an isolated sensor or occlusion of any single biometric trait. In that case, the success rate achieved was over 92%.
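
A minimal sketch of this intra-modal fusion scheme, assuming one PCA-plus-SVM pipeline per facial region and a simple average of posterior probabilities as the fusion rule (the chapter does not specify these exact details; the array names, component counts, and fusion rule below are illustrative assumptions):

```python
# Sketch of intra-modal fusion: one PCA + SVM pipeline per facial region
# (mouth, eyes, nose), fused by averaging posterior probabilities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_region(X, y):
    """Fit a PCA + SVM classifier on one region's flattened images."""
    clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf", probability=True))
    return clf.fit(X, y)

def fused_identify(models, regions):
    """Average per-region posteriors (columns align because all models are
    trained on the same identity labels); skip regions that are missing,
    e.g. occluded or from a failed sensor, so two regions still suffice."""
    probs = [m.predict_proba(X) for m, X in zip(models, regions) if X is not None]
    return np.mean(probs, axis=0).argmax(axis=1)   # one class index per face
```

Because the fusion averages whichever regional scores are available, identification degrades gracefully when one trait is occluded, matching the robustness the chapter reports.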


2013 ◽  
Vol 37 (2) ◽  
pp. 111-117 ◽  
Author(s):  
Jennifer L. Rennels ◽  
Andrew J. Cummings

When face processing studies find sex differences, male infants appear better at face recognition than female infants, whereas female adults appear better at face recognition than male adults. Both female infants and adults, however, discriminate emotional expressions better than males. To investigate whether sex and age differences in facial scanning might account for these processing discrepancies, 3–4-month-olds, 9–10-month-olds, and adults viewed faces presented individually while an eye tracker recorded eye movements. Regardless of age, males shifted fixations between internal and external facial features more than females, suggesting more holistic processing. Females shifted fixations between internal facial features more than males, suggesting more second-order relational processing, which may explain females’ emotion discrimination advantage. Older male infants made more fixations than older female infants. Female adults made more fixations for shorter fixation durations than male adults. Male infants’ and female adults’ greater encoding of facial information may explain their face recognition advantage.
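
As a rough illustration of the scanning measures involved, the sketch below counts shifts between internal and external facial features and shifts among internal features from a sequence of region-labelled fixations; the region labels and the definition of a shift are assumptions, not the study's coding scheme.

```python
# Illustrative fixation-shift counts from a sequence of region-labelled
# fixations (one label per fixation, in temporal order).
INTERNAL = frozenset({"eyes", "eyebrows", "nose", "mouth"})

def count_shifts(regions):
    pairs = list(zip(regions, regions[1:]))
    # Shifts crossing the internal/external boundary (holistic measure).
    internal_external = sum((a in INTERNAL) != (b in INTERNAL) for a, b in pairs)
    # Shifts among distinct internal features (relational measure).
    within_internal = sum(a in INTERNAL and b in INTERNAL and a != b
                          for a, b in pairs)
    return internal_external, within_internal

# e.g. count_shifts(["eyes", "hairline", "eyes", "nose"]) -> (2, 1)
```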


2018 ◽  
Author(s):  
Lisa Stacchi ◽  
Meike Ramon ◽  
Junpeng Lao ◽  
Roberto Caldara

Eye movements provide a functional signature of how human vision is achieved. Many recent studies have reported idiosyncratic visual sampling strategies during face recognition. Whether these inter-individual differences are mirrored by idiosyncratic neural responses has not yet been investigated. Here, we tracked observers’ eye movements during face recognition; additionally, we obtained an objective index of neural face discrimination through EEG that was recorded while subjects fixated different facial information. Across all observers, we found that those facial features that were fixated longer during face recognition elicited stronger neural face discrimination responses. This relationship occurred independently of inter-individual differences in fixation biases. Our data show that eye movements play a functional role during face processing by providing the neural system with information that is diagnostic to a specific observer. The effective processing of face identity involves idiosyncratic, rather than universal, representations.
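
One way to express the reported relationship quantitatively is to rank-correlate, within each observer, per-feature fixation durations with per-feature neural face-discrimination amplitudes. The sketch below assumes rectangular arrays and Spearman correlation purely for illustration; it is not the authors' analysis.

```python
# Illustrative within-observer test of the fixation/EEG relationship.
# fix_dur and eeg_amp are assumed (n_observers, n_features) arrays:
# fixation duration and EEG face-discrimination amplitude per facial feature.
import numpy as np
from scipy.stats import spearmanr

def fixation_neural_link(fix_dur, eeg_amp):
    """One rank correlation per observer: do longer-fixated facial
    features elicit stronger neural face-discrimination responses?"""
    return np.array([spearmanr(f, e).correlation
                     for f, e in zip(fix_dur, eeg_amp)])
```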


Author(s):  
David Anaki ◽  
Elena I. Nica ◽  
Morris Moscovitch

We examined the perceptual dependency of local facial information on the whole facial context. In Experiment 1, participants matched a predetermined facial feature that appeared in two sequentially presented faces, judging whether it was identical or not, while ignoring an irrelevant dimension in the faces. This irrelevant dimension was either (a) compatible or incompatible with the target’s response and (b) the same or different in either featural characteristics or metric distance between facial features in the two faces. A compatibility effect was observed for upright but not inverted faces, regardless of the type of change that differentiated the faces in the irrelevant dimension. Even when the target was presented upright in the inverted faces, to attenuate perceptual load, no compatibility effect was found (Experiment 2). Finally, no compatibility effects were found for either upright or inverted houses (Experiment 3). These findings suggest that holistic face perception is mandatory.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16
Author(s):  
Felix Fuentes-Hurtado ◽  
Jose A. Diego-Mas ◽  
Valery Naranjo ◽  
Mariano Alcañiz

Human faces play a central role in our lives. Because of our capacity to perceive faces, how a face looks in a painting, a movie, or an advertisement can dramatically influence how we feel about it and what emotions it elicits. Facial information is processed by our brain in such a way that we immediately make judgements about, for example, attractiveness or masculinity, or interpret the personality traits or moods of other people. Due to the importance of appearance-driven judgements of faces, this has become a major focus not only for psychologists, but also for neuroscientists, artists, engineers, and software developers. New technologies can now create realistic-looking synthetic faces that are used in the arts, online activities, advertising, and movies. However, there is no method for generating virtual faces that convey desired sensations to observers. In this work, we present a procedure based on a genetic algorithm that creates realistic faces by combining facial features in appropriate relative positions. A model of how observers perceive a face based on its features’ appearances and relative positions was developed and used as the fitness function of the algorithm. The model is able to predict 15 facial social traits related to aesthetics, moods, and personality. The proposed procedure was validated by comparing its results with the opinions of human observers. This procedure is useful not only for creating characters for artistic purposes, but also for online activities, advertising, surgery, and criminology.
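
A compact sketch of how such a genetic algorithm can be organised, assuming a face is encoded as one appearance index per feature and that a trained perception model serves as the fitness function; the population size, operators, and encoding are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative GA over face encodings: one appearance index per feature,
# with a perception model as the fitness function.
import random

N_FEATURES = 5    # e.g. eyes, eyebrows, nose, mouth, jaw
N_VARIANTS = 20   # available appearances per feature

def random_face():
    return [random.randrange(N_VARIANTS) for _ in range(N_FEATURES)]

def evolve(trait_model, pop_size=100, generations=200, mut_rate=0.05):
    """trait_model(face) -> predicted intensity of the target trait."""
    pop = [random_face() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=trait_model, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_FEATURES):                # per-gene mutation
                if random.random() < mut_rate:
                    child[i] = random.randrange(N_VARIANTS)
            children.append(child)
        pop = parents + children
    return max(pop, key=trait_model)
```

Any model mapping an encoded face to a predicted trait score (here, one of the paper's 15 social traits) can be plugged in as `trait_model`, which is what lets the same loop target different sensations.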


2021 ◽  
Vol 17 (2) ◽  
pp. 176-192
Author(s):  
Ronja Mueller ◽  
Sandra Utz ◽  
Claus-Christian Carbon ◽  
Tilo Strobach

Inspecting new visual information in a face can affect the perception of subsequently seen faces. In experimental settings, for example, previously seen manipulated versions of a face can clearly bias a participant’s perception of subsequent images: original images are then perceived as manipulated in the opposite direction of the adaptor, while images that are more similar to the adaptor are perceived as normal or natural. These so-called face adaptation effects can be a useful tool for probing which facial information is processed and stored in facial memory. Most experiments so far have used variants of second-order configural information (e.g., spatial relations between facial features) when investigating these effects. However, non-configural face information (e.g., color) has been largely neglected in face adaptation research, although this type of information plays an important role in face processing. We therefore investigated adaptation effects of non-configural face information by employing brightness alterations. Our results provide clear evidence for brightness adaptation effects (Experiment 1). These effects are face-specific to some extent (Experiments 2 and 3) and robust over time (Experiments 4 and 5). They support the assumption that non-configural face information is relevant not only in face perception but also in face retention. Brightness information seems to be stored in memory and is thus even involved in face recognition.


i-Perception ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 204166952096112
Author(s):  
Jose A. Diego-Mas ◽  
Felix Fuentes-Hurtado ◽  
Valery Naranjo ◽  
Mariano Alcañiz

Facial information is processed by our brain in such a way that we immediately make judgments about, for example, attractiveness or masculinity, or interpret the personality traits or moods of other people. The appearance of each facial feature has an effect on our perception of facial traits. This research addresses the problem of measuring the size of these effects for five facial features (eyes, eyebrows, nose, mouth, and jaw). Our proposal is a mixed feature-based and image-based approach that allows judgments to be made on complete real faces in the categorization tasks, rather than on the synthetic, noisy, or partial faces that can influence the assessment. Each facial feature is automatically classified according to its global appearance using principal component analysis. Using this procedure, we establish a reduced set of relevant specific attributes (each one describing a complete facial feature) to characterize faces. In this way, a more direct link can be established between perceived facial traits and what people intuitively consider an eye, an eyebrow, a nose, a mouth, or a jaw. A set of 92 male faces was classified using this procedure, and the results were related to their scores on 15 perceived facial traits. We show that the relevant features depend greatly on what we are trying to judge. Globally, the eyes have the greatest effect. However, other facial features are more relevant for some judgments, such as the mouth for happiness and femininity or the nose for dominance.
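
A hedged sketch of the analysis pattern described here: summarise each cropped feature with a few principal components, concatenate the per-feature attributes, and estimate each feature's contribution to a perceived trait with a linear model; the crop arrays, component counts, and use of plain linear regression are assumptions for illustration, not the paper's exact method.

```python
# Sketch of the mixed feature/image-based analysis: per-feature PCA
# attributes, then a linear model relating them to one perceived trait.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

N_COMPONENTS = 3

def feature_attributes(crops):
    """crops: (n_faces, h*w) flattened images of one feature (e.g. eyes)."""
    return PCA(n_components=N_COMPONENTS).fit_transform(crops)

def effect_sizes(crops_by_feature, trait_scores):
    """Aggregate absolute regression weights per feature as a rough
    measure of that feature's effect on the perceived trait."""
    X = np.hstack([feature_attributes(c) for c in crops_by_feature])
    coef = LinearRegression().fit(X, trait_scores).coef_
    return [np.abs(coef[i * N_COMPONENTS:(i + 1) * N_COMPONENTS]).sum()
            for i in range(len(crops_by_feature))]
```

Comparing the returned values across the five features (eyes, eyebrows, nose, mouth, jaw) and across traits would reproduce the kind of comparison the abstract reports, e.g. the eyes dominating globally but the mouth mattering most for happiness.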

