Automatic Aspects in Face Perception

Author(s):  
David Anaki ◽  
Elena I. Nica ◽  
Morris Moscovitch

We examined the perceptual dependency of local facial information on the whole facial context. In Experiment 1, participants matched a predetermined facial feature that appeared in two sequentially presented faces, judging whether it was identical or not while ignoring an irrelevant dimension in the faces. This irrelevant dimension was (a) compatible or incompatible with the target’s response and (b) same or different in either featural characteristics or the metric distance between facial features in the two faces. A compatibility effect was observed for upright but not inverted faces, regardless of the type of change that differentiated the faces on the irrelevant dimension. Even when the target was presented upright within the inverted faces, to attenuate perceptual load, no compatibility effect was found (Experiment 2). Finally, no compatibility effects were found for either upright or inverted houses (Experiment 3). These findings suggest that holistic face perception is mandatory.
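
For readers unfamiliar with the measure, a compatibility effect of the kind reported here is simply the mean response-time cost on incompatible relative to compatible trials, computed separately per orientation. A minimal sketch in Python, with purely illustrative trial data (the numbers and field layout are hypothetical, not the study's):

```python
import numpy as np

# Hypothetical trial records: (orientation, compatibility, reaction time in ms).
trials = [
    ("upright", "compatible", 612), ("upright", "incompatible", 668),
    ("upright", "compatible", 598), ("upright", "incompatible", 671),
    ("inverted", "compatible", 705), ("inverted", "incompatible", 709),
    ("inverted", "compatible", 712), ("inverted", "incompatible", 706),
]

def compatibility_effect(trials, orientation):
    """Mean RT(incompatible) minus mean RT(compatible) for one orientation."""
    rts = {"compatible": [], "incompatible": []}
    for ori, comp, rt in trials:
        if ori == orientation:
            rts[comp].append(rt)
    return np.mean(rts["incompatible"]) - np.mean(rts["compatible"])

for ori in ("upright", "inverted"):
    print(f"{ori}: {compatibility_effect(trials, ori):+.1f} ms")
```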

2021 ◽  
Vol 12 ◽  
Author(s):  
Simon Faghel-Soubeyrand ◽  
Juliane A. Kloess ◽  
Frédéric Gosselin ◽  
Ian Charest ◽  
Jessica Woodhams

Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet, how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features known to be important in face perception, namely position, spatial frequency (SF), and orientation, are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only facial information of vertical orientation was linked to adult-face categorisation, while features of horizontal and, to a lesser extent, oblique orientations were more diagnostic of a child face. Finally, we found that SF diagnosticity showed a U-shaped pattern for face-age categorisation, with information in low and high SFs being diagnostic of child faces, and mid SFs being diagnostic of adult faces. Through this first characterisation of the facial features of face-age categorisation, we show that important information found in psychophysical studies of face perception in general (i.e., the eye area, horizontals, and mid-level SFs) is crucial to the practical context of face-age categorisation, and we present data-driven procedures through which face-age classification training could be implemented for real-world challenges.
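
The abstract does not give implementation details, but the core of a reverse correlation analysis of this kind can be sketched compactly: random sampling masks are applied to face stimuli on each trial, and contrasting the masks on correct versus incorrect trials yields a classification image of the diagnostic regions. A position-only sketch under those assumptions (the paper additionally analyses SF and orientation bands; all data below are stand-ins):

```python
import numpy as np

# Minimal classification-image sketch (not the authors' exact pipeline):
# on each trial a random mask reveals parts of the face stimulus; contrasting
# masks on correct vs. error trials shows which pixels support accuracy.
rng = np.random.default_rng(0)
n_trials, h, w = 5000, 64, 64

masks = rng.random((n_trials, h, w))        # per-trial random sampling planes
accuracy = rng.integers(0, 2, n_trials)     # hypothetical 1 = correct, 0 = error

# Classification image: mean mask on correct minus incorrect trials.
ci = masks[accuracy == 1].mean(axis=0) - masks[accuracy == 0].mean(axis=0)
print("most diagnostic pixel:", np.unravel_index(np.abs(ci).argmax(), ci.shape))
```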


i-Perception ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 204166952096112
Author(s):  
Jose A. Diego-Mas ◽  
Felix Fuentes-Hurtado ◽  
Valery Naranjo ◽  
Mariano Alcañiz

Facial information is processed by our brain in such a way that we immediately make judgments about, for example, attractiveness or masculinity, or interpret the personality traits or moods of other people. The appearance of each facial feature has an effect on our perception of facial traits. This research addresses the problem of measuring the size of these effects for five facial features (eyes, eyebrows, nose, mouth, and jaw). Our proposal is a mixed feature-based and image-based approach that allows judgments to be made on complete real faces in the categorization tasks, rather than on the synthetic, noisy, or partial faces that can bias the assessment. Each facial feature is automatically classified according to its global appearance using principal component analysis. Using this procedure, we establish a reduced set of relevant specific attributes (each one describing a complete facial feature) to characterize faces. In this way, a more direct link can be established between perceived facial traits and what people intuitively consider an eye, an eyebrow, a nose, a mouth, or a jaw. A set of 92 male faces was classified using this procedure, and the results were related to their scores on 15 perceived facial traits. We show that the relevant features greatly depend on what we are trying to judge. Globally, the eyes have the greatest effect. However, other facial features are more relevant for some judgments, such as the mouth for happiness and femininity or the nose for dominance.
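
A hedged sketch of the classification step as described, using scikit-learn: crops of one facial feature are projected into a PCA space and grouped into appearance classes. The use of k-means for the grouping, and all array shapes and data, are assumptions for illustration, not the authors' published pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Crop the same facial feature (e.g., the eyes) from every face, project the
# crops into a PCA space, and group them into global-appearance classes.
rng = np.random.default_rng(0)
eye_crops = rng.random((92, 32 * 96))   # 92 faces, flattened 32x96 eye regions (stand-in data)

pca = PCA(n_components=10)              # compact appearance description
coords = pca.fit_transform(eye_crops)

# One attribute per feature: the appearance class the crop falls into.
eye_class = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
print("faces per eye-appearance class:", np.bincount(eye_class))
```

The resulting class labels (one per feature, per face) could then be related to trait ratings, which matches the abstract's idea of a reduced set of intuitive attributes.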


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of deep learning methods, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divided the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face contour landmarks, and the region of the outer face. While leaving the facial-features region untouched, we used two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combined deformable strokes with optical flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrated that our method could not only effectively preserve the small and distinct facial features, but also follow the underlying motion coherently.
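
The full pipeline (the Mask R-CNN branch and the stroke models) is beyond a short example, but the flow-coherence step can be sketched with OpenCV's dense Farnebäck optical flow: stroke anchor points are displaced by the estimated flow so strokes follow the motion between frames. The function name, parameters, and synthetic frames below are illustrative assumptions:

```python
import cv2
import numpy as np

def propagate_strokes(prev_gray, next_gray, stroke_pts):
    """Move stroke anchor points along dense optical flow between frames.

    prev_gray/next_gray: uint8 grayscale frames; stroke_pts: (N, 2) array of
    (x, y) positions. A sketch of the flow-coherence idea, not the full renderer.
    """
    # Args: pyr_scale=0.5, levels=3, winsize=15, iterations=3,
    #       poly_n=5, poly_sigma=1.2, flags=0
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    xs = stroke_pts[:, 0].astype(int).clip(0, flow.shape[1] - 1)
    ys = stroke_pts[:, 1].astype(int).clip(0, flow.shape[0] - 1)
    return stroke_pts + flow[ys, xs]        # displaced (x, y) positions

# Usage with two synthetic frames and two stroke anchors:
f0 = np.zeros((120, 160), np.uint8); cv2.circle(f0, (60, 60), 20, 255, -1)
f1 = np.zeros((120, 160), np.uint8); cv2.circle(f1, (64, 62), 20, 255, -1)
pts = np.array([[55.0, 55.0], [70.0, 65.0]])
print(propagate_strokes(f0, f1, pts))
```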


Author(s):  
CHING-WEN CHEN ◽  
CHUNG-LIN HUANG

This paper presents a face recognition system that can effectively identify an unknown individual using front-view facial features. For front-view facial feature extraction, we capture the contours of the eyes and mouth with the deformable template model because of their analytically describable shapes. However, the shapes of the eyebrows, nostrils, and face outline are difficult to model with a deformable template; we extract them using the active contour model (snake). After the contours of all facial features have been captured, we calculate effective feature values from these extracted contours and construct databases for classifying unknown identities. In the database generation phase, 12 models are photographed, and feature vectors are calculated for each portrait. In the identification phase, if any one of these 12 persons has his picture taken again, the system can recognize his identity.
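
A minimal sketch of the snake step using scikit-image's active_contour, on a synthetic blob standing in for a face outline; the initialisation and parameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from skimage.segmentation import active_contour
from skimage.filters import gaussian
from skimage.draw import disk

# Snakes suit the features the paper says resist deformable templates
# (eyebrows, nostrils, face outline): initialise a contour around the
# region and let it settle onto nearby image edges.
img = np.zeros((100, 100))
rr, cc = disk((50, 50), 25)             # stand-in "face" blob of radius 25
img[rr, cc] = 1.0
img = gaussian(img, sigma=2, preserve_range=True)

theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([50 + 35 * np.sin(theta),    # initial (row, col) circle
                        50 + 35 * np.cos(theta)])

snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)
print("contour settled near radius:",
      np.hypot(snake[:, 0] - 50, snake[:, 1] - 50).mean())
```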


Author(s):  
CHIN-CHEN CHANG ◽  
YUAN-HUI YU

This paper proposes an efficient approach for human face detection and exact facial feature location in a head-and-shoulder image. The method searches for an eye-pair candidate as a baseline, using the high intensity contrast between the iris and the sclera. To locate the other facial features, the algorithm uses geometric knowledge of the human face based on the obtained eye-pair candidate. The human face is finally verified using these facial features. Thanks to the Prune-and-Search and simple filtering techniques it applies, the proposed method achieves very promising performance in face detection and facial feature location.
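
A toy sketch of the iris/sclera-contrast idea: scan for pairs of dark pixels that are roughly horizontally aligned at a plausible inter-ocular distance. The real method adds Prune-and-Search and geometric verification of the other features; the thresholds and brute-force pairing below are illustrative assumptions:

```python
import numpy as np

def eye_pair_candidates(gray, dark_thresh=60, min_sep=20, max_sep=80):
    """Find horizontally aligned dark-pixel pairs as iris candidates."""
    ys, xs = np.nonzero(gray < dark_thresh)          # dark (iris-like) pixels
    pairs = []
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            same_row = abs(int(ys[i]) - int(ys[j])) <= 2
            sep = abs(int(xs[i]) - int(xs[j]))
            if same_row and min_sep <= sep <= max_sep:
                pairs.append(((xs[i], ys[i]), (xs[j], ys[j])))
    return pairs

# Tiny synthetic image: two dark dots on a bright background.
img = np.full((64, 64), 200, np.uint8)
img[30, 20] = img[30, 50] = 10
print(eye_pair_candidates(img))
```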


2004 ◽  
Vol 48 (1) ◽  
pp. 89-99
Author(s):  
Frank Eyetsemitan

This study initially set out to explore the facial features (and their descriptions) of the emotion-expressive behaviors of “peace” and “contentment” but ended up with a third one, “annoyed/irritated.” The emotion-expressive behaviors of “peace” and “contentment” had been associated with the faces of deceased persons in a previous study. Pictures of two volunteers taken during a class on relaxation technique were given to 93 respondents, made up of volunteer students from a small midwestern college and volunteer residents of a nursing home (see Appendices A and B). Participants were asked to choose, from a list provided to them, the emotion-expressive behavior (e.g., “peace,” “content,” “hopeful,” “other”) that most closely described each of the facial pictures presented. They were also asked to identify and describe the facial feature(s) that most closely matched the emotion-expressive behavior they had chosen. Most of the respondents identified Picture 1 as “peaceful” and Picture 2 as “annoyed/irritated.” The eyes and the mouth were the most salient features in describing both emotions. This study has implications for those who identify loved ones before viewing; for individuals who prepare deceased persons for viewing; for embalming educators; and for actors of these emotions.


Author(s):  
Yuanyuan Liu ◽  
Jingying Chen ◽  
Cunjie Shan ◽  
Zhiming Su ◽  
Pei Cai

Head pose and facial feature detection are important for face analysis. However, although many studies have reported good results in constrained environments, performance can degrade under the high variation in facial appearance, pose, illumination, occlusion, expression, and make-up found in unconstrained ones. In this paper, we propose a hierarchical regression approach, Dirichlet-tree enhanced random forests (D-RF), for face analysis in unconstrained environments. D-RF introduces a Dirichlet-tree probabilistic model into the regression RF framework in a hierarchical way to achieve efficiency and robustness. To eliminate the noise of unconstrained environments, facial patches extracted from the face area are classified as positive or negative, and only positive facial patches are used for face analysis. The proposed hierarchical D-RF works in two iterative procedures. First, a coarse head pose is estimated to constrain facial feature detection, and the head pose is then updated based on the estimated facial features. Second, facial feature localization is refined based on the updated head pose. To further improve efficiency and robustness, multiple probabilistic models are learned in the leaves of the D-RF, i.e., the patch’s classification, the head pose probabilities, the locations of facial points, and face deformation models (FDM). Moreover, our algorithm adopts a composite weighted voting method in which each patch extracted from the image can directly cast a vote for the head pose or for each of the facial features. Extensive experiments have been conducted on different publicly available databases. The experimental results demonstrate that the proposed approach is robust and efficient for head pose and facial feature detection.
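
D-RF's Dirichlet-tree leaf models are specific to the paper, but the underlying patch-voting idea can be sketched with a plain random-forest regressor: each facial patch regresses a pose angle, and the per-patch votes are aggregated over the face. All data, dimensions, and the synthetic yaw target below are stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Training: stand-in patch descriptors with a synthetic yaw angle to regress.
n_train, patch_dim = 2000, 64
X = rng.random((n_train, patch_dim))
yaw = X[:, :8].sum(axis=1) * 10 - 40 + rng.normal(0, 2, n_train)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, yaw)

# Test time: every patch from one face casts a vote; aggregate robustly.
test_patches = rng.random((30, patch_dim))
votes = forest.predict(test_patches)
print("head-pose estimate (median of patch votes): %.1f deg" % np.median(votes))
```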


2009 ◽  
Vol 2009 ◽  
pp. 1-15 ◽  
Author(s):  
Yu Zhang ◽  
Edmond C. Prakash

This paper presents a new anthropometrics-based method for generating realistic, controllable face models. Our method establishes an intuitive and efficient interface that facilitates interactive 3D face modeling and editing. It takes 3D face scans as examples in order to exploit the variations present in the real faces of individuals. The system automatically learns a model prior from datasets of example meshes of facial features using principal component analysis (PCA) and uses it to regulate the naturalness of synthesized faces. For each facial feature, we compute a set of anthropometric measurements to parameterize the example meshes into a measurement space. Using PCA coefficients as a compact shape representation, we formulate the face modeling problem in a scattered data interpolation framework that takes the user-specified anthropometric parameters as input. Solving the interpolation problem in a reduced subspace allows us to generate a natural face shape that satisfies the user-specified constraints. At runtime, a new face shape can be generated at an interactive rate. We demonstrate the utility of our method through several applications, including analysis of the facial features of subjects in different race groups, facial feature transfer, and adapting face models to a particular population group.
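
A sketch of the scattered-data-interpolation step as the abstract describes it, assuming a radial-basis-function interpolant (SciPy's RBFInterpolator) from anthropometric measurements to PCA coefficients; the measurement set, dimensions, and data are stand-ins, not the paper's:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Examples: each scan is parameterised by a few anthropometric measurements
# and represented compactly by its PCA coefficients.
n_examples, n_measures, n_pca = 50, 4, 12
measurements = rng.random((n_examples, n_measures))   # e.g., nose length, width...
pca_coeffs = rng.normal(size=(n_examples, n_pca))     # stand-in PCA coordinates

interp = RBFInterpolator(measurements, pca_coeffs, kernel='thin_plate_spline')

# A user specifies target measurements; we synthesise PCA coefficients and
# would then reconstruct the mesh as mean_shape + basis @ coefficients.
query = rng.random((1, n_measures))
new_coeffs = interp(query)
print("synthesised PCA coefficients:", np.round(new_coeffs[0][:4], 3))
```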

