A Facial-Action Imposter: How Head Tilt Influences Perceptions of Dominance From a Neutral Face

2019 ◽  
Vol 30 (6) ◽  
pp. 893-906 ◽  
Author(s):  
Zachary Witkower ◽  
Jessica L. Tracy

Research on face perception tends to focus on facial morphology and the activation of facial muscles while ignoring any impact of head position. We raise questions about this approach by demonstrating that head movements can dramatically shift the appearance of the face to shape social judgments without engaging facial musculature. In five studies (total N = 1,517), we found that when eye gaze was directed forward, tilting one’s head downward (compared with a neutral angle) increased perceptions of dominance, and this effect was due to the illusory appearance of lowered and V-shaped eyebrows caused by a downward head tilt. Tilting one’s head downward therefore functions as an action-unit imposter, creating the artificial appearance of a facial action unit that has a strong effect on social perception. Social judgments about faces are therefore driven not only by facial shape and musculature but also by movements in the face’s physical foundation: the head.

Author(s):  
Guozhu Peng ◽  
Shangfei Wang

Current work on facial action unit (AU) recognition typically requires fully AU-labeled training samples. To reduce the reliance on time-consuming manual AU annotations, we propose a novel semi-supervised AU recognition method that leverages two kinds of readily available auxiliary information. The first is the dependencies between AUs and expressions, as well as the dependencies among AUs; these arise from facial anatomy and are therefore embedded in all facial images, independent of their AU annotation status. The second is facial image synthesis given AUs, the dual task of AU recognition from facial images, which therefore has intrinsic probabilistic connections with AU recognition, regardless of AU annotations. Specifically, we propose a dual semi-supervised generative adversarial network for AU recognition from partially AU-labeled and fully expression-labeled facial images. The proposed network consists of an AU classifier C, an image generator G, and a discriminator D. In addition to minimizing the supervised losses of the AU classifier and the face generator on labeled training data, we exploit the probabilistic duality between the tasks through adversarial learning, forcing the distribution of face-AU-expression tuples produced by the AU classifier and the face generator on all training data to converge to the ground-truth distribution observed in the labeled data. This joint distribution also captures the inherent AU dependencies. Furthermore, we reconstruct the facial image by using the output of the AU classifier as the input of the face generator, and recover AU labels by feeding the output of the face generator back into the AU classifier. Minimizing these reconstruction losses on all training data exploits the informative feedback provided by the dual tasks. Within-database and cross-database experiments on three benchmark databases demonstrate the superiority of our method in both AU recognition and face synthesis compared with state-of-the-art approaches.
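
A minimal sketch of the dual-task coupling described above, assuming a PyTorch setup: an AU classifier C and a face generator G reconstruct each other's outputs, while a discriminator D scores generated face-AU-expression tuples so that adversarial training can pull them toward the labeled-data distribution. The layer sizes, image resolution, and loss weighting are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: placeholder architectures standing in for C, G, and D.
import torch
import torch.nn as nn

N_AUS, N_EXPR, IMG_DIM = 12, 6, 64 * 64  # hypothetical sizes

C = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                  nn.Linear(256, N_AUS), nn.Sigmoid())              # AU classifier
G = nn.Sequential(nn.Linear(N_AUS + N_EXPR, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())               # face generator
D = nn.Sequential(nn.Linear(IMG_DIM + N_AUS + N_EXPR, 256), nn.ReLU(),
                  nn.Linear(256, 1), nn.Sigmoid())                  # tuple discriminator

bce, mse = nn.BCELoss(), nn.MSELoss()

def dual_losses(img, expr_onehot, au_labels=None):
    """Supervised loss where AU labels exist, plus reconstruction and
    adversarial terms that couple the dual tasks for every image."""
    au_pred = C(img)
    sup_loss = bce(au_pred, au_labels) if au_labels is not None else 0.0

    # Duality: regenerate the image from predicted AUs, then recover the
    # AUs from the generated image.
    img_rec = G(torch.cat([au_pred, expr_onehot], dim=1))
    au_rec = C(img_rec)
    rec_loss = mse(img_rec, img) + mse(au_rec, au_pred.detach())

    # Adversarial term: D should find generated tuples indistinguishable
    # from real labeled (face, AU, expression) tuples.
    fake_tuple = torch.cat([img_rec, au_pred, expr_onehot], dim=1)
    adv_loss = bce(D(fake_tuple), torch.ones(img.size(0), 1))
    return sup_loss + rec_loss + adv_loss

# Toy usage with random tensors standing in for real data.
imgs = torch.rand(8, IMG_DIM) * 2 - 1
exprs = torch.eye(N_EXPR)[torch.randint(0, N_EXPR, (8,))]
aus = torch.randint(0, 2, (8, N_AUS)).float()
dual_losses(imgs, exprs, aus).backward()
```

In the full method, D would also be trained on real labeled tuples, and the supervised, reconstruction, and adversarial terms would be weighted and optimized jointly; the sketch shows only the classifier/generator side of that objective.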


2021 ◽  
Vol 11 (23) ◽  
pp. 11171
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Sakiko Yoshikawa

Automatic facial action detection is important, but no previous study has evaluated how accurately pre-trained models detect facial actions as the angle of the face changes from frontal to profile. Using static facial images obtained at various angles (0°, 15°, 30°, and 45°), we investigated the performance of three automated facial action detection systems (FaceReader, OpenFace, and Py-Feat). Overall performance was best for OpenFace, followed by FaceReader and Py-Feat. The performance of FaceReader decreased significantly at 45° compared with the other angles, whereas the performance of Py-Feat did not differ among the four angles. The performance of OpenFace decreased as the target face turned sideways. Prediction accuracy and robustness to angle changes varied with the target facial components and the action detection system.
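
The kind of per-angle comparison reported above can be summarized with a short analysis script. The table layout, column names, and toy values below are assumptions used only to illustrate grouping binary AU detections by head angle and scoring each system against manual coding.

```python
# Sketch only: toy per-image AU detections grouped by head angle.
import pandas as pd
from sklearn.metrics import f1_score

# Hypothetical results: one row per image, with a binary ground-truth
# label and one prediction column per detection system.
df = pd.DataFrame({
    "angle":      [0, 0, 15, 15, 30, 30, 45, 45],
    "au":         ["AU12"] * 8,
    "truth":      [1, 0, 1, 0, 1, 0, 1, 0],
    "openface":   [1, 0, 1, 0, 1, 1, 0, 0],
    "facereader": [1, 0, 1, 1, 0, 0, 0, 1],
})

for system in ["openface", "facereader"]:
    by_angle = df.groupby("angle").apply(
        lambda g: f1_score(g["truth"], g[system], zero_division=0)
    )
    print(system, by_angle.to_dict())
```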


Perception ◽  
2020 ◽  
Vol 49 (4) ◽  
pp. 422-438 ◽  
Author(s):  
Dongyu Zhang ◽  
Hongfei Lin ◽  
David I. Perrett

Interpreting the personality and the disposition of people is important for social interaction. Both emotional expression and facial width are known to affect personality perception. Moreover, both the apparent emotional expression and the apparent width-to-height ratio of the face change with head tilt. We investigated how head tilt affects judgements of trustworthiness and dominance and whether such trait judgements reflect apparent emotion or facial width. Sixty-seven participants rated the dominance, emotion, and trustworthiness of 24 faces posing with different head tilts while maintaining eye gaze at the camera. Both the 30° up and 20° down head postures were perceived as less trustworthy and more dominant (less submissive) than the head-level posture. Change in perceived trustworthiness and submissiveness with head tilt correlated with change in apparent emotional positivity but not change in facial width. Hence, our analysis suggests that apparent emotional expression provides a better explanation of perceived trustworthiness and dominance compared with cues to facial structure.
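
A minimal sketch of the correlational logic described above, using simulated placeholder data: for each face, the change in a trait rating between a tilted and a level pose is correlated with the change in apparent emotional positivity and with the change in apparent facial width-to-height ratio.

```python
# Sketch only: simulated per-face change scores (tilted minus level).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_faces = 24
d_trust = rng.normal(size=n_faces)                               # change in perceived trustworthiness
d_emotion = 0.7 * d_trust + rng.normal(scale=0.5, size=n_faces)  # change in apparent positivity
d_fwhr = rng.normal(size=n_faces)                                # change in apparent width-to-height ratio

r_emo, p_emo = pearsonr(d_trust, d_emotion)
r_fwhr, p_fwhr = pearsonr(d_trust, d_fwhr)
print(f"emotion: r={r_emo:.2f} (p={p_emo:.3f}); width: r={r_fwhr:.2f} (p={p_fwhr:.3f})")
```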


2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper will present results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper will also address the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar, and 2 familiar clinicians), and the listeners were asked to rate the vocalizations as either “yes” or “no.” The ratings were compared to the original identification made by the child's mother during the face-to-face interaction from which the samples were drawn. Findings of this study suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure the child's intended choice is accurately understood.
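
A minimal sketch of the agreement analysis described above, with placeholder ratings: each listener's “yes”/“no” judgments of the recorded vocalizations are compared with the mother's original identifications, reporting raw agreement and Cohen's kappa.

```python
# Sketch only: placeholder ratings standing in for the study's data.
from sklearn.metrics import cohen_kappa_score

mother_original = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]
listener_ratings = {
    "father":      ["yes", "no", "yes", "no", "no", "no", "yes", "no"],
    "clinician_1": ["yes", "no", "yes", "yes", "no", "yes", "yes", "no"],
}

for listener, ratings in listener_ratings.items():
    agreement = sum(a == b for a, b in zip(mother_original, ratings)) / len(ratings)
    kappa = cohen_kappa_score(mother_original, ratings)
    print(f"{listener}: {agreement:.0%} agreement, kappa = {kappa:.2f}")
```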


2018 ◽  
Vol 9 (2) ◽  
pp. 31-38
Author(s):  
Fransisca Adis ◽  
Yohanes Merci Widiastomo

Facial expression is one of the aspects that can deliver story and a character's emotion in 3D animation. To achieve this, the character's facial expressions need to be planned from the very beginning of production. At an early stage, the character designer needs to think about expression once the character design is done, the rigger needs to create a rig flexible enough to achieve the design, and the animator can then get a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and adopted by Paul Ekman and Wallace V. Friesen, can be used to identify the emotions a person shows in general. This paper explains how the writer uses FACS to help design the facial expressions of 3D characters. FACS is used to determine the basic characteristics of the facial shapes shown for each emotion, compared against actual face references. Keywords: animation, facial expression, non-dialog
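
As a concrete illustration of how FACS can anchor facial design, a small lookup from basic emotions to commonly cited AU combinations can serve as a checklist when building blend shapes or facial rigs. The exact AU sets vary across sources; the mapping below is an approximation for illustration, not the paper's own.

```python
# Sketch only: approximate emotion-to-AU checklist for facial rigging.
BASIC_EMOTION_AUS = {
    "happiness": ["AU6 (cheek raiser)", "AU12 (lip corner puller)"],
    "sadness":   ["AU1 (inner brow raiser)", "AU4 (brow lowerer)", "AU15 (lip corner depressor)"],
    "surprise":  ["AU1", "AU2 (outer brow raiser)", "AU5 (upper lid raiser)", "AU26 (jaw drop)"],
    "anger":     ["AU4", "AU5", "AU7 (lid tightener)", "AU23 (lip tightener)"],
    "disgust":   ["AU9 (nose wrinkler)", "AU15", "AU16 (lower lip depressor)"],
    "fear":      ["AU1", "AU2", "AU4", "AU5", "AU20 (lip stretcher)", "AU26"],
}

def checklist(emotion: str) -> None:
    """Print the AU checklist for one target expression."""
    for au in BASIC_EMOTION_AUS[emotion]:
        print(f"- {au}")

checklist("happiness")
```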


2009 ◽  
Vol 35 (2) ◽  
pp. 198-201 ◽  
Author(s):  
Lei WANG ◽  
Bei-Ji ZOU ◽  
Xiao-Ning PENG

Author(s):  
Dakai Ren ◽  
Xiangmin Wen ◽  
Jiazhong Chen ◽  
Yu Han ◽  
Shiqi Zhang

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4222
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Masaki Osumi ◽  
Koh Shimokawa

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of currently available systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, and AFARtoolbox) in detecting the facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFARtoolbox provided higher values for the area under the receiver operating characteristic curve than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
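
The AUC comparison described above can be summarized along these lines; the detector scores and manual codes below are simulated placeholders, and only the computation of a per-AU area under the ROC curve for each system is shown.

```python
# Sketch only: simulated detector scores versus binary manual AU coding.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_frames = 200
truth = {au: rng.integers(0, 2, n_frames) for au in ["AU12", "AU14"]}

systems = ["OpenFace", "FaceReader", "AFARtoolbox"]
scores = {s: {au: np.clip(0.6 * truth[au] + 0.5 * rng.random(n_frames), 0, 1)
              for au in truth} for s in systems}

for system in systems:
    aucs = {au: round(roc_auc_score(truth[au], scores[system][au]), 2) for au in truth}
    print(system, aucs)
```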


1988 ◽  
Vol 59 (3) ◽  
pp. 796-818 ◽  
Author(s):  
C. S. Huang ◽  
M. A. Sirisko ◽  
H. Hiraba ◽  
G. M. Murray ◽  
B. J. Sessle

1. The technique of intracortical microstimulation (ICMS), supplemented by single-neuron recording, was used to carry out an extensive mapping of the face primary motor cortex. The ICMS study involved a total of 969 microelectrode penetrations carried out in 10 unanesthetized monkeys (Macaca fascicularis).
2. Monitoring of ICMS-evoked movements and associated electromyographic (EMG) activity revealed a general pattern of motor cortical organization. This was characterized by a representation of the facial musculature, which partially enclosed and overlapped the rostral, medial, and caudal borders of the more laterally located cortical regions representing the jaw and tongue musculatures. Responses were evoked at ICMS thresholds as low as 1 microA, and the latency of the suprathreshold EMG responses ranged from 10 to 45 ms.
3. Although contralateral movements predominated, a representation of ipsilateral movements was found, which was much more extensive than previously reported and which was intermingled with the contralateral representations in the anterior face motor cortex.
4. In examining the fine organizational pattern of the representations, we found clear evidence for multiple representation of a particular muscle, thus supporting other investigations of the motor cortex, which indicate that multiple, yet discrete, efferent microzones represent an essential organizational principle of the motor cortex.
5. The close interrelationship of the representations of all three muscle groups, as well as the presence of a considerable ipsilateral representation, may allow for the necessary integration of unilateral or bilateral activities of the numerous face, jaw, and tongue muscles, which is a feature of many of the movement patterns in which these various muscles participate.
6. In six of these same animals, plus an additional two animals, single-neuron recordings were made in the motor and adjacent sensory cortices in the anesthetized state. These neurons were electrophysiologically identified as corticobulbar projection neurons or as nonprojection neurons responsive to superficial or deep orofacial afferent inputs. The rostral, medial, lateral, and caudal borders of the face motor cortex were delineated with greater definition by ICMS and these electrophysiological procedures than by cytoarchitectonic features alone. We noted that there was an approximate fit in area 4 between the extent of projection neurons and field potentials antidromically evoked from the brain stem and the extent of positive ICMS sites. (ABSTRACT TRUNCATED AT 400 WORDS)

