Generating Facial Character: A Systematic Method Accumulating Expressive Histories

Leonardo ◽  
2021 ◽  
pp. 1-11
Author(s):  
Ana Jofre

Abstract: I present a method to simulate facial character development by accumulating an expressive history onto a face. The model analytically combines facial features from Paul Ekman's seven universal facial expressions using a simple Markov chain algorithm. The output is a series of 3D digital faces created in Blender with Python. The results of this work show that systematically imprinting features from emotional expressions onto a neutral face transforms it into one with distinct character. This method could be applied to creative works that depend on character creation, ranging from figurative sculpture to game design, and it allows the creator to incorporate chance into the creative process. I demonstrate the sculpture application in this paper with ceramic casts of the generated faces.
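The abstract does not include the implementation, so the following is only a minimal sketch of how a Markov chain could accumulate expression features onto a neutral face; the transition matrix, the per-expression shape-key offsets, and the accumulation weight are all illustrative assumptions, and the final step of driving Blender shape keys via `bpy` is omitted.

```python
import numpy as np

# Illustrative sketch only: the paper does not publish its implementation,
# so the expression set, transition probabilities, shape-key offsets, and
# accumulation rule below are assumptions chosen for demonstration.
EXPRESSIONS = ["anger", "contempt", "disgust", "fear",
               "happiness", "sadness", "surprise"]

rng = np.random.default_rng(seed=7)

# Assumed uniform Markov transition matrix over the seven expressions.
transitions = np.full((7, 7), 1 / 7)

# Hypothetical per-expression offsets for a face described by N shape-key
# parameters, standing in for Blender shape keys on a neutral head mesh.
N_PARAMS = 20
offsets = {e: rng.normal(0.0, 1.0, N_PARAMS) for e in EXPRESSIONS}

def accumulate_history(steps=50, weight=0.05):
    """Walk the Markov chain and imprint a small fraction of each visited
    expression's offsets onto an initially neutral face."""
    face = np.zeros(N_PARAMS)                      # neutral face
    state = rng.integers(len(EXPRESSIONS))         # random starting expression
    for _ in range(steps):
        face += weight * offsets[EXPRESSIONS[state]]
        state = rng.choice(len(EXPRESSIONS), p=transitions[state])
    return face

print(accumulate_history()[:5])  # first few accumulated shape-key values
```

In an actual Blender pipeline, the accumulated vector would then set shape-key values on the head mesh before rendering or exporting it for casting.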

2021 ◽  
Vol 12 ◽  
Author(s):  
Shu Zhang ◽  
Xinge Liu ◽  
Xuan Yang ◽  
Yezhi Shu ◽  
Niqi Liu ◽  
...  

Cartoon faces are widely used in social media, animation production, and social robots because of their appealing ability to convey different kinds of emotional information. Despite these popular applications, the mechanisms of recognizing emotional expressions in cartoon faces are still unclear. Therefore, three experiments were conducted in this study to systematically explore the recognition process for emotional cartoon expressions (happy, sad, and neutral) and to examine the influence of key facial features (mouth, eyes, and eyebrows) on emotion recognition. Across the experiments, three presentation conditions were employed: (1) a full face; (2) an individual feature only (with the two other features concealed); and (3) one feature concealed with the two other features presented. The cartoon face images used in this study were converted from a set of real faces acted by Chinese posers, and the observers were Chinese. The results show that happy cartoon expressions were recognized more accurately than neutral and sad expressions, which is consistent with the happiness recognition advantage found in studies of real faces. Compared with real facial expressions, sad cartoon expressions were perceived as sadder, and happy cartoon expressions as less happy, regardless of whether the full face or single facial features were viewed. For cartoon faces, the mouth was shown to be a feature that is both sufficient and necessary for the recognition of happiness, and the eyebrows were sufficient and necessary for the recognition of sadness. This study helps to clarify the perceptual mechanism underlying emotion recognition in cartoon faces and sheds some light on directions for future research on intelligent human-computer interaction.


2003 ◽  
Vol 14 (4) ◽  
pp. 373-376 ◽  
Author(s):  
Abigail A. Marsh ◽  
Hillary Anger Elfenbein ◽  
Nalini Ambady

We report evidence for nonverbal “accents,” subtle differences in the appearance of facial expressions of emotion across cultures. Participants viewed photographs of Japanese nationals and Japanese Americans in which posers' muscle movements were standardized to eliminate differences in expressions, cultural or otherwise. Participants guessed the nationality of posers displaying emotional expressions at above-chance levels, and with greater accuracy than they judged the nationality of the same posers displaying neutral expressions. These findings indicate that facial expressions of emotion can contain nonverbal accents that identify the expresser's nationality or culture. Cultural differences are intensified during the act of expressing emotion, rather than residing only in facial features or other static elements of appearance. This evidence suggests that extreme positions regarding the universality of emotional expressions are incomplete.


2018 ◽  
Vol 11 (4) ◽  
pp. 50-69 ◽  
Author(s):  
V.A. Barabanschikov ◽  
O.A. Korolkova ◽  
E.A. Lobodinskaya

We studied the perception of human facial emotional expressions during step-function stroboscopic presentation of changing facial movements. Consecutive stages of each of the six basic facial expressions were presented to the participants: neutral face (300 ms), expression of medium intensity (10–40 ms), intense expression (30–120 ms), expression of medium intensity (10–40 ms), neutral face (100 ms). An alternative forced-choice task was used to categorize the facial expressions. The results were compared to previous studies (Barabanschikov, Korolkova, Lobodinskaya, 2015; 2016) conducted using the same paradigm but with a boxcar-function change of the expression: neutral face, intense expression, neutral face. We found that the dynamics of facial expression recognition, as well as the errors and recognition times, were almost identical under boxcar- and step-function presentation. One factor influencing the recognition rate is the proportion of presentation time devoted to the static (neutral) and changing (expressive) aspects of the stimulus. Under suboptimal conditions of facial expression perception (a minimal presentation time of 10+30+10 ms and reduced intensity of the expressions) we observed stroboscopic sensitization, a previously described phenomenon of enhanced recognition of low-attractiveness expressions (disgust, sadness, fear, and anger) that had earlier been found with boxcar-function presentation of the expressions. We confirmed that real and apparent motion have similar effects on the recognition of the basic facial emotional expressions.
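For clarity, the step-function trial structure described above can be written out as a small timing schedule; the durations are those reported in the abstract, while the function itself and its labels are illustrative assumptions rather than the authors' materials.

```python
def step_function_trial(mid_ms=40, peak_ms=120):
    """One step-function presentation sequence, in milliseconds:
    neutral -> medium intensity -> intense -> medium intensity -> neutral.
    The 10-40 ms and 30-120 ms ranges from the abstract are passed in as
    parameters; frame labels are illustrative."""
    return [
        ("neutral", 300),
        ("medium_intensity", mid_ms),
        ("intense", peak_ms),
        ("medium_intensity", mid_ms),
        ("neutral", 100),
    ]

# The suboptimal condition reported in the abstract: 10 + 30 + 10 ms.
print(step_function_trial(mid_ms=10, peak_ms=30))
```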


2020 ◽  
Author(s):  
Tamara Van Der Zant ◽  
Jessica Reid ◽  
Catherine J. Mondloch ◽  
Nicole L. Nelson

Perceptions of traits (such as trustworthiness or dominance) are influenced by the emotion displayed on a face. For instance, the same individual is reported as more trustworthy when they look happy than when they look angry. This overextension of emotional expressions has been shown with facial expressions, but whether the phenomenon also occurs when viewing postural expressions was unknown. We sought to examine how expressive behaviour of the body would influence judgements of traits and how sensitivity to this cue develops. In the context of a storybook, adults (N = 35) and children (aged 5 to 8 years; N = 60) selected one of two partners to help face a challenge. The challenges required either a trustworthy or a dominant partner. Participants chose between a partner with an emotional (happy/angry) face and a neutral body or one with a neutral face and an emotional body. As predicted, happy facial expressions were preferred over neutral ones when selecting a trustworthy partner, and angry postural expressions were preferred over neutral ones when selecting a dominant partner. Children's performance was not adult-like on most tasks. The results demonstrate that emotional postural expressions can also influence judgements of others' traits, but that the postural influence on trait judgements develops throughout childhood.


Author(s):  
Nikolaos Bourbakis

Detecting faces and facial expressions has become a common task in human-computer interaction systems. A face and facial-expression detection system must be able to detect faces under various conditions and extract their facial expressions. Many approaches to face detection have been proposed in the literature, but they deal mainly with the detection or recognition of faces under still conditions rather than with the person's facial expressions and the emotional behavior they reflect. In this paper, the author describes a synergistic methodology for detecting frontal high-resolution color faces and for recognizing their facial expressions accurately under realistic conditions, both indoors and outdoors, and across a variety of conditions (shadows, highlights, non-white lighting). The methodology associates these facial expressions with emotional behavior. It extracts important facial features, such as the eyes, eyebrows, nose, and mouth (lips), and defines them as the primitive elements of an alphabet of a simple formal language, in order to synthesize these facial features and generate emotional expressions. The main goal of this effort is to monitor emotional behavior and learn from it. Illustrative examples are also provided to prove the concept of the methodology.
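The abstract does not spell out the alphabet or the production rules of that formal language, so the toy sketch below only illustrates the general idea of treating facial-feature states as symbols whose combinations map to emotional expressions; all symbol names and mappings are assumptions made for demonstration.

```python
# Toy illustration only: the actual symbols and rules of the paper's formal
# language are not given in the abstract, so these mappings are assumptions.
PRIMITIVES = {
    "B_raised": "eyebrows raised",
    "B_lowered": "eyebrows lowered",
    "E_wide": "eyes wide open",
    "E_narrow": "eyes narrowed",
    "M_corners_up": "mouth corners up",
    "M_corners_down": "mouth corners down",
    "M_open": "mouth open",
}

# Hypothetical "words" of the language: combinations of primitives that
# synthesize an emotional expression.
EXPRESSION_RULES = {
    ("B_raised", "E_wide", "M_open"): "surprise",
    ("B_lowered", "E_narrow", "M_corners_down"): "anger",
    ("B_raised", "E_wide", "M_corners_up"): "happiness",
}

def classify(features):
    """Map a set of detected facial-feature symbols to an emotion label,
    returning 'unknown' for combinations outside the toy grammar."""
    return EXPRESSION_RULES.get(tuple(sorted(features)), "unknown")

print(classify(["E_wide", "B_raised", "M_open"]))  # -> surprise
```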


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Dina Tell ◽  
Denise Davidson ◽  
Linda A. Camras

The effects of eye gaze direction and expression intensity on emotion recognition were investigated in children with autism disorder and in typically developing children. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder also rated expressions with direct eye gaze, and expressions at 50% intensity, as more intense than did typically developing children. A trend was also found for sad expressions, as children with autism disorder were less accurate than typically developing children in recognizing sadness at 100% intensity with direct eye gaze. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.


1986 ◽  
Vol 62 (2) ◽  
pp. 419-423 ◽  
Author(s):  
Gilles Kirouac ◽  
Martin Bouchard ◽  
Andrée St-Pierre

The purpose of this study was to measure the capacity of human subjects to match facial expressions of emotion with behavioral categories representing the motivational states those expressions are supposed to illustrate. One hundred university students were shown facial stimuli that they had to classify using ethological behavioral categories. The results showed that accuracy of judgment was overall lower than what was usually found when fundamental emotional categories were used. The data also indicated that the relation between emotional expressions and behavioral tendencies was more complex than expected.


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure except that they were instructed to say whether or not (Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
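As a rough, hedged illustration of how the spatial distribution of gaze over facial regions might be summarized (the paper's actual analysis pipeline is not described in the abstract, and the column names, regions, and values below are invented for demonstration):

```python
import pandas as pd

# Illustrative only: the regions, column names, and durations are invented,
# standing in for the kind of eye-tracking summary the study describes
# (fixation time per facial region during a recognition trial).
fixations = pd.DataFrame({
    "trial": [1, 1, 1, 2, 2],
    "region": ["eyes", "mouth", "eyes", "mouth", "nose"],  # hypothetical AOIs
    "duration_ms": [220, 180, 140, 300, 90],
})

# Proportion of total fixation time spent on each region, per trial.
totals = fixations.groupby("trial")["duration_ms"].transform("sum")
fixations["proportion"] = fixations["duration_ms"] / totals
region_profile = (fixations.groupby(["trial", "region"])["proportion"]
                  .sum()
                  .unstack(fill_value=0.0))
print(region_profile)
```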

