Perception of basic emotion blends from facial expressions of virtual characters: pure, mixed, or complex

Author(s):  
Meeri Mäkäräinen ◽  
Jari Kätsyri ◽  
Tapio Takala
Author(s):  
Rafael Calvo ◽  
Sidney D'Mello ◽  
Jonathan Gratch ◽  
Arvid Kappas ◽  
Magalie Ochs ◽  
...  

2020 ◽  
Vol 10 (16) ◽  
pp. 5636
Author(s):  
Wafaa Alsaggaf ◽  
Georgios Tsaramirsis ◽  
Norah Al-Malki ◽  
Fazal Qudus Khan ◽  
Miadah Almasry ◽  
...  

Computer-controlled virtual characters are essential parts of most virtual environments, especially computer games. Interaction between these virtual agents and human players has a direct impact on the believability of, and immersion in, the application, and the facial animations of these characters are a key part of these interactions. Players expect the elements of the virtual world to act much as they would in the real world; for example, in a board game, if the human player wins, he or she would expect the computer-controlled character to be sad. However, the reactions, and more specifically the facial expressions, of virtual characters in most games are not linked to game events. Instead, the characters exhibit pre-programmed or random behaviors without any understanding of what is actually happening in the game. In this paper, we propose a probabilistic decision model that determines when various facial animations of a virtual character should be played. The model was developed by studying the facial expressions of human players while they played a computer video game that was also developed as part of this research. The model is represented as a set of trees, with the 15 extracted game events as roots and the 10 associated facial expression animations attached with their corresponding probabilities of occurrence. Results indicated that only 1 out of the 15 game events had a probability of producing an unexpected facial expression. The "win," "lose," and "tie" game events were found to have the most dominant associations with facial expressions, followed by "surprise" game events, which occurred rarely, and finally the "damage dealing" events.
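
As a rough illustration of how such an event-to-animation model can drive animation selection at runtime, the Python sketch below samples a facial animation from a per-event probability table. The event names, animation labels, and probabilities here are hypothetical placeholders, not the values measured in the paper.

```python
import random

# Hypothetical event-to-animation probability tables. The paper's model
# covers 15 game events and 10 facial animations with probabilities
# measured from human players; these entries are invented for illustration.
EXPRESSION_MODEL = {
    "win":  {"smile": 0.7, "surprise": 0.2, "neutral": 0.1},
    "lose": {"sad": 0.6, "anger": 0.25, "neutral": 0.15},
    "tie":  {"neutral": 0.5, "smile": 0.3, "sad": 0.2},
}

def pick_animation(event, rng=random):
    """Sample a facial animation for a game event according to that
    event's probability distribution over animations."""
    table = EXPRESSION_MODEL[event]
    animations = list(table.keys())
    weights = list(table.values())
    return rng.choices(animations, weights=weights, k=1)[0]

print(pick_animation("win"))   # e.g. "smile"
```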


2011 ◽  
Vol 2 (2) ◽  
pp. 28-47 ◽  
Author(s):  
Diana Arellano ◽  
Javier Varona ◽  
Francisco J. Perales

One of the milestones in the creation of virtual characters is the achievement of believability, which can be approached through the representation of emotions in behavior, voice, or facial expressions. Knowing which emotions to elicit in a variety of situations requires a framework for reasoning, which is why context representation is important when creating synthetic emotions: it provides a description of what is occurring around the character, eliciting different emotions in the same situation or the same emotions in different situations. The novelty of this work is the representation of context not only as events in the world, but also as the internal characteristics of the character, which, when related to those events, yield believable emotional responses.


Author(s):  
Alan J. Fridlund

This chapter documents the twin origins of the behavioral ecology view (BECV) of human facial expressions, in (1) the empirical weakness and internal contradictions of the accounts proposed by basic emotion theory (BET) and particularly the neurocultural theory of Paul Ekman et al., and (2) newer understandings about the evolution of animal signaling and communication. BET conceives of our facial expressions as quasi-reflexes which are triggered by universal, modular emotion programs but require management in each culture lest they emerge unthrottled. Unlike BET, BECV regards our facial expressions as contingent signals of intent toward interactants within specific contexts of interaction, even when we are alone and our interactants are ourselves, objects, or implicit others. BECV’s functionalist, externalist view does not deny “emotion,” however it is defined, but does not require it to explain human facial displays.


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5986
Author(s):  
Sung Park ◽  
Si Pyoung Kim ◽  
Mincheol Whang

With the prevalence of virtual avatars and the recent emergence of metaverse technology, more users are expressing their identity through an avatar. The research community has focused on improving the realistic expressions and non-verbal communication channels of virtual characters to create a more customized experience. However, there is a lack of understanding of how avatars can embody a user's signature expressions (i.e., the user's habitual facial expressions and facial appearance) to provide an individualized experience. Our study focused on identifying elements that may affect the user's social perception (similarity, familiarity, attraction, liking, and involvement) of customized virtual avatars engineered with the user's facial characteristics in mind. We evaluated participants' subjective appraisals of avatars that embodied their habitual facial expressions or facial appearance. Results indicated that participants felt an avatar embodying their habitual expressions was more similar to them than one that did not, and that an avatar embodying their appearance was more familiar than one that did not. Designers should be mindful of how people perceive individuated virtual avatars in order to accurately represent the user's identity and help users relate to their avatar.


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Yongrui Huang ◽  
Jianhao Yang ◽  
Pengkai Liao ◽  
Jiahui Pan

This paper proposes two multimodal fusion methods that combine brain and peripheral signals for emotion recognition. The input signals are electroencephalogram (EEG) and facial expression. The stimuli are a subset of movie clips corresponding to four specific areas of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, the four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, the four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on decision-level fusion of the EEG and facial expression detections using either a sum rule or a product rule. Twenty healthy subjects participated in two experiments. The results show that the accuracies of the two multimodal fusion methods are 81.25% and 82.75%, respectively, both higher than those of facial expression detection alone (74.38%) or EEG detection alone (66.88%). Combining facial expression and EEG information for emotion recognition compensates for the defects of each as a single information source.
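
The sum and product rules are standard combination schemes for decision-level fusion. The Python sketch below shows the general idea on four emotion classes; the class posteriors are invented for illustration, and the paper's exact classifier outputs and any class weighting are not reproduced here.

```python
import numpy as np

def fuse(p_face, p_eeg, rule="sum"):
    """Decision-level fusion of two classifiers' posterior probabilities
    over the four emotion classes (happiness, neutral, sadness, fear).
    Combines per-class scores with a sum rule or a product rule and
    returns the index of the winning class."""
    p_face = np.asarray(p_face, dtype=float)
    p_eeg = np.asarray(p_eeg, dtype=float)
    if rule == "sum":
        scores = p_face + p_eeg
    elif rule == "product":
        scores = p_face * p_eeg
    else:
        raise ValueError("rule must be 'sum' or 'product'")
    return int(np.argmax(scores))

# Hypothetical posteriors: the facial classifier leans toward class 0,
# the EEG classifier toward class 2; fusion arbitrates between them.
face = [0.55, 0.20, 0.15, 0.10]
eeg  = [0.20, 0.10, 0.60, 0.10]
print(fuse(face, eeg, "sum"), fuse(face, eeg, "product"))
```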


2011 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Roberto Cesar Cavalcante Vieira ◽  
Creto Vidal ◽  
Joaquim Bento Cavalcante-Neto

Three-dimensional virtual creatures are active actors in many types of applications nowadays, such as virtual reality, games, and computer animation. The virtual actors encountered in these applications are very diverse but usually have humanlike behavior and facial expressions. This paper deals with the mapping of facial expressions between virtual characters, based on anthropometric proportions and geometric manipulations achieved by moving influence zones. The facial proportions of a base model are used to transfer expressions to any other model with similar global characteristics (if the base model is a human, for instance, the other models need two eyes, one nose, and one mouth). With this solution, it is possible to insert new virtual characters into real-time applications without going through the tedious process of customizing each character's emotions.
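
The paper's actual method manipulates influence zones over the face mesh; as a simplified sketch of the underlying proportionality idea only, the Python code below scales an expression's landmark displacements by the ratio of the target face's anthropometric dimensions to the base face's. All dimensions and landmark choices are hypothetical.

```python
import numpy as np

def retarget_offsets(base_offsets, base_dims, target_dims):
    """Scale per-landmark displacement vectors of an expression from a
    base face to a target face by the ratio of their anthropometric
    dimensions (e.g., face width and height). base_offsets is an (N, 2)
    array of landmark displacements measured on the base model."""
    scale = np.asarray(target_dims, dtype=float) / np.asarray(base_dims, dtype=float)
    return np.asarray(base_offsets, dtype=float) * scale

# A hypothetical smile on the base model: mouth corners move outward and up.
smile = np.array([[ 4.0, -2.0],   # left mouth corner
                  [-4.0, -2.0]])  # right mouth corner

# Target face is 20% wider and 10% shorter than the base.
print(retarget_offsets(smile, base_dims=(100, 120), target_dims=(120, 108)))
```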


Author(s):  
José-Miguel Fernández-Dols

The notion that there are universal facial expressions of basic emotion remains a dominant idea in the study of emotion. Inspired by pragmatics, and based on behavioral ecology and psychological constructionism, this chapter provides an alternative to the concept of the facial expression of basic emotion: the concept of natural facial expression. Actual, observable natural facial expressions do not signify self-contained, discrete basic emotions; they are instead related to different components of diverse emotional episodes. Their communicative function is not semantic (e.g., a smile does not mean happiness) but pragmatic (e.g., a smile prompts, on the receiver's side, important inferences about the context and course of the interaction between sender and receiver).

