Laughter and smiling facial expression modelling for the generation of virtual affective behavior

PLoS ONE, 2021, Vol 16(5), pp. e0251057
Author(s): Miquel Mascaró, Francisco J. Serón, Francisco J. Perales, Javier Varona, Ramon Mas

Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for the generation of facial expressions associated with laughter and smiling in order to facilitate the synthesis of such facial expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. A proprietary database of laugh and smile expressions is also presented; it catalogues the different types of laughs classified and generated in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character’s appearance.

2020, Vol 10(16), pp. 5636
Author(s): Wafaa Alsaggaf, Georgios Tsaramirsis, Norah Al-Malki, Fazal Qudus Khan, Miadah Almasry, ...

Computer-controlled virtual characters are essential parts of most virtual environments and especially computer games. Interaction between these virtual agents and human players has a direct impact on the believability of and immersion in the application. The facial animations of these characters are a key part of these interactions. The player expects the elements of the virtual world to act in a similar manner to the real world. For example, in a board game, if the human player wins, he/she would expect the computer-controlled character to be sad. However, the reactions, and more specifically the facial expressions, of virtual characters in most games are not linked to game events. Instead, they have pre-programmed or random behaviors without any understanding of what is really happening in the game. In this paper, we propose a probabilistic decision model for virtual character facial expressions that determines when various facial animations should be played. The model was developed by studying the facial expressions of human players while playing a computer video game that was also developed as part of this research. The model is represented in the form of trees with 15 extracted game events as roots and 10 associated facial expression animations with their corresponding probabilities of occurrence. Results indicated that only 1 out of 15 game events had a probability of producing an unexpected facial expression. It was found that the “win, lose, tie” game events have more dominant associations with the facial expressions than the rest of the game events, followed by “surprise” game events, which occurred rarely, and finally the “damage dealing” events.
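
The abstract describes the decision model only at a high level. A minimal sketch of how such an event-to-animation probability table could be represented and sampled is shown below; the event names, animation labels, and probabilities are illustrative placeholders, not the distributions reported in the paper.

```python
import random

# Hypothetical event -> facial-animation probability table.
# Event names, animation labels and probabilities are illustrative
# placeholders, not the distributions reported in the paper.
EXPRESSION_MODEL = {
    "player_wins":    {"sad": 0.6, "angry": 0.25, "neutral": 0.15},
    "player_loses":   {"happy": 0.7, "surprised": 0.2, "neutral": 0.1},
    "surprise_event": {"surprised": 0.8, "neutral": 0.2},
}

def pick_animation(game_event: str) -> str:
    """Sample a facial animation for a game event from its probability table."""
    table = EXPRESSION_MODEL.get(game_event)
    if table is None:
        return "neutral"  # fall back for events that are not modelled
    animations, weights = zip(*table.items())
    return random.choices(animations, weights=weights, k=1)[0]

print(pick_animation("player_wins"))  # e.g. "sad"
```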


2018, Vol 15(4), pp. 172988141878315
Author(s): Nicole Lazzeri, Daniele Mazzei, Maher Ben Moussa, Nadia Magnenat-Thalmann, Danilo De Rossi

Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to mutually judge each other’s emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can help reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few studies have been performed on realistic humanoid robots. This experimental work aimed at demonstrating the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli, in which motion and vocalization were combined. This hypothesis was also investigated with a 3-dimensional replica of the physical robot, demonstrating that even in the case of a virtual avatar, motion and vocalization improve the capability to convey emotion.


Author(s): Ritvik Tiwari, Rudra Thorat, Vatsal Abhani, Shakti Mahapatro

Emotion recognition based on facial expression is an intriguing research field, which has been presented and applied in various spheres such as safety, health, and human-machine interfaces. Researchers in this field are keen on developing techniques that can aid in interpreting and decoding facial expressions and then extracting their features in order to achieve better predictions by the computer. With advances in deep learning, the different prospects of this technique are being exploited to achieve better performance. We spotlight these contributions, the architectures and the databases used, and present the progress made by comparing the proposed methods and the results obtained. The aim of this paper is to guide technology enthusiasts by reviewing recent works and providing insights for improving this field.
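
As a concrete illustration of the kind of model such reviews compare, the following is a minimal convolutional classifier sketch in Python/PyTorch; the layer sizes, the 48x48 grayscale input, and the seven-class output are assumptions for illustration, not an architecture taken from the review.

```python
import torch
import torch.nn as nn

class TinyExpressionNet(nn.Module):
    """Minimal CNN for facial-expression classification (illustrative only)."""
    def __init__(self, num_classes: int = 7):  # e.g. seven basic expression labels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 48x48 input -> 12x12 feature maps after two 2x2 poolings
        self.classifier = nn.Linear(32 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example: a batch of four 48x48 grayscale face crops
logits = TinyExpressionNet()(torch.randn(4, 1, 48, 48))
```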


2021, Vol 2
Author(s): Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Aryana Collins Jackson, Cédric Buche, ...

In this paper, we present a virtual audience simulation system for Virtual Reality (VR). The system implements an audience perception model controlling the nonverbal behaviors of virtual spectators, such as facial expressions or postures. Groups of virtual spectators are animated by a set of nonverbal behavior rules representing a particular audience attitude (e.g., indifferent or enthusiastic). Each rule specifies a nonverbal behavior category (posture, head movement, facial expression, or gaze direction) as well as three parameters: type, frequency, and proportion. In a first user study, we asked participants to pretend to be a speaker in VR and then create sets of nonverbal behavior parameters to simulate different attitudes. Participants manipulated the nonverbal behaviors of a single virtual spectator to match specific levels of engagement and opinion toward them. In a second user study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated how they were perceived. Our results demonstrate our system’s ability to create virtual audiences with three different perceived attitudes: indifferent, critical, and enthusiastic. The analysis of the results also led to a set of recommendations and guidelines regarding attitudes and expressions for the future design of audiences for VR therapy and training applications.
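
One possible way to encode the rule structure described above (a behavior category plus type, frequency, and proportion) is sketched below in Python; the attitude name, units, and concrete values are assumptions for illustration, not the parameters elicited in the user studies.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NonverbalBehaviorRule:
    """One rule of an audience attitude, following the structure in the abstract:
    a behavior category plus type, frequency and proportion parameters."""
    category: str     # "posture", "head_movement", "facial_expression" or "gaze"
    type: str         # which animation of that category to play
    frequency: float  # how often the behavior is triggered (per minute, assumed unit)
    proportion: float # fraction of spectators in the group applying the rule

# Hypothetical "enthusiastic" attitude; all values are illustrative only.
ENTHUSIASTIC: List[NonverbalBehaviorRule] = [
    NonverbalBehaviorRule("facial_expression", "smile", frequency=6.0, proportion=0.8),
    NonverbalBehaviorRule("posture", "lean_forward", frequency=2.0, proportion=0.6),
    NonverbalBehaviorRule("gaze", "look_at_speaker", frequency=10.0, proportion=0.9),
]
```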


2020
Author(s): Monica Perusquía-Hernández

Smiles are one of the most ubiquitous facial expressions. They are often interpreted as a signalling cue of positive emotion. However, like any other facial expression, smiles can also be voluntarily fabricated, masked, or inhibited to serve different communication goals. This review discusses the automatic identification of smile genuineness. First, emotions and their bodily manifestation are introduced. Second, an overview of the literature on different types of smiles is provided. Afterwards, different techniques used to investigate smile production are described. These techniques range from human video coding to bio-signal inspection and novel sensors that, together with automated machine learning techniques, aim to investigate facial expression characteristics beyond human perception. Next, a general summary of the spatio-temporal shape of a smile is provided. Finally, the remaining challenges regarding individual and cultural differences are discussed.


2015, Vol 2015, pp. 1-16
Author(s): Ying Tang, Jia Yu, Chen Li, Jing Fan

Multimodal visualization of network data is a method that considers various types of nodes and visualizes them based on their types, or modes. Compared to traditional network visualization of nodes of the same mode, the new method treats different modes of entities in corresponding ways and presents the relations between them more clearly. In this paper, we apply the method to visualize movie network data, a typical multimodal graph that contains nodes of different types and the connections between them. We use an improved force-directed layout algorithm to present the movie persons as the foreground and a density map to present the films as the background. By combining the foreground and background, the movie network data are presented properly in one picture. User interactions are provided, including toggling detailed pie charts, zooming, and panning. We apply our visualization method to the Chinese movie data from the Douban website. To verify the effectiveness of our method, we design and perform a user study and analyze the resulting statistics.
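
The paper’s improved layout algorithm is not detailed in the abstract; as background, a bare-bones version of the classic force-directed (Fruchterman-Reingold-style) iteration it builds on might look like the Python sketch below. This is purely illustrative and not the authors’ variant.

```python
import math
import random

def force_directed_layout(nodes, edges, iterations=200, width=1.0, height=1.0):
    """Basic spring-electrical layout: repulsion between all node pairs,
    attraction along edges. Illustrative only, not the paper's algorithm."""
    pos = {n: (random.random() * width, random.random() * height) for n in nodes}
    k = math.sqrt(width * height / max(len(nodes), 1))  # ideal edge length
    for step in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                       # repulsive forces
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        for a, b in edges:                    # attractive forces
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= dx / d * f
            disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f
            disp[b][1] += dy / d * f
        t = 0.1 * (1 - step / iterations)     # cooling schedule
        for n in nodes:
            dx, dy = disp[n]
            d = math.hypot(dx, dy) or 1e-9
            pos[n] = (pos[n][0] + dx / d * min(d, t),
                      pos[n][1] + dy / d * min(d, t))
    return pos
```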


2020
Author(s): Julian Jara-Ettinger, Paula Rubio-Fernandez

A foundational assumption of human communication is that speakers ought to say as much as necessary, but no more. How speakers determine what is necessary in a given context, however, is unclear. In studies of referential communication, this expectation is often formalized as the idea that speakers should construct reference by selecting the shortest sufficiently informative description. Here we propose that reference production is, instead, a process whereby speakers adopt listeners’ perspectives to facilitate their visual search, without concern for utterance length. We show that a computational model of our proposal predicts graded acceptability judgments with quantitative accuracy, systematically outperforming brevity models. Our model also explains crosslinguistic differences in speakers’ propensity to over-specify in different visual contexts. Our findings suggest that reference production is best understood as driven by a cooperative goal to help the listener understand the intended message, rather than by an egocentric effort to minimize utterance length.
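
To make the contrast concrete, a brevity-based baseline of the kind the authors argue against can be sketched as follows: it selects the smallest set of target features that no distractor shares. The toy scene encoding and feature names are assumptions, not the authors’ stimuli or model.

```python
from itertools import combinations

def shortest_sufficient_description(target, distractors,
                                    features=("color", "size", "type")):
    """Return the smallest set of the target's feature values that no distractor
    fully shares (a toy 'shortest sufficiently informative description'
    baseline, not the authors' proposed model)."""
    for length in range(1, len(features) + 1):
        for combo in combinations(features, length):
            values = {f: target[f] for f in combo}
            if not any(all(d[f] == v for f, v in values.items()) for d in distractors):
                return values
    return {f: target[f] for f in features}  # fall back to the full description

# Toy visual context: the target is uniquely identified by color alone.
target = {"color": "red", "size": "small", "type": "cup"}
distractors = [{"color": "blue", "size": "small", "type": "cup"},
               {"color": "blue", "size": "large", "type": "bowl"}]
print(shortest_sufficient_description(target, distractors))  # {'color': 'red'}
```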


2020
Author(s): Jonathan Yi, Philip Pärnamets, Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruently) or responding opposite (incongruently) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the underlying decision-making processes of our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which there is a need to gradually adapt facial expression selection to both social and non-social reinforcements.
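
For readers unfamiliar with the modeling step, the sketch below simulates a single trial of a generic drift-diffusion decision process, the general family of model the authors fit; the parameter values are illustrative and not those estimated in the study.

```python
import random

def simulate_ddm_trial(drift, boundary=1.0, noise=1.0, dt=0.001, start=0.0):
    """Simulate one drift-diffusion trial: evidence accumulates with the given
    drift plus Gaussian noise until it crosses +boundary or -boundary.
    Returns (choice, reaction_time). Parameters are illustrative only."""
    evidence, t = start, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (1 if evidence > 0 else -1), t

# Example: a positive drift makes the upper boundary the more likely outcome.
choices = [simulate_ddm_trial(drift=0.8)[0] for _ in range(1000)]
print(sum(c == 1 for c in choices) / 1000)
```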


2020
Author(s): Joshua W Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


Author(s): Bernardo Breve, Stefano Cirillo, Mariano Cuofano, Domenico Desiato

Gestural expressiveness plays a fundamental role in the interaction with people, environments, animals, things, and so on. Thus, several emerging application domains could exploit the interpretation of movements to support their critical design processes. To this end, new forms of expressing people’s perceptions, as in the case of music, could help their interpretation. In this paper, we investigate the user’s perception associated with the interpretation of sounds by highlighting how sounds can be exploited to help users adapt to a specific environment. We present a novel algorithm for mapping human movements into MIDI music. The algorithm has been implemented in a system that integrates a module for real-time tracking of movements with a sample-based synthesizer that uses different types of filters to modulate frequencies. The system has been evaluated through a user study, in which several users participated in a room experience, yielding significant results about their perceptions of the environment in which they were immersed.
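
The mapping algorithm itself is not reproduced in the abstract; a toy illustration of the general idea, mapping a tracked coordinate to a MIDI pitch and a movement speed to a MIDI velocity, could look like the following. The ranges and scaling are assumptions, not the paper’s mapping.

```python
def movement_to_midi(y_position: float, speed: float,
                     pitch_range=(48, 84), max_speed=2.0):
    """Map a normalised vertical position (0..1) to a MIDI note number and a
    movement speed (m/s) to a MIDI velocity (0..127). Toy mapping, not the
    algorithm proposed in the paper."""
    low, high = pitch_range
    note = int(round(low + max(0.0, min(1.0, y_position)) * (high - low)))
    velocity = int(round(max(0.0, min(1.0, speed / max_speed)) * 127))
    return note, velocity

# Example: a hand raised to 75% of the tracked height, moving at 1 m/s.
print(movement_to_midi(0.75, 1.0))  # (75, 64)
```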

