Effects of Interacting with Facial Expressions and Controllers in Different Virtual Environments on Presence, Usability, Affect, and Neurophysiological Signals

Author(s):  
Arindam Dey ◽  
Amit Barde ◽  
Bowen Yuan ◽  
Ekansh Sareen ◽  
Chelsea Dobbins ◽  
...  

2020 ◽  
Vol 10 (16) ◽  
pp. 5636
Author(s):  
Wafaa Alsaggaf ◽  
Georgios Tsaramirsis ◽  
Norah Al-Malki ◽  
Fazal Qudus Khan ◽  
Miadah Almasry ◽  
...  

Computer-controlled virtual characters are essential parts of most virtual environments, and especially of computer games. Interaction between these virtual agents and human players has a direct impact on the believability of, and immersion in, the application. The facial animations of these characters are a key part of these interactions. The player expects the elements of the virtual world to act in a manner similar to the real world. For example, in a board game, if the human player wins, he/she would expect the computer-controlled character to be sad. However, the reactions, and more specifically the facial expressions, of virtual characters in most games are not linked to the game events. Instead, they exhibit pre-programmed or random behaviors without any understanding of what is actually happening in the game. In this paper, we propose a virtual character facial expression probabilistic decision model that determines when various facial animations should be played. The model was developed by studying the facial expressions of human players while they played a computer game that was also developed as part of this research. The model is represented as a set of trees whose roots are 15 extracted game events, each associated with 10 facial expression animations and their corresponding probabilities of occurrence. Results indicated that only 1 out of 15 game events had a probability of producing an unexpected facial expression. The “win, lose, tie” game events were found to have more dominant associations with facial expressions than the rest of the game events, followed by the rarely occurring “surprise” game events and, finally, the “damage dealing” events.
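
For illustration, a minimal sketch of the kind of event-to-animation probability model the abstract describes, written in Python; the event names, animation names, and probabilities are hypothetical placeholders, not values from the paper:

```python
import random

# Sketch of an event-to-expression probability model: each game event (a tree
# root) maps to facial-expression animations with probabilities of occurrence.
# All names and probabilities below are illustrative, not taken from the paper.
EXPRESSION_MODEL = {
    "player_wins":  {"sad": 0.6, "angry": 0.3, "neutral": 0.1},
    "player_loses": {"happy": 0.7, "neutral": 0.3},
    "tie":          {"neutral": 0.5, "surprised": 0.5},
}

def pick_animation(event: str) -> str:
    """Sample a facial-expression animation for a given game event."""
    table = EXPRESSION_MODEL[event]
    animations = list(table)
    weights = list(table.values())
    return random.choices(animations, weights=weights, k=1)[0]

print(pick_animation("player_wins"))  # e.g. "sad"
```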


2006 ◽  
Vol 15 (4) ◽  
pp. 359-372 ◽  
Author(s):  
Jeremy N Bailenson ◽  
Nick Yee ◽  
Dan Merget ◽  
Ralph Schroeder

The realism of avatars in terms of behavior and form is critical to the development of collaborative virtual environments (CVEs). In this study, we utilized state-of-the-art, real-time face-tracking technology to track and render facial expressions unobtrusively in a desktop CVE. Participants in dyads interacted with each other via either a videoconference (high behavioral realism and high form realism), voice only (low behavioral realism and low form realism), or an “emotibox” that rendered the dimensions of facial expressions abstractly in terms of the color, shape, and orientation of a rectangular polygon (high behavioral realism and low form realism). Verbal and nonverbal self-disclosure were lowest in the videoconference condition, while self-reported copresence and success at transmitting and identifying emotions were lowest in the emotibox condition. Previous work demonstrates that avatar realism increases copresence while decreasing self-disclosure. We discuss the possibility of a hybrid realism solution that maintains high copresence without lowering self-disclosure, and the benefits of such an avatar for applications such as distance learning and therapy.
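
To make the “emotibox” condition concrete, here is a hypothetical sketch of how tracked expression dimensions could be mapped onto the color, shape, and orientation of a rectangle; the specific features and mappings are assumptions for illustration, not the study’s actual parameterization:

```python
from dataclasses import dataclass

@dataclass
class EmotiboxState:
    hue: float        # 0..1 color value, e.g. driven by smile intensity
    aspect: float     # width/height ratio, e.g. driven by mouth openness
    angle_deg: float  # rotation, e.g. driven by head tilt

def map_expression(smile: float, mouth_open: float, head_tilt: float) -> EmotiboxState:
    """Map normalized (0..1) expression features to rectangle parameters.
    The feature names and mappings are invented for illustration."""
    return EmotiboxState(
        hue=smile,                         # warmer color as the smile grows
        aspect=1.0 + mouth_open,           # wider box as the mouth opens
        angle_deg=(head_tilt - 0.5) * 90,  # map 0..1 tilt to -45..+45 degrees
    )

print(map_expression(0.8, 0.2, 0.5))
```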


PLoS ONE ◽  
2016 ◽  
Vol 11 (9) ◽  
pp. e0161794 ◽  
Author(s):  
Soo Youn Oh ◽  
Jeremy Bailenson ◽  
Nicole Krämer ◽  
Benjamin Li

1996 ◽  
Vol 5 (4) ◽  
pp. 402-415
Author(s):  
Sunil K. Singh ◽  
Steven D. Pieper ◽  
Jethran Guinness ◽  
Dan O. Popa

This paper addresses the modeling and computational issues associated with the control and coordination of the head, eyes, and facial expressions of virtual human actors. The emphasis, as far as possible, is on accurate physics-based computation of motion. Key issues discussed in this work include the use of kinematics and inverse kinematics, trajectory planning, and the use of finite element methods to model soft-tissue deformations.
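
As a small concrete example of the gaze-control side of such head-eye coordination, the sketch below solves the trivial inverse-kinematics problem of finding the yaw and pitch that aim the eyes at a 3D target; the axis conventions are assumptions for illustration:

```python
import math

def gaze_angles(target_x: float, target_y: float, target_z: float):
    """Return (yaw, pitch) in degrees to look at a point.
    Assumed convention: +z is forward, +x is right, +y is up."""
    yaw = math.degrees(math.atan2(target_x, target_z))     # rotate about vertical axis
    horiz = math.hypot(target_x, target_z)                  # distance in the ground plane
    pitch = math.degrees(math.atan2(target_y, horiz))       # elevate toward the target
    return yaw, pitch

print(gaze_angles(1.0, 0.5, 2.0))  # target right of and above center: positive yaw and pitch
```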


2012 ◽  
Vol 91 (1) ◽  
pp. 17-21 ◽  
Author(s):  
Michael C. Philipp ◽  
Katherine R. Storrs ◽  
Eric J. Vanman

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Catherine Oh Kruzic ◽  
David Kruzic ◽  
Fernanda Herrera ◽  
Jeremy Bailenson

Abstract. This study focuses on the individual and joint contributions of two nonverbal channels (i.e., face and upper body) in avatar-mediated virtual environments. A total of 140 dyads were randomly assigned to communicate with each other via platforms that differentially activated or deactivated facial and bodily nonverbal cues. The availability of facial expressions had a positive effect on interpersonal outcomes. More specifically, dyads that were able to see their partner’s facial movements mapped onto their avatars liked each other more, formed more accurate impressions of their partners, and described their interaction experiences more positively than those unable to see facial movements. However, the latter was only true when the partner’s bodily gestures were also available, not when only facial movements were available. Dyads showed greater nonverbal synchrony when they could see their partner’s bodily and facial movements. The study also employed machine learning to explore whether nonverbal cues could predict interpersonal attraction. The resulting classifiers predicted high versus low interpersonal attraction with an accuracy of 65%. These findings highlight the relative significance of facial cues compared to bodily cues for interpersonal outcomes in virtual environments and lend insight into the potential of automatically tracked nonverbal cues to predict interpersonal attitudes.
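
The abstract does not expose the feature set or classifier used; the following sketch only illustrates the general setup of predicting high versus low interpersonal attraction from per-dyad nonverbal features, with synthetic data and an assumed logistic-regression model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-dyad nonverbal feature summaries (e.g. smile rate,
# head-motion energy, gesture rate, synchrony). Features, labels, and the model
# choice are assumptions; the paper's actual pipeline is not given here.
rng = np.random.default_rng(0)
n_dyads = 140
X = rng.normal(size=(n_dyads, 4))      # four assumed movement features per dyad
y = rng.integers(0, 2, size=n_dyads)   # 1 = high attraction, 0 = low (e.g. median split)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.50 on random data; the study reports 65%
```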


2003 ◽  
Vol 15 (2) ◽  
pp. 69-71 ◽  
Author(s):  
Thomas W. Schubert

Abstract. The sense of presence is the feeling of being there in a virtual environment. A three-component self-report scale for measuring the sense of presence is described, the components being spatial presence, involvement, and realness. This three-component structure was developed in a survey study with players of 3D games (N = 246) and replicated in a second survey study (N = 296); studies using the scale to measure the effects of interaction on presence provide evidence for its validity. The findings are explained by the Potential Action Coding Theory of presence, which assumes that presence develops from mental model building and suppression of the real environment.
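
As a purely illustrative sketch of how such a three-component scale could be scored, the snippet below averages Likert-style item responses within each component; the item groupings and values are invented placeholders, not the published item set:

```python
import statistics

# Hypothetical responses (1-5 Likert) grouped by presence component.
responses = {
    "spatial_presence": [4, 5, 4],
    "involvement":      [3, 4, 3],
    "realness":         [2, 3, 3],
}

# One score per component: the mean of its items.
scores = {component: statistics.mean(items) for component, items in responses.items()}
print(scores)  # e.g. {'spatial_presence': 4.33..., 'involvement': 3.33..., 'realness': 2.66...}
```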


2003 ◽  
Vol 17 (3) ◽  
pp. 113-123 ◽  
Author(s):  
Jukka M. Leppänen ◽  
Mirja Tenhunen ◽  
Jari K. Hietanen

Abstract. Several studies have shown faster choice-reaction times to positive than to negative facial expressions. The present study examined whether this effect is exclusively due to faster cognitive processing of positive stimuli (i.e., processes leading up to, and including, response selection), or whether it also involves faster motor execution of the selected response. In two experiments, response selection (onset of the lateralized readiness potential, LRP) and response execution (LRP onset-response onset) times for positive (happy) and negative (disgusted/angry) faces were examined. Shorter response selection times for positive than for negative faces were found in both experiments, but there was no difference in response execution times. Together, these results suggest that the happy-face advantage occurs primarily at premotoric processing stages. Implications that the happy-face advantage may reflect an interaction between emotional and cognitive factors are discussed.
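
The decomposition used in these experiments can be stated compactly: response selection time runs from stimulus onset to LRP onset, and response execution time from LRP onset to the overt response. A small worked sketch with invented latencies:

```python
def decompose_rt(lrp_onset_ms: float, response_onset_ms: float):
    """Split a stimulus-locked reaction time into selection and execution stages."""
    selection = lrp_onset_ms                       # stimulus onset -> LRP onset
    execution = response_onset_ms - lrp_onset_ms   # LRP onset -> overt response
    return selection, execution

# Millisecond values below are invented for illustration of the reported pattern:
# faster selection for happy faces, identical execution times.
happy = decompose_rt(lrp_onset_ms=280, response_onset_ms=420)
angry = decompose_rt(lrp_onset_ms=320, response_onset_ms=460)
print(happy, angry)  # (280, 140) vs. (320, 140): selection differs, execution does not
```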

