Facial expressions contribute more than body movements to conversational outcomes in avatar-mediated virtual environments

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Catherine Oh Kruzic ◽  
David Kruzic ◽  
Fernanda Herrera ◽  
Jeremy Bailenson

Abstract: This study focuses on the individual and joint contributions of two nonverbal channels (i.e., face and upper body) in avatar-mediated virtual environments. 140 dyads were randomly assigned to communicate with each other via platforms that differentially activated or deactivated facial and bodily nonverbal cues. The availability of facial expressions had a positive effect on interpersonal outcomes. More specifically, dyads that were able to see their partner’s facial movements mapped onto their avatars liked each other more, formed more accurate impressions about their partners, and described their interaction experiences more positively compared to those unable to see facial movements. However, the latter was true only when the partner’s bodily gestures were also available, not when facial movements alone were available. Dyads showed greater nonverbal synchrony when they could see their partner’s bodily and facial movements. This study also employed machine learning to explore whether nonverbal cues could predict interpersonal attraction; the resulting classifiers distinguished high from low interpersonal attraction at an accuracy rate of 65%. These findings highlight the relative significance of facial cues compared to bodily cues for interpersonal outcomes in virtual environments and lend insight into the potential of automatically tracked nonverbal cues to predict interpersonal attitudes.
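The classification setup described above can be sketched in a few lines. This is a hypothetical illustration, not the study's actual pipeline: the feature names (smile intensity, head-movement variance, gesture rate, synchrony) and the nearest-centroid classifier are assumptions, and the data are synthetic.

```python
# Hypothetical sketch: predicting high vs. low interpersonal attraction
# from aggregated nonverbal cue features. Features, labels, and classifier
# are illustrative, not the study's actual data or model.
import numpy as np

rng = np.random.default_rng(0)
n = 280  # e.g. both partners of 140 dyads (illustrative)

# Illustrative per-participant cue features: smile intensity, head-movement
# variance, gesture rate, nonverbal synchrony with the partner.
X = rng.normal(size=(n, 4))
# Synthetic binary labels loosely tied to two of the features (signal + noise).
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n) > 0).astype(int)

# Nearest-centroid classifier: train on the first 200, test on the rest.
train, test = slice(0, 200), slice(200, n)
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
print(f"accuracy: {accuracy:.2f}")
```

Because the synthetic labels carry genuine signal, the sketch lands comfortably above chance, mirroring the structure (though not the method) of the 65% result reported in the abstract.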

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Philip Furley ◽  
Florian Klingner ◽  
Daniel Memmert

Abstract: The present research attempted to extend prior work showing that thin slices of the pre-performance nonverbal behavior (NVB) of professional darts players give observers valid information about subsequent performance tendencies. Specifically, we investigated what kind of nonverbal cues were associated with success and informed thin-slice ratings. Participants (N = 61) were first asked to estimate the performance of a random sample of videos showing the preparatory NVB of professional darts players (N = 47) either performing well (470 clips) or poorly (470 clips). Preparatory NVB was assessed via preparation times and Active Appearance Modeling using Noldus FaceReader. Results showed that observers could distinguish between good and poor performance based on thin slices of preparatory NVB (p = 0.001, d = 0.87). Further analyses showed that facial expressions prior to poor performance showed more arousal (p = 0.011, η²p = 0.10), sadness (p = 0.040, η²p = 0.04), and anxiety (p = 0.009, η²p = 0.09), and that preparation times were shorter prior to poor performance than prior to good performance (p = 0.001, η²p = 0.36). Lens model analyses showed preparation times (p = 0.001, rho = 0.18), neutral (p = 0.001, rho = 0.13) and sad (rho = 0.12) facial expressions, and facial expressions of arousal (p = 0.001, rho = 0.11) to be correlated with observers’ performance ratings. Hence, preparation times and facial cues associated with a player’s level of arousal, neutrality, and sadness seem to be valid nonverbal cues that observers utilize to infer information about subsequent perceptual-motor performance.
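The lens model analysis mentioned above has two halves that can be sketched directly: cue validity (does a cue correlate with the actual criterion?) and cue utilization (does the same cue correlate with observers' judgments?). The sketch below uses synthetic data; only the structure of the computation is meaningful, not the numbers.

```python
# Sketch of the two halves of a Brunswik lens model analysis:
# cue validity   = correlation(cue, actual performance)
# cue utilization = correlation(cue, observer rating)
# Data are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_clips = 940  # 470 good + 470 poor clips, mirroring the study design

prep_time = rng.normal(size=n_clips)                           # the cue
performance = prep_time + rng.normal(scale=2.0, size=n_clips)  # criterion
rating = prep_time + rng.normal(scale=2.0, size=n_clips)       # judgment

validity = np.corrcoef(prep_time, performance)[0, 1]     # cue -> criterion
utilization = np.corrcoef(prep_time, rating)[0, 1]       # cue -> judgment
print(f"validity r={validity:.2f}, utilization r={utilization:.2f}")
```

A cue such as preparation time counts as both valid and utilized, as the abstract reports, exactly when both correlations are reliably nonzero.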


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. The first stage is video preprocessing: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model then picks out the parts of the face that are most susceptible to expression changes. Under the combined effect of these two methods, an image that can be called a local feature map is obtained. The video data are then fed to the MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts allows better learning of the expression changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
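The core fusion idea, a global feature from the whole face combined with an attention-weighted local feature from key regions, can be shown in miniature. This is a minimal numpy sketch under stated assumptions: the frame size, pooling scheme, and the hand-placed eye/mouth attention bands are all illustrative, not the paper's architecture.

```python
# Minimal sketch of local/global feature fusion for FER. All shapes,
# regions, and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
frame = rng.random((48, 48))  # one synthetic grayscale face frame

# Global branch: coarse spatial pooling over the whole face (6x6 grid).
global_feat = frame.reshape(6, 8, 6, 8).mean(axis=(1, 3)).ravel()  # 36 dims

# Local branch: a hand-made attention map emphasising expression-prone
# regions (eye and mouth bands here, purely for illustration), then
# attention-weighted pooling.
attention = np.zeros((48, 48))
attention[10:20, 8:40] = 1.0   # eye band
attention[32:42, 14:34] = 1.0  # mouth band
attention /= attention.sum()   # normalize to a probability map
local_feat = np.array([(frame * attention).sum()])

# Fused vector that a classifier head would consume.
fused = np.concatenate([global_feat, local_feat])
print(fused.shape)
```

In the actual MC-DCN the two branches are parallel sub-networks over image sequences rather than single-frame poolings, but the concatenate-then-classify structure is the same.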


2013 ◽  
Vol 796 ◽  
pp. 513-518
Author(s):  
Rong Jin ◽  
Bing Fei Gu ◽  
Guo Lian Liu

In this paper, 110 female undergraduates at Soochow University are measured using a 3D non-contact measurement system and manual measurement. The 3D point-cloud data of the human body are taken as the research object using reverse-engineering software, and secondary development of the point-cloud data is carried out on the basis of the optimized point cloud. In accordance with the definitions of the chest-width point and other feature points, and within the operability of the three-dimensional point-cloud data, the width, thickness, and length dimensions of the curve through the chest-width point are measured. Body types are classified using as the classification index the ratio between the thickness and the width of the curve. The generation rules of the chest curve are determined for each type using the linear regression method, so that the human arm model can be established automatically by computer. Thereby, the individualized modeling of the female upper-body mannequin can be improved effectively.
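The classification index and per-type regression described above can be sketched as follows. All values are synthetic, and the cut-points and measurement ranges are invented for illustration; the paper's actual thresholds and regression rules are not reproduced here.

```python
# Hedged sketch: classify body types by the thickness-to-width ratio of the
# chest curve, then fit a linear rule per type. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(3)
width = rng.uniform(28, 36, size=110)                   # chest-curve width (cm)
thickness = width * rng.uniform(0.55, 0.85, size=110)   # chest-curve thickness

ratio = thickness / width                    # the classification index
# Three illustrative classes split at invented cut-points 0.65 and 0.75.
body_type = np.digitize(ratio, bins=[0.65, 0.75])

# Per-type least-squares fit of thickness against width.
for t in range(3):
    mask = body_type == t
    if mask.sum() >= 2:  # guard against an empty or singleton class
        slope, intercept = np.polyfit(width[mask], thickness[mask], 1)
        print(f"type {t}: n={mask.sum()}, slope={slope:.2f}")
```

The same pattern (index, threshold classification, per-class regression) is what lets the mannequin model be generated automatically once the class of a new subject is known.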


2016 ◽  
Vol 13 (122) ◽  
pp. 20160414 ◽  
Author(s):  
Mehdi Moussaïd ◽  
Mubbasir Kapadia ◽  
Tyler Thrash ◽  
Robert W. Sumner ◽  
Markus Gross ◽  
...  

Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects.


2016 ◽  
Vol 9 (1) ◽  
pp. 5-20 ◽  
Author(s):  
B.B. Velichkovsky ◽  
A.N. Gusev ◽  
V.F. Vinogradova ◽  
O.A. Arbekova

User interaction with a virtual reality system may be accompanied by a sense of presence, the illusion that the virtual environment is real. The emergence of a sense of presence is determined by both technological and psychological factors. The authors show that the sense of presence may depend on individual characteristics of cognitive control, i.e. the metacognitive system that tunes the cognitive system to the solution of specific problems in context. It was found that the strength of the feeling of presence may depend on the efficiency of the switching, interference-suppression, and working-memory-updating functions of control. At the same time, the dependence of the strength of the sense of presence on the effectiveness of cognitive control differs between virtual environments with different levels of immersion.


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255570
Author(s):  
Motonori Kurosumi ◽  
Koji Mizukoshi ◽  
Maya Hongo ◽  
Miyuki G. Kamachi

We form impressions of others by observing their constant and dynamically shifting facial expressions during conversation and other daily activities. However, conventional aging research has mainly considered the changing characteristics of the skin, such as wrinkles and age spots, within very limited static facial states. In order to elucidate the range of aging impressions that we form in daily life, it is necessary to consider the effects of facial movement. This study investigated the effects of facial movement on age impressions. An age-perception test using Japanese women as face models was employed to verify the effects of the models’ age-dependent facial movements on age impressions in 112 participants (all women, aged 20–49 years) as observers. Further, the observers’ gaze was analyzed to identify the facial areas of interest during age perception. The results showed that cheek movement affects age impressions, and that this effect increases with the model’s age. These findings will facilitate the development of new means of creating a more youthful impression by approaching anti-aging from the different viewpoint of facial movement.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Andry Chowanda

Abstract: Social interactions are important for us humans as social creatures, and emotions play an important part in them, usually expressing meaning along with the spoken utterances to the interlocutor. Automatic facial expression recognition is one technique to automatically capture, recognise, and understand emotions from an interlocutor. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues. Architectures such as convolutional neural networks demonstrate promising results for emotion recognition. However, most current convolutional neural network models require enormous computational power to train and to run. This research aims to build compact networks with depthwise separable layers while maintaining performance. Three datasets and three similar architectures were compared with the proposed architecture. The results show that the proposed architecture performed best, achieving up to 13% higher accuracy while being 6–71% smaller than the other architectures. The best testing accuracy achieved by the architecture was 99.4%.
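The size savings come from the standard factorization behind depthwise separable layers: a k×k convolution over C_in input and C_out output channels is replaced by a depthwise k×k convolution (one filter per input channel) plus a 1×1 pointwise convolution that mixes channels. The parameter counts make the compression concrete; the channel sizes below are illustrative, not those of the paper's networks.

```python
# Parameter-count sketch of a depthwise separable layer vs. a standard
# convolution. Channel and kernel sizes are illustrative.
def conv_params(c_in, c_out, k):
    # Standard convolution: c_out filters, each c_in x k x k.
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixing channels
    return depthwise + pointwise

standard = conv_params(64, 128, 3)        # 64 * 128 * 9 = 73,728
separable = separable_params(64, 128, 3)  # 576 + 8,192  = 8,768
print(standard, separable, f"{separable / standard:.1%}")
```

At these sizes the separable layer uses roughly 12% of the standard layer's parameters, which is the mechanism behind the "6–71% smaller" models reported above (the exact saving depends on channel counts and kernel size).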


Author(s):  
Simon M. Breil ◽  
Sarah Osterholz ◽  
Steffen Nestler ◽  
Mitja D. Back

This chapter summarizes research on nonverbal expressions of behavior (nonverbal cues) and how they contribute to the accuracy of personality judgments. First, it presents a conceptual overview of relevant nonverbal cues in the domains of facial expressions, body language, paralanguage, and appearance, as well as approaches to assessing these cues at different levels of aggregation. It then summarizes research on the validity of nonverbal cues (what kinds of nonverbal cues are good indicators of personality?) and the utilization of nonverbal cues (what kinds of nonverbal cues lead to personality impressions?), resulting in a catalog of the cues that drive judgment accuracy for different traits. Finally, it discusses personal and situational characteristics that moderate the expression and utilization of nonverbal cues and gives an outlook on future research.


2018 ◽  
Vol 28 (09) ◽  
pp. 1831-1856 ◽  
Author(s):  
Alessandro Ciallella ◽  
Emilio N. M. Cirillo ◽  
Petru L. Curşeu ◽  
Adrian Muntean

We present modeling strategies that describe the motion and interaction of groups of pedestrians in obscured spaces. We start off with an approach based on balance equations in terms of measures and then we exploit the descriptive power of a probabilistic cellular automaton model. Based on a variation of the simple symmetric random walk on the square lattice, we test the interplay between population size and an interpersonal attraction parameter for the evacuation of confined and darkened spaces. We argue that information overload and coordination costs associated with information processing in small groups are two key processes that influence the evacuation rate. Our results show that substantial computational resources are necessary to compensate for incomplete information: the more individuals in (information processing) groups, the higher the exit rate for low population size. For simple social systems, it is likely that the individual representations are not redundant and large group sizes ensure that this non-redundant information is actually available to a substantial number of individuals. For complex social systems, information redundancy makes information evaluation and transfer inefficient and, as such, group size becomes a drawback rather than a benefit. The effect of group sizes on outgoing fluxes, evacuation times and wall effects is carefully studied with a Monte Carlo framework accounting also for the presence of an internal obstacle.
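The baseline dynamics, a simple symmetric random walk on a square lattice with absorption at an exit, can be sketched as a toy Monte Carlo run. Lattice size, walker count, and step budget below are illustrative assumptions and omit the paper's attraction parameter, group structure, and obstacle.

```python
# Toy Monte Carlo sketch of the baseline: walkers perform a simple
# symmetric random walk on a square lattice in the dark and are counted as
# evacuated on reaching the exit cell. Parameters are illustrative.
import random

def evacuate(n_walkers=20, size=10, exit_cell=(0, 0), max_steps=2000, seed=0):
    rng = random.Random(seed)
    walkers = [(size // 2, size // 2)] * n_walkers  # all start at the center
    evacuated = 0
    for _ in range(max_steps):
        remaining = []
        for (x, y) in walkers:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), size - 1)  # reflecting walls
            y = min(max(y + dy, 0), size - 1)
            if (x, y) == exit_cell:
                evacuated += 1   # absorbed at the exit
            else:
                remaining.append((x, y))
        walkers = remaining
        if not walkers:
            break
    return evacuated

print(evacuate())
```

Averaging the evacuated count over many seeds gives the outgoing flux; the paper's experiments add the interpersonal-attraction bias and group-level information processing on top of exactly this kind of lattice walk.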

