Multimodal practices for negative assessments as delicate matters: Incomplete syntax, facial expressions, and head movements

2021 ◽  
Vol 7 (1) ◽  
pp. 549-568
Author(s):  
Xiaoting Li

This paper contributes to the discussion of fuzzy boundaries by investigating negative assessments of the recipient and of non-present parties that are syntactically incomplete. In particular, it explores how the speaker uses syntax and bodily-visual conduct to accomplish the delicate action of negatively assessing others and to solicit the recipient's collaboration in completing negative assessments. Based on an examination of approximately five hours of everyday Mandarin face-to-face conversations, the study shows that incomplete syntax, facial expressions, and head shakes constitute multimodal practices for making negative assessments of the recipient and of a non-present third party. Leaving assessments syntactically incomplete and displaying a negative evaluative stance through facial expressions (such as lip-pursing and eyebrow furrows) and head shakes show the speaker's orientation to the negative assessments as a delicate action. The facial expressions after incomplete syntax demonstrate that participants orient to the hesitation in the delivery of a TCU/turn-in-progress not as a production problem, but rather as an interactional problem. This study shows that the boundaries of assessment turns may be blurry, and that one assessment may be collaboratively produced by two participants, which exemplifies a specific aspect of weak cesuras and fuzzy boundaries of units and actions in interaction.

Author(s):  
Yongmian Zhang ◽  
Jixu Chen ◽  
Yan Tong ◽  
Qiang Ji

This chapter describes a probabilistic framework for faithfully reproducing spontaneous facial expressions on a synthetic face model in a real-time interactive application. The framework consists of a coupled Bayesian network (BN) that unifies facial expression analysis and synthesis in one coherent structure. At the analysis end, we cast the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to capture the relationships between facial expressions and facial motions, as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented as a DBN, the AU model captures the rigid head movements and the nonrigid facial muscular movements of a spontaneous facial expression. At the synthesis end, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensities through top-down inference, according to the current facial expression state and pose information output by the analysis end. The two BNs are connected statically through a data-stream link. The coupled-BN design brings several benefits. First, a facial expression is inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by misdetection of facial features. Second, more realistic-looking facial expressions can be reproduced by modeling the dynamics of human expressions during facial expression analysis. Third, a very low data-transmission bitrate (9 bytes per frame) can be achieved.
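To make the temporal-inference idea concrete, the following minimal Python sketch shows forward filtering in the simplest possible dynamic Bayesian network, a hidden Markov model over a hypothetical expression state updated from noisy binary AU detections. The state set, AU set, and all probabilities below are illustrative assumptions and not the authors' model, which additionally couples rigid head motion with nonrigid muscle motion in the AU-level DBN and uses a separate static BN for FAP synthesis.

# Minimal sketch (not the authors' implementation): forward filtering in a
# simple two-slice dynamic Bayesian network (an HMM) that infers a hidden
# facial-expression state from noisy action-unit (AU) detections.
# States, AUs, and probabilities are illustrative assumptions.

import numpy as np

STATES = ["neutral", "happy", "surprised"]            # hypothetical expression states
AUS = ["AU6_cheek_raiser", "AU12_lip_corner_puller", "AU26_jaw_drop"]

# P(state_t | state_{t-1}): expressions tend to persist between frames.
TRANSITION = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.85, 0.05],
    [0.10, 0.05, 0.85],
])

# P(AU detected | state): per-AU Bernoulli emission probabilities.
EMISSION = np.array([
    # AU6   AU12  AU26
    [0.05, 0.05, 0.05],   # neutral
    [0.80, 0.90, 0.10],   # happy
    [0.20, 0.10, 0.85],   # surprised
])

def likelihood(au_observation):
    """P(observed AU vector | state), assuming conditionally independent AUs."""
    on = EMISSION ** au_observation
    off = (1.0 - EMISSION) ** (1 - au_observation)
    return (on * off).prod(axis=1)

def filter_expressions(au_frames):
    """Return the most probable expression per frame via forward filtering."""
    belief = np.full(len(STATES), 1.0 / len(STATES))   # uniform prior
    decoded = []
    for frame in au_frames:
        belief = TRANSITION.T @ belief                 # predict (temporal inference)
        belief *= likelihood(frame)                    # update with AU measurements
        belief /= belief.sum()                         # normalize
        decoded.append(STATES[int(np.argmax(belief))])
    return decoded

if __name__ == "__main__":
    # Each row: binary detections for [AU6, AU12, AU26]; the third frame has a
    # missed AU12 detection, but temporal smoothing keeps "happy" plausible.
    frames = np.array([
        [0, 0, 0],
        [1, 1, 0],
        [1, 0, 0],
        [1, 1, 0],
        [0, 0, 1],
    ])
    print(filter_expressions(frames))

Even with a missed AU detection in one frame, the transition prior keeps the previously inferred expression plausible; this is the robustness to feature misdetection that combining spatial and temporal inference is meant to provide.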


2011 ◽  
pp. 1637-1654
Author(s):  
Hirohiko Sagawa ◽  
Masaru Takeuchi

We have developed a sign language teaching system that uses sign language recognition and generation methods to overcome three problems with current learning materials: a lack of information about non-manual gestures (facial expressions, glances, head movements, etc.), display of gestures from only one or two points of view, and a lack of feedback about the correctness of the learner’s gestures. Experimental evaluation by 24 non-hearing-impaired people demonstrated that the system is effective for learning sign language.


2018 ◽  
Vol 10 (11) ◽  
pp. 4005 ◽  
Author(s):  
Jasper de Vries ◽  
Séverine van Bommel ◽  
Karin Peters

Online collaboration to deal with (global) environmental and public health problems continues to grow as the quality of technology for communication improves. In these collaborations, trust is seen as important for sustainable collaborations and organizations. However, face-to-face communication, which is often lacking in these contexts, is seen as a prerequisite for trust development. Therefore, this paper aims to explore empirically which factors influence the emergence of trust in the early stages of online collaboration. Drawing on the relevant literature, we conducted a series of interviews around projects in the field of public health and the environment at the interface between science and practice. The results show that trust does develop between participants. This trust is strongly influenced by perceived ability and integrity, fostered by reputation, third-party perceptions, and project structure. In these contexts, these types of trust facilitate collaboration but are also influenced by a wider set of aspects such as power, expectations, and uncertainty. However, from the results we also conclude that online collaboration does not create benevolence or a shared identity, thereby limiting further trust development and leading to weaker relations. Strong relations, however, are deemed important for reaching creative and innovative solutions and for long-term sustainable collaboration and organizations.


Autism ◽  
2020 ◽  
pp. 136236132095169 ◽  
Author(s):  
Roser Cañigueral ◽  
Jamie A Ward ◽  
Antonia F de C Hamilton

Communication with others relies on coordinated exchanges of social signals, such as eye gaze and facial displays. However, this can only happen when partners are able to see each other. Although previous studies report that autistic individuals have difficulties in planning eye gaze and making facial displays during conversation, evidence from real-life dyadic tasks is scarce and mixed. Across two studies, we investigate how the eye gaze and facial displays of typical and high-functioning autistic individuals are modulated by the belief in being seen and by the potential to show true gaze direction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video, video-call and face-to-face. Typical participants gazed less to the confederate and produced more facial displays when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial motion patterns in autistic participants were overall similar to those of the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial displays as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.

Lay abstract: When we are communicating with other people, we exchange a variety of social signals through eye gaze and facial expressions. However, coordinated exchanges of these social signals can only happen when people involved in the interaction are able to see each other. Although previous studies report that autistic individuals have difficulties in using eye gaze and facial expressions during social interactions, evidence from tasks that involve real face-to-face conversations is scarce and mixed. Here, we investigate how the eye gaze and facial expressions of typical and high-functioning autistic individuals are modulated by the belief in being seen by another person, and by being in a face-to-face interaction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video (no belief in being seen, no face-to-face), video-call (belief in being seen, no face-to-face) and face-to-face (belief in being seen and face-to-face). Typical participants gazed less to the confederate and made more facial expressions when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial expression patterns in autistic participants were overall similar to those of the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial expressions as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.


2020 ◽  
Vol 17 (1) ◽  
pp. 43-58 ◽  
Author(s):  
Kimberly McCarthy ◽  
Jone L. Pearce ◽  
John Morton ◽  
Sarah Lyon

Purpose: The emerging literature on computer-mediated communication at work lacks depth in terms of elucidating the consequences of incivility for employees. This study aims to compare face-to-face incivility with incivility encountered via e-mail, on both task performance and performance evaluation.

Design/methodology/approach: In two experimental studies, the authors test whether exposure to incivility via e-mail reduces individual task performance beyond that of face-to-face incivility, and whether exposure to that incivility results in lower performance evaluations for third parties.

Findings: The authors show that being exposed to cyber incivility does decrease performance on a subsequent task. The authors also find that exposure to rudeness, both face-to-face and via e-mail, is contagious and results in lower performance evaluation scores for an uninvolved third party.

Originality/value: This research comprises an empirically grounded study of incivility in the context of e-mail at work, highlights distinctions between it and face-to-face rudeness, and reveals the potential risks that cyber incivility poses for employees.


2016 ◽  
Vol 29 (3) ◽  
pp. 697-710 ◽  
Author(s):  
Evin Aktar ◽  
Cristina Colonnesi ◽  
Wieke de Vente ◽  
Mirjana Majdandžić ◽  
Susan M. Bögels

The present study investigated the associations of mothers' and fathers' lifetime depression and anxiety symptoms, and of infants' negative temperament, with parents' and infants' gaze, facial expressions of emotion, and synchrony. We observed infants' (age between 3.5 and 5.5 months, N = 101) and parents' gaze and facial expressions during 4-min naturalistic face-to-face interactions. Parents' lifetime symptoms of depression and anxiety were assessed with clinical interviews, and infants' negative temperament was measured with standardized observations. Parents with more depressive symptoms and their infants expressed less positive and more neutral affect. Parents' lifetime anxiety symptoms were not significantly related to parents' expressions of affect, while they were linked to longer durations of gaze to the parent, and to more positive and negative affect in infants. Parents' lifetime depression or anxiety was not related to synchrony. Infants' temperament did not predict infants' or parents' interactive behavior. The study reveals that more depression symptoms in parents are linked to more neutral affect from parents and from infants during face-to-face interactions, while parents' anxiety symptoms are related to more attention to the parent and less neutral affect from infants (but not from parents).


Author(s):  
Bernd J. Kröger ◽  
Peter Birkholz ◽  
Christiane Neuschaefer-Rube

While we are capable of modeling the shape of humanoid robots (e.g., face, arms) in a nearly natural or human-like way, it is much more difficult to generate human-like facial or body movements and human-like behavior such as speaking and co-speech gesturing. This paper argues for a developmental robotics approach to learning to speak. On the basis of the current literature, a blueprint of a brain model for this kind of robot is outlined, and preliminary scenarios for knowledge acquisition are described. Furthermore, it is illustrated that natural speech acquisition mainly results from learning during face-to-face communication, and it is argued that learning to speak should be based on human-robot face-to-face communication. Here, the human acts as a caretaker or teacher and the robot acts as a speech-acquiring toddler. This is a fruitful basic scenario not only for learning to speak, but also for learning to communicate in general, including producing co-verbal manual gestures and co-verbal facial expressions.

