Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study

Author(s):
Paul Evitts,
Robert Gallop


2021, Vol 12
Author(s):
Dimosthenis Kontogiorgos,
Joakim Gustafson

In face-to-face interaction, speakers incrementally establish common ground, the mutual belief of understanding. Instead of constructing complete “one-shot” utterances, speakers tend to package pieces of information in smaller fragments (what Clark calls “installments”). The aim of this paper was to investigate how speakers' fragmented construction of utterances affects the cognitive load of the conversational partners during utterance production and comprehension. In a collaborative furniture assembly task, participants instructed each other how to build an IKEA stool. Pupil diameter was measured as an index of effort and cognitive processing in the collaborative task. Pupillometry data and eye-gaze behaviour indicated that speakers required more cognitive resources to construct fragmented rather than non-fragmented utterances; such construction of utterances by audience design was associated with higher cognitive load for speakers. We also found that listeners' cognitive resources decreased with each new speaker utterance, suggesting that speakers' efforts in the fragmented construction of utterances were successful in resolving ambiguities. The results indicated that speaking in fragments is beneficial for minimising collaboration load; however, adapting to listeners is a demanding task. We discuss implications for future empirical research on the design of task-oriented human-robot interactions, and how assistive social robots may benefit from producing fragmented instructions.
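Pupillometry studies of this kind typically analyse baseline-corrected pupil dilation around an event of interest (here, utterance onset). The following is a minimal sketch of that idea, not the authors' actual preprocessing pipeline; the function name, sampling rate, and window lengths are illustrative assumptions:

```python
# Sketch: baseline-corrected pupil dilation as a cognitive-load index.
# Assumes pupil-diameter samples (mm) at a fixed sampling rate; the baseline
# and analysis windows here are illustrative, not the study's actual values.

def baseline_corrected_dilation(samples, sample_rate_hz, utterance_onset_s,
                                baseline_s=0.5, window_s=2.0):
    """Mean pupil dilation in a window after utterance onset, relative to
    the mean diameter in a short pre-onset baseline."""
    onset = int(utterance_onset_s * sample_rate_hz)
    base_start = max(0, onset - int(baseline_s * sample_rate_hz))
    baseline = samples[base_start:onset]
    window = samples[onset:onset + int(window_s * sample_rate_hz)]
    if not baseline or not window:
        raise ValueError("not enough samples around utterance onset")
    base_mean = sum(baseline) / len(baseline)
    return sum(window) / len(window) - base_mean

# Toy example at 100 Hz: diameter rises from 3.0 mm to 3.2 mm at onset (0.5 s).
samples = [3.0] * 50 + [3.2] * 200
print(round(baseline_corrected_dilation(samples, 100, 0.5), 2))  # 0.2
```

A positive value indicates dilation relative to baseline, which pupillometry research commonly reads as increased effort; comparing these values across fragmented and non-fragmented utterances would follow the logic the abstract describes.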


2014, Vol 23 (3), pp. 132-139
Author(s):
Lauren Zubow,
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper presents the results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper also addresses the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar, and 2 familiar clinicians), who were asked to rate each vocalization as either “yes” or “no.” The ratings were compared to the original identifications made by the child's mother during the face-to-face interaction from which the samples were drawn. The findings of this study suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure that the child's intended choice is accurately understood.
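Agreement between listener judgements and reference labels in designs like this is commonly quantified with chance-corrected agreement such as Cohen's kappa. A minimal sketch, using made-up ratings rather than the study's data:

```python
# Sketch: Cohen's kappa between one listener's "yes"/"no" judgements and the
# reference labels (here, the mother's original identifications). The ratings
# below are invented for illustration; they are not the study's data.

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two equal-length rating sequences."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal label rates.
    labels = set(ratings_a) | set(ratings_b)
    expected = sum(
        (ratings_a.count(lbl) / n) * (ratings_b.count(lbl) / n) for lbl in labels
    )
    return (observed - expected) / (1 - expected)

reference = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
listener  = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"]
print(round(cohens_kappa(reference, listener), 2))  # 0.75
```

Values near 1 indicate agreement well above chance; kappa is preferable to raw percent agreement here because a listener who always answered “yes” would still agree with the reference half the time by chance.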


Author(s):
Ding Ding,
Mark A Neerincx,
Willem-Paul Brinkman

Abstract Virtual cognitions (VCs) are a stream of simulated thoughts people hear while immersed in a virtual environment, e.g. by hearing a simulated inner voice presented as a voice-over. As previous studies have shown, they can enhance people’s self-efficacy and knowledge about, for example, social interactions. Ownership and plausibility of these VCs are regarded as important for their effect, and enhancing both might therefore be beneficial. A potential strategy for achieving this is synchronizing the VCs with people’s eye fixations using eye-tracking technology embedded in a head-mounted display. Hence, this paper tests this idea in the context of a pre-therapy for spider and snake phobia to examine the ability to guide people’s eye fixations. An experiment with 24 participants was conducted using a within-subjects design. Each participant was exposed to two conditions: one where the VCs were adapted to the participant’s eye gaze, and a control condition where they were not. The findings of a Bayesian analysis suggest that credibly more ownership was reported and more eye-gaze shift behaviour was observed in the eye-gaze-adapted condition than in the control condition. Compared to the alternative of no or negative mediation, the findings also lend some credibility to the hypothesis that ownership, at least partly, positively mediates the effect eye-gaze-adapted VCs have on eye-gaze shift behaviour. Only weak support was found for plausibility as a mediator. These findings help improve insight into how VCs affect people.


TEM Journal, 2021, pp. 508-516
Author(s):
Deepti Mishra,
Gonca Gokce Menekse Dalveren,
Frode S. Volden,
Carly Grace Allen

Group work is a necessary element of engineering education, and group members need information about one another, the group process, shared attention and mutual understanding during group discussions. Several elements are important for establishing and maintaining a group discussion, such as participants' roles, seating arrangement, verbal and non-verbal cues, eye gaze and gestures. The present study investigates these elements to identify the behavior of group members in a blend of traditional face-to-face discussion and a computer-supported cooperative work (CSCW) setting. The results of this study show that speaking duration is the key factor for identifying leadership in a group, and that participants mostly used eye gaze for turn-taking. Although this study mixes face-to-face and CSCW discussion settings, participants mostly behaved as in a face-to-face group discussion. However, unlike previous studies involving face-to-face discussion, the relation between seating arrangement and amount of attention was not apparent from the data in this study.


Humaniora, 2011, Vol 2 (1), pp. 518
Author(s):
Esther Widhi Andangsari

This preliminary study investigates social networking and text-based relationships among young adults. Its purpose is to describe how text-based relationships are built through social networking. The study uses a qualitative method with a phenomenological approach. The phenomenon of using social networking to build relationships with others is growing in popularity, especially among young adults. Observed closely, this phenomenon reveals a change in interaction patterns: interaction that was once physical and face-to-face can now, with the growing availability of internet access, take place online without any face-to-face contact, and such interaction is strikingly popular at present. The findings of this preliminary study are that social networking has become a medium for sharing emotions and opinions openly. Text-based relationships through social networking also require an emotional setting that is substituted electronically; this emotion is virtual rather than real. Nevertheless, social networking still gives people a chance to gather face to face, not only virtually.


Autism, 2020, pp. 136236132095169
Author(s):
Roser Cañigueral,
Jamie A Ward,
Antonia F de C Hamilton

Communication with others relies on coordinated exchanges of social signals, such as eye gaze and facial displays. However, this can only happen when partners are able to see each other. Although previous studies report that autistic individuals have difficulties in planning eye gaze and making facial displays during conversation, evidence from real-life dyadic tasks is scarce and mixed. Across two studies, here we investigate how eye gaze and facial displays of typical and high-functioning autistic individuals are modulated by the belief in being seen and the potential to show true gaze direction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video, video-call and face-to-face. Typical participants gazed less at the confederate and produced more facial displays when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial motion patterns in autistic participants were overall similar to the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial displays as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.

Lay abstract: When we are communicating with other people, we exchange a variety of social signals through eye gaze and facial expressions. However, coordinated exchanges of these social signals can only happen when people involved in the interaction are able to see each other. Although previous studies report that autistic individuals have difficulties in using eye gaze and facial expressions during social interactions, evidence from tasks that involve real face-to-face conversations is scarce and mixed. Here, we investigate how eye gaze and facial expressions of typical and high-functioning autistic individuals are modulated by the belief in being seen by another person, and by being in a face-to-face interaction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video (no belief in being seen, no face-to-face), video-call (belief in being seen, no face-to-face) and face-to-face (belief in being seen and face-to-face). Typical participants gazed less at the confederate and made more facial expressions when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial expression patterns in autistic participants were overall similar to the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial expressions as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.


2001, Vol 28 (2), pp. 325-349
Author(s):
Spencer D. Kelly

Recently, much research has explored the role that nonverbal pointing behaviours play in children's early acquisition of language, for example during word learning. However, few researchers have considered the possibility that these behaviours may continue to play a role in language comprehension as children develop more sophisticated language skills. The present study investigates the role that eye gaze and pointing gestures play in three- to five-year-olds' understanding of complex pragmatic communication. Experiment 1 demonstrates that children (N = 29) better understand videotapes of a mother making indirect requests to a child when the requests are accompanied by nonverbal pointing behaviours. Experiment 2 uses a different methodology in which children (N = 27) are actual participants rather than observers, in order to generalize the findings to naturalistic, face-to-face interactions. The results from both experiments suggest that broader units of analysis beyond the verbal message may be needed in studying children's continuing understanding of pragmatic processes.


2012, Vol 4 (2), pp. 99-114
Author(s):
Catherine I. Phillips,
Christopher R. Sears,
Penny M. Pexman

Abstract The present research examines the effects of body-object interaction (BOI) on eye gaze behaviour in a reading task. BOI measures perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g. cat) and a set of low BOI words (e.g. sun) were selected, matched on imageability and concreteness (as well as other lexical and semantic variables). Facilitatory BOI effects were observed: gaze durations and total fixation durations were shorter for high BOI words, and participants made fewer regressions to high BOI words. The results provide evidence of a BOI effect on non-manual responses and in a situation that taps normal reading processes. We discuss how the results (a) suggest that stored motor information (as measured by BOI ratings) is relevant to lexical semantics, and (b) are consistent with an embodied view of cognition (Wilson 2002).

