Visual Cues to Restore Student Attention based on Eye Gaze Drift, and Application to an Offshore Training System

Author(s):  
Andrew Yoshimura ◽  
Adil Khokhar ◽  
Christoph W Borst
2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper will present results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper will also address the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar, and 2 familiar clinicians) and the listeners were asked to rate the vocalizations as either “yes” or “no.” The ratings were compared to the original identification made by the child's mother during the face-to-face interaction from which the samples were drawn. Findings of this study suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye-gaze and vocalizations to ensure the child's intended choice is accurately understood.
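The comparison described above can be sketched as a simple percent-agreement computation between each listener's ratings and the mother's original identifications. This is an illustrative sketch only, not the study's actual analysis; all data values below are hypothetical.

```python
# Percent agreement between one listener's "yes"/"no" ratings and a
# reference set of labels (here, the mother's original identifications).
# All values are invented for illustration.

def percent_agreement(ratings, reference):
    """Share (in %) of items on which the ratings match the reference."""
    matches = sum(r == ref for r, ref in zip(ratings, reference))
    return 100.0 * matches / len(reference)

reference = ["yes", "no", "yes", "yes", "no"]   # mother's original labels
listener  = ["yes", "no", "no",  "yes", "no"]   # one listener's ratings

print(percent_agreement(listener, reference))   # 80.0
```

A fuller analysis would also correct for chance agreement (e.g. Cohen's kappa), since a two-choice task yields 50% agreement by guessing alone.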


2021 ◽  
pp. 182-188
Author(s):  
Laura Bishop ◽  
Carlos Cancino-Chacón ◽  
Werner Goebl

In the Western art music tradition, among many others, top ensembles are distinguished on the basis of their creative interpretation and expressivity, rather than purely on the precision of their synchronization. This chapter proposes that visual cues serve as social motivators during ensemble performance, promoting performers’ creative engagement with the music and each other. This chapter discusses findings from a study in which skilled duo musicians’ use of visual cues (eye gaze and body motion) was examined across the course of a rehearsal session. Results show that performers are driven to interact visually: (1) by temporal irregularity in the music and (2) by increased familiarity with the music and their co-performer. Synchronization success was unimpaired during a “blind” performance where performers could not see each other. Ensemble musicians thus choose to supplement their auditory interactions with visual cues despite their visual interactions offering no apparent benefit to synchronization.

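Synchronization success in duo studies like this is commonly quantified as the mean absolute asynchrony between the two performers' matched note onsets. The sketch below illustrates that generic measure under invented onset times; it is not the authors' specific analysis pipeline.

```python
# Mean absolute asynchrony between two performers' matched note onsets.
# Onset times (in milliseconds) are hypothetical.

def mean_abs_asynchrony(onsets_a, onsets_b):
    """Mean |onset difference| over matched note pairs, in ms."""
    diffs = [abs(a - b) for a, b in zip(onsets_a, onsets_b)]
    return sum(diffs) / len(diffs)

performer_1 = [0.0, 512.0, 1010.0, 1498.0]   # note onsets, ms
performer_2 = [18.0, 505.0, 1032.0, 1490.0]  # note onsets, ms

print(mean_abs_asynchrony(performer_1, performer_2))  # 13.75
```

Comparing this value between normal and "blind" performances is one way to test whether removing visual contact impairs synchronization.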


2020 ◽  
pp. 026765831989682
Author(s):  
Dato Abashidze ◽  
Kim McDonough ◽  
Yang Gao

Recent research that explored how input exposure and learner characteristics influence novel L2 morphosyntactic pattern learning has exposed participants to either text or static images rather than dynamic visual events. Furthermore, it is not known whether incorporating eye gaze cues into dynamic visual events enhances dual pattern learning. Therefore, this exploratory eye-tracking study examined whether eye gaze cues during dynamic visual events facilitate novel L2 pattern learning. University students (n = 72) were exposed to 36 training videos with two dual novel morphosyntactic patterns in pseudo-Georgian: completed events (bich-ma kocn-ul gogoit, ‘boy kissed girl’) and ongoing actions (bich-su kocn-ar gogoit, ‘boy is kissing girl’). They then carried out an immediate test with 24 items using the same vocabulary words, followed by a generalization test with 24 items created from new vocabulary words. Results indicated that learners who received the eye gaze cues scored significantly higher on the immediate test and relied on the verb cues more than on the noun cues. A post-hoc analysis of eye-movement data indicated that the gaze cues elicited longer looks to the correct images. Findings are discussed in relation to visual cues and novel morphosyntactic pattern learning.


2021 ◽  
Vol 2 ◽  
Author(s):  
Prasanth Sasikumar ◽  
Soumith Chittajallu ◽  
Navindd Raj ◽  
Huidong Bai ◽  
Mark Billinghurst

Conventional training and remote collaboration systems allow users to see each other’s faces, heightening the sense of presence while sharing content like videos or slideshows. However, these methods lack depth information and a free 3D perspective of the training content. This paper investigates the impact of volumetric playback in a Mixed Reality (MR) spatial training system. We describe the MR system in a mechanical assembly scenario that incorporates various instruction delivery cues. Building upon previous research, four spatial instruction cues were explored: “Annotation”, “Hand gestures”, “Avatar”, and “Volumetric playback”. Through two user studies that simulated a real-world mechanical assembly task, we found that the volumetric visual cue enhanced spatial perception in the tested MR training tasks, exhibiting increased co-presence and system usability while reducing mental workload and frustration. We also found that the given tasks required less effort and mental load when eye gaze was incorporated. Eye gaze on its own was not perceived to be very useful, but it helped to complement the hand gesture cues. Finally, we discuss limitations, future work, and potential applications of our system.


2020 ◽  
Author(s):  
Briony Banks ◽  
Emma Gowen ◽  
Kevin Munro ◽  
Patti Adank

Visual cues from a speaker’s face may improve perceptual adaptation to degraded speech over time, but current evidence is limited. We aimed to replicate results from previous studies and extend them to more demanding speech stimuli (sentences), to better represent real-life, challenging speech comprehension. In addition, we investigated whether particular eye gaze patterns towards the speaker’s mouth were related to adaptation, hypothesising that listeners who looked more at the speaker’s mouth would show greater adaptation. A group of listeners were presented with noise-vocoded sentences in audiovisual format while a control group were presented with the audio signal only, presented congruently with a still image of the speaker’s face. Results of previous adaptation studies were partially replicated: the audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall (after exposure to 90 sentences). Longer fixations on the speaker’s mouth in the audiovisual group were related to better overall accuracy, although evidence for this relationship was relatively weak. An exploratory analysis further showed that the duration of fixations to the speaker’s mouth decreased over time. The results suggest that the benefits from visual cues to adaptation to unfamiliar speech vary more than previously thought. Longer fixations on a speaker’s mouth may play a role in successfully decoding these cues, but more evidence is needed to fully establish how patterns of eye gaze are related to audiovisual speech recognition.
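Fixation measures like "longer fixations on the speaker's mouth" are typically computed as dwell time within an area of interest (AOI). The sketch below shows that generic computation under assumed fixation records and an assumed rectangular mouth AOI; it is not the authors' actual pipeline.

```python
# Proportion of total fixation time falling inside a rectangular
# "mouth" area of interest (AOI). Fixations are (x, y, duration_ms)
# tuples; coordinates, durations, and AOI bounds are hypothetical.

MOUTH_AOI = (300, 420, 500, 480)  # (x_min, y_min, x_max, y_max), pixels

def mouth_dwell_proportion(fixations, aoi=MOUTH_AOI):
    x0, y0, x1, y1 = aoi
    total = sum(d for _, _, d in fixations)
    in_aoi = sum(d for x, y, d in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return in_aoi / total if total else 0.0

fixations = [(350, 440, 200), (100, 120, 300), (480, 470, 500)]
print(mouth_dwell_proportion(fixations))  # 0.7
```

Tracking this proportion per sentence block would reveal the decrease in mouth fixations over time that the exploratory analysis reports.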


Author(s):  
Arne Seeliger ◽  
Gerrit Merz ◽  
Christian Holz ◽  
Stefan Feuerriegel
Keyword(s):  
Eye Gaze ◽  

Author(s):  
Marc Swerts ◽  
Emiel Krahmer

Communication partners not only exchange information through the auditory channel but also use a wide range of visual cues, such as facial expressions, eye gaze, hand gestures, and other types of body language. The latter set of features can be subsumed under the term ‘visual prosody’. Visual cues have been shown to be communicatively relevant, as they not only help to improve speech intelligibility, especially in noisy conditions, but can also signal information that is not expressed through the words or the syntactic structure of an utterance. This chapter examines how visual prosody can be put to use to highlight specific communicative functions, and how such cues relate to auditory forms of prosody. It also presents results of comparative analyses of visual prosody to shed light on resemblances and differences in how it is being exploited in different cultures.


Author(s):  
Hailiang Wang ◽  
Calvin K. L. Or

Objective Simulation and eye tracking were used to examine the effects of text enhancement, identical prescription-package names, visual cues, and verbal provocation on visual searches of look-alike drug names. Background Look-alike drug names can cause confusion and medication errors, which jeopardize patient safety. The effectiveness of many strategies that may prevent these problems requires evaluation. Method We conducted two experiments that were based on a four-way, repeated-measures design. The within-subject factors were text enhancement, identical prescription-package names, visual cues, and verbal provocation. In Experiment 1, 40 nurses searched for and selected a target drug from an array of drug packages on a pharmacy shelf mock-up. In Experiment 2, the eye movements of another 40 nurses were tracked while they performed a computer-based drug search task. Results Text enhancement had no significant effect on the drug search. Nurses selected the target drugs more quickly and easily when the prescriptions and drug packages shared identical drug name formats. The use of a visual cue to direct nurses’ attention facilitated their visual searches and improved their eye gaze behaviors. The nurses reported greater mental effort if they were provoked verbally during the drug search. Conclusion Efficient and practical strategies should be adopted for designs that facilitate accurate drug search. Among these strategies are using identical name appearances on drug prescriptions and packages, using a visual cue to direct nurses’ attention, and avoiding rushing nurses while they are concentrating. Application The findings aim to inspire recommendations for work system designs that will improve the visual search of look-alike drug names.


2020 ◽  
Vol 36 (3) ◽  
pp. 359-370 ◽  
Author(s):  
Kim McDonough ◽  
Pavel Trofimovich ◽  
Libing Lu ◽  
Dato Abashidze

Visual cues may help second language (L2) speakers perceive interactional feedback and reformulate their nontarget forms, particularly when paired with recasts, as recasts can be difficult to perceive as corrective. This study explores whether recasts have a visual signature and whether raters can perceive a recast’s corrective function. Transcripts of conversations between a bilingual French–English interlocutor and L2 English university students (n = 24) were analysed for recasts and noncorrective repetitions with rising and declarative intonation. Videos of those excerpts (k = 96) were then analysed for the interlocutor’s provision of visual cues during the recast and repetition turns, including eye gaze duration, nods, blinks, and other facial expressions (frowns, eyebrow raises). The videos were rated by 96 undergraduate university students who were randomly assigned to three viewing conditions: clear voice/clear face, clear voice/blurred face, or distorted voice/clear face. Using a 100-millimeter scale with two anchor points (0% = he’s making a comment, and 100% = he’s correcting an error), they rated the corrective function of the interlocutors’ responses while their eye gaze was tracked. Raters reliably distinguished recasts from repetitions through their ratings (although they were generally low), but not through their eye gaze behaviors.


2018 ◽  
Vol 71 (7) ◽  
pp. 1526-1534 ◽  
Author(s):  
Thomas Gallagher-Mitchell ◽  
Victoria Simms ◽  
Damien Litchfield

In this article, we present an investigation into the use of visual cues during number line estimation and their influence on cognitive processes for reducing number line estimation error. Participants completed a 0-1000 number line estimation task before and after a brief intervention in which they observed static-visual or dynamic-visual cues (control, anchor, gaze cursor, mouse cursor) and also made estimation marks to test effective number-target estimation. Results indicated that a significant pre-test to post-test reduction in estimation error was present for dynamic-visual cues of modelled eye-gaze and mouse cursor. However, there was no significant performance difference between pre- and post-test for the control or static anchor conditions. Findings are discussed in relation to the extent to which anchor points alone are meaningful in promoting successful segmentation of the number line and whether dynamic cues promote the utility of these locations in reducing error through attentional guidance.
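Estimation error on a bounded number line task is conventionally measured as percent absolute error (PAE): the absolute distance between the estimate and the target, divided by the scale of the line. The sketch below illustrates this standard measure for a 0-1000 line, as used in the task; the trial data are invented.

```python
# Percent absolute error (PAE) for number line estimation:
# |estimate - target| / line scale * 100. Trial values are hypothetical.

def percent_absolute_error(estimate, target, scale=1000):
    """PAE for a single trial on a 0-to-`scale` number line."""
    return abs(estimate - target) / scale * 100

trials = [(150, 100), (480, 500), (720, 750)]  # (estimate, target) pairs
pae = [percent_absolute_error(e, t) for e, t in trials]
print(round(sum(pae) / len(pae), 2))  # mean PAE across trials: 3.33
```

Comparing mean PAE before and after the intervention is the natural way to express the pre-test to post-test reduction in estimation error reported above.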

