Data-driven group comparisons of eye fixations to dynamic stimuli

2021 ◽  
pp. 174702182110480
Author(s):  
Tochukwu Onwuegbusi ◽  
Frouke Hermens ◽  
Todd Hogue

Recent advances in software and hardware have allowed eye tracking to move away from static images to more ecologically relevant video streams. The analysis of eye tracking data for such dynamic stimuli, however, is not without challenges. The frame-by-frame coding of regions of interest (ROIs) is labour-intensive, and computer vision techniques to automatically code such ROIs are not yet mainstream, restricting the use of such stimuli. Combined with the more general problem of defining relevant ROIs for video frames, this means that methods are needed to facilitate data analysis. Here, we present a first evaluation of an easy-to-implement data-driven method with the potential to address these issues. To test the new method, we examined the differences in eye movements of self-reported politically left- or right-wing leaning participants to video clips of left- and right-wing politicians. The results show that our method can accurately predict group membership on the basis of eye movement patterns, isolate video clips that best distinguish people on the political left–right spectrum, and reveal the section of each video clip with the largest group differences. Our methodology thereby aids the understanding of group differences in gaze behaviour, and the identification of critical stimuli for follow-up studies or for use in saccade diagnosis.
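The core of such a data-driven approach can be sketched in a few lines: summarise each participant's fixations on a clip as a coarse spatial attention map, then assign a held-out participant to whichever group's mean map correlates best with their own. The sketch below is a minimal illustration of that idea, not the authors' exact pipeline; the function names and the 8×8 binning are our assumptions.

```python
import numpy as np

def fixation_histogram(fix_x, fix_y, bins=8):
    """Summarise one participant's fixations on a clip as a normalised
    2-D spatial histogram (a coarse attention map). Coordinates are
    assumed to be normalised to [0, 1]."""
    h, _, _ = np.histogram2d(fix_x, fix_y, bins=bins, range=[[0, 1], [0, 1]])
    h = h.ravel().astype(float)
    return h / h.sum() if h.sum() else h

def predict_group(test_map, group_a_maps, group_b_maps):
    """Assign a held-out participant to whichever group's mean attention
    map their own map correlates with more strongly."""
    corr_a = np.corrcoef(test_map, np.mean(group_a_maps, axis=0))[0, 1]
    corr_b = np.corrcoef(test_map, np.mean(group_b_maps, axis=0))[0, 1]
    return "A" if corr_a > corr_b else "B"
```

Applied frame window by frame window, the same comparison can also localise the section of a clip where the two groups' maps diverge most.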

2020 ◽  
Author(s):  
Hamidullah Binol ◽  
M. Khalid Khan Niazi ◽  
Garth Essig ◽  
Jay Shah ◽  
Jameson K Mattingly ◽  
...  

Objectives: With the increasing emphasis on developing effective telemedicine approaches in Otolaryngology, this study explored whether a single composite image stitched from a digital otoscopy video provides sufficient diagnostic information to make an accurate diagnosis, as compared with the full video. Methods: Five Ear, Nose, and Throat (ENT) physicians reviewed the same set of 78 digital otoscope eardrum videos covering four eardrum conditions: normal, effusion, retraction, and tympanosclerosis, along with composite images generated by a SelectStitch method that uses computer-assisted selection of video frames, and by a Stitch method that incorporates all video frames. Participants provided a diagnosis for each item along with a rating of diagnostic confidence. Diagnostic accuracy for each pathology with SelectStitch was compared with accuracy when reviewing the entire video clip and when reviewing the Stitch image. Results: There were no significant differences in diagnostic accuracy between physicians reviewing SelectStitch images and those reviewing full video clips, but both yielded better diagnostic accuracy than Stitch images. Inter-reader agreement was moderate. Conclusion: Composite eardrum images generated by SelectStitch provided diagnostic information equivalent to full video clips, sufficient for ENTs to make the correct diagnosis for most pathologies. These findings suggest that a composite eardrum image may be sufficient for telemedicine approaches to ear diagnosis, eliminating the need for storage and transmission of large video files, with further applications in documentation for electronic medical record systems, patient/family counselling, and clinical training.
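The abstract does not specify how SelectStitch's computer-assisted frame selection works, but the general idea of keeping only informative frames before stitching can be conveyed with a simple focus heuristic. The sharpness proxy and the keep-fraction below are our assumptions for illustration, not the paper's method.

```python
import numpy as np

def frame_sharpness(frame):
    """Proxy for focus quality: mean absolute intensity gradient of a
    greyscale frame (blurry or featureless frames score near zero)."""
    gy, gx = np.gradient(frame.astype(float))
    return np.abs(gx).mean() + np.abs(gy).mean()

def select_frames(frames, keep_fraction=0.25):
    """Keep only the sharpest fraction of frames, as a stand-in for a
    computer-assisted selection step, before handing them to a stitcher."""
    scores = [frame_sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    cutoff = sorted(scores, reverse=True)[n_keep - 1]
    return [f for f, s in zip(frames, scores) if s >= cutoff]
```

Discarding low-information frames before stitching avoids the blur and misregistration artefacts that an all-frames Stitch can accumulate, which is consistent with the accuracy gap the study reports.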


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


Author(s):  
Aideen McParland ◽  
Stephen Gallagher ◽  
Mickey Keenan

A defining feature of ASD is atypical gaze behaviour; however, eye-tracking studies in ‘real-world’ settings are limited, and the possibility of improving gaze behaviour for ASD children is largely unexplored. This study investigated gaze behaviour of ASD and typically developing (TD) children in their classroom setting. Eye-tracking technology was used to develop and pilot an operant training tool to positively reinforce typical gaze behaviour towards faces. Visual and statistical analyses of eye-tracking data revealed different gaze behaviour patterns during live interactions for ASD and TD children depending on the interaction type. All children responded to operant training, with longer looking times observed on face stimuli post training. The promising application of operant gaze training in ecologically valid settings is discussed.


2019 ◽  
Vol 63 (4) ◽  
pp. 689-712
Author(s):  
K. Rothermich ◽  
O. Caivano ◽  
L.J. Knoll ◽  
V. Talwar

Interpreting other people’s intentions during communication represents a remarkable challenge for children. Although many studies have examined children’s understanding of, for example, sarcasm, less is known about how they interpret other speaker intentions. Using realistic audiovisual scenes, we invited 124 children between 8 and 12 years old to watch video clips of young adults using different speaker intentions. After watching each video clip, children answered questions about the characters and their beliefs, and the perceived friendliness of the speaker. Children’s responses reveal age and gender differences in the ability to interpret speaker belief and social intentions, especially for scenarios conveying teasing and prosocial lies. We found that the ability to infer speaker belief of prosocial lies and to interpret social intentions increases with age. Our results suggest that children at the age of 8 years already show adult-like abilities to understand literal statements, whereas the ability to infer specific social intentions, such as teasing and prosocial lies, is still developing between the ages of 8 and 12 years. Moreover, girls performed better than boys in classifying prosocial lies and sarcasm as insincere. The outcomes expand our understanding of how children interpret speaker intentions and suggest further research into the development of teasing and prosocial lie interpretation.


2021 ◽  
pp. 135676672110533
Author(s):  
Georgiana-Denisse Savin ◽  
Cristina Fleșeriu ◽  
Larissa Batrancea

In recent years, the number of studies in tourism using the eye tracking technique has increased and started generating valuable information for both academics and the industry. However, there is a gap in the literature concerning systematic reviews focused on recent articles and their findings. Thus, the aim of this study is to close this gap by systematically analysing 70 research papers tackling the subject of eye tracking in tourism and published in highly ranked tourism journals. The study identifies the most popular topics and trends for eye tracking research, as well as the most used types of visual stimuli, such as exhibitions, restaurant menus, promotional pictures or websites. The study also details the measurements specific to the analysis of eye tracking data, including fixations, saccades and heat maps. Results are emphasized along with their theoretical and practical implications. In addition, we highlight the limited use of dynamic stimuli in the existing literature and suggest further research directions using the eye tracking technique.
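The fixation and saccade measures these reviews discuss are typically derived from raw gaze samples by an event-detection step; a common choice is a dispersion-threshold algorithm (I-DT), which treats any window of samples that stays spatially compact for long enough as one fixation. The version below is a simplified sketch with illustrative parameter values, not a production implementation.

```python
def detect_fixations(xs, ys, ts, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT style) fixation detection: a run of
    gaze samples whose spread stays under `max_dispersion` for at least
    `min_duration` seconds is counted as one fixation. Returns
    (centroid_x, centroid_y, start_time, end_time) tuples."""
    fixations, start = [], 0
    while start < len(ts):
        end = start
        # Grow the window while its spatial spread stays small.
        while end + 1 < len(ts):
            wx = xs[start:end + 2]
            wy = ys[start:end + 2]
            if (max(wx) - min(wx)) + (max(wy) - min(wy)) > max_dispersion:
                break
            end += 1
        if ts[end] - ts[start] >= min_duration:
            n = end - start + 1
            fixations.append((sum(xs[start:end + 1]) / n,
                              sum(ys[start:end + 1]) / n,
                              ts[start], ts[end]))
            start = end + 1
        else:
            start += 1
    return fixations
```

Heat maps are then usually built by accumulating the detected fixation centroids, weighted by duration, over the stimulus image.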


2016 ◽  
Vol 8 (3) ◽  
pp. 292-300 ◽  
Author(s):  
Jan-Willem van Prooijen ◽  
André P. M. Krouwel

Dogmatic intolerance—defined as a tendency to reject, and consider as inferior, any ideological belief that differs from one’s own—is often assumed to be more prominent at the political right than at the political left. In the present study, we make two novel contributions to this perspective. First, we show that dogmatic intolerance is stronger among left- and right-wing extremists than moderates in both the European Union (Study 1) as well as the United States (Study 2). Second, in Study 3, participants were randomly assigned to describe a strong or a weak political belief that they hold. Results revealed that compared to weak beliefs, strong beliefs elicited stronger dogmatic intolerance, which in turn was associated with willingness to protest, denial of free speech, and support for antisocial behavior. We conclude that independent of content, extreme political beliefs predict dogmatic intolerance.


Author(s):  
Hedda Martina Šola ◽  
Fayyaz Hussain Qureshi ◽  
Sarwar Khawaja

In recent years, the newly emerging discipline of neuromarketing, which applies brain (emotion and behaviour) research in an organisational context, has grown in prominence in academic and practice literature. With the increasing growth of online teaching, COVID-19 left higher education institutions no option but to move online. As a result, students who attend an online course are more prone to losing focus and attention, resulting in poor academic performance. The primary purpose of this study is therefore to observe learners' behaviour while they use an online learning platform. This study applies neuromarketing to enhance students' learning performance and motivation in an online classroom. Using a web camera, we employed facial coding and eye-tracking techniques to study students' attention, motivation, and interest in an online classroom. In collaboration with Oxford Business College's marketing team, the Institute for Neuromarketing distributed video links to 297 students over the course of five days via email, a student representative from Oxford Business College, a WhatsApp group, and a newsletter developed explicitly for that purpose. To ensure the research was both realistic and feasible, the instructors in the videos were different, and students were randomly allocated to either a video link lasting 90 seconds (n=142) or a second one lasting 10 minutes (n=155). Tobii Sticky, an online self-service platform, was used to measure facial coding and eye tracking. During the 90-second online lecture, participants' gaze behaviour was tracked over time to gather data on their attention distribution, and emotions were evaluated using facial coding. The 10-minute video, in contrast, examined emotional involvement. The findings show that students lose their listening focus when no supporting visual material or virtual board is used, even during a brief presentation.
Furthermore, when they are exposed to a single shareable piece of content for longer than 5.24 minutes, their motivation and mood decline; however, when new shareable material or a class activity is introduced, their motivation and mood rise. JEL: I20; I21


Author(s):  
Sandeep Mathias ◽  
Diptesh Kanojia ◽  
Abhijit Mishra ◽  
Pushpak Bhattacharyya

Gaze behaviour has been used as a way to gather cognitive information for a number of years. In this paper, we discuss the use of gaze behaviour in solving different tasks in natural language processing (NLP) without having to record it at test time. This is because the collection of gaze behaviour is a costly task, both in terms of time and money. Hence, in this paper, we focus on research done to alleviate the need for recording gaze behaviour at run time. We also mention different eye tracking corpora in multiple languages, which are currently available and can be used in natural language processing. We conclude our paper by discussing applications in one domain, education, and how learning gaze behaviour can help in solving the tasks of complex word identification and automatic essay grading.
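A typical way such eye tracking corpora feed NLP models is by aggregating raw fixations into per-token reading measures, such as total reading time and fixation count, which then serve as auxiliary features or training signals. The helper below is a hypothetical sketch; it assumes each token's horizontal extent on screen is known, which real corpora provide via word-boundary annotations.

```python
def token_gaze_features(fixations, token_spans):
    """Aggregate fixations into per-token reading measures.

    fixations: list of (x_position, duration_seconds) pairs.
    token_spans: list of (start_px, end_px) horizontal extents, one per
    token, in display order.
    Returns one dict per token with total reading time (trt) and
    fixation count (n_fix)."""
    feats = [{"trt": 0.0, "n_fix": 0} for _ in token_spans]
    for x, duration in fixations:
        for i, (lo, hi) in enumerate(token_spans):
            if lo <= x < hi:
                feats[i]["trt"] += duration
                feats[i]["n_fix"] += 1
                break
    return feats
```

At training time, a model can learn to predict these measures from text alone, so that no eye tracker is needed when the model is deployed, which is the run-time alleviation the paper surveys.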


2015 ◽  
Vol 52 ◽  
pp. 601-713 ◽  
Author(s):  
Haonan Yu ◽  
N. Siddharth ◽  
Andrei Barbu ◽  
Jeffrey Mark Siskind

We present an approach to simultaneously reasoning about a video clip and an entire natural-language sentence. The compositional nature of language is exploited to construct models which represent the meanings of entire sentences composed out of the meanings of the words in those sentences mediated by a grammar that encodes the predicate-argument relations. We demonstrate that these models faithfully represent the meanings of sentences and are sensitive to how the roles played by participants (nouns), their characteristics (adjectives), the actions performed (verbs), the manner of such actions (adverbs), and changing spatial relations between participants (prepositions) affect the meaning of a sentence and how it is grounded in video. We exploit this methodology in three ways. In the first, a video clip along with a sentence are taken as input and the participants in the event described by the sentence are highlighted, even when the clip depicts multiple similar simultaneous events. In the second, a video clip is taken as input without a sentence and a sentence is generated that describes an event in that clip. In the third, a corpus of video clips is paired with sentences which describe some of the events in those clips and the meanings of the words in those sentences are learned. We learn these meanings without needing to specify which attribute of the video clips each word in a given sentence refers to. The learned meaning representations are shown to be intelligible to humans.
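At its core, grounding a sentence in a clip means choosing which detected object tracks fill which sentence roles so that the product of per-word scores is maximised. The brute-force sketch below conveys that compositional scoring; the paper itself uses efficient model-based inference rather than enumeration, and all names and score functions here are illustrative.

```python
from itertools import permutations

def best_assignment(word_models, n_roles, tracks):
    """word_models: list of (score_fn, role_indices). Each score_fn maps
    the tracks bound to its roles to a score in [0, 1]; the sentence
    score is the product over words, maximised over assignments of
    tracks to roles."""
    best, best_score = None, -1.0
    for combo in permutations(tracks, n_roles):
        score = 1.0
        for fn, roles in word_models:
            score *= fn(*(combo[r] for r in roles))
        if score > best_score:
            best, best_score = combo, score
    return best, best_score
```

Because every word contributes a factor, changing a single word (a different verb or preposition) changes which assignment wins, which is how the approach highlights the correct participants even among similar simultaneous events.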


2018 ◽  
Vol 11 (2) ◽  
Author(s):  
Sarah Vandemoortele ◽  
Kurt Feyaerts ◽  
Mark Reybrouck ◽  
Geert De Bièvre ◽  
Geert Brône ◽  
...  

To date, few investigations into nonverbal communication in ensemble playing have focused on gaze behaviour. In this study, the gaze behaviour of musicians playing in trios was recorded using the recently developed technique of mobile eye-tracking. Four trios (clarinet, violin, piano) were recorded while rehearsing and while playing several run-throughs of the same musical fragment. The current article reports on an initial exploration of the data in which we describe how often gazing at a partner occurred. On the one hand, we aim to identify possible contrasting cases; on the other, we look for tendencies across the run-throughs. We discuss the quantified gaze behaviour in relation to the existing literature and the current research design.

