Spatial and time domain analysis of eye-tracking data during screening of brain magnetic resonance images

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260717
Author(s):  
Abdulla Al Suman ◽  
Carlo Russo ◽  
Ann Carrigan ◽  
Patrick Nalepka ◽  
Benoit Liquet-Weiland ◽  
...  

Introduction Eye-tracking research has been widely used in radiology applications. Prior studies exclusively analysed either temporal or spatial eye-tracking features, neither of which alone completely characterises the spatiotemporal dynamics of radiologists’ gaze. Purpose Our research aims to quantify human visual search dynamics in both domains during brain stimuli screening to explore the relationship between reader characteristics and stimuli complexity. The methodology can be used to discover strategies to aid trainee radiologists in identifying pathology, and to select regions of interest for machine vision applications. Method The study was performed using eye-tracking data 5 seconds in duration from 57 readers (15 Brain-experts, 11 Other-experts, 5 Registrars and 26 Naïves) viewing 40 neuroradiological images as stimuli (i.e., 20 normal and 20 pathological brain MRIs). The visual scanning patterns were analysed by calculating the fractal dimension (FD) and the Hurst exponent (HE) using re-scaled range (R/S) and detrended fluctuation analysis (DFA) methods. The FD was used to measure the spatial geometrical complexity of the gaze patterns, and the HE analysis was used to measure participants’ focusing skill. Focusing skill refers to the persistence/anti-persistence of a participant’s gaze on the stimulus over time. Pathological and normal stimuli were analysed separately at both the “First Second” and the full “Five Seconds” viewing durations. Results All experts were more focused and had a higher visual search complexity than Registrars and Naïves. This was seen for both pathological and normal stimuli in both the first-second and five-second analyses. The Brain-experts subgroup achieved better focusing skill than Other-experts, reflecting their domain-specific expertise. The FDs found when viewing pathological stimuli were higher than those for normal ones. For normal stimuli, the FD increased in the five-second data, whereas for pathological stimuli it did not change. In contrast to the FDs, the scanpath HEs for pathological and normal stimuli were similar. However, participants’ gaze was more focused in the “Five Seconds” data than in the “First Second” data. Conclusions The HE analysis of the scanpaths showed that all experts have greater focus than Registrars and Naïves. This may be related to their higher visual search complexity, which stems from their training and expertise.
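
As an illustration of the temporal measure used above, below is a minimal sketch of re-scaled range (R/S) estimation of the Hurst exponent for a one-dimensional gaze-coordinate series. The paper's exact preprocessing, window choices, and its DFA variant are not reproduced here, and the sampling rate in the usage comment is hypothetical.

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent of a 1-D series via re-scaled range (R/S)
    analysis. H > 0.5 suggests persistent (focused) gaze dynamics,
    H < 0.5 anti-persistent, H ~ 0.5 a random walk."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Window sizes spaced logarithmically between min_window and n // 2
    sizes = np.unique(
        np.logspace(np.log10(min_window), np.log10(n // 2), 10).astype(int)
    )
    rs_means = []
    for w in sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = series[start:start + w]
            z = np.cumsum(seg - seg.mean())   # cumulative deviation from mean
            r = z.max() - z.min()             # range of the cumulative series
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)
        rs_means.append(np.mean(rs_vals))
    # Hurst exponent = slope of log(R/S) against log(window size)
    h, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return h

# Example: x-coordinates of a 5-second scanpath, hypothetical 300 Hz sampling
# gaze_x = np.loadtxt("scanpath_x.csv")
# print(hurst_rs(gaze_x))
```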

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6908
Author(s):  
Sebastian Brückner ◽  
Jan Schneider ◽  
Olga Zlatkin-Troitschanskaia ◽  
Hendrik Drachsler

Learning to solve graph tasks is one of the key prerequisites for acquiring domain-specific knowledge in most study domains. Analyses of graph understanding often use eye-tracking and focus on how much time students spend gazing at particular areas of a graph, known as Areas of Interest (AOIs). To gain deeper insight into students’ task-solving process, we argue that gaze shifts between fixations on different AOIs (so-called transitions) also need to be included in holistic analyses of graph understanding that consider the importance of transitions for the task-solving process. We therefore introduced Epistemic Network Analysis (ENA) as a novel approach to analyze eye-tracking data from 23 university students who solved eight multiple-choice graph tasks in physics and economics. ENA is a method for quantifying, visualizing, and interpreting network data, allowing a weighted analysis of the gaze patterns of both correct and incorrect graph task solvers that considers the interrelations between fixations and transitions. After analyzing the differences in the number of fixations and the number of single transitions between correct and incorrect solvers, we conducted an ENA for each task. We demonstrate that an isolated analysis of fixations and transitions provides only limited insight into graph-solving behavior. In contrast, ENA identifies differences between the gaze patterns of students who solved the graph tasks correctly and incorrectly across the multiple graph tasks. For instance, incorrect solvers shifted their gaze from the graph to the x-axis and from the question to the graph comparatively more often than correct solvers. The results indicate that incorrect solvers often have problems translating textual information into graphical information and rely more on partly irrelevant parts of a graph. Finally, we discuss how these findings can be used to design experimental studies and innovative instructional procedures in higher education.
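
ENA itself is typically run with dedicated tooling, but the raw quantities it networks together here are simple: fixation counts per AOI and counts of directed transitions between consecutive fixations on different AOIs. A minimal sketch of that counting step, with hypothetical AOI labels, follows.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def aoi_counts_and_transitions(aoi_sequence):
    """Count fixations per AOI and directed transitions between consecutive
    fixations on different AOIs -- the raw quantities that network analyses
    such as ENA operate on.

    aoi_sequence: AOI labels, one per fixation, in temporal order.
    """
    fixations = Counter(aoi_sequence)
    transitions = Counter(
        (a, b) for a, b in pairwise(aoi_sequence) if a != b
    )
    return fixations, transitions

# Hypothetical fixation sequence while solving a graph task
fix, trans = aoi_counts_and_transitions(
    ["question", "graph", "x-axis", "graph", "question", "graph"]
)
print(fix)    # Counter({'graph': 3, 'question': 2, 'x-axis': 1})
print(trans)  # ('question', 'graph') occurs twice, etc.
```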


2011 ◽  
Vol 40 (594) ◽  
Author(s):  
Susanne Bødker

Dual eye-tracking (DUET) is a promising methodology to study and support collaborative work. The method consists of simultaneously recording the gaze of two collaborators working on a common task. The main themes addressed in the workshop are eye-tracking methodology (how to translate gaze measures into descriptions of joint action, how to measure and model gaze alignment between collaborators, and how to address the task specificity inherent to eye-tracking data) and, more generally, future applications of dual eye-tracking in CSCW. The DUET workshop will bring together scholars who currently develop the approach as well as a larger audience interested in applications of eye-tracking in collaborative situations. The workshop format will combine paper presentations and discussions. The papers are available online as PDF documents at http://www.dualeyetracking.org/DUET2011/.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jessica Dawson ◽  
Alan Kingstone ◽  
Tom Foulsham

Abstract People are drawn to social, animate things more than inanimate objects. Previous research has also shown gaze following in humans, a process that has been linked to theory of mind (ToM). In three experiments, we investigated whether animacy and ToM are involved when making judgements about the location of a cursor in a scene. In Experiment 1, participants were told that this cursor represented the gaze of an observer and were asked to decide whether the observer was looking at a target object. This task is similar to that carried out by researchers manually coding eye-tracking data. The results showed that participants were biased to perceive the gaze cursor as directed towards animate objects (faces) compared to inanimate objects. In Experiments 2 and 3 we tested the role of ToM by presenting the same scenes to new participants, but now with the statement that the cursor was generated by a ‘random’ computer system or by a computer system designed to seek targets. The bias to report that the cursor was directed toward faces was abolished in Experiment 2, and minimised in Experiment 3. Together, the results indicate that people attach minds to the mere representation of an individual's gaze, and this attribution of mind influences what people believe an individual is looking at.


2019 ◽  
Vol 40 (8) ◽  
pp. 850-861 ◽  
Author(s):  
Piotr Pietruski ◽  
Bartłomiej Noszczyk ◽  
Adriana M Paskal ◽  
Wiktor Paskal ◽  
Łukasz Paluch ◽  
...  

Abstract Background Little is known about breast cancer survivors’ perception of breast attractiveness. A better understanding of this subjective concept could contribute to the improvement of patient-reported outcomes after reconstructive surgeries and facilitate the development of new methods for assessing breast reconstruction outcomes. Objectives The aim of this eye-tracking (ET)-based study was to verify whether mastectomy altered women’s visual perception of breast aesthetics and symmetry. Methods A group of 30 women after unilateral mastectomy and 30 healthy controls evaluated the aesthetics and symmetry of various types of female breasts displayed as highly standardized digital images. Gaze patterns of women from the study groups were recorded using an ET system and subjected to a comparative analysis. Results Regardless of the study group, the longest fixation duration and the highest fixation number were found in the nipple-areola complex. This area was also the most common region of the initial fixation. Several significant between-group differences were identified; the gaze patterns of women after mastectomy were generally characterized by longer fixation times for the inframammary fold, lower pole, and upper half of the breast. Conclusions Mastectomy might affect women’s visual perception patterns during the evaluation of breast aesthetics and symmetry. ET data might improve our understanding of breast attractiveness and constitute the basis for a new reliable method for the evaluation of outcomes of reconstructive breast surgeries.
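
As an aside on the "region of initial fixation" analysis, the sketch below tallies which named region received each participant's first fixation. The dictionary layout and rectangular AOIs are assumptions for illustration; studies like this one typically define polygonal AOIs over standardized images.

```python
from collections import Counter

def initial_fixation_regions(scanpaths, aois):
    """Tally which named region received each participant's first
    on-stimulus fixation, e.g. to check whether the nipple-areola complex
    is the most common region of initial fixation.

    scanpaths: {participant_id: [(x, y), ...]} fixation positions in order.
    aois: {region_name: (x_min, y_min, x_max, y_max)} rectangular regions.
    """
    def region_of(x, y):
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return "outside"

    return Counter(region_of(*path[0]) for path in scanpaths.values() if path)
```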


2020 ◽  
Vol 10 (13) ◽  
pp. 4508 ◽  
Author(s):  
Armel Quentin Tchanou ◽  
Pierre-Majorique Léger ◽  
Jared Boasen ◽  
Sylvain Senecal ◽  
Jad Adam Taher ◽  
...  

Gaze convergence of multiuser eye movements during simultaneous collaborative use of a shared system interface has been proposed as an important, albeit sparsely explored, construct in the human-computer interaction literature. Here, we propose a novel index for measuring the gaze convergence of user dyads and address its validity through two consecutive eye-tracking studies. Eye-tracking data of user dyads were synchronously recorded while they simultaneously performed tasks on shared system interfaces. Results indicate the validity of the proposed index for measuring the gaze convergence of dyads. Moreover, as expected, our gaze convergence index was positively associated with dyad task performance and negatively associated with dyad cognitive load. These results suggest the construct’s utility for both theoretical and practical applications, such as synchronized gaze convergence displays, in diverse settings. Further research, particularly into the construct’s nomological network, is warranted.
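
The authors' own index is not specified in the abstract. Purely as an illustration of the construct, the sketch below computes a simple distance-based convergence score for a dyad from synchronized gaze samples; the normalization by screen diagonal (1920x1080 assumed) and the [0, 1] scaling are illustrative choices, not the paper's method.

```python
import numpy as np

def convergence_index(gaze_a, gaze_b, screen_diag=2202.9):
    """Illustrative dyad gaze-convergence index in [0, 1]: one minus the
    mean distance between the two users' gaze points, normalized by the
    screen diagonal (2202.9 px for a 1920x1080 display).
    A value of 1 means both users always look at the same point.

    gaze_a, gaze_b: (T, 2) arrays of synchronized (x, y) gaze samples
    on the shared interface, in pixels.
    """
    a = np.asarray(gaze_a, dtype=float)
    b = np.asarray(gaze_b, dtype=float)
    dists = np.linalg.norm(a - b, axis=1)   # sample-wise inter-gaze distance
    return float(1.0 - np.mean(dists) / screen_diag)
```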


CJEM ◽  
2020 ◽  
Vol 22 (S1) ◽  
pp. S37-S37
Author(s):  
W. Lee ◽  
J. Chenkin

Introduction: Assessment of point-of-care ultrasound (POCUS) competency has relied on practical, visual and written examinations performed one-on-one with an examiner. These tools attempt to assess competency through subjective ratings, checklists and multiple-choice questions that are labour-intensive and rely on surrogate measures. Eye-tracking has been used on a limited basis in various fields of medicine for training and assessment. This technology explores visual processing and holds great promise as a tool to monitor training progress towards the development of expertise. We hypothesize that eye-tracking may differentiate novices from experts as they progress towards competency in the interpretation of POCUS images, and may provide an objective measure for the assessment of competency. Methods: Medical students, residents and attending physicians working in an academic emergency department were recruited. Participants viewed a series of 16 ultrasound video clips in a POCUS protocol for Focused Assessment with Sonography in Trauma (FAST). The gaze pattern of the participants was recorded using a commercially available eye-tracking device. The primary outcomes were the gaze parameters: total gaze time in the area of interest (AOI), average time to fixation on the AOI, number of fixations in the AOI and average duration of the first fixation on the AOI. The secondary outcome was accuracy in the interpretation of the FAST scan. Results: Four novices and eight experts completed this study. The total gaze time in the AOI (mean +/- SD) was 76.72 +/- 18.84 s among experts vs 53.64 +/- 10.33 s among novices (p = 0.048), average time to fixation on the AOI was 0.561 +/- 0.319 s vs 1.048 +/- 0.280 s (p = 0.027), number of fixations in the AOI was 158.9 +/- 29.0 vs 121.8 +/- 17.5 (p = 0.042) and average duration of the first fixation was 0.444 +/- 0.119 s vs 0.390 +/- 0.024 s (p = 0.402). The accuracy of the answers was 79.7 +/- 14.1% vs 45.3 +/- 21.9% (p = 0.007). Conclusion: In this pilot study, eye-tracking shows potential to differentiate POCUS experts from novices by their gaze patterns. Gaze patterns captured by eye-tracking may not necessarily translate to cognitive processing; however, they allow educators to visualise the thought processes of learners and provide insight on how to guide them towards competency. Future studies are needed to further validate these metrics for competency in POCUS applications.
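
For readers unfamiliar with these gaze parameters, the sketch below computes the study's four primary-outcome metrics for a single AOI from a fixation-event list. The field names and the rectangular AOI representation are assumptions about a typical eye-tracker export, not the study's actual data format.

```python
def aoi_gaze_metrics(fixations, aoi):
    """Compute total gaze time in the AOI, time to first fixation on the AOI,
    number of fixations in the AOI, and duration of the first fixation.

    fixations: list of dicts with keys 'x', 'y', 'start', 'duration'
               (seconds, 'start' relative to clip onset), in temporal order.
    aoi: (x_min, y_min, x_max, y_max) bounding box in screen coordinates.
    """
    x0, y0, x1, y1 = aoi
    in_aoi = [f for f in fixations
              if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1]
    if not in_aoi:
        return {"total_gaze_time": 0.0, "time_to_first_fixation": None,
                "fixation_count": 0, "first_fixation_duration": None}
    return {
        "total_gaze_time": sum(f["duration"] for f in in_aoi),
        "time_to_first_fixation": in_aoi[0]["start"],
        "fixation_count": len(in_aoi),
        "first_fixation_duration": in_aoi[0]["duration"],
    }
```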


2015 ◽  
Author(s):  
Zhengqiang Jiang ◽  
Zhihua Liang ◽  
Mini Das ◽  
Howard C. Gifford

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7668
Author(s):  
Niharika Kumari ◽  
Verena Ruf ◽  
Sergey Mukhametov ◽  
Albrecht Schmidt ◽  
Jochen Kuhn ◽  
...  

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities of using object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different Convolutional Neural Networks (CNNs), a Faster Region-Based CNN, You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and opens up the possibility of real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
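
As a rough illustration of the gaze-to-object assignment step (not the authors' pipeline, which also incorporates optical flow), the sketch below maps a gaze point in scene-camera coordinates to the detected object whose bounding box contains it. The detection tuples and the example labels are hypothetical.

```python
def assign_gaze_to_object(gaze_xy, detections):
    """Assign a mobile eye-tracker gaze point (scene-camera pixel
    coordinates) to the detected object whose bounding box contains it.
    If several boxes contain the point, the most confident detection wins.

    detections: list of (label, confidence, (x_min, y_min, x_max, y_max))
    tuples, e.g. as produced by a YOLO v4 model for the current frame.
    """
    gx, gy = gaze_xy
    hits = [d for d in detections
            if d[2][0] <= gx <= d[2][2] and d[2][1] <= gy <= d[2][3]]
    if not hits:
        return None
    return max(hits, key=lambda d: d[1])[0]  # label of highest-confidence box

# Hypothetical frame: gaze falls inside two overlapping detections
dets = [("multimeter", 0.91, (400, 300, 640, 520)),
        ("circuit_board", 0.88, (100, 250, 450, 600))]
print(assign_gaze_to_object((430, 400), dets))  # -> 'multimeter'
```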

