Deep learning investigation for chess player attention prediction using eye-tracking and game data

Author(s):  
Justin Le Louedec ◽  
Thomas Guntz ◽  
James L. Crowley ◽  
Dominique Vaufreydaz
2021 ◽  
Author(s):  
Dilber Cetintas ◽  
Taner Tuncer

2021 ◽  
Vol 14 (2) ◽  
Author(s):  
Xin Liu ◽  
Bin Zheng ◽  
Xiaoqin Duan ◽  
Wenjing He ◽  
Yuandong Li ◽  
...  

Eye-tracking can help decode the intricate control mechanisms in human performance. In healthcare, physicians-in-training require extensive practice to improve their clinical skills. When trainees encounter difficulty in practice, they need feedback from experts to improve their performance. Personal feedback is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during their colonoscopic performance in simulation. We applied deep learning algorithms to detect eye-tracking metrics at moments of navigation lost (MNL), a signature sign of performance difficulty during colonoscopy. Basic human eye gaze and pupil characteristics were learned and verified by deep convolutional generative adversarial networks (DCGANs); the generated data were fed to Long Short-Term Memory (LSTM) networks with three different data feeding strategies to classify MNLs within the entire colonoscopic procedure. Outputs from deep learning were compared to an expert's judgment of the MNLs based on colonoscopic videos. The best classification outcome was achieved when we combined the human eye data with 1,000 synthesized eye data samples, yielding optimized accuracy (90%), sensitivity (90%), and specificity (88%). This study builds an important foundation for our work of developing a self-adaptive education system for training healthcare skills using simulation.
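To make the described pipeline concrete, here is a minimal sketch (not the authors' implementation) of an LSTM classifier trained on real gaze sequences augmented with DCGAN-synthesized ones. The feature layout (gaze x, gaze y, pupil diameter), sequence length, network sizes, and the random stand-in for the generator output are all assumptions for illustration:

```python
# Minimal sketch, assuming PyTorch: LSTM classification of MNL vs. normal
# navigation over gaze-feature sequences, with synthetic augmentation.
import torch
import torch.nn as nn

class MNLClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # binary: MNL vs. normal navigation

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # one logit per sequence

# Assumed data: 200 real and 1,000 DCGAN-synthesized gaze sequences, each
# 120 time steps of (gaze_x, gaze_y, pupil_diameter). Random tensors here
# stand in for the real recordings and the generator output.
real = torch.randn(200, 120, 3)
synthetic = torch.randn(1000, 120, 3)
data = torch.cat([real, synthetic])
labels = torch.randint(0, 2, (1200, 1)).float()

model = MNLClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(data), labels)    # compare logits to MNL labels
    loss.backward()
    opt.step()
```

The abstract's three data feeding strategies are not specified; in a sketch like this they would correspond to different mixtures of real and synthetic sequences in the training set.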


10.2196/27706 ◽  
2021 ◽  
Author(s):  
Federica Cilia ◽  
Romuald Carette ◽  
Mahmoud Elbattah ◽  
Gilles Dequen ◽  
Jean-Luc Guérin ◽  
...  

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Alexandros Karargyris ◽  
Satyananda Kashyap ◽  
Ismini Lourentzou ◽  
Joy T. Wu ◽  
Arjun Sharma ◽  
...  

We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data: CXR image, transcribed radiology report text, radiologist's dictation audio, and eye gaze coordinate data. We hope this dataset can contribute to various areas of research, particularly towards explainable and multimodal deep learning/machine learning methods. Furthermore, investigators in disease classification and localization, automated radiology report generation, and human-machine interaction can benefit from these data. We report deep learning experiments that utilize the attention maps produced by the eye gaze dataset to show its potential utility.
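The abstract does not specify how gaze coordinates become attention maps; one plausible minimal sketch, assuming fixations arrive as pixel coordinates and are smoothed into a soft heatmap with a Gaussian kernel (image size and sigma are illustrative, not the dataset's released tooling):

```python
# Minimal sketch: turn recorded gaze fixations into a 2-D attention map
# that could weight or supervise a CXR classifier.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_attention_map(fixations, height=224, width=224, sigma=8.0):
    """fixations: iterable of (x, y) pixel coordinates of gaze samples."""
    heat = np.zeros((height, width), dtype=np.float32)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            heat[yi, xi] += 1.0            # accumulate fixation counts
    heat = gaussian_filter(heat, sigma)    # smooth counts into a soft map
    peak = heat.max()
    return heat / peak if peak > 0 else heat  # normalize to [0, 1]

# Example with a few assumed fixation points on a 224x224 CXR image.
attn = gaze_to_attention_map([(100, 80), (102, 85), (150, 160)])
```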


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4143
Author(s):  
Michael Barz ◽  
Daniel Sonntag

Processing visual stimuli in a scene is essential for the human brain to make situation-aware decisions. These stimuli, which are prevalent subjects of diagnostic eye tracking studies, are commonly encoded as rectangular areas of interest (AOIs) per frame. Because manual AOI annotation is tedious, the automatic detection and annotation of visual attention to AOIs can accelerate and objectify eye tracking research, in particular for mobile eye tracking with egocentric video feeds. In this work, we implement two methods to automatically detect visual attention to AOIs using pre-trained deep learning models for image classification and object detection. Furthermore, we develop an evaluation framework based on the VISUS dataset and well-known performance metrics from the field of activity recognition. We systematically evaluate our methods within this framework, discuss their potential and limitations, and propose ways to improve the performance of future automatic visual attention detection methods.
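One straightforward way to realize such gaze-to-AOI mapping with a pre-trained detector is sketched below using torchvision's COCO-trained Faster R-CNN; the function name, score threshold, and hit-testing logic are illustrative assumptions, not the methods evaluated in the paper:

```python
# Minimal sketch: label visual attention by testing whether the current
# gaze point falls inside a box from a pre-trained object detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def attended_object(frame, gaze_xy, score_thresh=0.6):
    """frame: PIL image of an egocentric video frame; gaze_xy: (x, y) pixels."""
    with torch.no_grad():
        pred = detector([to_tensor(frame)])[0]
    gx, gy = gaze_xy
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        x1, y1, x2, y2 = box.tolist()
        if score >= score_thresh and x1 <= gx <= x2 and y1 <= gy <= y2:
            return int(label)              # COCO class id of the attended AOI
    return None                            # gaze not on any detected object
```

A per-frame loop over this function yields a stream of attended-object labels that can then be scored against ground-truth AOI annotations with activity-recognition metrics, as the evaluation framework described above suggests.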


2019 ◽  
Vol 51 ◽  
pp. 101-115 ◽  
Author(s):  
Naji Khosravan ◽  
Haydar Celik ◽  
Baris Turkbey ◽  
Elizabeth C. Jones ◽  
Bradford Wood ◽  
...  

2021 ◽  
Vol 3 (1) ◽  
pp. e200047
Author(s):  
Joseph N. Stember ◽  
Haydar Celik ◽  
David Gutman ◽  
Nathaniel Swinburne ◽  
Robert Young ◽  
...  

2020 ◽  
Author(s):  
Ori Ossmy ◽  
Danyang Han ◽  
Brianna Kaplan ◽  
Melody Xu ◽  
Catherine Bianco ◽  
...  

Observing actions provides important information about other people's goals and the means they use to achieve them. Preschoolers (N=22) and adults (N=22) watched video-recorded actors use efficient and inefficient means of grasping a hammer to pound a peg. Eye tracking showed that participants at both ages looked equally long at the goal target (the peg), but adults looked longer than children at the means: how the actors grasped the hammer. Deep learning analysis of participants' eye gaze distinguished observation of efficient from inefficient grasps for adults, but not for children. Moreover, only adults showed differential physiological responses while observing efficient versus inefficient grasps, in both action-related neural activity (EEG) and pupil dilation. Thus, children can actively direct their gaze at goal-directed actions without registering whether the means are efficient. These findings suggest that the development of action perception builds on children's own motor experience.
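As a hedged illustration of what a deep learning analysis of gaze might look like here (the study's actual architecture is not given in this abstract), a small 1-D convolutional network can classify an (x, y) gaze trace as observation of an efficient versus inefficient grasp; the sequence length and layer sizes are assumptions:

```python
# Minimal sketch: 1-D CNN over gaze time series, two output classes
# (efficient vs. inefficient grasp observation).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=5, padding=2),  # 2 channels: gaze x, y
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                     # pool features over time
    nn.Flatten(),
    nn.Linear(16, 2),                            # efficient vs. inefficient
)

gaze = torch.randn(8, 2, 300)  # assumed batch of 8 trials, 300 samples each
logits = model(gaze)           # (8, 2) class scores per trial
```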

