Detection of developmental dyslexia with machine learning using eye movement data

Array ◽  
2021 ◽  
pp. 100087
Author(s):  
Peter Raatikainen ◽  
Jarkko Hautala ◽  
Otto Loberg ◽  
Tommi Kärkkäinen ◽  
Paavo Leppänen ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Hyeju Jang ◽  
Thomas Soroski ◽  
Matteo Rizzo ◽  
Oswald Barral ◽  
Anuj Harisinghani ◽  
...  

Alzheimer’s disease (AD) is a progressive neurodegenerative condition that results in impaired performance in multiple cognitive domains. Preclinical changes in eye movements and language can occur with the disease, and progress alongside worsening cognition. In this article, we present the results from a machine learning analysis of a novel multimodal dataset for AD classification. The cohort includes data from two novel tasks not previously assessed in classification models for AD (pupil fixation and description of a pleasant past experience), as well as two established tasks (picture description and paragraph reading). Our dataset includes language and eye movement data from 79 memory clinic patients with diagnoses of mild-moderate AD, mild cognitive impairment (MCI), or subjective memory complaints (SMC), and 83 older adult controls. The analysis of the individual novel tasks showed similar classification accuracy when compared to established tasks, demonstrating their discriminative ability for memory clinic patients. Fusing the multimodal data across tasks yielded the highest overall AUC of 0.83 ± 0.01, indicating that the data from novel tasks are complementary to established tasks.
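The abstract does not describe the fusion step in detail. The sketch below is a minimal late-fusion illustration under assumed conditions (synthetic per-task feature matrices, scikit-learn logistic regression, averaging of per-task probabilities), not the authors' pipeline; all task names and data shapes are hypothetical.

```python
# Illustrative late-fusion sketch (not the authors' method): train one
# classifier per task, then average the predicted probabilities across tasks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects = 162  # 79 memory clinic patients + 83 controls, as in the abstract
y = np.concatenate([np.ones(79), np.zeros(83)])  # 1 = memory clinic patient

# Hypothetical per-task feature matrices (eye movement / language descriptors).
tasks = {
    "picture_description": rng.normal(size=(n_subjects, 20)),
    "paragraph_reading":   rng.normal(size=(n_subjects, 20)),
    "pupil_fixation":      rng.normal(size=(n_subjects, 10)),
    "pleasant_memory":     rng.normal(size=(n_subjects, 20)),
}

# One cross-validated probability estimate per task.
task_probs = []
for name, X in tasks.items():
    clf = LogisticRegression(max_iter=1000)
    probs = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
    task_probs.append(probs)
    print(f"{name}: AUC = {roc_auc_score(y, probs):.2f}")

# Late fusion: average the per-task probabilities and score the combination.
fused = np.mean(task_probs, axis=0)
print(f"fused tasks: AUC = {roc_auc_score(y, fused):.2f}")
```

With real descriptors, the fused AUC would be expected to exceed the best single-task AUC when the tasks carry complementary information, which is the pattern the abstract reports.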


2021 ◽  
Vol 11 (10) ◽  
pp. 1337
Author(s):  
Alae Eddine El Hmimdi ◽  
Lindsey M Ward ◽  
Themis Palpanas ◽  
Zoï Kapoula

There is evidence that abnormalities in eye movements exist during reading in dyslexic individuals. A few recent studies have applied Machine Learning (ML) classifiers to such eye movement data to predict dyslexia. A general problem with these studies is that the eye movement data sets are limited to reading saccades and fixations, which are confounded by reading difficulty, i.e., it is unclear whether the abnormalities are the consequence or the cause of reading difficulty. Recently, Ward and Kapoula used LED targets (with the REMOBI & AIDEAL method) to demonstrate abnormalities of large saccades and of vergence eye movements in depth, indicating intrinsic eye movement problems in dyslexia that are independent of reading. In another study, binocular eye movements were recorded while reading two texts: the “Alouette” text, which has no meaning and requires word decoding, and a meaningful text. It was found that the Alouette text exacerbates eye movement abnormalities in dyslexics. In this paper, we quantify more precisely the quality of such eye movement descriptors for dyslexia detection. We use the descriptors produced in the four different setups as input to multiple classifiers and compare their generalization performance. Our results demonstrate that eye movement data from the Alouette test predict dyslexia with an accuracy of 81.25%; similarly, we were able to predict dyslexia with an accuracy of 81.25% when using data from saccades to LED targets on the Remobi device and 77.3% when using vergence movements to LED targets. Notably, eye movement data from the meaningful text produced the lowest accuracy (70.2%). In a subsequent analysis, ML algorithms were applied to predict reading speed based on eye movement descriptors extracted from the meaningful reading and then from the Remobi saccade and vergence tests. Remobi vergence eye movement descriptors predicted reading speed even better than eye movement descriptors from the meaningful reading test.
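As a rough illustration of this kind of comparison, the following sketch trains one classifier per recording setup and cross-validates its accuracy, then fits a regressor for reading speed on the vergence descriptors. All data, setup names, and model choices here are assumptions for illustration, not the descriptors or classifiers actually used in the study.

```python
# Hedged sketch (not the authors' exact pipeline): compare classifier accuracy
# on eye movement descriptors from four hypothetical recording setups, then
# regress reading speed on one set of descriptors. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_children = 96
y_dyslexia = rng.integers(0, 2, size=n_children)          # 1 = dyslexic
y_reading_speed = rng.normal(120, 30, size=n_children)    # words per minute

setups = {
    "alouette_reading":   rng.normal(size=(n_children, 30)),
    "meaningful_reading": rng.normal(size=(n_children, 30)),
    "remobi_saccades":    rng.normal(size=(n_children, 25)),
    "remobi_vergence":    rng.normal(size=(n_children, 25)),
}

# Classification: mean cross-validated accuracy per setup.
for name, X in setups.items():
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X, y_dyslexia, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.3f}")

# Regression: predict reading speed from the vergence descriptors.
r2 = cross_val_score(RandomForestRegressor(random_state=0),
                     setups["remobi_vergence"], y_reading_speed,
                     cv=5, scoring="r2").mean()
print(f"reading speed from vergence descriptors: R^2 = {r2:.3f}")
```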


2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing eye movement data obtained from young (mean age = 21 years) and old (mean age = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates were obtained of the standardized mean difference, d, between the age groups for all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 across measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis results statistically confirm the most common patterns observed in previous research; eye movements therefore seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses proposed to explain the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
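For readers unfamiliar with the effect size reported here, the sketch below shows one common way to compute a standardized mean difference per experiment and combine the per-study estimates with inverse-variance (fixed-effect) weights. The abstract does not specify the exact estimator used, and the example numbers are invented.

```python
# Minimal sketch of a standardized mean difference (Cohen's d) per experiment
# and a fixed-effect, inverse-variance weighted combination. The study values
# below are hypothetical fixation durations (ms), not the meta-analysis data.
import numpy as np

def cohens_d(m_young, m_old, sd_young, sd_old, n_young, n_old):
    """Standardized mean difference between two independent groups."""
    pooled_sd = np.sqrt(((n_young - 1) * sd_young**2 + (n_old - 1) * sd_old**2)
                        / (n_young + n_old - 2))
    return (m_old - m_young) / pooled_sd  # positive = longer durations in older adults

def d_variance(d, n_young, n_old):
    """Approximate sampling variance of d for two independent groups."""
    return (n_young + n_old) / (n_young * n_old) + d**2 / (2 * (n_young + n_old))

studies = [  # (m_young, m_old, sd_young, sd_old, n_young, n_old)
    (225, 250, 30, 35, 24, 24),
    (240, 270, 35, 40, 30, 28),
    (230, 255, 28, 33, 20, 22),
]

ds = np.array([cohens_d(*s) for s in studies])
vs = np.array([d_variance(d, s[4], s[5]) for d, s in zip(ds, studies)])
weights = 1 / vs
combined = np.sum(weights * ds) / np.sum(weights)
print(f"per-study d: {np.round(ds, 2)}, combined d = {combined:.2f}")
```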


2014 ◽  
Author(s):  
Bernhard Angele ◽  
Elizabeth R. Schotter ◽  
Timothy Slattery ◽  
Tara L. Chaloukian ◽  
Klinton Bicknell ◽  
...  

Author(s):  
Ayush Kumar ◽  
Prantik Howlader ◽  
Rafael Garcia ◽  
Daniel Weiskopf ◽  
Klaus Mueller

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movements and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye movement data and the saliency map are shown to analysts either as separate views or as merged views. However, analysts become frustrated when they need to keep all of the separate views in mind, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
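As a loose illustration of combining the two signals, the sketch below computes a simple spectral residual saliency map for an image and plots gaze samples over it. This is not the visualization technique proposed in the paper; the image, gaze points, and parameter choices are placeholders.

```python
# Illustrative sketch: spectral residual saliency (Hou & Zhang, 2007) for a
# grayscale image, with synthetic gaze samples overlaid on the saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

def spectral_residual_saliency(gray):
    """Spectral residual saliency map for a 2D grayscale array."""
    spectrum = np.fft.fft2(gray)
    log_amplitude = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    residual = log_amplitude - gaussian_filter(log_amplitude, sigma=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=5)

rng = np.random.default_rng(2)
image = rng.normal(size=(240, 320))          # stand-in for the visual stimulus
image[100:140, 150:200] += 3.0               # a conspicuous (salient) patch
gaze_x = rng.normal(175, 25, size=200)       # synthetic gaze samples near it
gaze_y = rng.normal(120, 20, size=200)

saliency = spectral_residual_saliency(image)

plt.imshow(saliency, cmap="viridis")         # saliency map as the background
plt.scatter(gaze_x, gaze_y, s=8, c="red", alpha=0.5, label="gaze samples")
plt.legend()
plt.title("Gaze samples over a spectral residual saliency map")
plt.savefig("gaze_over_saliency.png", dpi=150)
```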


1972 ◽  
Vol 35 (1) ◽  
pp. 103-110
Author(s):  
Phillip Kleespies ◽  
Morton Wiener

This study explored (1) whether there is evidence of visual input at so-called “subliminal” exposure durations, and (2) whether the response, if any, is a function of the thematic content of the stimulus. Thematic content (threatening versus non-threatening) and stimulus structure (angular versus curved) were varied independently under “subliminal,” “part-cue,” and “identification” exposure conditions. With subjects' reports and the frequency and latency of first eye movements (“orienting reflex”) as input indicators, there was no evidence of input differences as a function of thematic content at any exposure duration, and the report data were consistent with the eye movement data.

