Predicting Dyslexia and Reading Speed in Adolescents from Eye Movements in Reading and Non-Reading Tasks: A Machine Learning Approach

2021
Vol 11 (10)
pp. 1337
Author(s):  
Alae Eddine El Hmimdi ◽  
Lindsey M Ward ◽  
Themis Palpanas ◽  
Zoï Kapoula

There is evidence that abnormalities in eye movements exist during reading in dyslexic individuals. A few recent studies applied Machine Learning (ML) classifiers to such eye movement data to predict dyslexia. A general problem with these studies is that the eye movement data sets are limited to reading saccades and fixations, which are confounded by reading difficulty: it is unclear whether the abnormalities are the consequence or the cause of reading difficulty. Recently, Ward and Kapoula used LED targets (with the REMOBI & AIDEAL method) to demonstrate abnormalities of large saccades and vergence eye movements in depth, showing intrinsic eye movement problems in dyslexia independent of reading. In another study, binocular eye movements were studied while reading two texts: the “Alouette” text, which has no meaning and requires word decoding, and a meaningful text. It was found that the Alouette text exacerbates eye movement abnormalities in dyslexics. In this paper, we more precisely quantify the quality of such eye movement descriptors for dyslexia detection. We use the descriptors produced in the four different setups as input to multiple classifiers and compare their generalization performances. Our results demonstrate that eye movement data from the Alouette test predict dyslexia with an accuracy of 81.25%; similarly, we were able to predict dyslexia with an accuracy of 81.25% when using data from saccades to LED targets on the Remobi device, and 77.3% when using vergence movements to LED targets. Notably, eye movement data from the meaningful text produced the lowest accuracy (70.2%). In a subsequent analysis, ML algorithms were applied to predict reading speed based on eye movement descriptors extracted from the meaningful reading, and then from the Remobi saccade and vergence tests. Remobi vergence eye movement descriptors predict reading speed even better than eye movement descriptors from the meaningful reading test.
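
The abstract does not name the classifiers used. As a minimal sketch of the evaluation idea (train a classifier on per-subject eye movement descriptors, then report classification accuracy), here is a nearest-centroid toy on synthetic data; the feature names and values are invented for illustration, not taken from the study, and for brevity accuracy is computed on the training set rather than the held-out splits the paper compares.

```python
import random

def nearest_centroid_fit(X, y):
    """Compute one centroid per class from training descriptors."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

def accuracy(centroids, X, y):
    hits = sum(nearest_centroid_predict(centroids, x) == lab
               for x, lab in zip(X, y))
    return hits / len(y)

# Hypothetical descriptors per subject: [mean saccade amplitude (deg),
# mean fixation duration (ms)] -- invented numbers, not the paper's data.
random.seed(0)
controls = [[8 + random.gauss(0, 1), 200 + random.gauss(0, 20)] for _ in range(20)]
dyslexics = [[5 + random.gauss(0, 1), 300 + random.gauss(0, 20)] for _ in range(20)]
X = controls + dyslexics
y = ["control"] * 20 + ["dyslexic"] * 20
model = nearest_centroid_fit(X, y)
print(round(accuracy(model, X, y), 3))
```

With well-separated synthetic groups the centroid rule classifies nearly all subjects; the interesting part of the paper is that the same accuracy metric differs across the four recording setups.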


2021
Vol 11 (8)
pp. 990
Author(s):  
Lindsey M. Ward ◽  
Zoi Kapoula

Dyslexic adolescents demonstrate deficits in word decoding, recognition, and oculomotor coordination as compared to healthy controls. Our lab recently showed intrinsic deficits in large saccades and vergence movements with a Remobi device, independent of reading. This shed new light on the field of dyslexia, as it has been debated in the literature whether the deficits in eye movements are a cause or consequence of reading difficulty. The present study investigates how these oculomotor problems are compensated for or aggravated by text difficulty. The eye movements of 46 dyslexic and 41 non-dyslexic adolescents were analyzed while reading L’Alouette, a dyslexia screening test, and 35 Kilos D’Espoir, a children’s book with a reading age of 10 years. While reading the more difficult text, dyslexics made more mistakes, read slower, and made more regressive saccades; moreover, they made smaller-amplitude saccades with abnormal velocity profiles (e.g., higher peak velocity but lower average velocity) and significantly higher saccade disconjugacy. While reading the simpler text, these differences persisted; however, the difference in saccade disconjugacy, although present, was no longer significant, nor was the difference in the percentage of regressive saccades. We propose that intrinsic eye movement abnormalities in dyslexics, such as saccade disconjugacy, abnormal velocity profiles, and cognitively associated regressive saccades, can be particularly exacerbated if the reading text relies heavily on word decoding to extract meaning; an increased number of regressive saccades is a manifestation of reading difficulty and not a problem of eye movement per se. These interpretations are in line with the motor theory of visual attention and our previous research describing the relationship between binocular motor control, attention, and cognition outside of the field of dyslexia.



2019
Vol 24 (4)
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing the eye movement data obtained from young (mean age = 21 years) and old (mean age = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates were obtained of the standardized mean difference, d, between the age groups in all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 across measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis results statistically confirm the most common patterns observed in previous research; therefore, eye movements seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses proposed to explain the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
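
The effect size machinery behind such a meta-analysis can be sketched briefly. The study values below are hypothetical, and the variance term is the standard large-sample approximation for d, not necessarily the exact estimator the authors used:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference, d, between two independent groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def combined_effect(ds, ns1, ns2):
    """Inverse-variance (fixed-effect) combination of per-study d values."""
    num = den = 0.0
    for d, n1, n2 in zip(ds, ns1, ns2):
        # Approximate sampling variance of d for two independent groups
        var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
        num += d / var
        den += 1 / var
    return num / den

# Hypothetical study: old readers' mean fixation duration 260 ms (n=30, sd=40)
# vs. young readers' 225 ms (n=30, sd=35)
d = cohens_d(260, 40, 30, 225, 35, 30)  # roughly 0.93, favoring young readers
print(round(d, 3))
# Combining three hypothetical studies into one weighted estimate
print(round(combined_effect([0.93, 1.10, 0.55], [30, 25, 40], [30, 25, 40]), 3))
```

The combined estimate is a precision-weighted average, so it always lies between the smallest and largest per-study d, with larger studies pulling it harder.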



Sensors
2021
Vol 21 (15)
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, eye movement data and the saliency map are presented to analysts as separate views or merged views. However, analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.



1972
Vol 35 (1)
pp. 103-110
Author(s):  
Phillip Kleespies ◽  
Morton Wiener

This study explored (1) whether there is evidence of visual input at so-called “subliminal” exposure durations, and (2) whether the response, if any, is a function of the thematic content of the stimulus. Thematic content (threatening versus non-threatening) and stimulus structure (angular versus curved) were varied independently under “subliminal,” “part-cue,” and “identification” exposure conditions. With Ss’ reports and the frequency and latency of first eye movements (“orienting reflex”) as input indicators, there was no evidence of input differences as a function of thematic content at any exposure duration, and the “report” data were consistent with the eye-movement data.



Array
2021
pp. 100087
Author(s):  
Peter Raatikainen ◽  
Jarkko Hautala ◽  
Otto Loberg ◽  
Tommi Kärkkäinen ◽  
Paavo Leppänen ◽  
...  


2020
Author(s):  
Šimon Kucharský ◽  
Daan Roelof van Renswoude ◽  
Maartje Eusebia Josefa Raijmakers ◽  
Ingmar Visser

Describing, analyzing, and explaining patterns in eye movement behavior is crucial for understanding visual perception, and eye movements are increasingly used to inform cognitive process models. In this article, we start by reviewing basic characteristics of and desiderata for models of eye movements. Specifically, we argue that there is a need for models combining the spatial and temporal aspects of eye-tracking data (i.e., fixation locations and fixation durations), that formal models derived from concrete theoretical assumptions are needed to inform empirical research, and that custom statistical models are useful for detecting the specific empirical phenomena to be explained by said theory. We then develop a conceptual model of eye movements, specifically of fixation durations and fixation locations, and from it derive a formal statistical model, meeting our goal of crafting a model useful in both the theoretical and empirical research cycles. We demonstrate the use of the model on an example of infant natural scene viewing, showing that the model can explain different features of the eye movement data and showcasing how to identify when the model needs to be adapted because it does not agree with the data. We conclude with a discussion of potential future avenues for formal eye movement models.
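
The abstract does not specify the model's form. As an illustration of what "combining spatial and temporal aspects" can mean, here is a toy generative sketch in which each fixation carries both a location and a duration; every distribution and parameter below is an assumption for the example, not the authors' model:

```python
import random

def simulate_scanpath(n_fixations, width=800, height=600, seed=42):
    """Toy spatio-temporal model: each fixation has a location (spatial part)
    and a duration (temporal part), generated jointly along the scanpath."""
    rng = random.Random(seed)
    scanpath = []
    x, y = width / 2, height / 2  # start at screen center
    for _ in range(n_fixations):
        # Spatial: the next fixation is a Gaussian step from the current one,
        # clamped to the screen (step sizes are invented parameters)
        x = min(max(rng.gauss(x, 120), 0), width)
        y = min(max(rng.gauss(y, 90), 0), height)
        # Temporal: fixation durations are right-skewed; a lognormal is a
        # common modeling choice (median here is about 245 ms)
        duration_ms = rng.lognormvariate(5.5, 0.4)
        scanpath.append((x, y, duration_ms))
    return scanpath

path = simulate_scanpath(10)
print(path[0])
```

A formal model in the authors' sense would replace these ad hoc distributions with ones derived from theoretical assumptions and then be fit to, and checked against, empirical scanpaths.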



Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs for extracting eye movement data, they may require users to manually draw AOI boundaries on eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while those generated by object instance segmentation models capture 30% of eye movements.
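
Filtering gaze samples against the polygonal boundary of a detected AOI reduces to a point-in-polygon test per sample. Here is a minimal ray-casting sketch with an invented rectangular AOI and gaze points; the paper's detection models and datasets are not reproduced:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as (px, py) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def capture_rate(gaze_points, aoi_polygon):
    """Fraction of gaze samples falling inside the AOI boundary."""
    hits = sum(point_in_polygon(x, y, aoi_polygon) for x, y in gaze_points)
    return hits / len(gaze_points)

# Hypothetical AOI polygon from a detected object, in pixel coordinates
aoi = [(100, 100), (300, 100), (300, 250), (100, 250)]
gaze = [(150, 150), (200, 200), (400, 300), (50, 120), (250, 240)]
print(capture_rate(gaze, aoi))  # 3 of 5 samples inside -> 0.6
```

The paper's "capture" percentages are this kind of ratio computed over a whole recording, with the AOI polygon re-detected per video frame rather than fixed.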



2018
pp. 1587-1599
Author(s):  
Hiroaki Koma ◽  
Taku Harada ◽  
Akira Yoshizawa ◽  
Hirotoshi Iwasaki

Detecting distracted states has applications in problems such as danger prevention while driving a car. A cognitive distracted state is one example of a distracted state, and it is known that eye movements express cognitive distraction. Eye movements can be classified into several types. In this paper, the authors detect cognitive distraction from classified eye movement types using the Random Forest machine learning algorithm, which builds an ensemble of decision trees. They show the effectiveness of considering eye movement types for detecting cognitive distraction with Random Forest, using visual experiments with still images for the detection.
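
As a rough illustration of the ensemble idea behind Random Forest (bootstrap resampling plus majority voting), here is a bagging-of-decision-stumps sketch on invented gaze features. A real Random Forest grows full decision trees with random feature subsets, and the features and labels below are assumptions, not the authors' data:

```python
import random

def best_stump(X, y):
    """Exhaustively pick the (feature, threshold, polarity) stump with fewest errors."""
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for row in X:
            thr = row[f]
            for flipped in (False, True):
                err = 0
                for x, lab in zip(X, y):
                    p = int(x[f] >= thr)
                    if flipped:
                        p = 1 - p
                    err += int(p != lab)
                if err < best_err:
                    best, best_err = (f, thr, flipped), err
    return best

def stump_predict(stump, x):
    f, thr, flipped = stump
    p = int(x[f] >= thr)
    return 1 - p if flipped else p

def forest_fit(X, y, n_trees=15, seed=1):
    """Bagging: train each stump on a bootstrap resample of the training set."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(best_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, x):
    """Majority vote over the ensemble."""
    votes = sum(stump_predict(s, x) for s in forest)
    return int(votes * 2 >= len(forest))

# Hypothetical features per gaze window: [fixation share, saccade rate (per s)]
X = [[0.8, 1.2], [0.75, 1.0], [0.9, 0.8], [0.4, 2.5], [0.35, 3.0], [0.5, 2.2]]
y = [0, 0, 0, 1, 1, 1]  # 0 = attentive, 1 = cognitively distracted
model = forest_fit(X, y)
print([forest_predict(model, x) for x in X])
```

The paper's point, that distinguishing eye movement *types* helps, corresponds to choosing per-type features as inputs; the ensemble machinery itself is unchanged.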



2008
Vol 3 (2)
pp. 149-175
Author(s):  
Ian Cunnings ◽  
Harald Clahsen

The avoidance of regular but not irregular plurals inside compounds (e.g., *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during the reading of derived words containing regular and irregular plurals and of uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties but is instead morphological in nature. The eye-movement data provide detailed information on the time course of processing derived word forms, indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes in at a later stage of processing. We argue that these results are consistent with stage-based models of language processing.



2020
Vol 11 (1)
Author(s):  
Nachiappan Valliappan ◽  
Na Dai ◽  
Ethan Steinberg ◽  
Junfeng He ◽  
Kantwon Rogers ◽  
...  

Abstract. Eye tracking has been widely used for decades in vision, language, and usability research. However, most prior research has focused on large desktop displays, using specialized eye trackers that are expensive and cannot scale. Little is known about eye movement behavior on phones, despite their pervasiveness and the large amount of time people spend on them. We leverage machine learning to demonstrate accurate smartphone-based eye tracking without any additional hardware. We show that the accuracy of our method is comparable to that of state-of-the-art mobile eye trackers that are 100x more expensive. Using data from over 100 opted-in users, we replicate key findings from previous eye movement research on oculomotor tasks and saliency analyses during natural image viewing. In addition, we demonstrate the utility of smartphone-based gaze for detecting reading comprehension difficulty. Our results show the potential for scaling eye movement research by orders of magnitude, to thousands of participants (with explicit consent), enabling advances in vision research, accessibility, and healthcare.


