Age Differences in Eye Movements During Reading: Degenerative Problems or Compensatory Strategy?

2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing eye movement data obtained from young (Mage = 21 years) and old (Mage = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates of the standardized mean difference, d, between the age groups were obtained for all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 across measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis statistically confirms the most common patterns observed in previous research; eye movements therefore seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses proposed to explain the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
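The standardized mean difference d reported above can be illustrated with a minimal sketch (not the authors' code); the pooled-standard-deviation formula is the conventional Cohen's d, and the fixation durations below are made-up numbers for illustration only.

```python
def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (m2 - m1) / pooled_sd

# Hypothetical mean fixation durations (ms) for a young and an old group
young_fixations = [198, 205, 210, 202, 195]
old_fixations = [240, 255, 232, 260, 248]
d = cohens_d(young_fixations, old_fixations)  # positive: old group is slower
```

A positive d here, as in the meta-analysis, indicates longer durations for the older group.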

2008 ◽  
Vol 3 (2) ◽  
pp. 149-175 ◽  
Author(s):  
Ian Cunnings ◽  
Harald Clahsen

The avoidance of regular but not irregular plurals inside compounds (e.g., *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during the reading of derived words containing regular and irregular plurals and uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties, but is instead morphological in nature. The eye-movement data provide detailed information on the time course of processing derived word forms, indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes into play at a later stage of processing. We argue that these results are consistent with stage-based models of language processing.


2018 ◽  
Vol 72 (4) ◽  
pp. 847-857
Author(s):  
Rebecca L Johnson ◽  
Sarah Rose Slate ◽  
Allison R Teevan ◽  
Barbara J Juhasz

Research exploring the processing of morphologically complex words, such as compound words, has found that they are decomposed into their constituent parts during processing. Although much is known about the processing of compound words, very little is known about the processing of lexicalised blend words, which are created from parts of two words, often with phoneme overlap (e.g., brunch). In the current study, blends were matched with non-blend words on a variety of lexical characteristics, and blend processing was examined using two tasks: a naming task and an eye-tracking task that recorded eye movements during reading. Results showed that blend words were processed more slowly than non-blend control words in both tasks. Blend words led to longer reaction times in naming and longer processing times on several eye movement measures compared to non-blend words. This was especially true for blends that were long and rated low in word familiarity, yet easily recognisable as blends.


2018 ◽  
Author(s):  
Benjamin Gagl ◽  
Julius Golch ◽  
Stefan Hawelka ◽  
Jona Sassenhagen ◽  
Klara Gregorova ◽  
...  

Abstract. Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum at ~4-5 Hz, reflecting the processing of linguistic information chunks (i.e., syllables or words) approximately every 200 ms. Interestingly, ~200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we estimated the frequency at which German readers sample text, and demonstrate that they read sentences at a rate of ~5 Hz. We then examined the generality of this finding in a meta-analysis including 14 languages. We replicated the empirical result for German and observed that fixation-based sampling frequencies vary across languages between 3.9 and 5.2 Hz. Remarkably, we identified a systematic rate reduction from easy to difficult writing systems. Finally, in a new experiment, we directly investigated the association between speech spectrum and eye-movement sampling frequency at a person-specific level and found a significant correlation. Based on this evidence, we argue that during reading, the rate of our eye movements is tuned to supply information to language comprehension processes at a preferred rate, coincident with the typical rate of speech.

Significance Statement. Across languages, speech is produced and perceived at a rate of ~4-5 Hz. When listening to speech, our brain capitalizes on this temporal structure to segment speech. We show empirically that while reading, our eyes sample text at the same rate, and generalize this finding in a meta-analysis to 14 languages. Reading rates vary between 3.9 and 5.2 Hz, i.e., within the typical range of the speech signal. We demonstrate that the difficulty of writing systems underpins this variance. Lastly, we also demonstrate that a person's speech rate is correlated with the rate at which their eyes sample text. The speech rate of spoken language appears to act as a driving force for the voluntary control of eye movements during reading.
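The fixation-based sampling frequency described above is, in essence, the reciprocal of the mean fixation duration. A minimal sketch (illustrative only, not the authors' pipeline, with hypothetical durations in seconds):

```python
def sampling_frequency_hz(fixation_durations_s):
    """Mean number of fixations per second, i.e. the text-sampling rate."""
    mean_duration = sum(fixation_durations_s) / len(fixation_durations_s)
    return 1.0 / mean_duration

# Hypothetical fixation durations of ~200 ms each
fixations = [0.21, 0.19, 0.20, 0.22, 0.18]
rate = sampling_frequency_hz(fixations)  # ~5 Hz, within the reported 3.9-5.2 Hz range
```

A reader averaging 200 ms per fixation thus samples text at ~5 Hz, matching the dominant modulation rate of speech.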


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly show statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to the analysts as separate views or merged views. However, analysts become frustrated when they need to memorize all of the separate views or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to demonstrate that embedding saliency features within the visualization helps us understand the visual attention of an observer.


1972 ◽  
Vol 35 (1) ◽  
pp. 103-110
Author(s):  
Phillip Kleespies ◽  
Morton Wiener

This study explored (1) for evidence of visual input at so-called “subliminal” exposure durations, and (2) whether the response, if any, was a function of the thematic content of the stimulus. Thematic content (threatening versus non-threatening) and stimulus structure (angular versus curved) were varied independently under “subliminal,” “part-cue,” and “identification” exposure conditions. With Ss' reports and the frequency and latency of first eye movements (“orienting reflex”) as input indicators, there was no evidence of input differences which are a function of thematic content at any exposure duration, and the “report” data were consistent with the eye-movement data.


2020 ◽  
Author(s):  
Šimon Kucharský ◽  
Daan Roelof van Renswoude ◽  
Maartje Eusebia Josefa Raijmakers ◽  
Ingmar Visser

Describing, analyzing and explaining patterns in eye movement behavior is crucial for understanding visual perception. Further, eye movements are increasingly used in informing cognitive process models. In this article, we start by reviewing basic characteristics and desiderata for models of eye movements. Specifically, we argue that there is a need for models combining spatial and temporal aspects of eye-tracking data (i.e., fixation durations and fixation locations), that formal models derived from concrete theoretical assumptions are needed to inform our empirical research, and custom statistical models are useful for detecting specific empirical phenomena that are to be explained by said theory. In this article, we develop a conceptual model of eye movements, or specifically, fixation durations and fixation locations, and from it derive a formal statistical model --- meeting our goal of crafting a model useful in both the theoretical and empirical research cycle. We demonstrate the use of the model on an example of infant natural scene viewing, to show that the model is able to explain different features of the eye movement data, and to showcase how to identify that the model needs to be adapted if it does not agree with the data. We conclude with discussion of potential future avenues for formal eye movement models.


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs for extracting eye movement data, they may require users to manually draw AOI boundaries on eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, and object instance segmentation models capture 30% of eye movements.
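Filtering gaze samples by the polygonal boundary of a detected AOI reduces to a point-in-polygon test. A minimal sketch (not the authors' pipeline) using the standard ray-casting algorithm, with a made-up rectangular AOI and gaze points:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: True if (x, y) lies inside the polygon."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle on each polygon edge crossed by a ray cast to the right
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Hypothetical AOI polygon (pixel coordinates) and gaze samples
aoi = [(100, 100), (300, 100), (300, 250), (100, 250)]
gaze = [(150, 120), (400, 400), (250, 200), (50, 50)]
hits = [p for p in gaze if point_in_polygon(p[0], p[1], aoi)]
```

Dividing the number of hits by the total number of gaze samples gives the capture percentage reported in the paper.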


Author(s):  
Anne E. Cook ◽  
Wei Wei

This chapter provides an overview of eye movement-based reading measures and the types of inferences that may be drawn from each. We provide logistical advice about how to set up stimuli for eye tracking experiments, with different level processes (word, sentence, and discourse) and commonly employed measures of eye movements during reading in mind. We conclude with examples from our own research of studies of eye movements during reading at the word, sentence, and discourse levels, as well as some considerations for future research.


Perception ◽  
10.1068/p3470 ◽  
2003 ◽  
Vol 32 (7) ◽  
pp. 793-804 ◽  
Author(s):  
Nicholas J Wade ◽  
Benjamin W Tatler ◽  
Dieter Heller

Dodge, in 1916, suggested that the French term ‘saccade’ should be used for describing the rapid movements of the eyes that occur while reading. Previously he had referred to these as type I movements. Javal had used the term ‘saccade’ in 1879, when describing experiments conducted in his laboratory by Lamare. Accordingly, Javal has been rightly credited with assigning the term to rapid eye movements. In English these rapid rotations had been called jerks, and they had been observed and measured before Lamare's studies of reading. Rapid sweeps of the eyes occur as one phase of nystagmus; they were observed by Wells in 1792 who used an afterimage technique, and they were illustrated by Crum Brown in 1878. Afterimages were used in nineteenth-century research on eye movements and eye position; they were also employed by Hering in 1879, to ascertain how the eyes moved during reading. In the previous year, Javal had employed afterimages in his investigations of reading, but this was to demonstrate that the eyes moved horizontally rather than vertically. Hering's and Lamare's auditory method established the discontinuous nature of eye movements during reading, and the photographic methods introduced by Dodge and others in the early twentieth century enabled their characteristics to be determined with greater accuracy.


2021 ◽  
Author(s):  
Peyman Shokrollahi

Measures of sleep physiology not obvious to the human eye may provide important clues to disease states and responses to therapy. A significant amount of eye movement data is not attended to clinically in routine sleep studies because the recordings are long, about six to eight hours in duration, and are mixed with many unknown artifacts, usually produced by EEG signals or other activities. This research describes how eye movements differed in depressed patients who used antidepressant medications compared to those who did not. The goal is to track the effects of antidepressant medications on sleep eye movements. Clinically used SSRIs such as Prozac (fluoxetine), Celexa (citalopram), and Zoloft (sertraline), and the SNRI Effexor (venlafaxine), were considered in this study to assess possible connections between eye movements recorded during sleep and serotonin activity. The novelty of this research lies in the assessment of sleep eye movements in order to track the effect of antidepressant medications on the brain through EOG channels. EOG analysis is valuable because it is a noninvasive method, and this research looks for findings that are invisible to the eyes of professional clinicians. This thesis focuses on quantifying sleep eye movements with two techniques: autoregressive modeling and wavelet analysis. Eye movement detection software (EMDS) of more than 1,500 lines of code was developed for detecting sleep eye movements. AR coefficients were derived from the sleep eye movements of patients who were exposed to antidepressant medications and those who were not, and were then classified by means of linear discriminant analysis. For the wavelet analysis, discrete wavelet coefficients were used to classify the sleep eye movements of patients who were exposed to medication and those who were not.
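The AR features used above can be sketched with a low-order Yule-Walker fit. This is an illustrative toy (not the thesis code): the AR order, the synthetic sinusoidal "EOG" segment, and all names below are assumptions for demonstration.

```python
import math

def autocorr(x, lag):
    """Biased sample autocorrelation at the given lag."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def ar2_yule_walker(x):
    """Solve the 2x2 Yule-Walker system [1 r1; r1 1][a1; a2] = [r1; r2]."""
    r1, r2 = autocorr(x, 1), autocorr(x, 2)
    det = 1 - r1 * r1
    a1 = (r1 - r1 * r2) / det
    a2 = (r2 - r1 * r1) / det
    return a1, a2

# Synthetic oscillatory segment standing in for a sleep eye movement epoch
signal = [math.sin(0.3 * t) for t in range(200)]
a1, a2 = ar2_yule_walker(signal)
```

Coefficient vectors like (a1, a2), computed per epoch, are the kind of compact features that can then be fed to a linear discriminant classifier.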

