Synthesis

Author(s):  
Erik D. Reichle

This chapter opens with a discussion of the limitations of current models of reading and the reasons why more comprehensive models are necessary to advance our understanding of the mental, perceptual, and motoric processes that support reading. The chapter then provides a comparative analysis of the various approaches that have been adopted to model reading, and of how the theoretical assumptions of models of word identification, sentence processing, discourse representation, and eye-movement control might be combined to build a more comprehensive model of reading in its entirety. The remainder of the chapter describes one such model, Über-Reader, along with a series of simulations illustrating how the model explains word identification, sentence processing, the encoding and recall of discourse meaning, and the patterns of eye movements observed during reading. The final sections address both the limitations and possible future applications of the model.

Author(s):  
Erik D. Reichle

This chapter describes what has been learned about reading architecture, or how the mental processes that support word identification, sentence processing, and discourse representation during reading are coordinated with the systems that support vision, attention, and eye-movement control. The chapter reviews key findings that shed light on the nature of reading architecture, mainly using the results of eye-movement experiments. The chapter then reviews precursor theories and models of the reading architecture—early attempts to explain and simulate reading in its entirety. The chapter goes on to review a large, representative sample of the models that have been used to simulate and understand natural reading. Models are reviewed in their order of development to show how they have evolved to accommodate new empirical findings. The chapter concludes with an explicit comparative analysis of the models and a discussion of the empirical findings that each model can and cannot explain.


2009
Vol 101 (2)
pp. 934-947
Author(s):
Masafumi Ohki
Hiromasa Kitazawa
Takahito Hiramatsu
Kimitake Kaga
Taiko Kitamura
...

The anatomical connection between the frontal eye field and the cerebellar hemispheric lobule VII (H-VII) suggests a potential role of the hemisphere in voluntary eye movement control. To reveal the involvement of the hemisphere in smooth pursuit and saccade control, we made a unilateral lesion around H-VII and examined its effects in three Macaca fuscata that had been trained to visually pursue a small target. In response to step (3°)-ramp (5–20°/s) target motion, the monkeys usually showed an initial pursuit eye movement at a latency of 80–140 ms and a small catch-up saccade at 140–220 ms, followed by a postsaccadic pursuit eye movement that roughly matched the ramp target velocity. After unilateral cerebellar hemispheric lesioning, the initial pursuit eye movements were impaired, and the velocities of the postsaccadic pursuit eye movements decreased. The onsets of 5° visually guided saccades to the stationary target were delayed, and their amplitudes tended to show increased trial-to-trial variability but never became hypo- or hypermetric. Similar tendencies were observed in the onsets and amplitudes of catch-up saccades. The adaptation of open-loop smooth pursuit velocity, tested by a brief step increase in target velocity, was impaired. These lesion effects were observed in all directions, most prominently in the ipsiversive direction. Some of these effects had recovered by 4 wk postlesion. These results suggest that the cerebellar hemispheric region around lobule VII is involved in the control of both smooth pursuit and saccadic eye movements.


1998
Vol 38 (8)
pp. 1129-1144
Author(s):
Keith Rayner
Martin H. Fischer
Alexander Pollatsek

2021
Author(s):
Maximilian M. Rabe
Dario Paape
Shravan Vasishth
Ralf Engbert

Integrating eye-movement control and sentence processing would mark an important step forward for mathematical models of natural language processing. We present an integrated approach that combines the SWIFT model of eye-movement control (Engbert et al., 2005) with key components of the LV05 parser (Lewis & Vasishth, 2005). The integrated generative model can reproduce reading time patterns that have been explained in terms of similarity-based interference in the psycholinguistic literature. A crucial problem for such complex models is parameter estimation. We build on recent advances in parameter identification for dynamical models, investigate likelihood profiles for single parameters, and present pilot results from MCMC sampling within a Bayesian framework of parameter inference.


2019
Vol 50 (2)
pp. 500-512
Author(s):
Li Zhang
Guoli Yan
Li Zhou
Zebo Lan
Valerie Benson

The current study examined eye movement control in autistic (ASD) children. Simple targets were presented either in isolation or synchronously with central, parafoveal, or peripheral distractors. Sixteen children with ASD (47–81 months) and nineteen age- and IQ-matched typically developing children were instructed to look at the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (the time taken to initiate eye movements), followed by the parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, providing evidence of potentially atypical voluntary attentional control in children with ASD.


2011
Vol 4 (1)
Author(s):
Tessa Warren
Erik D. Reichle
Nikole D. Patson

The current study investigated how a post-lexical complexity manipulation followed by a lexical complexity manipulation affects eye movements during reading. Both manipulations caused disruption in all measures on the manipulated words, but the patterns of spillover differed. Critically, the effects of the two kinds of manipulations did not interact, and there was no evidence that post-lexical processing difficulty delayed lexical processing on the next word (cf. Henderson & Ferreira, 1990). This suggests that post-lexical processing of one word and lexical processing of the next can proceed independently and likely in parallel. This finding is consistent with the assumptions of the E-Z Reader model of eye-movement control in reading (Reichle, Warren, & McConnell, 2009).


Author(s):  
Erik D. Reichle

This book describes computational models of reading, or models that simulate and explain the mental processes that support the reading of text. The book provides introductory chapters on both reading research and computer models. The central chapters of the book then review what has been learned about reading from empirical research on four core reading processes: word identification, sentence processing, discourse representation, and how these three processes are coordinated with visual processing, attention, and eye-movement control. These central chapters also review an influential sample of computer models that have been developed to explain these key empirical findings, as well as comparative analyses of those models. The final chapter attempts to integrate this empirical and theoretical work by both describing a new comprehensive model of reading, Über-Reader, and reporting several simulations to illustrate how the model accounts for many of the basic phenomena related to reading.


Perception
1976
Vol 5 (4)
pp. 461-465
Author(s):
Ann Saye
This experiment examined the effects of adding five different kinds of prominent monocular features to a large-disparity random-dot stereogram. Features that enclosed the disparate area produced the shortest initial perception times for fusion. The longer initial perception times for stimuli containing features without this enclosing property are explained in terms of less effective guidance of saccadic eye movements prior to the establishment of fusion. Subsequent reductions in perception times for these latter stimuli could be due to perceptual learning within the eye-movement control system.


2003
Vol 26 (4)
pp. 445-476
Author(s):
Erik D. Reichle
Keith Rayner
Alexander Pollatsek

The E-Z Reader model (Reichle et al. 1998; 1999) provides a theoretical framework for understanding how word identification, visual processing, attention, and oculomotor control jointly determine when and where the eyes move during reading. In this article, we first review what is known about eye movements during reading. Then we provide an updated version of the model (E-Z Reader 7) and describe how it accounts for basic findings about eye movement control in reading. We then review several alternative models of eye movement control in reading, discussing both their core assumptions and their theoretical scope. On the basis of this discussion, we conclude that E-Z Reader provides the most comprehensive account of eye movement control during reading. Finally, we provide a brief overview of what is known about the neural systems that support the various components of reading, and suggest how the cognitive constructs of our model might map onto this neural architecture.


Author(s):  
Syed Hussain Ather

In "Slow-fast control of eye movements: an instance of Zeeman's model for an action," Clement and Akman extended Zeeman's model of the heartbeat to describe eye-movement control in different species using aspects of catastrophe theory. Their model provides an example of how the techniques of catastrophe theory can be used to understand information processing by biological organisms, a key concern of biological cybernetics. They tested how well the system of equations from Zeeman's model could be applied to saccadic eye movements.
