Computational Models of Reading

Author(s):  
Erik D. Reichle

This book describes computational models of reading, or models that simulate and explain the mental processes that support the reading of text. The book provides introductory chapters on both reading research and computer models. The central chapters of the book then review what has been learned about reading from empirical research on four core reading processes: word identification, sentence processing, discourse representation, and how these three processes are coordinated with visual processing, attention, and eye-movement control. These central chapters also review an influential sample of computer models that have been developed to explain these key empirical findings, as well as comparative analyses of those models. The final chapter attempts to integrate this empirical and theoretical work by both describing a new comprehensive model of reading, Über-Reader, and reporting several simulations to illustrate how the model accounts for many of the basic phenomena related to reading.

Author(s):  
Erik D. Reichle

This chapter opens with a discussion of the limitations of current models of reading, and moves on to the reasons why more comprehensive models of reading are necessary to advance our understanding of the mental, perceptual, and motoric processes that support reading. The chapter then provides a comparative analysis of the various approaches that have been adopted to model reading, and how the theoretical assumptions of models of word identification, sentence processing, discourse representation, and eye-movement control might be combined to build a more comprehensive model of reading in its entirety. The remainder of the chapter then describes one such model, Über-Reader, and a series of simulations to illustrate how the model explains word identification, sentence processing, the encoding and recall of discourse meaning, and the patterns of eye movements that are observed during reading. The final sections of the chapter then address both the limitations and possible future applications of the model.


2003
Vol 26 (4)
pp. 445-476
Author(s):
Erik D. Reichle
Keith Rayner
Alexander Pollatsek

The E-Z Reader model (Reichle et al. 1998; 1999) provides a theoretical framework for understanding how word identification, visual processing, attention, and oculomotor control jointly determine when and where the eyes move during reading. In this article, we first review what is known about eye movements during reading. Then we provide an updated version of the model (E-Z Reader 7) and describe how it accounts for basic findings about eye movement control in reading. We then review several alternative models of eye movement control in reading, discussing both their core assumptions and their theoretical scope. On the basis of this discussion, we conclude that E-Z Reader provides the most comprehensive account of eye movement control during reading. Finally, we provide a brief overview of what is known about the neural systems that support the various components of reading, and suggest how the cognitive constructs of our model might map onto this neural architecture.
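The serial pipeline this abstract describes can be caricatured in a few lines of code. The sketch below uses our own deliberately simplified assumptions (deterministic stage times, invented `stage_duration` parameters, no parafoveal preview), not the published E-Z Reader 7 parameterization:

```python
# Toy sketch of an E-Z Reader-like serial pipeline (illustrative only):
# word identification has an early familiarity check (L1) that launches
# saccade programming, and a completion stage (L2) that shifts attention.

def stage_duration(base_ms, log_freq, freq_weight=10.0):
    """Hypothetical stage time: higher-frequency words finish sooner."""
    return max(25.0, base_ms - freq_weight * log_freq)

def simulate_fixations(words, l1_base=120.0, l2_base=80.0, saccade_prog=125.0):
    """Return a gaze duration per word (each word given as log frequency)."""
    gazes = []
    for log_freq in words:
        l1 = stage_duration(l1_base, log_freq)
        l2 = stage_duration(l2_base, log_freq)
        # The eyes leave the word once the saccade program, launched at the
        # end of L1, completes; attention moves on only after L2 finishes.
        gazes.append(l1 + max(l2, saccade_prog))
    return gazes

# Words of increasing frequency (larger log frequency) get shorter gazes.
durations = simulate_fixations([1.0, 3.0, 5.0])
```

Even this caricature reproduces the signature frequency effect the model family is built around: lexical processing time, not oculomotor time alone, modulates when the eyes move on.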


2018
Author(s):
Benjamin Gagl
Jona Sassenhagen
Sophia Haan
Klara Gregorova
Fabio Richlan
...  

Most current models assume that the perceptual and cognitive processes of visual word recognition and reading operate upon neuronally coded domain-general low-level visual representations – typically oriented line representations. We here demonstrate, consistent with neurophysiological theories of Bayesian-like predictive neural computations, that prior visual knowledge of words may be utilized to ‘explain away’ redundant and highly expected parts of the visual percept. Subsequent processing stages, accordingly, operate upon an optimized representation of the visual input, the orthographic prediction error, highlighting only the visual information relevant for word identification. We show that this optimized representation is related to orthographic word characteristics, accounts for word recognition behavior, and is processed early in the visual processing stream, i.e., in V4 and before 200 ms after word-onset. Based on these findings, we propose that prior visual-orthographic knowledge is used to optimize the representation of visually presented words, which in turn allows for highly efficient reading processes.
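The core idea of an orthographic prediction error can be illustrated with a miniature construction of our own (letter-by-position codes and a four-word lexicon are invented for the example; the authors work with pixel-level images of words):

```python
import numpy as np

# Sketch of an orthographic prediction error: subtract the input expected
# on average across the lexicon from the actual stimulus, leaving mostly
# the informative, less-predictable parts of the percept.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encode(word, length=4):
    """Binary letter-by-position matrix for a fixed-length word."""
    m = np.zeros((length, len(ALPHABET)))
    for pos, ch in enumerate(word):
        m[pos, ALPHABET.index(ch)] = 1.0
    return m

lexicon = ["cart", "card", "care", "carp"]
prediction = np.mean([encode(w) for w in lexicon], axis=0)

def prediction_error(word):
    return encode(word) - prediction

# Fully predicted letters ("car-") are explained away; the final,
# word-identifying letter carries most of the remaining signal.
err = prediction_error("cart")
```

In this toy lexicon the error is exactly zero at the shared positions and peaks at the letter that disambiguates the word, which is the sense in which the representation is "optimized" for identification.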


Author(s):  
Erik D. Reichle

This chapter describes what has been learned about reading architecture, or how the mental processes that support word identification, sentence processing, and discourse representation during reading are coordinated with the systems that support vision, attention, and eye-movement control. The chapter reviews key findings that shed light on the nature of reading architecture, mainly using the results of eye-movement experiments. The chapter then reviews precursor theories and models of the reading architecture—early attempts to explain and simulate reading in its entirety. The chapter goes on to review a large, representative sample of the models that have been used to simulate and understand natural reading. Models are reviewed in their order of development to show how they have evolved to accommodate new empirical findings. The chapter concludes with an explicit comparative analysis of the models and a discussion of the empirical findings that each model can and cannot explain.


2003
Vol 26 (4)
pp. 481-482
Author(s):
Ralf Engbert
Reinhold Kliegl

Computational models such as E-Z Reader and SWIFT are ideal theoretical tools for quantitatively testing our current understanding of eye-movement control in reading. Here we present a mathematical analysis of word skipping in the E-Z Reader model, using semianalytic methods, to highlight the differences between current modeling approaches. In E-Z Reader, the word identification system must outperform the oculomotor system to induce word skipping. In SWIFT, words compete to be selected as the saccade target. We conclude that the question of which processes compete in the “game” of word skipping is the one that eye-movement research must resolve.
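The E-Z Reader skipping mechanism described here can be sketched as a race. This is our own Monte Carlo simplification (exponential stage times, invented means), not the semianalytic derivation in the article:

```python
import random

# Toy race model of E-Z Reader-style skipping: the next word is skipped
# if its familiarity check (L1), run on parafoveal input, finishes before
# the labile stage of the saccade program that targets it.

def skip_probability(mean_l1, mean_labile=100.0, n=20000, seed=1):
    rng = random.Random(seed)
    skips = 0
    for _ in range(n):
        l1 = rng.expovariate(1.0 / mean_l1)          # word identification...
        labile = rng.expovariate(1.0 / mean_labile)  # ...races saccade programming
        if l1 < labile:
            skips += 1
    return skips / n

# Easier (e.g., high-frequency) parafoveal words should be skipped more often.
p_easy = skip_probability(mean_l1=50.0)
p_hard = skip_probability(mean_l1=200.0)
```

With exponential stage times the race has a closed form, P(skip) = mean_labile / (mean_labile + mean_l1), which is exactly the kind of relationship the authors' semianalytic treatment makes explicit.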


Author(s):  
Shravan Vasishth
Bruno Nicenboim
Felix Engelmann
Frank Burchert

2020
Vol 1 (4)
pp. 381-401
Author(s):
Ryan Staples
William W. Graves

Determining how the cognitive components of reading—orthographic, phonological, and semantic representations—are instantiated in the brain has been a long-standing goal of psychology and human cognitive neuroscience. The two most prominent computational models of reading instantiate different cognitive processes, implying different neural processes. Artificial neural network (ANN) models of reading posit nonsymbolic, distributed representations. The dual-route cascaded (DRC) model instead suggests two routes of processing, one representing symbolic rules of spelling-to-sound correspondence, the other representing orthographic and phonological lexicons. These models are not adjudicated by behavioral data and have never before been directly compared in terms of neural plausibility. We used representational similarity analysis to compare the predictions of these models to neural data from participants reading aloud. Both the ANN and DRC model representations corresponded to neural activity. However, the ANN model representations correlated with activity in more reading-relevant areas of cortex. When contributions from the DRC model were statistically controlled, partial correlations revealed that the ANN model accounted for significant variance in the neural data. The opposite analysis, examining the variance explained by the DRC model with contributions from the ANN model factored out, revealed no correspondence to neural activity. Our results suggest that ANNs trained using distributed representations provide a better correspondence between cognitive and neural coding. Additionally, this framework provides a principled approach for comparing computational models of cognitive function to gain insight into neural representations.
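The logic of representational similarity analysis (RSA) used in this comparison is compact enough to sketch. The response matrices below are fabricated stand-ins (items x features/voxels); the authors' analysis also used rank correlations and partial correlations, which this minimal version omits:

```python
import numpy as np

# Minimal RSA sketch: score each model by correlating its representational
# dissimilarity matrix (RDM) with the neural RDM over item pairs.

def rdm(responses):
    """Pairwise correlation-distance matrix between item response patterns."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(model_responses, neural_responses):
    """Pearson correlation between the two RDMs' upper triangles."""
    n = model_responses.shape[0]
    iu = np.triu_indices(n, k=1)
    a, b = rdm(model_responses)[iu], rdm(neural_responses)[iu]
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
neural = rng.normal(size=(12, 50))                  # 12 items x 50 voxels (fake)
good_model = neural[:, :20] + 0.1 * rng.normal(size=(12, 20))  # tracks neural code
bad_model = rng.normal(size=(12, 20))               # unrelated representation
```

A model whose internal geometry tracks the neural geometry (here, `good_model`) scores higher than one with an unrelated representational space, which is the sense in which RSA "adjudicates" between the ANN and DRC accounts.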


1999
Vol 11 (3)
pp. 300-311
Author(s):
Edmund T. Rolls
Martin J. Tovée
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have previously shown that the mask interrupts the neuronal response. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
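The "bits" reported here are Shannon mutual information between stimulus and neuronal response. The count tables below are invented for illustration (rows = stimuli, columns = binned response, e.g. spike-count bins); real analyses must also correct for limited-sampling bias, which this sketch omits:

```python
import math

# Mutual information I(S;R) in bits from a joint stimulus-response
# count table, the quantity used to express how well a neuron's
# response discriminates the stimuli.

def mutual_information(counts):
    total = sum(sum(row) for row in counts)
    ps = [sum(row) / total for row in counts]                       # P(stimulus)
    pr = [sum(counts[i][j] for i in range(len(counts))) / total
          for j in range(len(counts[0]))]                           # P(response)
    info = 0.0
    for i, row in enumerate(counts):
        for j, c in enumerate(row):
            if c:
                pij = c / total
                info += pij * math.log2(pij / (ps[i] * pr[j]))
    return info

# Short SOAs truncate the selective response, so the stimuli become
# harder to tell apart from the counts and the information drops.
long_soa = [[40, 10], [10, 40]]   # responses discriminate two faces well
short_soa = [[30, 20], [20, 30]]  # masking blurs the response distributions
```

Perfect discrimination of two equiprobable stimuli yields exactly 1 bit, which gives a sense of scale for the 0.1 to 0.5 bit values quoted in the abstract.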


Author(s):  
Mark S. Seidenberg

Connectionist computational models have been extensively used in the study of reading: how children learn to read, skilled reading, and reading impairments (dyslexia). The models are computer programs that simulate detailed aspects of behaviour. This article provides an overview of connectionist models of reading, with an emphasis on the “triangle” framework. The term “connectionism” refers to a broad, varied set of ideas, loosely connected by an emphasis on the notion that complexity, at different grain sizes or scales ranging from neurons to overt behaviour, emerges from the aggregate behaviour of large networks of simple processing units. This article focuses on the parallel distributed processing variety developed by Rumelhart, McClelland, and Hinton (1986). First, it describes basic elements of connectionist models of reading: task orientation, distributed representations, learning, hidden units, and experience. The article then looks at how models are used to establish causal effects, along with quasiregularity and division of labor.
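The basic machinery the article describes (distributed codes, hidden units, error-driven learning) can be shown in a miniature network of our own, far smaller than the triangle model's orthography-to-phonology pathway and trained on random patterns rather than real spellings:

```python
import numpy as np

# Toy connectionist sketch: distributed orthographic input mapped to a
# distributed phonological output through hidden units, with weights
# adjusted by backpropagated, error-driven updates.

rng = np.random.default_rng(0)
n_orth, n_hidden, n_phon, n_words = 10, 8, 6, 20

orth = rng.integers(0, 2, size=(n_words, n_orth)).astype(float)  # "spellings"
phon = rng.integers(0, 2, size=(n_words, n_phon)).astype(float)  # "sounds"

W1 = 0.1 * rng.normal(size=(n_orth, n_hidden))
W2 = 0.1 * rng.normal(size=(n_hidden, n_phon))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(x @ W1)
    return hidden, sigmoid(hidden @ W2)

def mse():
    return float(np.mean((forward(orth)[1] - phon) ** 2))

def train(epochs=2000, lr=0.5):
    global W1, W2
    for _ in range(epochs):
        hidden, out = forward(orth)
        d_out = out - phon                             # output-layer error signal
        d_hid = (d_out @ W2.T) * hidden * (1 - hidden)  # backpropagated to hidden
        W2 -= lr * hidden.T @ d_out / n_words
        W1 -= lr * orth.T @ d_hid / n_words

before = mse()
train()
after = mse()
```

Knowledge of the mapping ends up distributed across the weights rather than stored as symbolic rules, which is the framework's central contrast with dual-route accounts.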

