Understanding language understanding: computational models of reading. Ashwin Ram and Kenneth Moorman (Eds). The MIT Press, Cambridge, MA, 1999. No. of pages 511. ISBN 0-262-18192-4. Price $50.00.

2002, Vol 16 (7), pp. 864-865
Author(s): Thomas Capo
2020, Vol 1 (4), pp. 381-401
Author(s): Ryan Staples, William W. Graves

Determining how the cognitive components of reading—orthographic, phonological, and semantic representations—are instantiated in the brain has been a long-standing goal of psychology and human cognitive neuroscience. The two most prominent computational models of reading instantiate different cognitive processes, implying different neural processes. Artificial neural network (ANN) models of reading posit nonsymbolic, distributed representations. The dual-route cascaded (DRC) model instead suggests two routes of processing, one representing symbolic rules of spelling–to–sound correspondence, the other representing orthographic and phonological lexicons. These models are not adjudicated by behavioral data and have never before been directly compared in terms of neural plausibility. We used representational similarity analysis to compare the predictions of these models to neural data from participants reading aloud. Both the ANN and DRC model representations corresponded to neural activity. However, the ANN model representations correlated to more reading-relevant areas of cortex. When contributions from the DRC model were statistically controlled, partial correlations revealed that the ANN model accounted for significant variance in the neural data. The opposite analysis, examining the variance explained by the DRC model with contributions from the ANN model factored out, revealed no correspondence to neural activity. Our results suggest that ANNs trained using distributed representations provide a better correspondence between cognitive and neural coding. Additionally, this framework provides a principled approach for comparing computational models of cognitive function to gain insight into neural representations.
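The core logic of the comparison described above — build a representational dissimilarity matrix (RDM) for each model and for the neural data, correlate them, then use partial correlation to control for the competing model — can be sketched as follows. This is a toy illustration with random data, not the authors' analysis code; the helper names (`rdm`, `partial_corr`) and all dimensions are invented for the example.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between
    every pair of item patterns (rows), returned as the upper triangle."""
    c = np.corrcoef(patterns)
    iu = np.triu_indices_from(c, k=1)
    return 1.0 - c[iu]

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    def residual(v):
        design = np.column_stack([z, np.ones_like(z)])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx, ry = residual(x), residual(y)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n_items, n_units, n_voxels = 20, 50, 100
ann = rng.normal(size=(n_items, n_units))    # stand-in ANN hidden-layer patterns
drc = rng.normal(size=(n_items, n_units))    # stand-in DRC-derived patterns
# Toy "neural" data constructed to be driven by the ANN representations
neural = ann @ rng.normal(size=(n_units, n_voxels))

ann_rdm, drc_rdm, neural_rdm = rdm(ann), rdm(drc), rdm(neural)
r_ann = np.corrcoef(ann_rdm, neural_rdm)[0, 1]
r_ann_partial = partial_corr(ann_rdm, neural_rdm, drc_rdm)
```

Because the toy neural data are a linear mixture of the ANN patterns, the ANN–neural RDM correlation survives when the DRC RDM is partialed out, mirroring the asymmetry the study reports.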


Author(s): Mark S. Seidenberg

Connectionist computational models have been extensively used in the study of reading: how children learn to read, skilled reading, and reading impairments (dyslexia). The models are computer programs that simulate detailed aspects of behaviour. This article provides an overview of connectionist models of reading, with an emphasis on the “triangle” framework. The term “connectionism” refers to a broad, varied set of ideas, loosely connected by an emphasis on the notion that complexity, at different grain sizes or scales ranging from neurons to overt behaviour, emerges from the aggregate behaviour of large networks of simple processing units. This article focuses on the parallel distributed processing variety developed by Rumelhart, McClelland, and Hinton (1986). First, it describes basic elements of connectionist models of reading: task orientation, distributed representations, learning, hidden units, and experience. The article then looks at how models are used to establish causal effects, along with quasiregularity and division of labor.
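The basic elements listed above — distributed input and output codes, hidden units, and error-driven learning from experience — can be illustrated with a toy network that learns a hypothetical orthography-to-phonology mapping. The binary codes and layer sizes here are invented for illustration and bear no relation to any published triangle-model simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_orth, n_hidden, n_phon = 20, 25, 30, 15

# Distributed codes: each "word" is a pattern of activity over many units,
# not a single symbolic lexical entry (random binary patterns here).
orth = rng.integers(0, 2, size=(n_items, n_orth)).astype(float)
phon = rng.integers(0, 2, size=(n_items, n_phon)).astype(float)

W1 = rng.normal(0, 0.5, size=(n_orth, n_hidden))
W2 = rng.normal(0, 0.5, size=(n_hidden, n_phon))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(5000):              # batch gradient descent (backpropagation)
    h = sigmoid(orth @ W1)         # hidden-unit (learned, distributed) code
    out = sigmoid(h @ W2)
    d_logits = out - phon          # cross-entropy gradient at the output
    d_h = (d_logits @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_logits / n_items
    W1 -= lr * orth.T @ d_h / n_items

accuracy = ((out > 0.5) == phon).mean()
```

The hidden layer is the key architectural commitment: it learns internal representations shaped by the statistics of the training experience, rather than storing entries in a hand-built lexicon.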


Author(s): Erik D. Reichle

This book describes computational models of reading, or models that simulate and explain the mental processes that support the reading of text. The book provides introductory chapters on both reading research and computer models. The central chapters of the book then review what has been learned about reading from empirical research on four core reading processes: word identification, sentence processing, discourse representation, and how these three processes are coordinated with visual processing, attention, and eye-movement control. These central chapters also review an influential sample of computer models that have been developed to explain these key empirical findings, as well as comparative analyses of those models. The final chapter attempts to integrate this empirical and theoretical work by both describing a new comprehensive model of reading, Über-Reader, and reporting several simulations to illustrate how the model accounts for many of the basic phenomena related to reading.


Author(s): Noriko Ito, Toru Sugimoto, Yusuke Takahashi, Shino Iwashita, ...

We propose two computational models: one of language in context, based on systemic functional linguistic theory, and one of context-sensitive language understanding. The model of language in context, called the Semiotic Base, characterizes the contextual, semantic, lexicogrammatical, and graphological aspects of input texts. The understanding process is divided into shallow and deep analyses. Shallow analysis consists of morphological and dependency analyses and the assignment of word concepts and case relations, performed mainly with existing natural language processing tools and machine-readable dictionaries. Its results are used in contextual analysis to detect the contextual configuration of the input text. This is followed by deep analyses of lexicogrammar, semantics, and concepts, conducted by referencing the subset of resources related to the detected context. Our proposed models have been implemented in Java and verified by integrating them into applications such as dialog-based question answering (Q&A).
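The staged flow described above (shallow analysis, then detection of a contextual configuration, then deep analysis restricted to that context's resources) can be caricatured in a few lines. The keyword tables and resource subsets below are invented stand-ins for the Semiotic Base, not the actual Java implementation.

```python
def shallow_analysis(text):
    # Stand-in for morphological/dependency analysis: just tokenize.
    return {"tokens": text.lower().split()}

CONTEXT_KEYWORDS = {                 # toy contextual configurations
    "qa_dialog": {"what", "who", "when", "?"},
    "narrative": {"once", "then", "finally"},
}

def contextual_analysis(shallow):
    # Pick the configuration whose cues best match the shallow results.
    scores = {ctx: len(kw & set(shallow["tokens"]))
              for ctx, kw in CONTEXT_KEYWORDS.items()}
    return max(scores, key=scores.get)

RESOURCES = {                        # subset of resources consulted per context
    "qa_dialog": {"what": "request-for-information"},
    "narrative": {"then": "temporal-sequence"},
}

def deep_analysis(shallow, context):
    # Deep analysis consults only the resources tied to the detected context.
    lexicon = RESOURCES[context]
    return [(t, lexicon[t]) for t in shallow["tokens"] if t in lexicon]

def understand(text):
    shallow = shallow_analysis(text)
    context = contextual_analysis(shallow)
    return context, deep_analysis(shallow, context)
```

For example, `understand("what is reading ?")` detects the `qa_dialog` configuration and then interprets `what` against only that context's resources.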

