Insensitivity of the Human Sentence-Processing System to Hierarchical Structure

2011 ◽  
Vol 22 (6) ◽  
pp. 829-834 ◽  
Author(s):  
Stefan L. Frank ◽  
Rens Bod

Cognition ◽  
1996 ◽  
Vol 59 (1) ◽  
pp. 23-59 ◽  
Author(s):  
Edward Gibson ◽  
Neal Pearlmutter ◽  
Enriqueta Canseco-Gonzalez ◽  
Gregory Hickok

2009 ◽  
Vol 33 (4) ◽  
pp. 583-609 ◽  
Author(s):  
Gerry T. M. Altmann ◽  
Jelena Mirković

2002 ◽  
Vol 23 (3) ◽  
pp. 471-477
Author(s):  
Ngoni Chipere

This book attempts to integrate symbolic processing, in the form of minimalism, with connectionism. Minimalism represents sentences as symbolic structures resulting from a formal process of syntactic derivation. Connectionism, on the other hand, represents sentences as patterns of association between linguistic features. These patterns are said to obey statistical regularities of linguistic usage instead of formal linguistic rules. The authors of the book argue that human sentence processing displays both structural and statistical characteristics and therefore requires the integration of the two views.


Cognition ◽  
1988 ◽  
Vol 30 (3) ◽  
pp. 191-238 ◽  
Author(s):  
Gerry Altmann ◽  
Mark Steedman

2019 ◽  
Author(s):  
Stefan L. Frank

Although computational models can simulate aspects of human sentence processing, research on this topic has remained almost exclusively limited to the single-language case. The current review presents an overview of the state of the art in computational cognitive models of sentence processing, and discusses how recent sentence-processing models can be used to study bi- and multilingualism. Recent results from cognitive modelling and computational linguistics suggest that phenomena specific to bilingualism can emerge from systems that have no dedicated components for handling multiple languages. Hence, accounting for human bi-/multilingualism may not require models that are much more sophisticated than those for the monolingual case.


2018 ◽  
Author(s):  
Christoph Aurnhammer ◽  
Stefan L. Frank

The Simple Recurrent Network (SRN) has a long tradition in cognitive models of language processing. More recently, gated recurrent networks have been proposed that often outperform the SRN on natural language processing tasks. Here, we investigate whether two types of gated networks perform better as cognitive models of sentence reading than SRNs, beyond their advantage as language models. This will reveal whether the filtering mechanism implemented in gated networks corresponds to an aspect of human sentence processing. We train a series of language models differing only in the cell types of their recurrent layers. We then compute word surprisal values for stimuli used in self-paced reading, eye-tracking, and electroencephalography experiments, and quantify the surprisal values' fit to experimental measures that indicate human sentence reading effort. While the gated networks provide better language models, they do not outperform their SRN counterpart as cognitive models when language model quality is equal across network types. Our results suggest that the different architectures are equally valid as models of human sentence processing.
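The surprisal linking hypothesis used in this line of work defines a word's processing cost as its negative log probability given the preceding context, regardless of which language model supplies the probabilities. A minimal sketch of that computation, using a toy add-one-smoothed bigram model in place of the recurrent networks the abstract describes (the corpus, function names, and sentence are illustrative assumptions, not the authors' materials):

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Estimate add-one-smoothed bigram probabilities from a token list.

    Stands in for the recurrent language models in the study; any model
    that yields P(word | context) can feed the surprisal computation.
    """
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    vocab_size = len(unigrams)

    def prob(prev, word):
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    return prob

def surprisal(prob, sentence):
    """Per-word surprisal in bits: -log2 P(w_t | w_{t-1})."""
    return [-math.log2(prob(prev, w))
            for prev, w in zip(sentence, sentence[1:])]

# Illustrative toy corpus and stimulus sentence (assumptions).
corpus = "the dog chased the cat and the cat ran".split()
prob = train_bigram_lm(corpus)
print(surprisal(prob, "the dog ran".split()))
```

In the study itself, the per-word surprisal values would then be regressed against reading times or EEG amplitudes to quantify each model's fit to human reading effort.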

