Interaction with Context During Recurrent Neural Network Sentence Processing

2020 ◽  
Author(s):  
Forrest Davis ◽  
Marten van Schijndel

Syntactic ambiguities in isolated sentences can lead to increased difficulty in incremental sentence processing, a phenomenon known as a garden-path effect. This difficulty, however, can be alleviated for humans when they are presented with supporting discourse contexts. We tested whether recurrent neural network (RNN) language models (LMs) could learn linguistic representations that are similarly influenced by discourse context. RNN LMs have been claimed to learn a variety of syntactic constructions. However, recent work has suggested that pragmatically conditioned syntactic phenomena are not acquired by RNNs. In comparing model behavior to human behavior, we show that our models can, in fact, learn pragmatic constraints that alleviate garden-path effects given the correct training and testing conditions. This suggests that some aspects of linguistically relevant pragmatic knowledge can be learned from distributional information alone.

2019 ◽  
Author(s):  
Stefan L. Frank ◽  
John Hoeks

Recurrent neural network (RNN) models of sentence processing have recently displayed a remarkable ability to learn aspects of structure comprehension, as evidenced by their ability to account for reading times on sentences with local syntactic ambiguities (i.e., garden-path effects). Here, we investigate whether these models can also simulate the effect of the semantic appropriateness of the ambiguity's readings. RNN-based estimates of surprisal at the disambiguating verb of sentences with an NP/S-coordination ambiguity (as in `The wizard guards the king and the princess protects ...') show identical patterns to human reading times on the same sentences: Surprisal is higher on ambiguous structures than on their disambiguated counterparts, and this effect is weaker, but not absent, in cases of poor thematic fit between the verb and its potential object (`The teacher baked the cake and the baker made ...'). These results show that an RNN is able to simultaneously learn about structural and semantic relations between words, and they suggest that garden-path phenomena may be more closely related to word predictability than traditionally assumed.
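The surprisal measure used in work like this is simply the negative log probability of each word given its left context under the language model. The following is a minimal pure-Python sketch of that computation for a tiny Elman-style RNN; the vocabulary, hidden size, and random untrained weights are illustrative assumptions, not any of the cited authors' models.

```python
import math
import random

random.seed(0)

# Toy vocabulary drawn from the example sentence; a real LM would have
# tens of thousands of word types. (Illustrative assumption.)
VOCAB = ["<s>", "the", "wizard", "guards", "king", "and", "princess", "protects"]
V = len(VOCAB)
H = 8  # hidden-state size (assumption)

def mat(rows, cols):
    # Random weights stand in for a trained model in this sketch.
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

W_xh, W_hh, W_hy = mat(H, V), mat(H, H), mat(V, H)

def step(h, token_id):
    """One Elman RNN step: h' = tanh(W_xh x + W_hh h), then softmax output."""
    new_h = [math.tanh(W_xh[i][token_id]
                       + sum(W_hh[i][j] * h[j] for j in range(H)))
             for i in range(H)]
    logits = [sum(W_hy[k][j] * new_h[j] for j in range(H)) for k in range(V)]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return new_h, [e / z for e in exps]

def surprisal(sentence):
    """Per-word surprisal, -log2 P(w_t | w_<t), under the RNN LM."""
    h = [0.0] * H
    prev = VOCAB.index("<s>")
    out = []
    for w in sentence.split():
        h, probs = step(h, prev)
        idx = VOCAB.index(w)
        out.append((w, -math.log2(probs[idx])))
        prev = idx
    return out

for word, s in surprisal("the wizard guards the king"):
    print(f"{word}: {s:.2f} bits")
```

In the studies above, the quantity of interest is the difference in this per-word surprisal at the disambiguating verb between ambiguous sentences and their disambiguated counterparts; a larger difference corresponds to a stronger predicted garden-path effect.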


2016 ◽  
Vol 24 (8) ◽  
pp. 1438-1449 ◽  
Author(s):  
Xunying Liu ◽  
Xie Chen ◽  
Yongqiang Wang ◽  
Mark J. F. Gales ◽  
Philip C. Woodland

2019 ◽  
Vol E102.D (12) ◽  
pp. 2557-2567
Author(s):  
Ryo MASUMURA ◽  
Taichi ASAMI ◽  
Takanobu OBA ◽  
Sumitaka SAKAUCHI ◽  
Akinori ITO
