On Learning Interpreted Languages with Recurrent Models
Abstract: Can recurrent neural nets, inspired by human sequential data processing, learn to understand language? We construct simplified datasets reflecting core properties of natural language as modeled in formal syntax and semantics: recursive syntactic structure and compositionality. We find LSTM and GRU networks to generalise to compositional interpretation well, but only in the most favorable learning settings, with a well-paced curriculum, extensive training data, and left-to-right (but not right-to-left) composition.
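To make the setup concrete, the following is a minimal sketch (not the authors' code) of the kind of model the abstract describes: a recurrent network (LSTM or GRU) reading a symbol sequence left to right and predicting its compositional interpretation. The class name, layer sizes, and the toy task are illustrative assumptions, not details from the paper.

# A hypothetical illustration of a recurrent "interpreter" model; all
# names, sizes, and the toy task are assumptions for the sake of example.
import torch
import torch.nn as nn

class RecurrentInterpreter(nn.Module):
    def __init__(self, vocab_size, hidden_size, num_outputs, cell="lstm"):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        # batch_first=True: inputs are (batch, seq_len, hidden_size)
        self.rnn = rnn_cls(hidden_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, num_outputs)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer-encoded symbols
        embedded = self.embed(tokens)
        outputs, _ = self.rnn(embedded)
        # Use the final hidden state as the sequence's interpretation.
        return self.readout(outputs[:, -1, :])

# Toy usage: a batch of 4 sequences over a 10-symbol vocabulary,
# each mapped to one of 5 interpretation classes.
model = RecurrentInterpreter(vocab_size=10, hidden_size=32, num_outputs=5)
tokens = torch.randint(0, 10, (4, 7))
logits = model(tokens)  # shape: (4, 5)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 5, (4,)))

Right-to-left composition, as contrasted in the abstract, would correspond to feeding the reversed sequence to the same architecture; curriculum learning would order training examples from shallow to deep recursive structures.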