An Application of a Recurrent Neural Model for Parsing Natural Language
Neural models for symbolic processing are in their infancy. Success thus far has been limited to parsing very simple phrases, drawn from a small vocabulary, into small, fixed-size frames. Many of these systems do not scale well as the vocabulary or the phrase length grows, and they are further limited by the large number of epochs required for training and by their error rates. In the discussion that follows we address the issue of training and present an analysis that provides a lower bound on the error rate. The approach introduces simple extensions to the basic learning algorithm and makes use of a nearest-neighbor algorithm to correct the network's outputs. Other issues, such as generalization versus memorization, optimal hidden-layer size, and teaching the network after the initial training phase, are also discussed.
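The nearest-neighbor correction can be pictured as snapping each continuous output vector emitted by the network onto the closest entry in a lexicon of known word representations. Below is a minimal sketch under that reading; the lexicon contents, the name LEXICON, and the use of Euclidean distance are illustrative assumptions, not the paper's actual representations or metric.

```python
import numpy as np

# Hypothetical lexicon: each word is represented by a fixed-length vector.
LEXICON = {
    "john": np.array([1.0, 0.0, 0.0, 0.0]),
    "saw":  np.array([0.0, 1.0, 0.0, 0.0]),
    "the":  np.array([0.0, 0.0, 1.0, 0.0]),
    "ball": np.array([0.0, 0.0, 0.0, 1.0]),
}

def snap_to_nearest_word(output_vector):
    """Replace a noisy network output with the closest known word vector."""
    best_word, best_dist = None, float("inf")
    for word, vec in LEXICON.items():
        dist = np.linalg.norm(output_vector - vec)  # Euclidean distance (assumed metric)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word, LEXICON[best_word]

# Example: one frame slot emitted by the network, corrupted by error.
noisy = np.array([0.1, 0.9, 0.1, 0.05])
word, clean = snap_to_nearest_word(noisy)
print(word)  # -> "saw"
```

Because every output is forced onto a valid word vector, a correction step of this kind bounds the kinds of errors the parser can make: the network may pick the wrong word, but it can never emit an ill-formed representation.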