Learning the Past Tense of English Verbs: The Symbolic Pattern Associator vs. Connectionist Models

1994, Vol 1, pp. 209-229
Author(s): C. X. Ling

Learning the past tense of English verbs - a seemingly minor aspect of language acquisition - has generated heated debates since 1986, and has become a landmark task for testing the adequacy of cognitive modeling. Several artificial neural networks (ANNs) have been implemented, and a challenge for better symbolic models has been posed. In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree learning algorithm ID3. We conduct extensive head-to-head comparisons on the generalization ability between ANN models and the SPA under different representations. We conclude that the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin, and we offer insights as to why this should be the case. We also discuss a new default strategy for decision-tree learning algorithms.
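To make the decision-tree side of the comparison concrete, the sketch below is a minimal ID3-style learner in the spirit of the SPA: each verb is encoded as fixed-width letter positions and mapped to a simplified suffix class. The encoding, the toy training pairs, and the rule labels are illustrative assumptions, not the paper's actual representation or implementation.

```python
# Minimal ID3-style sketch (illustrative only, not the paper's SPA).
import math
from collections import Counter

def encode(verb, width=6):
    """Right-align the verb into fixed letter positions, padding with '_'."""
    s = verb[-width:].rjust(width, "_")
    return {f"pos{i}": ch for i, ch in enumerate(s)}

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(examples):
    """Return the attribute (letter position) with the highest information gain."""
    base = entropy([y for _, y in examples])
    def gain(attr):
        rem = 0.0
        for v in {x[attr] for x, _ in examples}:
            sub = [y for x, y in examples if x[attr] == v]
            rem += len(sub) / len(examples) * entropy(sub)
        return base - rem
    return max(examples[0][0], key=gain)

def id3(examples, depth=0, max_depth=4):
    labels = [y for _, y in examples]
    majority = Counter(labels).most_common(1)[0][0]
    if len(set(labels)) == 1 or depth == max_depth:
        return majority                          # leaf: single or majority class
    attr = best_attribute(examples)
    node = {"attr": attr, "default": majority, "children": {}}
    for value in {x[attr] for x, _ in examples}:
        subset = [(x, y) for x, y in examples if x[attr] == value]
        node["children"][value] = id3(subset, depth + 1, max_depth)
    return node

def predict(node, x):
    """Walk the tree; fall back to the node's majority class for unseen values."""
    while isinstance(node, dict):
        node = node["children"].get(x[node["attr"]], node["default"])
    return node

# Toy training pairs: verb -> simplified suffix rule (an illustrative label set).
train = [("walk", "+ed"), ("jump", "+ed"), ("kick", "+ed"), ("stop", "+ed"),
         ("bake", "+d"), ("love", "+d"), ("smile", "+d"), ("dance", "+d")]
tree = id3([(encode(v), y) for v, y in train])
print(predict(tree, encode("race")))             # "+d": the tree keys on the final 'e'
```

The majority-class fallback at unseen attribute values is only a stand-in for the default strategy discussed in the abstract, which the paper develops in more detail.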

Philosophy, 1932, Vol 7 (26), pp. 201-214
Author(s): John Laird

It is the custom, nowadays, to say that “realism” is very dead indeed, and to speak of it invariably in the past tense, or only in the historical present. What happened, we are told, was that, during the first quarter of the twentieth century, two distinct bodies of men propounded either “naïf” or “new” realism. The naïf realists followed Mr. G. E. Moore—to some extent and without his consent; and they were called naïf (by some misogynist) because the masculine form of the adjective expressed their rugged creed better than the more usual feminine form. (They were esprits forts rather than esprits fins.)


Author(s): Satoshi Kura, Hiroshi Unno, Ichiro Hasuo

Abstract: We present a novel decision-tree-based synthesis algorithm for ranking functions used in verifying program termination. Our algorithm is integrated into the workflow of CounterExample Guided Inductive Synthesis (CEGIS). CEGIS is an iterative learning model in which, at each iteration, (1) a synthesizer synthesizes a candidate solution from the current examples, and (2) a validator accepts the candidate solution if it is correct, or rejects it, providing counterexamples as part of the next examples. Our main novelty is in the design of the synthesizer: building on top of a standard decision-tree learning algorithm, our algorithm detects cycles in a set of example transitions and uses them to refine decision trees. We have implemented the proposed method and obtained promising experimental results on existing benchmark sets of (non-)termination verification problems that require synthesis of piecewise-defined lexicographic affine ranking functions.
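As a rough illustration of the CEGIS workflow described above, the sketch below alternates a naive enumerating synthesizer with a sampling-based validator to find a linear ranking function f(x) = c*x for a one-variable decrementing loop. The toy program, the enumeration strategy, and the sampling validator are all simplifying assumptions; the paper's synthesizer is a decision-tree learner and its validator is a genuine verifier, neither of which is reproduced here.

```python
# Toy CEGIS loop sketch (illustrative only, not the paper's tool).
import itertools

def transition(x):
    """Toy program step: while x > 0, decrement x."""
    return x - 1 if x > 0 else None

def validator(f):
    """Check that f is bounded below and strictly decreasing on sampled transitions.
    Return None if no counterexample is found, else an offending state."""
    for x in range(1, 50):                 # finite sampling stands in for a real checker
        x2 = transition(x)
        if x2 is not None and not (f(x) >= 1 and f(x) > f(x2)):
            return x                       # counterexample transition (x -> x2)
    return None

def synthesizer(examples):
    """Return the first candidate f(x) = c*x consistent with all current examples."""
    for c in itertools.count(start=-2):    # naive enumeration of integer coefficients
        f = lambda x, c=c: c * x
        if all(f(x) >= 1 and f(x) > f(transition(x)) for x in examples):
            return f, c

def cegis(max_iters=10):
    examples = []
    for _ in range(max_iters):
        f, c = synthesizer(examples)       # (1) synthesize from current examples
        cex = validator(f)                 # (2) validate, or get a counterexample
        if cex is None:
            return c                       # accepted on all sampled transitions
        examples.append(cex)               # feed counterexample back to the synthesizer
    return None

print("synthesized coefficient:", cegis())  # expected: 1, i.e. f(x) = x
```

The first candidate (c = -2) is rejected with a counterexample, after which the synthesizer settles on c = 1; this back-and-forth is the essence of the CEGIS iteration the abstract describes.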

