Fast Learning in Complex Domains

Author(s):  
Robert Worden

Bayesian formulations of learning imply that whenever the evidence for a correlation between events in an animal’s habitat is sufficient, the correlation is learned. This implies that regularities can be learned rapidly, from small numbers of learning examples. This speed of learning gives the maximum possible fitness; no faster learning is possible. There is evidence in many domains that animals and people can learn at nearly Bayesian-optimal speeds. These domains include associative conditioning, and the more complex domains of navigation and language. There are computational models of learning which learn at near-Bayesian speeds in complex domains, and which scale well, to learn thousands of pieces of knowledge (i.e., relations and associations). These are not neural net models. They can be defined in computational terms, as algorithms and data structures at David Marr’s [1] Level Two. Their key data structures are composite feature structures, which are graphs of multiple linked nodes. This leads to the hypothesis that animal learning results not from deep neural nets (which typically require thousands of training examples), but from neural implementations of the Level Two models of fast learning; and that neurons provide the facilities needed to implement those models at Marr’s Level Three. The required facilities include feature structures, dynamic binding, one-shot memory for many feature structures, pattern-based associative retrieval, and unification and generalization of feature structures. These may be supported by multiplexing of data and metadata in the same neural fibres.
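Unification of feature structures, one of the Level Two operations the abstract lists, can be illustrated with a minimal sketch. This is not the paper's algorithm; the `unify` function and the dict-of-dicts representation are illustrative assumptions.

```python
# Minimal sketch of feature-structure unification: feature structures are
# nested dicts (graphs of linked nodes); unification merges two partial
# descriptions into the most general structure consistent with both.

def unify(a, b):
    """Return the unification of a and b, or None if they conflict."""
    if a == b:
        return a
    if not (isinstance(a, dict) and isinstance(b, dict)):
        return None  # two different atomic values clash
    result = dict(a)
    for key, value in b.items():
        if key in result:
            merged = unify(result[key], value)
            if merged is None:
                return None  # clash on a shared feature
            result[key] = merged
        else:
            result[key] = value
    return result

# Two partial observations of the same event unify into one richer structure:
u = unify({"agent": {"species": "bird"}, "action": "peck"},
          {"agent": {"color": "red"}})
# u == {"agent": {"species": "bird", "color": "red"}, "action": "peck"}
```

Generalization is the dual operation: instead of merging all features, it keeps only those on which the two structures agree.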

2003
Vol 125 (2)
pp. 572-579
Author(s):  
S. A. Nelson ◽  
Z. S. Filipi ◽  
D. N. Assanis

A technique which uses trained neural nets to model the compressor in the context of a turbocharged diesel engine simulation is introduced. This technique replaces the usual interpolation of compressor maps with the evaluation of a smooth mathematical function. Following presentation of the methodology, the proposed neural net technique is validated against data from a truck-type, 6-cylinder, 14-liter diesel engine. Furthermore, with the introduction of an additional parameter, the proposed neural net can be trained to simulate an entire family of compressors. As a demonstration, a family of compressors of different sizes is represented with a single neural net model which is subsequently used for matching calculations with intercooled and nonintercooled engine configurations at different speeds. This novel approach readily allows for evaluation of various options within a wide range of possible compressor configurations prior to prototype production. It can also be used to represent a variable-geometry machine regardless of the method used to vary compressor characteristics. Hence, it is a powerful design tool for selection of the best compressor for a given diesel engine system and for broader system optimization studies.
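The core idea, replacing table interpolation with evaluation of a smooth trained function, can be sketched as follows. The layer sizes, the toy weights, and the choice of inputs (corrected mass flow, corrected speed, and a size parameter for the compressor family) are illustrative assumptions, not the paper's actual network.

```python
# Hedged sketch: a tiny one-hidden-layer net used as a smooth surrogate for a
# compressor map, mapping (mass flow, speed, size) -> (pressure ratio,
# efficiency). The weights below are toy values, not fitted to any engine.
import math

def tanh_layer(x, W, b):
    """Affine transform followed by tanh, row by row."""
    return [math.tanh(sum(w * v for w, v in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def linear_layer(x, W, b):
    """Affine output layer (no nonlinearity)."""
    return [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]

def compressor_surrogate(mass_flow, speed, size, params):
    """Evaluate the smooth surrogate at one operating point."""
    h = tanh_layer([mass_flow, speed, size], params["W1"], params["b1"])
    return linear_layer(h, params["W2"], params["b2"])

# Toy, untrained parameters; output offsets chosen so the two outputs land in
# physically plausible ranges (pressure ratio > 1, efficiency between 0 and 1).
params = {
    "W1": [[0.5, 0.1, 0.2], [-0.3, 0.4, 0.1]],
    "b1": [0.0, 0.1],
    "W2": [[1.2, -0.4], [0.3, 0.8]],
    "b2": [1.5, 0.4],
}
pr, eff = compressor_surrogate(0.6, 0.8, 1.0, params)
```

Unlike table lookup, this function is smooth everywhere, so matching calculations can evaluate it (and its gradients) at any operating point, and varying the `size` input sweeps across the compressor family.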


1991
Vol 15 (1)
pp. 1-40
Author(s):  
Lucio Costa

SUMMARY In spite of claims about interaction between artificial intelligence (AI) and linguistics, AI research on natural language has developed largely independently of the work of linguists. On one hand, computational models of the faculties of language have been worked out that are independent of the models developed in linguistics. On the other hand, AI system design has been oriented towards practical solutions, whose main motivations were the computational efficiency of context-free rules, the need to avoid an inverse transformational component, and a representational conception of meaning as data structures. This paper draws on the linguistic work of Z. S. Harris and M. Gross to develop automatic distributed-control parsing which takes seriously the idiosyncratic behaviour of lexical items. The general framework for the discussion is the procedural nature of the denotational process.
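The lexically driven approach the article advocates can be illustrated with a minimal sketch: each word carries its own idiosyncratic combinatory requirements, and a parse is licensed only if the lexicon allows it. The lexicon entries and the `licensed` check below are hypothetical, not taken from Harris, Gross, or the paper.

```python
# Minimal illustration of lexical idiosyncrasy: a lexicon recording each
# verb's subcategorization frame, and a check that a clause respects it.

LEXICON = {
    "devour": {"needs_object": True},   # "The cat devoured" is out
    "sleep":  {"needs_object": False},  # "The cat slept the mouse" is out
}

def licensed(verb, has_object):
    """Accept a clause only if the verb's lexical entry allows its frame."""
    entry = LEXICON.get(verb)
    if entry is None:
        return False  # unknown word: nothing licenses it without a lexical entry
    return entry["needs_object"] == has_object

licensed("devour", has_object=True)   # True
licensed("sleep", has_object=True)    # False
```

The point of the design is that the grammaticality decision lives in the lexical entries rather than in general context-free rules, so two verbs of the same broad category can behave differently.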

