Computational Models of Scientific Discovery: Do They Compute?

1989 ◽  
Vol 34 (10) ◽  
pp. 895-897 ◽  
Author(s):  
Robert J. Sternberg


1991 ◽  
Vol 6 (4) ◽  
pp. 259-305 ◽  
Author(s):  
Sakir Kocabas

Computational modelling of scientific discovery is emerging as an important research field in artificial intelligence. Various computational systems modelling different aspects of scientific research and discovery have been developed. This paper looks at some of these models in order to examine how knowledge is organized in such systems, what forms of representation they have, how their methods of learning and representation are integrated, and the effects of representation on learning. The paper also describes the achievements and shortcomings of these systems, and discusses the obstacles in developing more comprehensive models.
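A well-known example of this genre of discovery system is BACON (Langley and colleagues), which rediscovers empirical laws by searching numerical data for invariant combinations of variables. The sketch below is a hedged illustration of that style of heuristic, not code from the paper or from any surveyed system; the names, tolerance, and data are assumptions.

```python
# Minimal BACON-style invariant-finding heuristic (illustrative sketch only;
# function names, tolerance, and data are assumptions, not surveyed code).

def nearly_constant(values, tol=0.02):
    """True if every value lies within a relative tolerance of the mean."""
    mean = sum(values) / len(values)
    return all(abs(v - mean) <= tol * abs(mean) for v in values)

def propose_law(x, y):
    """Test the two simplest BACON invariants relating variables x and y."""
    if nearly_constant([yi / xi for xi, yi in zip(x, y)]):
        return "y / x is constant (direct proportionality)"
    if nearly_constant([yi * xi for xi, yi in zip(x, y)]):
        return "y * x is constant (inverse law, e.g. Boyle's PV = k)"
    return None

# Pressure-volume readings at fixed temperature (synthetic data):
pressure = [1.0, 2.0, 4.0, 8.0]
volume   = [8.0, 4.0, 2.0, 1.0]
print(propose_law(pressure, volume))  # -> "y * x is constant (inverse law, ...)"
```

Heuristics of this kind make the survey's central questions concrete: the invariant tests are the system's knowledge representation, and how they are generated and ordered is its learning method.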


Acta Numerica ◽  
2018 ◽  
Vol 27 ◽  
pp. 353-450 ◽  
Author(s):  
J. Tinsley Oden

The use of computational models and simulations to predict events that take place in our physical universe, or to predict the behaviour of engineered systems, has significantly advanced the pace of scientific discovery and the creation of new technologies for the benefit of humankind over recent decades, at least up to a point. That ‘point’ in recent history occurred around the time that the scientific community began to realize that true predictive science must deal with many formidable obstacles, including the determination of the reliability of the models in the presence of many uncertainties. To develop meaningful predictions one needs relevant data, itself possessing uncertainty due to experimental noise; in addition, one must determine model parameters, and concomitantly, there is the overriding need to select and validate models given the data and the goals of the simulation.

This article provides a broad overview of predictive computational science within the framework of what is often called the science of uncertainty quantification. The exposition is divided into three major parts. In Part 1, philosophical and statistical foundations of predictive science are developed within a Bayesian framework. There the case is made that the Bayesian framework provides, perhaps, a unique setting for handling all of the uncertainties encountered in scientific prediction. In Part 2, general frameworks and procedures for the calibration and validation of mathematical models of physical realities are given, all in a Bayesian setting. But beyond Bayes, an introduction to information theory, the maximum entropy principle, model sensitivity analysis and sampling methods such as MCMC is presented. In Part 3, the central problem of predictive computational science is addressed: the selection, adaptive control and validation of mathematical and computational models of complex systems. The Occam Plausibility Algorithm, OPAL, is introduced as a framework for model selection, calibration and validation. Applications to complex models of tumour growth are discussed.
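The Bayesian calibration workflow the abstract outlines can be made concrete with a toy example. Below is a minimal sketch, assuming a one-parameter linear model with known Gaussian noise and a random-walk Metropolis sampler; the data, flat prior, and step size are illustrative assumptions, not material from the article or from OPAL.

```python
import math
import random

# Hedged sketch of Bayesian calibration by MCMC: random-walk Metropolis for
# one parameter theta of a toy model y = theta * x with known Gaussian noise.
# Data, noise level, prior, and step size are illustrative assumptions.

x_data = [1.0, 2.0, 3.0, 4.0]
y_data = [2.1, 3.9, 6.2, 7.8]   # synthetic observations, true theta near 2
sigma = 0.2                     # assumed known observational noise

def log_posterior(theta):
    # Gaussian log-likelihood; a flat prior on theta adds only a constant.
    return -sum((y - theta * x) ** 2 for x, y in zip(x_data, y_data)) / (2 * sigma ** 2)

def metropolis(n_steps=10_000, step=0.05, theta=0.0):
    samples = []
    lp = log_posterior(theta)
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step)
        lp_new = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if random.random() < math.exp(min(0.0, lp_new - lp)):
            theta, lp = proposal, lp_new
        samples.append(theta)
    return samples

draws = metropolis()[2_000:]      # discard burn-in
print(sum(draws) / len(draws))    # posterior mean estimate of theta
```

The spread of the retained draws, not just their mean, is the point of the exercise: it quantifies the parameter uncertainty that any downstream prediction must carry forward.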


Author(s):  
José I. Latorre ◽  
María T. Soto-Sanfiel

We reflect on the typical sequence of complex emotions associated with the process of scientific discovery. We propose that the same sequence underlies many forms of media entertainment, albeit substantially scaled down. Hence, a distinct theory of intellectual entertainment is put forward. The seemingly timeless presence of multiple forms of intellectual entertainment finds its roots in a positive moral approval of the self by itself.


Author(s):  
Kim Uittenhove ◽  
Patrick Lemaire

In two experiments, we tested the hypothesis that strategy performance on a given trial is influenced by the difficulty of the strategy executed on the immediately preceding trial, an effect that we call the strategy sequential difficulty effect. Participants’ task was to provide approximate sums to two-digit addition problems by using cued rounding strategies. Results showed that performance was poorer after a difficult strategy than after an easy strategy. Our results have important theoretical and empirical implications for computational models of strategy choices and for furthering our understanding of strategic variations in arithmetic, as well as in human cognition in general.
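To make the strategy manipulation concrete: rounding down truncates both operands to the decade below, while rounding up moves both to the decade above; rounding up is typically the harder of the two because the decade digits must be incremented before adding. The sketch below is purely illustrative, and the example problem is an assumption, not one of the study's stimuli.

```python
# Illustrative sketch of two cued rounding strategies for approximate sums.
# The example problem is an assumption, not the study's actual stimulus.

def round_down_sum(a, b):
    """Easier strategy: truncate both operands to the decade below."""
    return (a // 10) * 10 + (b // 10) * 10

def round_up_sum(a, b):
    """Harder strategy: round both operands up to the decade above."""
    return (a // 10 + 1) * 10 + (b // 10 + 1) * 10

# 47 + 36 = 83 exactly.
print(round_down_sum(47, 36))  # 40 + 30 -> 70
print(round_up_sum(47, 36))    # 50 + 40 -> 90
```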


Author(s):  
Manuel Perea ◽  
Victoria Panadero

The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word’s overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children, a pattern consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word’s visual cues, presumably because of poor letter representations.
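The shape manipulation rests on standard typographic letter classes: ascenders rise above x-height, descenders drop below the baseline, and the rest are neutral. Since t and l are both ascenders, viotín preserves the envelope of violín, whereas the neutral c in viocín breaks it. The helper below is an illustrative assumption, not the authors' stimulus-coding procedure.

```python
# Illustrative word-shape coder (letter classes and helper are assumptions
# for demonstration, not the authors' stimulus-construction tool).

ASCENDERS = set("bdfhklt")    # rise above x-height
DESCENDERS = set("gjpqy")     # drop below the baseline
# Simplification: accented vowels such as í are treated as neutral here.

def shape(word):
    def letter_class(ch):
        if ch in ASCENDERS:
            return "a"          # ascender
        if ch in DESCENDERS:
            return "d"          # descender
        return "n"              # neutral
    return "".join(letter_class(ch) for ch in word)

base = "violín"
for pseudo in ("viotín", "viocín"):
    print(pseudo, shape(pseudo) == shape(base))
# viotín True  (t matches the ascender l)
# viocín False (c is neutral where l is an ascender)
```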

