Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory.

2017 ◽ Vol 124 (4) ◽ pp. 410-441 ◽ Author(s): Sean Tauber, Daniel J. Navarro, Amy Perfors, Mark Steyvers


2011 ◽ Vol 34 (4) ◽ pp. 215-231 ◽ Author(s): Matt Jones, Bradley C. Love

Abstract: Mathematical developments in probabilistic inference have led to optimism over the prospects for Bayesian models of cognition. Our target article calls for better differentiation of these technical developments from theoretical contributions. It distinguishes between Bayesian Fundamentalism, which is theoretically limited because of its neglect of psychological mechanism, and Bayesian Enlightenment, which integrates rational and mechanistic considerations and is thus better positioned to advance psychological theory. The commentaries almost uniformly agree that mechanistic grounding is critical to the success of the Bayesian program. Some commentaries raise additional challenges, which we address here. Other commentaries claim that all Bayesian models are mechanistically grounded, while at the same time holding that they should be evaluated only at the computational level. We argue that this contradictory stance makes it difficult to evaluate a model's scientific contribution, and that the psychological commitments of Bayesian models need to be made more explicit.


2019 ◽ Author(s): Sean Tauber, Danielle Navarro, Amy Perfors, Mark Steyvers

Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.
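
To make the distinction concrete, the sketch below shows, for a toy estimation task, the two ways a Bayesian model might be used: as a normative standard with its prior fixed by the assumed environmental statistics, and as a descriptive model whose prior is a psychological quantity estimated from behavior and then subjected to model comparison. The Gaussian setup, the simulated participant with a shifted subjective prior, and the BIC comparison are assumptions introduced for illustration; this is not one of the paper's case studies.

```python
# Illustrative sketch only: contrasts an "optimal" and a "descriptive" use of the
# same Bayesian observer model on invented data. All numbers are arbitrary.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Toy environment: quantities drawn from a known Gaussian prior, seen through noise.
ENV_MEAN, ENV_SD, NOISE_SD, RESP_SD = 50.0, 10.0, 15.0, 2.0
true_vals = rng.normal(ENV_MEAN, ENV_SD, size=200)
cues = true_vals + rng.normal(0.0, NOISE_SD, size=200)

# Simulated participant whose subjective prior mean differs from the environmental one.
SUBJ_MEAN = 65.0
w = NOISE_SD**2 / (NOISE_SD**2 + ENV_SD**2)              # shrinkage toward the prior mean
responses = w * SUBJ_MEAN + (1 - w) * cues + rng.normal(0.0, RESP_SD, size=200)

def neg_log_lik(prior_mean):
    """Negative log-likelihood (up to a constant) of the responses for a Bayesian
    observer with prior mean `prior_mean` and response noise RESP_SD."""
    pred = w * prior_mean + (1 - w) * cues
    return 0.5 * np.sum(((responses - pred) / RESP_SD) ** 2)

# Optimal/normative use: the prior mean is fixed at the environmental statistic.
nll_optimal = neg_log_lik(ENV_MEAN)

# Descriptive use: the prior mean is a free psychological parameter fit to behavior.
fit = minimize_scalar(neg_log_lik, bounds=(0.0, 100.0), method="bounded")

# Principled comparison (BIC): the descriptive model pays for its extra parameter.
n = len(responses)
bic_optimal = 2 * nll_optimal
bic_descriptive = 2 * fit.fun + np.log(n)
print(f"inferred subjective prior mean: {fit.x:.1f}")
print(f"BIC (optimal prior): {bic_optimal:.1f}   BIC (fitted prior): {bic_descriptive:.1f}")
```

Read this way, a misfit of the first model is evidence about optimality, whereas the second model treats the inferred prior itself as the substantive psychological claim, to be defended through model comparison rather than by appeal to rationality.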


2014 ◽ Vol 22 (3) ◽ pp. 614-628 ◽ Author(s): Pernille Hemmer, Sean Tauber, Mark Steyvers

2010 ◽ Vol 1 (6) ◽ pp. 811-823 ◽ Author(s): Nick Chater, Mike Oaksford, Ulrike Hahn, Evan Heit

2011 ◽ Vol 34 (4) ◽ pp. 169-188 ◽ Author(s): Matt Jones, Bradley C. Love

Abstract: The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology – namely, Behaviorism and evolutionary psychology – that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic-level) theories. Despite the expressive power of current Bayesian models, we argue that they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue that this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.
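
One way to see the point that Bayes' rule itself carries little content while the hypothesis set carries the psychological commitments is a toy "number game" computation, sketched below; the example, the size-principle likelihood, and the two hypothesis sets are invented for illustration and are not taken from the target article. The same inference rule, applied to the same observations, generalizes very differently once a single hypothesis is added.

```python
# Invented toy example: identical Bayes' rule, different hypothesis sets,
# markedly different generalization from the observations {2, 4, 8}.
from fractions import Fraction

def posterior(hypotheses, data):
    """Bayes' rule over a finite hypothesis set: uniform prior and a
    size-principle likelihood, P(x | h) = 1/|h| if x is in h, else 0."""
    unnorm = {}
    for name, ext in hypotheses.items():
        like = Fraction(1)
        for x in data:
            like *= Fraction(1, len(ext)) if x in ext else Fraction(0)
        unnorm[name] = like                 # uniform prior cancels in the normalization
    z = sum(unnorm.values())
    return {name: p / z for name, p in unnorm.items()}

def p_in_concept(hypotheses, data, query):
    """Posterior predictive probability that `query` belongs to the concept."""
    post = posterior(hypotheses, data)
    return float(sum(p for name, p in post.items() if query in hypotheses[name]))

numbers = range(1, 101)
evens = {n for n in numbers if n % 2 == 0}
powers_of_two = {1, 2, 4, 8, 16, 32, 64}
small = {n for n in numbers if n <= 10}

set_a = {"even numbers": evens, "powers of two": powers_of_two}
set_b = {**set_a, "numbers up to 10": small}   # same rule, one extra hypothesis

data = [2, 4, 8]
print("P(6 in concept | set A) =", round(p_in_concept(set_a, data, 6), 3))  # ~0.003
print("P(6 in concept | set B) =", round(p_in_concept(set_b, data, 6), 3))  # ~0.26
```

The difference in predictions is driven entirely by the choice of hypotheses, which is exactly where unstated psychological commitments can hide.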


Author(s): Joseph L. Austerweil, Samuel J. Gershman, Thomas L. Griffiths

Probability theory forms a natural framework for explaining the impressive success of people at solving many difficult inductive problems, such as learning words and categories, inferring the relevant features of objects, and identifying functional relationships. Probabilistic models of cognition use Bayes’s rule to identify probable structures or representations that could have generated a set of observations, whether the observations are sensory input or the output of other psychological processes. In this chapter we address an important question that arises within this framework: How do people infer representations that are complex enough to faithfully encode the world but not so complex that they “overfit” noise in the data? We discuss nonparametric Bayesian models as a potential answer to this question. To do so, first we present the mathematical background necessary to understand nonparametric Bayesian models. We then delve into nonparametric Bayesian models for three types of hidden structure: clusters, features, and functions. Finally, we conclude with a summary and discussion of open questions for future research.
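
As a concrete taste of the first kind of hidden structure, the sketch below samples cluster assignments from a Chinese restaurant process, a standard nonparametric prior over partitions in which the number of clusters grows with the data. The concentration values and sample size are arbitrary choices for this illustration, not anything specified in the chapter.

```python
# Illustrative sample from a Chinese restaurant process (CRP) prior over partitions.
import numpy as np

def sample_crp(n_items, alpha, rng):
    """Draw cluster assignments for n_items from CRP(alpha)."""
    assignments = [0]                      # the first item starts the first cluster
    counts = [1]                           # number of items in each existing cluster
    for i in range(1, n_items):
        # Join existing cluster k with probability counts[k]/(i + alpha),
        # or open a new cluster with probability alpha/(i + alpha).
        probs = np.array(counts + [alpha]) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)               # a new cluster was opened
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

rng = np.random.default_rng(1)
for alpha in (0.5, 5.0):
    z = sample_crp(50, alpha, rng)
    print(f"alpha={alpha}: {len(set(z))} clusters for 50 items")
```

Because a new cluster remains possible at every step, a model built on this prior can infer as many clusters as the observations warrant rather than fixing their number in advance, which is what lets the representation stay as complex as, but no more complex than, the data demand.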


Author(s): Thomas L. Griffiths, Charles Kemp, Joshua B. Tenenbaum

2021 ◽ Author(s): Ansgar D. Endress

As simpler scientific theories are preferable to more convoluted ones, it is plausible to assume (and widely assumed, especially in recent Bayesian models of cognition) that biological learners are also guided by simplicity considerations when acquiring mental representations, and that formal measures of complexity might indicate which learning problems are harder and which ones are easier. However, the history of science suggests that simpler scientific theories are not necessarily more useful if more convoluted ones make calculations easier. Here, I suggest that a similar conclusion applies to mental representations. Using case studies from perception, associative learning, and rule learning, I show that formal measures of complexity critically depend on assumptions about the underlying representational and processing primitives and are generally unrelated to what is actually easy to learn and process in humans. An empirically viable notion of complexity thus needs to take into account the representational and processing primitives that are available to actual learners, even if this leads to formally complex explanations.
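
The dependence of formal complexity on the assumed primitives can be made concrete with a small description-length example; the sequence and the two invented "description languages" below are illustrative assumptions, not material from the paper. The same string is expensive to describe with bare literals and cheap once a repetition primitive is available.

```python
# Invented illustration: the same sequence receives different description lengths
# under two different sets of representational primitives.
seq = "ABABABABABAB"

def dl_literal(s):
    """Language 1: only literal symbols; cost = one unit per symbol."""
    return len(s)

def dl_with_repeat(s):
    """Language 2: literals plus a REPEAT(block, n) primitive; cost of a repeat
    encoding = len(block) + 2 (one unit for the operator, one for the count)."""
    best = len(s)                                   # the literal encoding is always available
    for block_len in range(1, len(s) // 2 + 1):
        block = s[:block_len]
        if len(s) % block_len == 0 and block * (len(s) // block_len) == s:
            best = min(best, block_len + 2)
    return best

print("literal-only description length:", dl_literal(seq))      # 12
print("with-repeat description length:", dl_with_repeat(seq))   # 4 = |'AB'| + operator + count
```

Which of these measures, if either, predicts human learning difficulty is an empirical question about the primitives learners actually possess, which is the paper's central point.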


2009 ◽ Vol 32 (1) ◽ pp. 89-90 ◽ Author(s): Thomas L. Griffiths

Abstract: Bayesian Rationality (Oaksford & Chater 2007) illustrates the strengths of Bayesian models of cognition: the systematicity of rational explanations, transparent assumptions about human learners, and the combination of structured symbolic representations with statistics. However, the book also highlights some of the challenges this approach faces: providing psychological mechanisms, explaining the origins of the knowledge that guides human learning, and accounting for how people make genuinely new discoveries.

