Structure and Flexibility in Bayesian Models of Cognition

Author(s):  
Joseph L. Austerweil ◽  
Samuel J. Gershman ◽  
Thomas L. Griffiths

Probability theory forms a natural framework for explaining the impressive success of people at solving many difficult inductive problems, such as learning words and categories, inferring the relevant features of objects, and identifying functional relationships. Probabilistic models of cognition use Bayes’s rule to identify probable structures or representations that could have generated a set of observations, whether the observations are sensory input or the output of other psychological processes. In this chapter we address an important question that arises within this framework: How do people infer representations that are complex enough to faithfully encode the world but not so complex that they “overfit” noise in the data? We discuss nonparametric Bayesian models as a potential answer to this question. To do so, first we present the mathematical background necessary to understand nonparametric Bayesian models. We then delve into nonparametric Bayesian models for three types of hidden structure: clusters, features, and functions. Finally, we conclude with a summary and discussion of open questions for future research.
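The abstract's central idea, that nonparametric Bayesian models let representational complexity grow with the data rather than being fixed in advance, can be illustrated with the Chinese restaurant process, the standard prior over clusterings in Dirichlet process mixture models. The sketch below is illustrative only; `crp_partition` and its parameters are names chosen here, not from the chapter.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to
    that cluster's current size, or opens a new cluster with probability
    proportional to the concentration parameter alpha. The number of
    clusters is therefore not fixed in advance: it grows (roughly
    logarithmically) with the number of observations.
    """
    random.seed(seed)
    sizes = []        # current size of each cluster
    assignments = []  # cluster index assigned to each item
    for _ in range(n):
        # weights: one entry per existing cluster, plus alpha for a new one
        weights = sizes + [alpha]
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(1)   # open a new cluster
        else:
            sizes[k] += 1
        assignments.append(k)
    return assignments

print(crp_partition(10, alpha=1.0))
```

Larger values of `alpha` favor more clusters; smaller values favor fewer. In a full Dirichlet process mixture model, this prior over partitions is combined with a likelihood for the observations in each cluster, and Bayes's rule then yields a posterior over clusterings whose complexity is inferred from the data.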
