Bayesian Models of Cognition
Recently Published Documents


TOTAL DOCUMENTS: 16 (FIVE YEARS: 4)

H-INDEX: 5 (FIVE YEARS: 0)

Entropy, 2021, Vol 23 (7), pp. 801
Author(s): Dhaval Adjodah, Yan Leng, Shi Kai Chong, P. M. Krafft, Esteban Moro, ...

A critical question raised by the increasing importance of crowd-sourced finance is how to optimize collective information processing and decision-making. Here, we investigate an often under-studied aspect of the performance of online traders: beyond accuracy alone, what gives rise to the trade-off between risk and accuracy at the collective level? Answers to this question will help in designing and deploying more effective crowd-sourced financial platforms and in minimizing issues stemming from risk, such as implied volatility. To investigate this trade-off, we conducted a large online Wisdom of the Crowd study in which 2037 participants predicted the prices of real financial assets (S&P 500, WTI Oil and Gold prices). Using the data collected, we modeled participants' belief-update process using models inspired by Bayesian models of cognition. We show that subsets of predictions chosen based on their belief-update strategies lie on a Pareto frontier between accuracy and risk, mediated by social learning. We also observe that social learning led to superior accuracy in one round that coincided with the high market uncertainty of the Brexit vote.
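The abstract does not spell out the belief-update model, but the general flavor of such models can be sketched as a conjugate Normal-Normal update in which a trader's private estimate of an asset price is combined with a noisy social signal such as the crowd mean. The function name and the numbers below are hypothetical illustrations, not taken from the study.

```python
import numpy as np

def update_belief(prior_mean, prior_var, signal, signal_var):
    """Conjugate Normal-Normal update: combine a prior belief about an
    asset price with a noisy signal (e.g., the crowd's mean prediction)."""
    prior_prec = 1.0 / prior_var
    signal_prec = 1.0 / signal_var
    post_var = 1.0 / (prior_prec + signal_prec)
    post_mean = post_var * (prior_prec * prior_mean + signal_prec * signal)
    return post_mean, post_var

# Hypothetical example: a trader revises a private S&P 500 forecast
# after seeing the crowd average (all values are made up).
own_mean, own_var = 4200.0, 50.0**2      # private belief
crowd_mean, crowd_var = 4300.0, 80.0**2  # social signal and its assumed noise
print(update_belief(own_mean, own_var, crowd_mean, crowd_var))
```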


2021
Author(s): Ansgar D Endress

As simpler scientific theories are preferable to more convoluted ones, it is plausible to assume (and widely assumed, especially in recent Bayesian models of cognition) that biological learners are also guided by simplicity considerations when acquiring mental representations, and that formal measures of complexity might indicate which learning problems are harder and which ones are easier. However, the history of science suggests that simpler scientific theories are not necessarily more useful if more convoluted ones make calculations easier. Here, I suggest that a similar conclusion applies to mental representations. Using case studies from perception, associative learning and rule learning, I show that formal measures of complexity critically depend on assumptions about the underlying representational and processing primitives and are generally unrelated to what is actually easy to learn and process in humans. An empirically viable notion of complexity thus needs to take into account the representational and processing primitives that are available to actual learners, even if this leads to formally complex explanations.
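As a toy illustration of the abstract's central point (not an example from the paper), the description length of the same sequence can change drastically depending on whether the description language includes a repetition primitive; which of two patterns counts as "simpler" then hinges on the assumed primitives. The cost model below is an arbitrary assumption.

```python
# Toy illustration: "complexity" of the same pattern depends on which
# primitives the description language provides (cost model is assumed).

def literal_length(seq):
    """Description length when the only primitive is listing symbols."""
    return len(seq)

def length_with_repeat(seq):
    """Description length when a repeat(unit, n) primitive is also available.
    Assumed cost model: 1 token per literal symbol, or len(unit) + 1 tokens
    for a repeat expression."""
    best = len(seq)
    for k in range(1, len(seq) // 2 + 1):
        unit = seq[:k]
        if len(seq) % k == 0 and unit * (len(seq) // k) == seq:
            best = min(best, k + 1)
    return best

for pattern in ["ABABABAB", "ABBABAAB"]:
    print(pattern, literal_length(pattern), length_with_repeat(pattern))
# ABABABAB: 8 literals, but only 3 tokens with a repetition primitive;
# ABBABAAB: 8 either way; so which pattern is "simpler" depends on the primitives.
```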


2020
Author(s): Christopher Martin Mikkelsen Cox, Riccardo Fusaroli, Tamar Keren-Portnoy, Andreas Roepstorff

Bayesian accounts of development posit that infants form predictions about the causes of sensory signals in their environment and select actions that resolve the largest amount of uncertainty. This paper considers how this approach to infant development can inform and unify insights from experimental research on early cognitive development and language acquisition. In order to establish whether infants’ early inferential abilities conform to the basic assumptions of a Bayesian approach to cognition, we first conduct a systematic review of experimental studies on infants’ ability to form predictions about probabilistic contingencies. These studies provide evidence that infants exhibit sensitivity to the probabilistic structure of their surrounding environment and recruit their own uncertainty to guide their exploration of information in the world. We then demonstrate how these Bayesian computational principles may apply in the context of language acquisition by conducting a second systematic review of experiments on the facilitative role of infants’ vocal production. These studies indicate that infants are more likely to produce and allocate attention to those speech sounds that best afford the opportunity to reduce prediction error over time. This paper demonstrates how Bayesian models of cognition can offer a unifying framework to advance the understanding of cognitive processes in early development. This framework not only gives a larger perspective to current findings, but also provides conceptual tools to enable investigation of infants’ individual trajectories of behavioural change.
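The claim that infants "select actions that resolve the largest amount of uncertainty" is commonly formalized as choosing the action with the highest expected information gain, i.e., the largest expected reduction in entropy over hypotheses. The sketch below illustrates that computation with hypothetical numbers; it is not a model from the reviewed studies.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(prior, likelihoods):
    """Expected reduction in entropy over hypotheses from one action.
    prior: P(h), shape (H,); likelihoods: P(outcome | h, action), shape (H, O)."""
    prior = np.asarray(prior, dtype=float)
    lik = np.asarray(likelihoods, dtype=float)
    p_outcome = prior @ lik                      # P(outcome | action)
    eig = entropy(prior)
    for o in range(lik.shape[1]):
        post = prior * lik[:, o]
        if post.sum() > 0:
            eig -= p_outcome[o] * entropy(post / post.sum())
    return eig

# Hypothetical task: two hypotheses about a toy, two possible looking actions.
prior = [0.5, 0.5]
action_a = [[0.9, 0.1], [0.1, 0.9]]   # outcomes are diagnostic of the hypothesis
action_b = [[0.5, 0.5], [0.5, 0.5]]   # outcomes carry no information
print(expected_information_gain(prior, action_a))   # ~0.53 bits
print(expected_information_gain(prior, action_b))   # 0.0 bits
```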


2019
Author(s): Sean Tauber, Danielle Navarro, Amy Perfors, Mark Steyvers

Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.


2018
Author(s): Jian-Qiao Zhu, Adam N Sanborn, Nick Chater

Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. Perhaps, however, the brain does not represent probabilities explicitly but instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their sample-based probability estimates, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of "noise" in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities, and we show in a new experiment that this model better captures these judgments both qualitatively and quantitatively.
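A minimal sketch of the core idea as described in the abstract: a naive relative-frequency estimate from a small mental sample is regularized toward 0.5 by a symmetric Beta prior, which produces conservatism for extreme probabilities. The parameter values and function name are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_sampler_estimate(true_prob, n_samples, beta):
    """Sketch of the Bayesian sampler idea: draw a small mental sample,
    then report the posterior mean under a symmetric Beta(beta, beta) prior
    rather than the raw relative frequency."""
    k = rng.binomial(n_samples, true_prob)        # successes in the mental sample
    raw = k / n_samples                           # naive relative-frequency estimate
    regularized = (k + beta) / (n_samples + 2 * beta)
    return raw, regularized

# With small samples the regularized estimate is pulled toward 0.5,
# yielding conservatism for an extreme true probability.
print(bayesian_sampler_estimate(true_prob=0.95, n_samples=5, beta=1.0))
```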


2018
Author(s): Elizabeth Bonawitz, Stephanie Denison, Alison Gopnik, Tom Griffiths

People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm “Win-Stay, Lose-Sample”, inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a “mini-microgenetic method”, investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people’s judgments.
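One way to read the "Win-Stay, Lose-Sample" idea as pseudocode: keep the current hypothesis while it continues to explain incoming evidence, and otherwise resample a hypothesis from the posterior over everything seen so far. The sketch below follows that reading with a hypothetical two-hypothesis task; details such as the stay probability are assumptions rather than the authors' exact specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def posterior(prior, likelihood, data):
    """Posterior over hypotheses given all data observed so far."""
    post = np.array(prior, dtype=float)
    for d in data:
        post *= likelihood[:, d]
    return post / post.sum()

def win_stay_lose_sample(prior, likelihood, data_stream):
    """Sketch of a Win-Stay, Lose-Sample learner: keep the current hypothesis
    with probability equal to the likelihood of the new observation under it
    (an assumed rule); otherwise resample from the current posterior."""
    hypotheses = np.arange(len(prior))
    h = rng.choice(hypotheses, p=prior)           # initial guess from the prior
    seen = []
    for d in data_stream:
        seen.append(d)
        if rng.random() > likelihood[h, d]:       # "lose": observation poorly explained
            h = rng.choice(hypotheses, p=posterior(prior, likelihood, seen))
    return h

# Hypothetical causal task: 2 hypotheses, binary outcomes.
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.9, 0.1],    # P(outcome | h0)
                       [0.2, 0.8]])   # P(outcome | h1)
print(win_stay_lose_sample(prior, likelihood, data_stream=[1, 1, 0, 1, 1]))
```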


2017
Author(s): Ishita Dasgupta, Eric Schulz, Noah D. Goodman, Samuel J. Gershman

Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain’s limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants’ responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference.
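The distinction between reusing raw samples and caching a summary statistic can be made concrete with a toy scene model (hypothetical, not the authors' stimuli or analysis): a first query about P(tree) is answered by sampling, and a related second query about P(tree or bench) is answered either by reusing the same sample set or by combining the cached summary with a small fresh sample.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scene model: "tree" and "bench" co-occur in scenes.
def sample_scene():
    tree = rng.random() < 0.6
    bench = rng.random() < (0.5 if tree else 0.1)
    return tree, bench

# Query 1: P(tree). Query 2 (related): P(tree or bench).
samples = [sample_scene() for _ in range(200)]      # drawn for query 1
p_tree = np.mean([t for t, b in samples])

# Amortizing the raw samples: answer query 2 by reusing the same sample set.
p_either_samples = np.mean([t or b for t, b in samples])

# Amortizing a summary statistic: keep only p_tree, then draw a small fresh
# sample for the part of query 2 not covered by the cached summary,
# using P(tree or bench) = P(tree) + P(bench and not tree).
fresh = [sample_scene() for _ in range(20)]
p_bench_not_tree = np.mean([(not t) and b for t, b in fresh])
p_either_summary = p_tree + p_bench_not_tree

print(p_tree, p_either_samples, p_either_summary)
```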


2017, Vol 124 (4), pp. 410-441
Author(s): Sean Tauber, Daniel J. Navarro, Amy Perfors, Mark Steyvers

2017
Author(s): Ishita Dasgupta, Eric Schulz, Noah D. Goodman, Samuel J. Gershman

Bayesian models of cognition posit that people compute probability distributions over hypotheses, possibly by constructing a sample-based approximation. Since people encounter many closely related distributions, a computationally efficient strategy is to selectively reuse computations, either the samples themselves or some summary statistic. We refer to these reuse strategies as amortized inference. In two experiments, we present evidence consistent with amortization. When sequentially answering two related queries about natural scenes, we show that answers to the second query vary systematically depending on the structure of the first query. Using a cognitive load manipulation, we find evidence that people cache summary statistics rather than raw sample sets. These results enrich our notions of how the brain approximates probabilistic inference.


Author(s): Joseph L. Austerweil, Samuel J. Gershman, Thomas L. Griffiths

Probability theory forms a natural framework for explaining the impressive success of people at solving many difficult inductive problems, such as learning words and categories, inferring the relevant features of objects, and identifying functional relationships. Probabilistic models of cognition use Bayes’s rule to identify probable structures or representations that could have generated a set of observations, whether the observations are sensory input or the output of other psychological processes. In this chapter we address an important question that arises within this framework: How do people infer representations that are complex enough to faithfully encode the world but not so complex that they “overfit” noise in the data? We discuss nonparametric Bayesian models as a potential answer to this question. To do so, first we present the mathematical background necessary to understand nonparametric Bayesian models. We then delve into nonparametric Bayesian models for three types of hidden structure: clusters, features, and functions. Finally, we conclude with a summary and discussion of open questions for future research.
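As a small illustration of the "clusters" case discussed in the chapter, the sketch below samples partitions from a Chinese restaurant process prior, a standard nonparametric Bayesian prior over clusterings in which the number of clusters is not fixed in advance but grows with the data. The concentration parameter and seed are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def chinese_restaurant_process(n_items, alpha=1.0):
    """Sample cluster assignments from a CRP prior: each new item joins an
    existing cluster with probability proportional to its size, or starts a
    new cluster with probability proportional to alpha."""
    assignments = []
    counts = []                                   # items per cluster
    for _ in range(n_items):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                      # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(chinese_restaurant_process(10, alpha=1.0))
```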

