Rational arbitration between statistics and rules in human sequence learning

Author(s):
Maxime Maheu, Florent Meyniel, Stanislas Dehaene

Abstract
Detecting and learning temporal regularities is essential to accurately predict the future. Past research indicates that humans are sensitive to two types of sequential regularities: deterministic rules, which afford sure predictions, and statistical biases, which govern the probabilities of individual items and their transitions. How does the human brain arbitrate between those two types? We used finger tracking to continuously monitor the online build-up of evidence, confidence, false alarms and changes-of-mind during sequence learning. All these aspects of behaviour conformed tightly to a hierarchical Bayesian inference model with distinct hypothesis spaces for statistics and rules, yet linked by a single probabilistic currency. Alternative models based either on a single statistical mechanism or on two non-commensurable systems were rejected. Our results indicate that a hierarchical Bayesian inference mechanism, capable of operating over several distinct hypothesis spaces, underlies the human capability to learn both statistics and rules.
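To make the arbitration idea concrete, here is a minimal sketch, not the authors' model (which tracks item and transition statistics and allows for change points): two toy hypothesis spaces, a Bernoulli bias with a Beta(1, 1) prior ("statistics") and deterministic repeating patterns of length up to four ("rules"), are compared in the shared currency of marginal likelihood. The prior choices, the pattern-length limit and the lapse rate are assumptions made purely for illustration.

```python
import numpy as np
from math import lgamma, log
from itertools import product

# Toy "statistics" hypothesis space: a Bernoulli bias with a uniform Beta(1, 1) prior.
def log_evidence_statistics(seq):
    n1 = sum(seq)
    n0 = len(seq) - n1
    # Beta-Bernoulli marginal likelihood under the uniform prior on the bias
    return lgamma(n1 + 1) + lgamma(n0 + 1) - lgamma(n1 + n0 + 2)

# Toy "rules" hypothesis space: deterministic repeating patterns of length <= max_len,
# each observation matching the active rule with probability 1 - eps (lapse rate eps).
def log_evidence_rules(seq, max_len=4, eps=0.01):
    logs = []
    for k in range(1, max_len + 1):
        for pattern in product([0, 1], repeat=k):
            pred = [pattern[t % k] for t in range(len(seq))]
            hits = sum(p == x for p, x in zip(pred, seq))
            logs.append(hits * log(1 - eps) + (len(seq) - hits) * log(eps))
    return np.logaddexp.reduce(logs) - log(len(logs))  # uniform prior over patterns

# Arbitration in the shared probabilistic currency: posterior probability of the
# "rules" space, assuming equal prior probability for the two hypothesis spaces.
def p_rules(seq):
    ls, lr = log_evidence_statistics(seq), log_evidence_rules(seq)
    return 1.0 / (1.0 + np.exp(ls - lr))

print(p_rules([0, 1, 0, 1, 0, 1, 0, 1, 0, 1]))  # strict alternation -> close to 1
print(p_rules([1, 1, 1, 0, 1, 1, 1, 1, 0, 1]))  # biased but irregular -> close to 0
```

Because both spaces are scored by the same quantity, a posterior probability, evidence for a rule and evidence for a statistical bias remain directly comparable, which is the "single probabilistic currency" the abstract refers to.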

2019
Vol. 15 (6), pp. e1007043
Author(s):
Payam Piray, Amir Dezfouli, Tom Heskes, Michael J. Frank, Nathaniel D. Daw

2018
Author(s):
Megan A. K. Peters, Ling-Qi Zhang, Ladan Shams

The material-weight illusion (MWI) is one example in a class of weight perception illusions that seem to defy principled explanation. In this illusion, when an observer lifts two objects of the same size and mass, but that appear to be made of different materials, the denser-looking (e.g., metal-look) object is perceived as lighter than the less-dense-looking (e.g., polystyrene-look) object. Like the size-weight illusion (SWI), this perceptual illusion occurs in the opposite direction of predictions from an optimal Bayesian inference process, which predicts that the denser-looking object should be perceived as heavier, not lighter. The presence of this class of illusions challenges the often-tacit assumption that Bayesian inference holds universal explanatory power to describe human perception across (nearly) all domains: If an entire class of perceptual illusions cannot be captured by the Bayesian framework, how could it be argued that human perception truly follows optimal inference? However, we recently showed that the SWI can be explained by an optimal hierarchical Bayesian causal inference process (Peters, Ma & Shams, 2016) in which the observer uses haptic information to arbitrate among competing hypotheses about objects’ possible density relationship. Here we extend the model to demonstrate that it can readily explain the MWI as well. That hierarchical Bayesian inference can explain both illusions strongly suggests that even puzzling percepts arise from optimal inference processes.
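The arbitration step described in this abstract can be illustrated with a few lines of Bayesian bookkeeping. The sketch below is not the Peters, Ma & Shams (2016) model itself: the three density-relation hypotheses, their priors, the haptic noise level and the observed weight difference are invented for illustration, and the published model goes further by mapping the resulting posterior back onto perceived heaviness.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Competing hypotheses about the density relation between object A (metal-look)
# and object B (polystyrene-look); mean_diff is the haptic weight difference
# wA - wB that each hypothesis would lead the observer to expect for same-sized objects.
hypotheses = {
    "A denser than B": dict(prior=0.70, mean_diff=+0.5),  # favoured by visual appearance
    "equal density":   dict(prior=0.20, mean_diff=0.0),
    "B denser than A": dict(prior=0.10, mean_diff=-0.5),
}
sigma_haptic = 0.15   # assumed haptic noise (arbitrary units)
observed_diff = 0.0   # the two objects in fact weigh the same

# Posterior over hypotheses: P(H | x) is proportional to P(x | H) P(H)
unnorm = {name: h["prior"] * gauss_pdf(observed_diff, h["mean_diff"], sigma_haptic)
          for name, h in hypotheses.items()}
z = sum(unnorm.values())
posterior = {name: p / z for name, p in unnorm.items()}

for name, p in posterior.items():
    print(f"{name}: {p:.3f}")
# Haptic evidence of equal weights overturns the visually favoured hypothesis.
```

The point of the sketch is only the arbitration itself: haptic data reweight hypotheses about the objects' density relationship, and the percept is then read out from the resulting posterior rather than from vision or touch alone.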


2018
Author(s):
Payam Piray, Amir Dezfouli, Tom Heskes, Michael J. Frank, Nathaniel D. Daw

Abstract
Computational modeling plays an important role in modern neuroscience research. Much previous research has relied on separate statistical methods to address two problems that are actually interdependent. First, given a particular computational model, Bayesian hierarchical techniques have been used to estimate individual variation in parameters over a population of subjects, leveraging their population-level distributions. Second, candidate models are themselves compared, and individual variation in the expressed model estimated, according to the fits of the models to each subject. The interdependence between these two problems arises because the relevant population for estimating the parameters of a model depends on which other subjects express that model. Here, we propose a hierarchical Bayesian inference (HBI) framework for concurrent model comparison, parameter estimation and inference at the population level, combining previous approaches. We show that this framework has important theoretical and empirical advantages for both parameter estimation and model comparison. Parameters estimated by HBI show smaller errors than those obtained with other methods. Model comparison by HBI is robust against outliers and is not biased towards overly simplistic models. Furthermore, the fully Bayesian approach of HBI enables researchers to quantify uncertainty in group parameter estimates for each candidate model separately, and to perform statistical tests on parameters of a population.
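The interdependence noted here can be made concrete with a small sketch: each subject contributes to a model's group-level parameter estimates in proportion to the posterior probability ("responsibility") that this subject expresses that model. The inputs below are simulated placeholders rather than outputs of the authors' HBI method, which iterates such steps within a variational Bayesian scheme and additionally quantifies uncertainty in the group estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_models = 20, 2

# Placeholder per-subject log model evidence (in practice, e.g. an approximation
# to each subject's marginal likelihood under each candidate model).
log_evidence = rng.normal(loc=[0.0, -1.0], scale=2.0, size=(n_subjects, n_models))

# Placeholder per-subject parameter estimates under each model (one parameter per model here).
theta = rng.normal(loc=[0.3, 1.2], scale=0.4, size=(n_subjects, n_models))

# Responsibilities: posterior probability that each subject expresses each model
# (softmax of log evidence, assuming a uniform prior over models).
r = np.exp(log_evidence - log_evidence.max(axis=1, keepdims=True))
r /= r.sum(axis=1, keepdims=True)

# Each subject contributes to a model's group-level estimate in proportion to its responsibility,
# so the "relevant population" for a model's parameters is itself inferred from the data.
group_mean = (r * theta).sum(axis=0) / r.sum(axis=0)
model_frequency = r.mean(axis=0)   # estimated prevalence of each model in the population

print("estimated model frequencies:", model_frequency)
print("responsibility-weighted group parameter means:", group_mean)
```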

