The Size-Weight Illusion is not anti-Bayesian after all: a unifying Bayesian account

PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2124 ◽  
Author(s):  
Megan A.K. Peters ◽  
Wei Ji Ma ◽  
Ladan Shams

When we lift two differently-sized but equally-weighted objects, we expect the larger to be heavier, but the smaller feels heavier. However, traditional Bayesian approaches with “larger is heavier” priors predict the smaller object should feel lighter; this Size-Weight Illusion (SWI) has thus been labeled “anti-Bayesian” and has stymied psychologists for generations. We propose that previous Bayesian approaches neglect the brain’s inference process about density. In our Bayesian model, objects’ perceived heaviness relationship is based on both their size and inferred density relationship: observers evaluate competing, categorical hypotheses about objects’ relative densities, the inference about which is then used to produce the final estimate of weight. The model can qualitatively and quantitatively reproduce the SWI and explain other researchers’ findings, and also makes a novel prediction, which we confirmed. This same computational mechanism accounts for other multisensory phenomena and illusions; that the SWI follows the same process suggests that competitive-prior Bayesian inference can explain human perception across many domains.
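The hypothesis-arbitration step described above can be sketched in a few lines. This is a Monte Carlo illustration only: the lognormal density prior, the noise level `sigma`, and the three-hypothesis set are assumptions for the sketch, not the authors' fitted model.

```python
import numpy as np

# Monte Carlo sketch of arbitration among categorical hypotheses about the
# two objects' relative densities: each hypothesis is scored by how well
# densities consistent with it explain the noisy haptic weight measurements.
rng = np.random.default_rng(0)

def density_hypothesis_posterior(s1, s2, w1, w2, sigma=0.2, n=200_000):
    d1 = rng.lognormal(0.0, 0.5, n)          # candidate density, object 1
    d2 = rng.lognormal(0.0, 0.5, n)          # candidate density, object 2
    # Gaussian likelihood of each measured weight given density * size
    lik = (np.exp(-(w1 - d1 * s1) ** 2 / (2 * sigma ** 2)) *
           np.exp(-(w2 - d2 * s2) ** 2 / (2 * sigma ** 2)))
    close = np.isclose(d1, d2, rtol=0.05)
    scores = np.array([lik[(d1 > d2) & ~close].mean(),  # smaller object denser
                       lik[close].mean(),               # densities ~equal
                       lik[(d1 < d2) & ~close].mean()]) # larger object denser
    return scores / scores.sum()

# Equal measured weights, object 1 half the volume of object 2: the
# "smaller object is denser" hypothesis should dominate.
p = density_hypothesis_posterior(s1=0.5, s2=1.0, w1=1.0, w2=1.0)
```

In the full model this density inference then feeds the final weight estimate; the sketch stops at the posterior over the categorical hypotheses.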


PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e5760 ◽  
Author(s):  
Megan A.K. Peters ◽  
Ling-Qi Zhang ◽  
Ladan Shams

The material-weight illusion (MWI) is one example in a class of weight perception illusions that seem to defy principled explanation. In this illusion, when an observer lifts two objects of the same size and mass, but that appear to be made of different materials, the denser-looking (e.g., metal-look) object is perceived as lighter than the less-dense-looking (e.g., polystyrene-look) object. Like the size-weight illusion (SWI), this perceptual illusion occurs in the opposite direction of predictions from an optimal Bayesian inference process, which predicts that the denser-looking object should be perceived as heavier, not lighter. The presence of this class of illusions challenges the often-tacit assumption that Bayesian inference holds universal explanatory power to describe human perception across (nearly) all domains: If an entire class of perceptual illusions cannot be captured by the Bayesian framework, how could it be argued that human perception truly follows optimal inference? However, we recently showed that the SWI can be explained by an optimal hierarchical Bayesian causal inference process (Peters, Ma & Shams, 2016) in which the observer uses haptic information to arbitrate among competing hypotheses about objects’ possible density relationship. Here we extend the model to demonstrate that it can readily explain the MWI as well. That hierarchical Bayesian inference can explain both illusions strongly suggests that even puzzling percepts arise from optimal inference processes.


2018 ◽  
Author(s):  
Elina Stengård ◽  
Ronald van den Berg

Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise, by testing subjects under both short and unlimited display times. On average, empirical performance – measured as d’ – fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This “imperfect Bayesian” model convincingly outperformed the “flawless Bayesian” model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.

Author summary
The main task of perceptual systems is to make truthful inferences about the environment. The sensory input to these systems is often astonishingly imprecise, which makes human perception prone to error. Nevertheless, numerous studies have reported that humans often perform as accurately as is possible given these sensory imprecisions. This suggests that the brain makes optimal use of the sensory input and computes without error. The validity of this claim has recently been questioned for two reasons. First, it has been argued that a lot of the evidence for optimality comes from studies that used overly flexible models. Second, optimality in human perception is implausible due to limitations inherent to neural systems. In this study, we reconsider optimality in a standard visual perception task by devising a research method that addresses both concerns. In contrast to previous studies, we find clear indications of suboptimalities. Our data are best explained by a model that is based on the optimal decision strategy, but with imperfections in its execution.
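The "flawless vs. imperfect Bayesian" contrast can be illustrated with a single-display simplification of such a detection task. The distribution means, noise levels, and the late (decision-stage) locus of the imperfection are assumptions for this sketch, not the paper's fitted parameters.

```python
from statistics import NormalDist
import numpy as np

# An optimal observer says "target present" when the log-likelihood ratio
# (LLR) is positive; the "imperfect Bayesian" adds Gaussian noise to that
# computed LLR. Performance is summarized as d' = z(hit) - z(false alarm).
MU_T, MU_D, SIG_EXT, SIG_SENS = 10.0, 0.0, 4.0, 3.0
z = NormalDist().inv_cdf

def dprime(decision_noise, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    present = rng.random(n) < 0.5
    stim = np.where(present, rng.normal(MU_T, SIG_EXT, n),
                    rng.normal(MU_D, SIG_EXT, n))
    x = stim + rng.normal(0.0, SIG_SENS, n)               # noisy measurement
    s2 = SIG_EXT ** 2 + SIG_SENS ** 2
    llr = ((x - MU_D) ** 2 - (x - MU_T) ** 2) / (2 * s2)  # optimal statistic
    llr += rng.normal(0.0, decision_noise, n)             # computational noise
    yes = llr > 0
    return z(yes[present].mean()) - z(yes[~present].mean())

d_opt = dprime(0.0)              # flawless Bayesian
d_imp = dprime(1.5)              # imperfect Bayesian
shortfall = 1.0 - d_imp / d_opt  # fractional d' shortfall from optimal
```

With these illustrative settings the imperfect observer uses the optimal decision rule yet falls measurably short of optimal d', mirroring the paper's middle-ground view.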


2011 ◽  
Author(s):  
Sharat Chikkerur ◽  
Thomas Serre ◽  
Cheston Tan ◽  
Tomaso Poggio

2021 ◽  
Author(s):  
Joseph M Barnby ◽  
Nichola Raihani ◽  
Peter Dayan

To benefit from social interactions, people need to predict how their social partners will behave. Such predictions arise through integrating prior expectations with evidence from observations, but where the priors come from and whether they influence the integration is not clear. Furthermore, this process can be affected by factors such as paranoia, in which the tendency to form biased impressions of others is common. Using a modified social value orientation (SVO) task in a large online sample (n=697), we showed that participants used a Bayesian inference process to learn about partners, with priors that were based on their own preferences. Paranoia was associated with preferences for earning more than a partner and less flexible beliefs regarding a partner’s social preferences. Alignment between the preferences of participants and their partners was associated with better predictions and with reduced attributions of harmful intent to partners.
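A grid-based version of this belief-updating process can be sketched as follows. The preference parameterization (a weight on the partner's payoff), the logistic choice rule, the allocation options, and all numeric settings are hypothetical illustrations, not the study's fitted model.

```python
import numpy as np

# The partner's social preference is a weight `a` on the other's payoff in
# u(self, other) = self + a * other. The prior over `a` is centred on the
# participant's own preference; each observed choice updates the posterior.
alphas = np.linspace(-1.0, 1.0, 201)

def update(prior, chosen, rejected, beta=3.0):
    u = lambda opt: opt[0] + alphas * opt[1]      # utility of (self, other)
    # logistic likelihood of picking `chosen` over `rejected` under each a
    like = 1.0 / (1.0 + np.exp(-beta * (u(chosen) - u(rejected))))
    post = prior * like
    return post / post.sum()

own_alpha = 0.0                                   # participant's own weight
prior = np.exp(-(alphas - own_alpha) ** 2 / (2 * 0.3 ** 2))
prior /= prior.sum()

# Partner repeatedly prefers the prosocial split (8, 8) over (10, 2):
belief = prior
for _ in range(5):
    belief = update(belief, chosen=(8, 8), rejected=(10, 2))

map_alpha = alphas[np.argmax(belief)]             # belief shifts prosocial
```

A narrower prior here would play the role the abstract assigns to paranoia: the same evidence would move the belief less.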


2019 ◽  
Author(s):  
Mark Andrews

The study of memory for texts has a long tradition of research in psychology. According to most general accounts, the recognition or recall of items in a text is based on querying a memory representation that is built up on the basis of background knowledge. The objective of this paper is to describe and thoroughly test a Bayesian model of these general accounts. In particular, we present a model that describes how we use our background knowledge to form memories in terms of Bayesian inference of statistical patterns in the text, followed by posterior predictive inference of the words that are typical of those inferred patterns. This provides us with precise predictions about which words will be remembered, whether veridically or erroneously, from any given text. We tested these predictions using behavioural data from a memory experiment using a large sample of randomly chosen texts from a representative corpus of British English. The results show that the probability of remembering any given word in the text, whether falsely or veridically, is well predicted by the Bayesian model. Moreover, compared to nontrivial alternative models of text memory, by every measure used in the analyses, the predictions of the Bayesian model were superior, often overwhelmingly so. We conclude that these results provide strong evidence in favour of the Bayesian account of text memory that we have presented in this paper.
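The two-step account (infer the text's statistical pattern, then predict typical words) can be shown with a toy topic mixture. The topics, vocabulary, and probabilities below are invented for illustration; the paper's model infers patterns from a large corpus rather than from hand-built topics.

```python
import numpy as np

# Step 1: infer which topic generated the text (posterior over topics).
# Step 2: posterior predictive p(word) scores every word's memorability,
# including words that never appeared -- the false-memory candidates.
vocab = ["doctor", "nurse", "hospital", "ball", "goal", "referee"]
topics = np.array([
    [0.30, 0.30, 0.30, 0.03, 0.03, 0.04],   # a "medical" topic
    [0.03, 0.03, 0.04, 0.30, 0.30, 0.30],   # a "football" topic
])
text = ["doctor", "hospital", "doctor"]

log_lik = np.array([sum(np.log(t[vocab.index(w)]) for w in text)
                    for t in topics])
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                # p(topic | text), uniform topic prior

pred = post @ topics              # posterior predictive p(word | text)
# "nurse" never appeared in the text, yet it is highly predicted -- the
# model marks it as a likely false memory; "ball" is predicted forgettable.
```

The model thus predicts veridical and erroneous recall with the same quantity: the posterior predictive probability of each word.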


2021 ◽  
Author(s):  
Dmytro Perepolkin ◽  
Benjamin Goodrich ◽  
Ullrika Sahlin

This paper extends the application of indirect Bayesian inference to probability distributions defined in terms of quantiles of the observable quantities. Quantile-parameterized distributions are characterized by high shape flexibility and interpretability of their parameters, and are therefore useful for elicitation on observables. To encode uncertainty in the quantiles elicited from experts, we propose a Bayesian model based on the metalog distribution and a version of the Dirichlet prior. The resulting “hybrid” expert elicitation protocol for characterizing uncertainty in parameters using questions about the observable quantities is discussed and contrasted to parametric and predictive elicitation.
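The basic quantile-parameterized building block here, the metalog, is fitted to elicited quantiles by ordinary least squares on a logit basis. The sketch below shows that basic construction with a 3-term metalog and hypothetical expert answers; the Dirichlet layer the abstract adds on top of the elicited quantiles is not shown.

```python
import numpy as np

# Fit a 3-term metalog Q(y) = a0 + a1*logit(y) + a2*(y - 0.5)*logit(y)
# to five elicited (probability, quantile) pairs via linear least squares.
probs = np.array([0.10, 0.25, 0.50, 0.75, 0.90])   # elicitation probabilities
quants = np.array([4.0, 6.0, 9.0, 13.0, 18.0])     # hypothetical expert answers

logit = np.log(probs / (1.0 - probs))
basis = np.column_stack([np.ones_like(probs), logit, (probs - 0.5) * logit])
a, *_ = np.linalg.lstsq(basis, quants, rcond=None)

def metalog_quantile(y):
    l = np.log(y / (1.0 - y))
    return a[0] + a[1] * l + a[2] * (y - 0.5) * l

median = metalog_quantile(0.5)   # equals a[0], close to the elicited 9.0
```

Because the coefficients enter linearly, the fit is a one-line least-squares solve, which is what makes the metalog convenient inside a larger Bayesian elicitation model.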


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 890
Author(s):  
Sergey Oladyshkin ◽  
Farid Mohammadi ◽  
Ilja Kroeker ◽  
Wolfgang Nowak

Gaussian process emulators (GPE) are a machine learning approach that replicates computationally demanding models using training runs of that model. Constructing such a surrogate is very challenging and, in the context of Bayesian inference, the training runs should be well invested. The current paper offers a fully Bayesian view on GPEs for Bayesian inference accompanied by Bayesian active learning (BAL). We introduce three BAL strategies that adaptively identify training sets for the GPE using information-theoretic arguments. The first strategy relies on Bayesian model evidence that indicates the GPE’s quality of matching the measurement data, the second strategy is based on relative entropy that indicates the relative information gain for the GPE, and the third is founded on information entropy that indicates the missing information in the GPE. We illustrate the performance of our three strategies using analytical and carbon-dioxide benchmarks. The paper shows evidence of convergence against a reference solution and demonstrates quantification of post-calibration uncertainty by comparing the introduced three strategies. We conclude that Bayesian model evidence-based and relative entropy-based strategies outperform the entropy-based strategy because the latter can be misleading during the BAL. The relative entropy-based strategy demonstrates superior performance to the Bayesian model evidence-based strategy.
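The overall BAL loop (emulate, score candidate inputs by an information criterion, run the expensive model where the criterion is largest, refit) can be sketched generically. The sketch below uses predictive-variance (entropy-style) acquisition as a simple stand-in for the three strategies compared in the paper; the RBF kernel, toy "expensive model", and all settings are assumptions.

```python
import numpy as np

# Minimal numpy-only GP emulator with an RBF kernel, plus a BAL loop that
# adds the candidate input where the emulator is most uncertain.
def expensive_model(x):
    return np.sin(3.0 * x) + 0.5 * x          # stand-in for the simulator

def gp_posterior(X, y, Xs, ell=0.3, jitter=1e-6):
    k = lambda A, B: np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * ell ** 2))
    K = k(X, X) + jitter * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.maximum(var, 0.0)

cand = np.linspace(0.0, 2.0, 100)             # candidate training inputs
X = np.array([0.0, 1.0, 2.0])                 # initial training runs
y = expensive_model(X)

mean0, _ = gp_posterior(X, y, cand)
rmse0 = np.sqrt(np.mean((mean0 - expensive_model(cand)) ** 2))

for _ in range(5):                            # BAL: add most uncertain input
    _, var = gp_posterior(X, y, cand)
    x_new = cand[np.argmax(var)]
    X = np.append(X, x_new)
    y = np.append(y, expensive_model(x_new))

mean, _ = gp_posterior(X, y, cand)
rmse = np.sqrt(np.mean((mean - expensive_model(cand)) ** 2))
```

The paper's evidence-based and relative-entropy criteria would replace the `np.argmax(var)` line, scoring candidates by their expected effect on the Bayesian inference rather than by emulator uncertainty alone.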


2011 ◽  
Vol 128-129 ◽  
pp. 637-641
Author(s):  
Lan Luo ◽  
Qiong Hai Dai ◽  
Chun Xiang Xu ◽  
Shao Quan Jiang

Cipher algorithms are categorized as block ciphers, stream ciphers, and hashes, and they are weighed under faithful transmission, known as the independent condition. In faithful transmission, ciphers are studied in terms of their root cipher. Intelligent application of ciphers is a direction that uses the Bayesian model from cognitive science. Bayesian inference is a rational engine for solving such problems within a probabilistic framework, and is consequently at the heart of most probabilistic models for weighing the ciphers. The approach of this paper is to rank ciphers, considered as suitable weight ciphers for various kinds of networks, based on their root ciphers. The paper also shows other kinds of transformation among the different cipher algorithms themselves.

