A recurrent cortical model can parsimoniously explain the effect of expectations on sensory processes

2021
Author(s):
Buse M. Urgen,
Huseyin Boyaci

The effect of prior knowledge and expectations on perceptual and decision-making processes has been extensively studied. Yet the computational mechanisms underlying those effects remain controversial. Recently, using a recursive Bayesian updating scheme, unmet expectations were shown to entail further computations and consequently to delay perceptual processes. Here we take a step further and model these empirical findings with a recurrent cortical model that was previously suggested to approximate Bayesian inference (Heeger, 2017). Our model-fitting results show that the cortical model successfully predicts the behavioral effects of expectation. That is, when the actual sensory input does not match the expectation, the sensory process must be completed with additional, and consequently longer, computations. We suggest that this process underlies the delayed perceptual thresholds observed with unmet expectations. Overall, our findings demonstrate that a parsimonious recurrent cortical model can explain the effects of expectation on sensory processes.
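As a hedged illustration of the recursive-updating idea (not the authors' cortical model or their fitted parameters), the intuition can be sketched in a few lines of Python; the priors, likelihoods, and 0.95 decision threshold below are arbitrary choices:

```python
import numpy as np

def recursive_bayes(prior, likelihoods, threshold=0.95):
    # Repeatedly apply Bayes' rule, feeding each posterior back in as the
    # next prior, until one hypothesis crosses the decision threshold.
    # The number of updates stands in for processing time.
    posterior = np.asarray(prior, dtype=float)
    steps = 0
    while posterior.max() < threshold:
        posterior = posterior * np.asarray(likelihoods)  # unnormalized update
        posterior /= posterior.sum()                     # renormalize
        steps += 1
    return steps

# Expectation matches the input: the prior already favors the true hypothesis.
matched = recursive_bayes(prior=[0.8, 0.2], likelihoods=[0.7, 0.3])
# Expectation violated: the prior favors the wrong hypothesis, so more
# recursive updates are needed before the evidence overturns it.
unmatched = recursive_bayes(prior=[0.2, 0.8], likelihoods=[0.7, 0.3])
assert unmatched > matched
```

The extra iterations in the mismatched case mirror the delayed perceptual processing the abstract describes.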

2017
Vol 14 (134)
pp. 20170340
Author(s):
Aidan C. Daly,
Jonathan Cooper,
David J. Gavaghan,
Chris Holmes

Bayesian methods are advantageous for biological modelling studies due to their ability to quantify and characterize posterior variability in model parameters. When Bayesian methods cannot be applied, due either to non-determinism in the model or limitations on system observability, approximate Bayesian computation (ABC) methods can be used to similar effect, despite producing inflated estimates of the true posterior variance. Owing to generally differing application domains, there are few studies comparing Bayesian and ABC methods, and thus there is little understanding of the properties and magnitude of this uncertainty inflation. To address this problem, we present two popular strategies for ABC sampling that we have adapted to perform exact Bayesian inference, and compare them on several model problems. We find that one sampler was impractical for exact inference due to its sensitivity to a key normalizing constant, and additionally highlight sensitivities of both samplers to various algorithmic parameters and model conditions. We conclude with a study of the O'Hara–Rudy cardiac action potential model to quantify, over a set of clinically relevant biomarkers, the uncertainty amplification resulting from employing ABC. We hope that this work serves to guide the implementation and comparative assessment of Bayesian and ABC sampling techniques in biological models.
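A minimal rejection-ABC sketch (a toy Gaussian-mean problem, not the study's samplers or the O'Hara–Rudy model; the prior, tolerances, and sample sizes below are illustrative assumptions) shows how the acceptance tolerance inflates posterior spread:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, eps, n_samples=500):
    # Rejection ABC for the mean of a Gaussian with known sd = 1.
    # Prior: mu ~ N(0, 5).  A candidate mu is accepted when the simulated
    # sample mean lands within eps of the observed sample mean.
    target = observed.mean()
    accepted = []
    while len(accepted) < n_samples:
        mu = rng.normal(0.0, 5.0)
        sim_mean = rng.normal(mu, 1.0, size=observed.size).mean()
        if abs(sim_mean - target) < eps:
            accepted.append(mu)
    return np.array(accepted)

data = rng.normal(2.0, 1.0, size=20)
loose = abc_rejection(data, eps=1.0)
tight = abc_rejection(data, eps=0.1)

# The looser tolerance yields a visibly wider approximate posterior,
# illustrating the variance inflation the study quantifies.
assert loose.std() > tight.std()
```

Shrinking `eps` moves the ABC posterior toward the exact one at the cost of many more rejected simulations, which is the efficiency/accuracy trade-off the samplers in the study navigate.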


2016
Vol 27 (4)
pp. 1003-1040
Author(s):
Andrej Aderhold,
Dirk Husmeier,
Marco Grzegorczyk

2020
Vol 34 (06)
pp. 10251-10258
Author(s):
Tom Silver,
Kelsey R. Allen,
Alex K. Lew,
Leslie Pack Kaelbling,
Josh Tenenbaum

Humans can learn many novel tasks from a very small number (1–5) of demonstrations, in stark contrast to the data requirements of nearly tabula rasa deep learning methods. We propose an expressive class of policies, a strong but general prior, and a learning algorithm that, together, can learn interesting policies from very few examples. We represent policies as logical combinations of programs drawn from a domain-specific language (DSL), define a prior over policies with a probabilistic grammar, and derive an approximate Bayesian inference algorithm to learn policies from demonstrations. In experiments, we study six strategy games played on a 2D grid with one shared DSL. After a few demonstrations of each game, the inferred policies generalize to new game instances that differ substantially from the demonstrations. Our policy learning is 20–1,000× more data-efficient than convolutional and fully convolutional policy learning, and many orders of magnitude more computationally efficient than vanilla program induction. We argue that the proposed method is an apt choice for tasks that have scarce training data and feature significant, structured variation between task instances.
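A toy sketch of the approach, assuming a drastically simplified hypothetical DSL and grammar (the paper's DSL, policy class, and inference algorithm are far richer): programs are sampled from a probabilistic grammar acting as the prior, and a 0/1 demonstration likelihood turns likelihood weighting into rejection sampling from the posterior:

```python
import random

random.seed(0)

# Hypothetical toy grammar: every policy is "if COND then ACT else ACT".
GRAMMAR = {
    "POLICY": [("if COND then ACT else ACT", 1.0)],
    "COND":   [("at_wall", 0.5), ("at_goal", 0.5)],
    "ACT":    [("move", 0.5), ("turn", 0.3), ("stay", 0.2)],
}

def sample(symbol="POLICY"):
    # Recursively expand non-terminals; the grammar weights define the prior.
    if symbol not in GRAMMAR:
        return symbol
    expansions = GRAMMAR[symbol]
    choice = random.choices([e for e, _ in expansions],
                            [w for _, w in expansions])[0]
    return " ".join(sample(tok) for tok in choice.split())

def run(program, obs):
    # Minimal interpreter for the toy DSL above.
    _, cond, _, act_true, _, act_false = program.split()
    return act_true if obs[cond] else act_false

def posterior_sample(demos, n=5000):
    # Sample programs from the prior and keep those consistent with all
    # demonstrations: a 0/1 likelihood makes this rejection sampling.
    return [p for p in (sample() for _ in range(n))
            if all(run(p, obs) == act for obs, act in demos)]

demos = [({"at_wall": True,  "at_goal": False}, "turn"),
         ({"at_wall": False, "at_goal": False}, "move")]
programs = posterior_sample(demos)
# Only one program in this tiny DSL explains both demonstrations.
assert set(programs) == {"if at_wall then turn else move"}
```

Two demonstrations suffice here because the grammar prior confines the hypothesis space to a handful of programs, which is the data-efficiency argument the abstract makes at scale.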

