Exact Bayesian Inference
Recently Published Documents


TOTAL DOCUMENTS: 18 (five years: 4)

H-INDEX: 6 (five years: 1)

2021 ◽  
Author(s):  
Darío Cuevas Rivera ◽  
Stefan J. Kiebel

Humans have been shown to adapt their movements when a sudden change to the dynamics of the environment is introduced, a phenomenon called motor adaptation. If the change is reverted, the adaptation is also quickly reverted. Humans are also able to adapt to multiple changes in dynamics presented separately, and to switch between the adapted movements on the fly. Such switching relies on contextual information, which is often noisy or misleading and therefore affects the switch between adaptations. In this work, we introduce a computational model to explain the behavioral phenomena caused by uncertain contextual information. Specifically, we present a hierarchical model of motor adaptation based on exact Bayesian inference. This model explicitly takes into account contextual information and how the dynamics of context inference affect adaptation and action selection. We show how the proposed model provides a unifying explanation for four experimentally established phenomena: (i) the effects of sensory cues and proprioceptive information on switching between tasks, (ii) the effects of previously learned adaptations on switching between tasks, (iii) the effects of training history on behavior in new contexts, and (iv) the well-studied savings, de-adaptation, and spontaneous recovery.
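The core operation such a hierarchical model relies on is an exact Bayesian update of a discrete posterior over contexts given a noisy cue. The following is a minimal sketch of that single update step, not the paper's full model; the two-context setup, function name, and numbers are illustrative assumptions.

```python
import numpy as np

def update_context_posterior(prior, cue_likelihoods):
    """Exact Bayes rule for a discrete posterior over contexts.

    prior: array of P(context) before observing the cue.
    cue_likelihoods: array of P(observed cue | context).
    """
    unnormalized = prior * cue_likelihoods
    return unnormalized / unnormalized.sum()

# Two candidate contexts; a noisy sensory cue favors the second one.
prior = np.array([0.5, 0.5])
likelihoods = np.array([0.2, 0.8])  # P(observed cue | context)
posterior = update_context_posterior(prior, likelihoods)
# posterior -> [0.2, 0.8]
```

Repeating this update as cues arrive lets the inferred context, and hence the selected adaptation, switch on the fly while remaining sensitive to cue reliability.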


2021 ◽  
pp. 1-72
Author(s):  
Vasiliki Liakoni ◽  
Alireza Modirshanechi ◽  
Wulfram Gerstner ◽  
Johanni Brea

Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.
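As the abstract describes it, the Bayes Factor Surprise is a ratio of how probable the new observation is under the prior belief versus under the current belief. A minimal sketch of that ratio for a Gaussian mean with known observation noise follows; the Gaussian setting, function names, and parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def gaussian_pdf(y, mean, var):
    """Density of N(mean, var) evaluated at y."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_factor_surprise(y, prior, belief, obs_var):
    """Probability of observation y under the prior belief divided by its
    probability under the current belief; each belief is a (mean, var) pair
    over the latent mean, so the predictive variance adds obs_var.
    Values above 1 indicate the prior explains y better, favoring forgetting."""
    p_prior = gaussian_pdf(y, prior[0], prior[1] + obs_var)
    p_belief = gaussian_pdf(y, belief[0], belief[1] + obs_var)
    return p_prior / p_belief

# After a hidden change, an observation far from the (confident) current
# belief but plausible under the broad prior yields a large surprise.
s_far = bayes_factor_surprise(y=3.0, prior=(0.0, 4.0), belief=(0.0, 0.1), obs_var=1.0)
# An observation well explained by the current belief yields surprise < 1.
s_near = bayes_factor_surprise(y=0.1, prior=(0.0, 4.0), belief=(0.0, 0.1), obs_var=1.0)
```

In a surprise-modulated learning rule, this ratio would scale the weight placed on forgetting old observations relative to integrating them with the new one.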


2018 ◽  
Vol 34 (21) ◽  
pp. 3638-3645 ◽  
Author(s):  
Kris V Parag ◽  
Oliver G Pybus

2017 ◽  
Vol 52 (1) ◽  
pp. 130-144 ◽  
Author(s):  
Chung-chieh Shan ◽  
Norman Ramsey

Cell Systems ◽  
2016 ◽  
Vol 3 (5) ◽  
pp. 480-490.e13 ◽  
Author(s):  
Justin Feigelman ◽  
Stefan Ganscha ◽  
Simon Hastreiter ◽  
Michael Schwarzfischer ◽  
Adam Filipczyk ◽  
...  
