Bayesian Inference on the Brain: Bayesian Solutions to Selected Problems in Neuroimaging

Author(s): John Aston, Adam Johansen

2019
Author(s): Wolfgang M. Pauli, Matt Jones

Adaptive behavior in even the simplest decision-making tasks requires predicting future events in an environment that is generally nonstationary. As an inductive problem, this prediction requires a commitment to the statistical process underlying environmental change. This challenge can be formalized in a Bayesian framework as a question of choosing a generative model for the task dynamics. Previous learning models assume, implicitly or explicitly, that nonstationarity follows either a continuous diffusion process or a discrete changepoint process. Each approach is slow to adapt when its assumptions are violated. A new mixture-of-Bayesian-experts framework proposes separable brain systems approximating inference under different assumptions regarding the statistical structure of the environment. This model explains data from a laboratory foraging task in which rats experienced a change in reward contingencies after pharmacological disruption of the dorsolateral (DLS) or dorsomedial striatum (DMS). The data and model suggest that DLS learns under a diffusion prior whereas DMS learns under a changepoint prior. The combination of these two systems offers a new explanation for how the brain handles inference in an uncertain environment.

One Sentence Summary: Adaptive foraging behavior can be explained by separable brain systems approximating Bayesian inference under different assumptions about the dynamics of the environment.
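The contrast between the two priors can be illustrated with a minimal sketch. This is not the authors' model; it is an assumed simplification in which a reward rate is tracked either by a Kalman filter under a diffusion prior or by a running estimate that resets when an observation is too surprising, a crude stand-in for a changepoint prior. All parameter values are illustrative.

```python
import numpy as np

def diffusion_learner(obs, drift_var=0.05, obs_var=1.0):
    """Kalman filter: the reward rate is assumed to drift continuously (diffusion prior)."""
    mu, var, est = 0.0, 1.0, []
    for y in obs:
        var += drift_var                 # diffusion inflates uncertainty every step
        k = var / (var + obs_var)        # Kalman gain
        mu += k * (y - mu)
        var *= 1.0 - k
        est.append(mu)
    return np.array(est)

def changepoint_learner(obs, obs_var=1.0, z_thresh=3.0):
    """Running Gaussian estimate that restarts on a surprising observation
    (a crude approximation to inference under a changepoint prior)."""
    mu, var, est = 0.0, 1.0, []
    for y in obs:
        if abs(y - mu) / np.sqrt(var + obs_var) > z_thresh:
            mu, var = y, 1.0             # inferred changepoint: restart the estimate
        k = var / (var + obs_var)
        mu += k * (y - mu)
        var *= 1.0 - k
        est.append(mu)
    return np.array(est)
```

After an abrupt change in reward rate, the changepoint learner snaps to the new value while the diffusion learner converges gradually; under slow drift the ranking reverses, which is the complementarity the mixture framework exploits.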


2008
Vol 100 (6), pp. 2981-2996
Author(s): Paul R. MacNeilage, Narayan Ganesan, Dora E. Angelaki

Spatial orientation is the sense of body orientation and self-motion relative to the stationary environment, fundamental to normal waking behavior and control of everyday motor actions including eye movements, postural control, and locomotion. The brain achieves spatial orientation by integrating visual, vestibular, and somatosensory signals. Over recent years, considerable progress has been made toward understanding how these signals are processed by the brain using multiple computational approaches that include frequency domain analysis, the concept of internal models, observer theory, Bayesian theory, and Kalman filtering. Here we put these approaches in context by examining the specific questions that can be addressed by each technique and some of the scientific insights that have resulted. We conclude with a recent application of particle filtering, a probabilistic simulation technique that aims to generate the most likely state estimates by incorporating internal models of sensor dynamics and physical laws, the noise associated with sensory processing, and prior knowledge or experience. In this framework, priors for low angular velocity and linear acceleration can explain the phenomena of velocity storage and frequency segregation, both of which have previously been modeled using arbitrary low-pass filtering. How Kalman and particle filters may be implemented by the brain is an emerging field of study. Unlike past neurophysiological research that has aimed to characterize the mean responses of single neurons, investigations of dynamic Bayesian inference should attempt to characterize population activities that constitute probabilistic representations of sensory and prior information.
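The particle-filtering idea can be sketched with a minimal bootstrap filter for a one-dimensional random-walk state observed through Gaussian noise. This is a deliberately simplified stand-in for the self-motion models discussed above, not the authors' implementation; all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(obs, n_particles=2000, proc_std=0.1, obs_std=0.5):
    """Bootstrap particle filter: random-walk state dynamics, Gaussian sensor noise."""
    parts = rng.standard_normal(n_particles)   # samples from the initial prior
    estimates = []
    for y in obs:
        parts = parts + proc_std * rng.standard_normal(n_particles)  # propagate via internal model
        w = np.exp(-0.5 * ((y - parts) / obs_std) ** 2)              # likelihood weights
        w /= w.sum()
        estimates.append(np.dot(w, parts))                           # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)              # resample by weight
        parts = parts[idx]
    return np.array(estimates)
```

The particle cloud at each step is exactly the kind of probabilistic population representation the final sentence of the abstract calls for: the estimate is a property of the ensemble, not of any single particle.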


2015
Vol 27 (2), pp. 306-328
Author(s): Thomas H. B. FitzGerald, Philipp Schwartenbeck, Michael Moutoussis, Raymond J. Dolan, Karl Friston

Deciding how much evidence to accumulate before making a decision is a problem we and other animals often face, but one that is not completely understood. This issue is particularly important because a tendency to sample less information (often known as reflection impulsivity) is a feature in several psychopathologies, such as psychosis. A formal understanding of information sampling may therefore clarify the computational anatomy of psychopathology. In this theoretical letter, we consider evidence accumulation in terms of active (Bayesian) inference using a generic model of Markov decision processes. Here, agents are equipped with beliefs about their own behavior—in this case, that they will make informed decisions. Normative decision making is then modeled using variational Bayes to minimize surprise about choice outcomes. Under this scheme, different facets of belief updating map naturally onto the functional anatomy of the brain (at least at a heuristic level). Of particular interest is the key role played by the expected precision of beliefs about control, which we have previously suggested may be encoded by dopaminergic neurons in the midbrain. We show that manipulating expected precision strongly affects how much information an agent characteristically samples, and thus provides a possible link between impulsivity and dopaminergic dysfunction. Our study therefore represents a step toward understanding evidence accumulation in terms of neurobiologically plausible Bayesian inference and may cast light on why this process is disordered in psychopathology.
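The core quantity here, how much evidence an agent samples before committing, can be caricatured with a sequential log-likelihood-ratio accumulator. This is a far simpler scheme than the variational active-inference model of the letter; the decision bound below plays a role only loosely analogous to expected precision in limiting sampling, and all names and values are assumptions.

```python
import math

def samples_to_decision(cues, p=0.7, bound=2.0):
    """Accumulate the log-likelihood ratio over binary cues (each cue favors
    hypothesis A with probability p under A) until |LLR| crosses the bound."""
    llr_per_hit = math.log(p / (1 - p))
    llr, n = 0.0, 0
    for c in cues:
        llr += llr_per_hit if c == 1 else -llr_per_hit
        n += 1
        if abs(llr) >= bound:
            break                # enough evidence accumulated: commit to a choice
    return n
```

Lowering the bound makes the agent commit on less evidence, which is the behavioral signature of reflection impulsivity described above.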


2009
Vol 102 (1), pp. 1-6
Author(s): Kenji Morita

On the basis of accumulating behavioral and neural evidence, it has recently been proposed that the neural circuits of humans and animals are equipped with several specific properties that ensure that the perceptual decision making implemented by these circuits can be nearly optimal in terms of Bayesian inference. Here, I introduce the basic ideas of this proposal and discuss its implications from the standpoint of biophysical modeling developed in the framework of dynamical systems.


2016
Vol 116 (2), pp. 369-379
Author(s): Jonathan Tong, Vy Ngo, Daniel Goldreich

To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process.
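The shrinkage logic of such an observer can be sketched with Gaussian conjugacy. A zero-mean prior on speed v induces, for taps separated by time t, a zero-mean prior on traversed length with standard deviation sigma_v * t; combining it with a noisy length measurement yields a posterior mean shrunk toward zero. This is a minimal sketch, not the authors' full observer model, and the parameter values are assumed.

```python
def perceived_length(l_measured, t, sigma_l, sigma_v):
    """Posterior-mean length under a zero-mean Gaussian low-velocity prior.
    v ~ N(0, sigma_v^2) induces a length prior l = v*t ~ N(0, (sigma_v*t)^2);
    l_measured is the noisy measurement with noise sd sigma_l."""
    prior_var = (sigma_v * t) ** 2
    shrink = prior_var / (prior_var + sigma_l ** 2)   # Gaussian posterior-mean weight
    return shrink * l_measured
```

Shorter temporal separation t and noisier measurements (weaker taps) both increase the shrinkage, matching the two effects reported in the study.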


Author(s): Wen-Hao Zhang, Tai Sing Lee, Brent Doiron, Si Wu

The brain performs probabilistic inference to interpret the external world, but the underlying neuronal mechanisms remain poorly understood. The stimulus structure of natural scenes exists in a high-dimensional feature space, and how the brain represents and infers the joint posterior distribution in this rich, combinatorial space is a challenging problem. There is added difficulty when considering the neuronal mechanisms of this representation, since many of these features are computed in parallel by distributed neural circuits. Here, we present a novel solution to this problem. We study continuous attractor neural networks (CANNs), each representing and inferring a stimulus attribute, where attractor coupling supports sampling-based inference on the multivariate posterior of the high-dimensional stimulus features. Using perturbative analysis, we show that the dynamics of coupled CANNs realize Langevin sampling on the stimulus feature manifold embedded in neural population responses. In our framework, feedforward inputs convey the likelihood, reciprocal connections encode the stimulus correlational priors, and the internal Poisson variability of the neurons generates the correct random walks for sampling. Our model achieves high-dimensional joint probability representation and Bayesian inference in a distributed manner, where each attractor network infers the marginal posterior of the corresponding stimulus feature. The stimulus feature can be read out simply with a linear decoder based only on the local activity of each network. Simulation experiments confirm our theoretical analysis. The study provides insight into the fundamental neural mechanisms for realizing efficient high-dimensional probabilistic inference.
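Langevin sampling of a correlated multivariate posterior, the computation the coupled networks are claimed to implement, can be sketched in a few lines for a two-feature Gaussian case. The posterior parameters and step size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint posterior over two coupled stimulus features, standing in for the
# multivariate posterior sampled by the coupled attractor networks.
mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])        # off-diagonal term plays the role of the correlational prior
Sigma_inv = np.linalg.inv(Sigma)

def grad_log_post(x):
    """Gradient of the log density of N(mu, Sigma)."""
    return -Sigma_inv @ (x - mu)

eps = 0.05                             # Langevin step size
x = np.zeros(2)
samples = []
for step in range(20000):
    # drift toward high posterior density plus injected noise (the role the
    # abstract assigns to internal Poisson variability)
    x = x + 0.5 * eps * grad_log_post(x) + np.sqrt(eps) * rng.standard_normal(2)
    if step >= 2000:                   # discard burn-in
        samples.append(x.copy())
samples = np.array(samples)
```

The empirical mean and covariance of the chain approximate the posterior, and each coordinate of the chain on its own approximates the corresponding marginal, mirroring how each attractor network reads out only its own feature.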


2017
Author(s): Evan Remington, Mehrdad Jazayeri

Sensorimotor skills rely on performing noisy sensorimotor computations on noisy sensory measurements. Bayesian models suggest that humans compensate for measurement noise and reduce behavioral variability by biasing perception toward prior expectations. Whether the same holds for noise in sensorimotor computations is not known. Testing human subjects in tasks with different levels of sensorimotor complexity, we found a similar bias-variance tradeoff associated with increased sensorimotor noise. This result was accurately captured by a model which implements Bayesian inference after, not before, sensorimotor transformation. These results indicate that humans perform "late inference" downstream of sensorimotor computations rather than, or in addition to, "early inference" in the perceptual domain. The brain thus possesses internal models of noise in both sensory measurements and sensorimotor computations.


2020
Author(s): Hiroshi Yokoyama, Keiichi Kitajo

Recent neuroscience studies suggest that flexible changes in functional brain networks are associated with cognitive functions. The technique that detects changes in dynamical brain structures, called "dynamic functional connectivity (DFC) analysis", has therefore become important for clarifying the crucial roles of functional brain networks. Conventional methods analyze DFC by applying static indices based on the correlation between each pair of time series from different brain areas to estimate network couplings. However, correlation-based indices can lead to incorrect conclusions driven by spurious correlations between time series. These issues can be reduced by performing the analysis under an assumed data structure based on a relevant model. We therefore propose a novel approach that combines the following two methods: (1) model-based network estimation assuming a dynamical system for the time evolution, and (2) sequential estimation of the model parameters based on Bayesian inference. We assume that the model parameters reflect the dynamical structure of functional brain networks. Moreover, by using the model parameters as the prior distribution in Bayesian inference, network changes can be quantified by comparing the prior and posterior distributions of the parameters. For this comparison, we used the Kullback-Leibler (KL) divergence as an index of change. To validate our method, we applied it to numerical data and electroencephalographic (EEG) data. We confirmed that the KL divergence increased only when changes in dynamical structure occurred. Our proposed method successfully estimated both the network couplings and the change points of the dynamic structures in the numerical and EEG data. The results suggest that our proposed method is useful for revealing the neural basis of dynamic functional networks.

Author Summary: We propose a method for detecting changes in dynamical brain networks. Although detecting temporal changes in network dynamics from neural data has become increasingly important for elucidating the role of neural dynamics in the brain, an adequate method for detecting the time-evolving dynamics of brain networks from neural data is yet to be established. To address this issue, we propose a new approach to detecting change points in the dynamical network structure of the brain that combines data-driven estimation of a coupled phase oscillator model with sequential Bayesian inference. An advantage of Bayesian inference is that, by using the model parameters as the prior distribution, the extent of change can be quantified by comparing the prior and posterior distributions. Specifically, using the Kullback-Leibler divergence as an index of change in the dynamical structure, we could successfully detect neuroscientifically relevant dynamics reflected as changes from the prior distribution of the model parameters. The results indicate that this model-based approach to detecting change points in functional brain networks facilitates interpretation of the dynamics of the brain.
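For univariate Gaussian parameter distributions, the KL-divergence index described above has a closed form. This is the generic Gaussian formula, not code tied to the specific coupled-oscillator model.

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) between univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)
```

The index is zero when prior and posterior coincide and grows as the posterior moves away from the prior, which is why it spikes only when the dynamical structure actually changes.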

