A theory of cortical responses

2005 ◽  
Vol 360 (1456) ◽  
pp. 815-836 ◽  
Author(s):  
Karl Friston

This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. By formulating the original ideas of Helmholtz on perception, in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. It turns out that the problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain's free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain’s attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses. The aim of this article is to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective. 
In terms of cortical architectures, the theoretical treatment predicts that sensory cortex should be arranged hierarchically, that connections should be reciprocal and that forward and backward connections should show a functional asymmetry (forward connections are driving, whereas backward connections are both driving and modulatory). In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology, it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena such as repetition suppression, mismatch negativity (MMN) and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, for example, priming and global precedence. The final focus of this article is on perceptual learning as measured with the MMN and the implications for empirical studies of coupling among cortical areas using evoked sensory responses.
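The core mechanism described above, inferring the most likely cause of a stimulus by descending the free-energy gradient, can be sketched in a few lines. This is a minimal one-level linear Gaussian illustration, not the paper's full hierarchical scheme; all parameter values are assumptions for the example.

```python
import numpy as np

# Minimal sketch of perceptual inference as gradient descent on free energy.
# Generative model (assumed): input u = g * v + noise, Gaussian prior on v.
# Up to constants, F = (u - g*mu)**2 / (2*s_u) + (mu - prior)**2 / (2*s_p).

def infer_cause(u, g=2.0, prior=0.0, s_u=1.0, s_p=1.0, lr=0.1, steps=100):
    """Estimate the most likely cause mu of input u by minimising F."""
    mu = prior
    for _ in range(steps):
        eps_u = (u - g * mu) / s_u       # precision-weighted sensory prediction error
        eps_p = (mu - prior) / s_p       # prior prediction error
        mu += lr * (g * eps_u - eps_p)   # descend the free-energy gradient
    return mu

mu = infer_cause(u=4.0)
# With g=2, s_u=s_p=1, prior=0 the analytic posterior mean is
# g*u / (g**2 + 1) = 8/5 = 1.6
```

The fixed point of the gradient flow is the Bayes-optimal posterior mean, which is the sense in which minimising free energy "encodes the most likely cause" of the stimulus.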

2009 ◽  
Vol 05 (01) ◽  
pp. 83-114 ◽  
Author(s):  
KARL FRISTON ◽  
STEFAN KIEBEL

This paper summarizes our recent attempts to integrate action and perception within a single optimization framework. We start with a statistical formulation of Helmholtz's ideas about neural energy to furnish a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics it can be shown that the problems of inferring the causes of our sensory inputs and learning regularities in the sensorium can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory information is generated. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of the brain's organization and responses. We will demonstrate the brain-like dynamics that this scheme entails by using models of birdsongs that are based on chaotic attractors with autonomous dynamics. This provides a nice example of how non-linear dynamics can be exploited by the brain to represent and predict dynamics in the environment.
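The birdsong models mentioned above rest on autonomous chaotic dynamics acting as hidden causes of the sensory stream. As a toy illustration of that kind of attractor, here is a forward-Euler integration of the standard Lorenz system; the parameters are the textbook ones, not values taken from the paper.

```python
import numpy as np

# Toy sketch of an autonomous chaotic attractor of the sort used as a
# generative model of birdsong dynamics. Standard Lorenz parameters (assumed
# for illustration); simple forward-Euler integration.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

state = np.array([1.0, 1.0, 1.0])
traj = []
for _ in range(5000):
    state = lorenz_step(state)
    traj.append(state)
traj = np.array(traj)
# The trajectory stays bounded but never repeats: hidden states like these
# can generate a structured, song-like sensory stream for the agent to predict
```

A brain equipped with a model of such dynamics can predict a non-repeating stimulus stream, which is the sense in which non-linear dynamics are "exploited" to represent the environment.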


2019 ◽  
Vol 113 (5-6) ◽  
pp. 495-513 ◽  
Author(s):  
Thomas Parr ◽  
Karl J. Friston

Active inference is an approach to understanding behaviour that rests upon the idea that the brain uses an internal generative model to predict incoming sensory data. The fit between this model and data may be improved in two ways. The brain could optimise probabilistic beliefs about the variables in the generative model (i.e. perceptual inference). Alternatively, by acting on the world, it could change the sensory data, such that they are more consistent with the model. This implies a common objective function (variational free energy) for action and perception that scores the fit between an internal model and the world. We compare two free energy functionals for active inference in the framework of Markov decision processes. One of these is a functional of beliefs (i.e. probability distributions) about states and policies, but a function of observations, while the second is a functional of beliefs about all three. In the former (expected free energy), prior beliefs about outcomes are not part of the generative model (because they are absorbed into the prior over policies). Conversely, in the second (generalised free energy), priors over outcomes become an explicit component of the generative model. When using the free energy function, which is blind to future observations, we equip the generative model with a prior over policies that ensures preferred outcomes (i.e. priors over outcomes) are realised. In other words, if we expect to encounter a particular kind of outcome, this lends plausibility to those policies for which this outcome is a consequence. In addition, this formulation ensures that selected policies minimise uncertainty about future outcomes by minimising the free energy expected in the future. When using the free energy functional, which effectively treats future observations as hidden states, we show that policies are inferred or selected that realise prior preferences by minimising the free energy of future expectations.
Interestingly, the form of posterior beliefs about policies (and associated belief updating) turns out to be identical under both formulations, but the quantities used to compute them are not.
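The policy-selection step common to both formulations can be illustrated with a one-step discrete example: score each policy by an expected free energy (risk plus ambiguity) and pass the negative scores through a softmax. All matrices and policy labels below are invented for illustration and are not from the paper.

```python
import numpy as np

# One-step sketch of policy selection via expected free energy,
#   G(pi) = KL[Q(o|pi) || C(o)]  +  E_{Q(s|pi)}[ H[P(o|s)] ]  (risk + ambiguity),
# with posterior over policies Q(pi) proportional to exp(-G(pi)).

A = np.array([[0.9, 0.2],        # likelihood P(o|s): rows = outcomes, cols = states
              [0.1, 0.8]])
C = np.array([0.99, 0.01])       # prior preference over outcomes
Q_s = {"stay":  np.array([0.5, 0.5]),    # predicted states under each policy
       "shift": np.array([0.9, 0.1])}    # (hypothetical policies)

def expected_free_energy(q_s):
    q_o = A @ q_s                                    # predicted outcome distribution
    risk = np.sum(q_o * (np.log(q_o) - np.log(C)))   # KL[Q(o)||C]
    ambiguity = -np.sum(q_s * np.sum(A * np.log(A), axis=0))  # expected entropy of P(o|s)
    return risk + ambiguity

G = np.array([expected_free_energy(Q_s[p]) for p in ("stay", "shift")])
Q_pi = np.exp(-G) / np.exp(-G).sum()     # softmax over negative expected free energy
# "shift" concentrates probability on the preferred outcome, so it is favoured
```

The point of contact with the abstract is the last line: the softmax form of the policy posterior is what turns out to be identical under both free energy functionals, even though the quantities inside G differ.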


2012 ◽  
Vol 367 (1591) ◽  
pp. 988-1000 ◽  
Author(s):  
Andreas Kleinschmidt ◽  
Philipp Sterzer ◽  
Geraint Rees

Few phenomena are as suitable as perceptual multistability to demonstrate that the brain constructively interprets sensory input. Several studies have outlined the neural circuitry involved in generating perceptual inference but only more recently has the individual variability of this inferential process been appreciated. Studies of the interaction of evoked and ongoing neural activity show that inference itself is not merely a stimulus-triggered process but is related to the context of the current brain state into which the processing of external stimulation is embedded. As brain states fluctuate, so does perception of a given sensory input. In multistability, perceptual fluctuation rates are consistent for a given individual but vary considerably between individuals. There has been some evidence for a genetic basis for these individual differences and recent morphometric studies of parietal lobe regions have identified neuroanatomical substrates for individual variability in spontaneous switching behaviour. Moreover, disrupting the function of these latter regions by transcranial magnetic stimulation yields systematic interference effects on switching behaviour, further arguing for a causal role of these regions in perceptual inference. Together, these studies have advanced our understanding of the biological mechanisms by which the brain constructs the contents of consciousness from sensory input.


Author(s):  
Chang Sub Kim

The free energy principle (FEP) in the neurosciences stipulates that all viable agents induce and minimize informational free energy in the brain to fit their environmental niche. In this study, we continue our effort to make the FEP a more physically principled formalism by implementing free energy minimization based on the principle of least action. We build a Bayesian mechanics (BM) by extending the formulation reported in an earlier publication (Kim, Neural Comput 30:2616–2659, 2018, doi:10.1162/neco_a_01115) from passive perception to active inference. The BM is a neural implementation of variational Bayes under the FEP in continuous time. The resulting BM takes the form of an effective Hamilton's equation of motion, subject to a control signal arising from the brain's prediction errors at the proprioceptive level. To demonstrate the utility of our approach, we adopt a simple agent-based model and present a concrete numerical illustration of the brain performing recognition dynamics by integrating the BM in neural phase space. Furthermore, we recapitulate the major theoretical architectures in the FEP by comparing our approach with the common state-space formulations.
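The mathematical shape of the scheme, Hamilton's equations of motion with an additive control term, can be sketched generically. The quadratic Hamiltonian below is a plain stand-in, not the paper's free-energy-derived one, and the control term is a placeholder for the proprioceptive prediction-error signal.

```python
# Generic sketch of integrating an effective Hamilton's equation of motion
# with a control input. H(q, p) = p**2/2 + k*q**2/2 is an assumed stand-in
# Hamiltonian; semi-implicit (symplectic) Euler keeps the energy bounded.

def hamilton_step(q, p, dt=0.01, k=1.0, control=0.0):
    q_new = q + dt * p                        # dq/dt =  dH/dp
    p_new = p - dt * k * q_new + dt * control # dp/dt = -dH/dq + control signal
    return q_new, p_new

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = hamilton_step(q, p)   # zero control: free recognition dynamics
# With zero control the state orbits in phase space and the energy
# p**2/2 + q**2/2 stays close to its initial value
```

In the paper's setting, the phase-space variables are neuronal states and momenta, and the control term continuously steers this flow toward trajectories that minimise free energy.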


The present paper describes an investigation of diffusion in the solid state. Previous experimental work has been confined to the case in which the free energy of a mixture is a minimum for the single-phase state, and diffusion decreases local differences of concentration. This may be called ‘diffusion downhill’. However, it is possible for the free energy to be a minimum for the two-phase state; diffusion may then increase differences of concentration, and so may be called ‘diffusion uphill’. Becker (1937) has proposed a simple theoretical treatment of these two types of diffusion in a binary alloy. The present paper describes an experimental test of this theory, using the unusual properties of the alloy Cu₄FeNi₃. This alloy is single-phase above 800 °C and two-phase at lower temperatures, both phases being face-centred cubic; the essential difference between the two phases is their content of copper. On dissociating from one phase into two, the alloy develops a series of intermediate structures showing striking X-ray patterns which are very sensitive to changes of structure. It was found possible to utilize these results for a quantitative study of diffusion ‘uphill’ and ‘downhill’ in the alloy. The experimental results, which can be expressed very simply, are in fair agreement with conclusions drawn from Becker’s theory. It was found that Fick’s equation, ∂c/∂t = D ∂²c/∂x², can, within the limits of error, be applied in all cases, with the modification that c denotes the difference of the measured copper concentration from its equilibrium value. The theory postulates that D is the product of two factors, of which one is D₀, the coefficient of diffusion that would be measured if the alloy were an ideal solid solution. The theory is able to calculate D/D₀, if only in first approximation, and the experiments confirm this calculation. It was found that in most cases the speed of diffusion, ‘uphill’ or ‘downhill’, has the order of magnitude of D₀.
* Now with British Electrical Research Association.
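The distinction between the two regimes can be seen in a toy finite-difference integration of the modified Fick equation, with c the deviation of concentration from its equilibrium value: a positive D damps fluctuations (downhill), while a negative D, as Becker's treatment permits inside the two-phase region, amplifies them (uphill). All numbers are illustrative, not fitted to Cu₄FeNi₃.

```python
import numpy as np

# Toy explicit scheme for dc/dt = D d2c/dx2 on a periodic 1-D grid, where
# c is the deviation of copper concentration from equilibrium. D > 0 gives
# 'downhill' diffusion (deviations decay); D < 0 gives 'uphill' diffusion
# (deviations grow). Grid, time step and D values are assumptions.

def evolve(c, D, dt=0.1, dx=1.0, steps=50):
    c = c.copy()
    for _ in range(steps):
        lap = np.roll(c, 1) - 2 * c + np.roll(c, -1)  # discrete Laplacian
        c += D * dt / dx**2 * lap
    return c

x = np.arange(32)
c0 = np.sin(2 * np.pi * x / 32)      # small sinusoidal fluctuation

downhill = evolve(c0, D=+1.0)        # amplitude shrinks toward equilibrium
uphill   = evolve(c0, D=-0.1)        # amplitude grows: phase separation
```

For the stable downhill case the usual explicit-scheme condition D·dt/dx² ≤ 1/2 is respected; the uphill run is kept short and weak precisely because negative-D growth is unbounded in this linear toy model.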


2004 ◽  
Vol 27 (3) ◽  
pp. 377-396 ◽  
Author(s):  
Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
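The forward-model-plus-Kalman-filter machinery at the heart of the framework can be sketched in one dimension: an internal model is driven by an efference copy of the motor command, predicts the sensory feedback, and corrects itself with the noisy measurement. Dynamics and noise values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal 1-D Kalman-filter sketch of an emulator: predict with the efference
# copy, then update on the actual sensory feedback.

def kalman_step(x_est, P, u, z, a=1.0, b=1.0, q=0.01, r=0.25):
    # Predict: drive the forward model with the efference copy u
    x_pred = a * x_est + b * u
    P_pred = a * P * a + q
    # Update: compare predicted feedback with the sensory measurement z
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # correction by the innovation
    return x_new, (1 - K) * P_pred

rng = np.random.default_rng(0)
x_true, x_est, P = 0.0, 0.0, 1.0
for _ in range(100):
    u = 0.1                            # motor command (efference copy)
    x_true = x_true + u                # body/environment
    z = x_true + rng.normal(0, 0.5)    # noisy, delayed sensory feedback
    x_est, P = kalman_step(x_est, P, u, z)
# x_est tracks x_true much more tightly than the raw measurements do
```

Running the same predict step without feeding in z is the "off-line" mode the abstract describes: the emulator then generates imagery, i.e. expected sensory consequences of candidate motor plans.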


2021 ◽  
pp. 1-12
Author(s):  
Joonkoo Park ◽  
Sonia Godbole ◽  
Marty G. Woldorff ◽  
Elizabeth M. Brannon

Whether and how the brain encodes discrete numerical magnitude differently from continuous nonnumerical magnitude is hotly debated. In a previous set of studies, we orthogonally varied numerical (numerosity) and nonnumerical (size and spacing) dimensions of dot arrays and demonstrated a strong modulation of early visual evoked potentials (VEPs) by numerosity and not by nonnumerical dimensions. Although very little is known about the brain's response to systematic changes in continuous dimensions of a dot array, some authors intuit that the visual processing stream must be more sensitive to continuous magnitude information than to numerosity. To address this possibility, we measured VEPs of participants viewing dot arrays that changed exclusively in one nonnumerical magnitude dimension at a time (size or spacing) while holding numerosity constant and compared this to a condition where numerosity was changed while holding size and spacing constant. We found reliable but small neural sensitivity to exclusive changes in size and spacing; however, changing numerosity elicited a much more robust modulation of the VEPs. Together with previous work, these findings suggest that sensitivity to magnitude dimensions in early visual cortex is context dependent: The brain is moderately sensitive to changes in size and spacing when numerosity is held constant, but sensitivity to these continuous variables diminishes to a negligible level when numerosity is allowed to vary at the same time. Neurophysiological explanations for the encoding and context dependency of numerical and nonnumerical magnitudes are proposed within the framework of neuronal normalization.


1967 ◽  
Vol 12 (2) ◽  
pp. 105-124
Author(s):  
Peter Brawley ◽  
Robert Pos

To summarize briefly: converging data from many disciplines — psychology, psychiatry, social theory, biochemistry, neuropharmacology, neurophysiology — point to the sensory input regulating mechanism of the central nervous system as a critical factor in the production of hallucinoses and psychotic experience. There is good evidence that what we have called the informational underload model holds considerable promise for improving our understanding of many clinical and non-clinical phenomena of interest to psychiatry. The evidence suggests that a neurophysiological, internal informational underload syndrome may be a final common pathway of psychotic experience. The question as to where such a syndrome might occur in the brain, together with the question of whether such an informational underload syndrome might be due to toxins, genetic factors, conditioning processes, anxiety or dissociation, or other causes, has to be left open. What is needed now is research directed at two questions: (1) does such an internal informational underload syndrome occur in the brain, and (2) when, where, and under what circumstances does it occur?


2020 ◽  
Author(s):  
Matthias Loidolt ◽  
Lucas Rudelt ◽  
Viola Priesemann

AbstractHow does spontaneous activity during development prepare cortico-cortical connections for sensory input? We here analyse the development of sequence memory, an intrinsic feature of recurrent networks that supports temporal perception. We use a recurrent neural network model with homeostatic and spike-timing-dependent plasticity (STDP). This model has been shown to learn specific sequences from structured input. We show that development even under unstructured input increases unspecific sequence memory. Moreover, networks “pre-shaped” by such unstructured input subsequently learn specific sequences faster. The key structural substrate is the emergence of strong and directed synapses due to STDP and synaptic competition. These construct self-amplifying preferential paths of activity, which can quickly encode new input sequences. Our results suggest that memory traces are not printed on a tabula rasa, but instead harness building blocks already present in the brain.
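The mechanism by which STDP carves directed preferential paths can be sketched with a pairwise STDP rule on three neurons: repeatedly replaying the order 0 → 1 → 2 potentiates the forward synapses and depresses the reverse ones. This is a heavily reduced illustration, not the paper's full homeostatic recurrent model; all parameters are assumptions.

```python
import numpy as np

# Pairwise STDP sketch: pre-before-post potentiates, post-before-pre depresses.

def stdp(dt, a_plus=0.05, a_minus=0.05, tau=10.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # pre just before post: potentiate
    return -a_minus * np.exp(dt / tau)       # post before pre: depress

n = 3
W = np.full((n, n), 0.2)                     # W[pre, post]; assumed initial weight
np.fill_diagonal(W, 0.0)                     # no self-synapses

for _ in range(100):                         # replay the sequence 0 -> 1 -> 2
    spikes = {0: 0.0, 1: 5.0, 2: 10.0}       # spike times in ms
    for pre in range(n):
        for post in range(n):
            if pre != post:
                W[pre, post] += stdp(spikes[post] - spikes[pre])
W = np.clip(W, 0.0, 1.0)                     # enforce synaptic bounds
# Forward weights (W[0,1], W[1,2]) end up strong while the reverse ones decay,
# forming a directed path that replays the sequence
```

In the full model, homeostasis and synaptic competition keep such paths from saturating; the point here is only the directedness that makes subsequent sequence learning faster.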


PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
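The reliability-weighted combination at the heart of this account can be written down directly: under a common source, each signal contributes in proportion to its reliability (inverse variance). The numbers below are illustrative; the full Bayesian causal inference model additionally infers whether the sources are common or independent, and prestimulus attention would act here by raising the attended modality's reliability.

```python
# Sketch of forced-fusion, reliability-weighted integration of an auditory
# and a visual spatial signal (illustrative values, not the study's data).

def fuse(x_a, x_v, sigma_a, sigma_v):
    r_a, r_v = 1 / sigma_a**2, 1 / sigma_v**2      # reliabilities
    return (r_a * x_a + r_v * x_v) / (r_a + r_v)   # posterior mean location

# Auditory signal at 10 deg (noisy, sigma=4); visual at 6 deg (precise, sigma=1)
s_hat = fuse(10.0, 6.0, sigma_a=4.0, sigma_v=1.0)
# The fused estimate is pulled almost all the way to the precise visual signal
```

With large audiovisual disparities the causal-inference model instead down-weights fusion in favour of segregated estimates, which is what the parietal representations in the study track.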

