Hierarchical Frequency Tagging reveals neural markers of predictive coding under varying uncertainty

2016 ◽  
Author(s):  
Noam Gordon ◽  
Roger Koenig-Robert ◽  
Naotsugu Tsuchiya ◽  
Jeroen van Boxtel ◽  
Jakob Hohwy

Abstract
Understanding the integration of top-down and bottom-up signals is essential for the study of perception. Current accounts of predictive coding describe this in terms of interactions between state units encoding expectations or predictions, and error units encoding prediction error. However, direct neural evidence for such interactions has not been well established. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy: Steady State Visual Evoked Potential (SSVEP at 10 Hz, tracking bottom-up signals) and Semantic Wavelet-Induced Frequency Tagging (SWIFT at 1.3 Hz, tracking top-down signals). Importantly, we examined intermodulation components (IM, e.g., 11.3 Hz) as a measure of integration between these signals. To examine the influence of expectation and predictions on the nature of such integration, we constructed 50-second movie streams and modulated expectation levels for upcoming stimuli by varying the proportion of images presented across trials. We found SWIFT, SSVEP and IM signals to differ in important ways. SSVEP was strongest over occipital electrodes and was not modified by certainty. Conversely, SWIFT signals were evident over temporo- and parieto-occipital areas and decreased as a function of increasing certainty levels. Finally, IMs were evident over occipital electrodes and increased as a function of certainty. These results link SSVEP, SWIFT and IM signals to sensory evidence, predictions, prediction errors and hypothesis-testing, the core elements of predictive coding. These findings provide neural evidence for the integration of top-down and bottom-up information in perception, opening new avenues to studying such interactions in perception while constraining neuronal models of predictive coding.

SIGNIFICANCE STATEMENT
There is a growing understanding that both top-down and bottom-up signals underlie perception. But how do these signals interact? And how does this process depend on the signals’ probabilistic properties? ‘Predictive coding’ theories of perception describe this in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and predictive coding theories in particular. The novel Hierarchical Frequency Tagging method simultaneously tags top-down and bottom-up signals in EEG recordings, while obtaining a measure of the level of integration between these signals. Our results suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input.
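A minimal sketch of the frequency-tagging logic described above: intermodulation components appear at sums and differences of the tagging frequencies (e.g., 10 + 1.3 = 11.3 Hz) only when the two tagged responses interact non-linearly. The synthetic EEG trace, sampling rate, and noise level below are illustrative assumptions, not the study's recording parameters; only the tagging frequencies (10 Hz SSVEP, 1.3 Hz SWIFT) and the 50-s trial length come from the abstract.

```python
import numpy as np

# Tagging frequencies from the abstract: SSVEP (bottom-up) and SWIFT (top-down).
F_SSVEP, F_SWIFT = 10.0, 1.3       # Hz
FS, DUR = 250.0, 50.0              # sampling rate and trial length; FS is assumed

t = np.arange(0, DUR, 1 / FS)
rng = np.random.default_rng(0)

# Toy EEG trace: the two tagged responses plus a multiplicative (non-linear)
# interaction term. The product sin(a)*sin(b) contains cos(a-b) and cos(a+b),
# i.e. intermodulation (IM) components at 10 - 1.3 = 8.7 Hz and 10 + 1.3 = 11.3 Hz.
ssvep = np.sin(2 * np.pi * F_SSVEP * t)
swift = np.sin(2 * np.pi * F_SWIFT * t)
eeg = ssvep + swift + 0.3 * ssvep * swift + 0.5 * rng.standard_normal(t.size)

# Amplitude spectrum; 50 s of data gives 0.02 Hz resolution, enough to
# separate the 1.3, 8.7, 10 and 11.3 Hz bins.
freqs = np.fft.rfftfreq(t.size, 1 / FS)
amp = np.abs(np.fft.rfft(eeg)) / t.size

def amp_at(f_target):
    """Amplitude at the spectral bin closest to f_target."""
    return amp[np.argmin(np.abs(freqs - f_target))]

for label, f in [("SSVEP", F_SSVEP), ("SWIFT", F_SWIFT),
                 ("IM f1+f2", F_SSVEP + F_SWIFT), ("IM f1-f2", F_SSVEP - F_SWIFT)]:
    print(f"{label:9s} {f:6.2f} Hz  amplitude = {amp_at(f):.3f}")
```

Dropping the product term leaves the 11.3 Hz bin at the noise floor, which is why IM power can index non-linear integration of the two tagged signals rather than their mere co-presence.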

eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Noam Gordon ◽  
Roger Koenig-Robert ◽  
Naotsugu Tsuchiya ◽  
Jeroen JA van Boxtel ◽  
Jakob Hohwy

There is a growing understanding that both top-down and bottom-up signals underlie perception. But it is not known how these signals integrate with each other and how this depends on the perceived stimuli’s predictability. ‘Predictive coding’ theories describe this integration in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and predictive coding theories. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy. Importantly, we examined intermodulation components as a measure of integration between these signals. Our results link the different signals to core aspects of predictive coding, and suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input, providing evidence for predictive coding and opening new avenues to studying such interactions in perception.


2018 ◽  
Author(s):  
Noam Gordon ◽  
Naotsugu Tsuchiya ◽  
Roger Koenig-Robert ◽  
Jakob Hohwy

Abstract
Perception results from the integration of incoming sensory information with pre-existing information available in the brain. In this EEG (electroencephalography) study we utilised the Hierarchical Frequency Tagging method to examine how such integration is modulated by expectation and attention. Using intermodulation (IM) components as a measure of non-linear signal integration, we show in three different experiments that both expectation and attention enhance integration between top-down and bottom-up signals. Based on multispectral phase coherence, we present two direct physiological measures to demonstrate the distinct yet related mechanisms of expectation and attention. Specifically, our results link expectation to the modulation of prediction signals and the integration of top-down and bottom-up information at lower levels of the visual hierarchy. Meanwhile, they link attention to the propagation of ascending signals and the integration of information at higher levels of the visual hierarchy. These results are consistent with the predictive coding account of perception.
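The phase-coherence logic behind the IM measure lends itself to a compact estimator. The sketch below illustrates one common bicoherence-style form of multispectral phase coherence; the function names are mine, the tagging frequencies come from the companion study, and the exact estimator and normalisation used in this paper may differ.

```python
import numpy as np

def phase_at(x, fs, f_target):
    """Fourier phase of signal x at the bin nearest f_target."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec = np.fft.rfft(x)
    return np.angle(spec[np.argmin(np.abs(freqs - f_target))])

def im_phase_coherence(trials, fs, f1, f2, n=1, m=1):
    """Cross-trial phase coherence between the IM component at n*f1 + m*f2
    and the phase predicted from the two tagging frequencies.
    Returns a value in [0, 1]; values near 1 indicate consistent coupling."""
    resid = [np.exp(1j * (n * phase_at(x, fs, f1) + m * phase_at(x, fs, f2)
                          - phase_at(x, fs, n * f1 + m * f2)))
             for x in trials]
    return np.abs(np.mean(resid))

# Hypothetical usage: per-electrode list of 50-s trials, tagged at 10 and 1.3 Hz.
# coherence = im_phase_coherence(trials, fs=250.0, f1=10.0, f2=1.3)
```

Values near 1 across trials indicate that the IM component's phase is locked to the combined phases of the two inputs, the signature of non-linear integration rather than coincidental spectral overlap.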


2019 ◽  
Author(s):  
Yuru Song ◽  
Mingchen Yao ◽  
Helen Kemprecos ◽  
Áine Byrne ◽  
Zhengdong Xiao ◽  
...  

Abstract
Pain is a complex, multidimensional experience that involves dynamic interactions between sensory-discriminative and affective-emotional processes. Pain experiences have a high degree of variability depending on their context and prior anticipation. Viewing pain perception as a perceptual inference problem, we use a predictive coding paradigm to characterize both evoked and spontaneous pain. We record the local field potentials (LFPs) from the primary somatosensory cortex (S1) and the anterior cingulate cortex (ACC) of freely behaving rats—two regions known to encode the sensory-discriminative and affective-emotional aspects of pain, respectively. We further propose a framework of predictive coding to investigate the temporal coordination of oscillatory activity between the S1 and ACC. Specifically, we develop a high-level, empirical and phenomenological model to describe the macroscopic dynamics of bottom-up and top-down activity. Supported by recent experimental data, we also develop a mechanistic mean-field model to describe the mesoscopic population neuronal dynamics in the S1 and ACC populations, in both naive and chronic pain-treated animals. Our proposed predictive coding models not only replicate important experimental findings, but also provide new mechanistic insight into the uncertainty of expectation, placebo and nocebo effects, and chronic pain.

Author Summary
Pain perception in the mammalian brain is encoded through multiple brain circuits. The experience of pain is often associated with brain rhythms or neuronal oscillations at different frequencies. Understanding the temporal coordination of neural oscillatory activity from different brain regions is important for dissecting pain circuit mechanisms and revealing differences between distinct pain conditions. Predictive coding is a general computational framework for understanding perceptual inference by integrating bottom-up sensory information and top-down expectation. Supported by experimental data, we propose a predictive coding framework for pain perception, and develop empirical and biologically-constrained computational models to characterize the oscillatory dynamics of neuronal populations in two cortical circuits, one for the sensory-discriminative experience and the other for the affective-emotional experience, and further characterize their temporal coordination under various pain conditions. Our computational study of a biologically-constrained neuronal population model reveals important mechanistic insight into pain perception, placebo analgesia, and chronic pain.
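As a toy illustration of the mesoscopic rate dynamics the abstract describes, the sketch below couples an S1-like population (driven bottom-up by a noxious input) with an ACC-like population whose feedback subtracts from the S1 drive, so S1 effectively responds to a prediction-error-like quantity. All weights, time constants, and the gain function are invented for illustration; this is not the authors' fitted mean-field model.

```python
import numpy as np

def f(x):
    """Rectified saturating gain function (illustrative choice)."""
    return np.tanh(np.maximum(x, 0.0))

def simulate(T=2.0, dt=1e-3, w_ff=1.5, w_fb=0.8,
             tau_s1=0.02, tau_acc=0.10, stim_onset=0.5, stim_amp=2.0):
    """Toy two-population rate model. S1 receives the noxious input (bottom-up)
    and inhibitory top-down feedback from ACC; ACC receives feedforward S1
    output. Parameter values are assumptions, not fitted quantities."""
    steps = int(T / dt)
    r_s1 = np.zeros(steps)
    r_acc = np.zeros(steps)
    for k in range(1, steps):
        stim = stim_amp if k * dt >= stim_onset else 0.0
        # S1 drive = sensory input minus top-down prediction: error-like signal.
        drive_s1 = stim - w_fb * r_acc[k - 1]
        drive_acc = w_ff * r_s1[k - 1]
        r_s1[k] = r_s1[k - 1] + dt / tau_s1 * (-r_s1[k - 1] + f(drive_s1))
        r_acc[k] = r_acc[k - 1] + dt / tau_acc * (-r_acc[k - 1] + f(drive_acc))
    return r_s1, r_acc

r_s1, r_acc = simulate()
print(f"peak S1 rate: {r_s1.max():.3f}, steady ACC rate: {r_acc[-1]:.3f}")
```

In this sketch, increasing w_fb (a stronger top-down expectation) damps the S1 response to the same input, a qualitative analogue of the placebo-like modulation the models are used to probe.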


2018 ◽  
Author(s):  
Christian D. Márton ◽  
Makoto Fukushima ◽  
Corrie R. Camalier ◽  
Simon R. Schultz ◽  
Bruno B. Averbeck

Abstract
Predictive coding is a theoretical framework that provides a functional interpretation of top-down and bottom-up interactions in sensory processing. The theory has suggested that specific frequency bands relay bottom-up and top-down information (e.g. “γ up, β down”). But it remains unclear whether this notion generalizes to cross-frequency interactions. Furthermore, most of the evidence so far comes from visual pathways. Here we examined cross-frequency coupling across four sectors of the auditory hierarchy in the macaque. We computed two measures of cross-frequency coupling, phase-amplitude coupling (PAC) and amplitude-amplitude coupling (AAC). Our findings revealed distinct patterns for bottom-up and top-down information processing among cross-frequency interactions. Both top-down and bottom-up processing made prominent use of low frequencies: low-to-low frequency (θ, α, β) and low frequency-to-high γ couplings were predominantly top-down, while low frequency-to-low γ couplings were predominantly bottom-up. These patterns were largely preserved across coupling types (PAC and AAC) and across stimulus types (natural and synthetic auditory stimuli), suggesting they are a general feature of information processing in auditory cortex. Moreover, our findings showed that low-frequency PAC alternated between predominantly top-down and bottom-up over time. Altogether, this suggests that sensory information need not be propagated along separate frequencies upwards and downwards. Rather, information can be unmixed by having low frequencies couple to distinct frequency ranges in the target region, and by alternating top-down and bottom-up processing over time.

Significance
The brain consists of highly interconnected cortical areas, yet the patterns in directional cortical communication are not fully understood, in particular with regard to interactions between different signal components across frequencies. We employed a unified, computationally advantageous Granger-causal framework to examine bi-directional cross-frequency interactions across four sectors of the auditory cortical hierarchy in macaques. Our findings extend the view of cross-frequency interactions in auditory cortex, suggesting they also play a prominent role in top-down processing. Our findings also suggest that information need not be propagated along separate channels up and down the cortical hierarchy, with important implications for theories of information processing in the brain such as predictive coding.
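The two coupling measures named in the abstract can be sketched directly from band-limited analytic signals. The helpers below compute PAC as a Canolty-style mean-vector-length index and AAC as an envelope correlation; the study itself used a Granger-causal cross-frequency framework across areas, so treat this within-signal version as an illustration of the quantities, not the paper's directed estimator.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band, amp_band):
    """Phase-amplitude coupling as the mean-vector-length modulation index:
    |mean(A_high(t) * exp(i * phi_low(t)))|."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

def aac(x, fs, band1, band2):
    """Amplitude-amplitude coupling: correlation of band-limited envelopes."""
    a1 = np.abs(hilbert(bandpass(x, fs, *band1)))
    a2 = np.abs(hilbert(bandpass(x, fs, *band2)))
    return np.corrcoef(a1, a2)[0, 1]
```

For example, pac_mvl(lfp, 1000.0, (4, 8), (30, 50)) would quantify θ-phase to low-γ-amplitude coupling; a directed cross-area variant would relate one area's low-frequency phase to another area's envelope while conditioning on the target's own past, as in Granger-causal formulations.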


Author(s):  
Mariana von Mohr ◽  
Aikaterini Fotopoulou

Pain and pleasant touch have been recently classified as interoceptive modalities. This reclassification lies at the heart of long-standing debates questioning whether these modalities should be defined as sensations on the basis of their neurophysiological specificity at the periphery, or as homeostatic emotions on the basis of top-down convergence and modulation at the spinal and brain levels. Here, we outline the literature on the peripheral and central neurophysiology of pain and pleasant touch. We next recast this literature within a recent Bayesian predictive coding framework, namely active inference. This recasting puts forward a unifying model of the bottom-up and top-down determinants of pain and pleasant touch, and of the role of social factors in modulating the salience of peripheral signals reaching the brain.


2020 ◽  
Vol 30 (4) ◽  
pp. 589-615 ◽  
Author(s):  
Matthew Crosby

Abstract
In ‘Computing Machinery and Intelligence’, Turing, sceptical of the question ‘Can machines think?’, quickly replaces it with an experimentally verifiable test: the imitation game. I suggest that for such a move to be successful the test needs to be relevant, expansive, solvable by exemplars, and unpredictable, and to lead to actionable research. The imitation game is only partially successful in this regard and its reliance on language, whilst insightful for partially solving the problem, has put AI progress on the wrong foot, prescribing a top-down approach for building thinking machines. I argue that to fix shortcomings with modern AI systems a nonverbal operationalisation is required. This is provided by the recent Animal-AI Testbed, which translates animal cognition tests for AI and provides a bottom-up research pathway for building thinking machines that create predictive models of their environment from sensory input.


1989 ◽  
Vol 46 (3) ◽  
pp. 277-282
Author(s):  
Robert Jones
Keyword(s):  
Top Down ◽  

“Steinbeck, like liberation theologians, recognizes that corporate sin must yield to some form of grace that transcends individuals. The great owners and bankers are caught in the sinful system just as tightly as the migrants. What the novel [The Grapes of Wrath] suggests, and what the liberationists assert, is that change in the system will be in the direction of justice only if it comes from the bottom up rather than from the top down.”


2018 ◽  
Author(s):  
Hinze Hogendoorn ◽  
Anthony N Burkitt

Abstract
Hierarchical predictive coding is an influential model of cortical organization, in which sequential hierarchical layers are connected by feedback connections carrying predictions, as well as feedforward connections carrying prediction errors. To date, however, predictive coding models have neglected to take into account that neural transmission itself takes time. For a time-varying stimulus, such as a moving object, this means that feedback predictions become misaligned with new sensory input. We present an extended model implementing both feedforward and feedback extrapolation mechanisms that realigns feedback predictions to minimize prediction error. This realignment has the consequence that neural representations across all hierarchical stages become aligned in real-time. Using visual motion as an example, we show that the model is neurally plausible, that it is consistent with evidence of extrapolation mechanisms throughout the visual hierarchy, that it predicts several known motion-position illusions, and that it provides a solution to the temporal binding problem.
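The core alignment problem and its fix can be shown in one dimension: a feedback prediction delayed by Δ reports a moving object's past position, producing a constant error of v·Δ, which an extrapolation mechanism cancels by shifting the prediction forward along an internal velocity estimate. The velocity, delay, and sampling rate below are illustrative assumptions, not the model's parameters.

```python
import numpy as np

V = 10.0        # stimulus velocity (deg/s); illustrative
DELAY = 0.05    # one-way transmission delay between layers (s); assumed
FS = 1000.0     # simulation rate (Hz)

t = np.arange(0.0, 1.0, 1 / FS)
x = V * t       # true position of the moving stimulus

# Feedback prediction without extrapolation arrives DELAY seconds late,
# so it reports where the stimulus *was*, leaving a constant error of V * DELAY.
lagged = np.interp(t - DELAY, t, x, left=0.0)
error_naive = x - lagged

# Extrapolation mechanism: shift the prediction forward along an internal
# velocity estimate, realigning feedback with the current input.
v_est = np.gradient(lagged, 1 / FS)   # velocity estimated from the lagged signal
realigned = lagged + v_est * DELAY
error_extrap = x - realigned

print(f"mean |error| without extrapolation: {np.abs(error_naive[100:]).mean():.3f} deg")
print(f"mean |error| with extrapolation:    {np.abs(error_extrap[100:]).mean():.3f} deg")
```

The same v·Δ forward shift is the standard account of motion-position illusions such as the flash-lag effect, which is why a hierarchical model with extrapolation at each stage predicts them.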


2021 ◽  
Author(s):  
Yuening Yan ◽  
Jiayu Zhan ◽  
Robin A. A. Ince ◽  
Philippe G. Schyns

The prevalent conception of vision-for-categorization suggests an interplay of two dynamic flows of information within the occipito-ventral pathway. The bottom-up flow progressively reduces the high-dimensional input into a lower-dimensional representation that is compared with memory to produce categorization behavior. The top-down flow predicts category information (i.e. features) from memory that propagates down the same hierarchy to facilitate input processing and behavior. However, the neural mechanisms that support such dynamic feature propagation up and down the visual hierarchy, and how they facilitate behavior, remain unclear. Here, we studied them using a prediction experiment that cued participants (N = 11) to the spatial location (left vs. right) and spatial frequency (SF; low, LSF, vs. high, HSF) content of an upcoming Gabor patch. Using concurrent MEG recordings of each participant's neural activity, we compared the top-down flow of representation of the predicted Gabor contents (i.e. left vs. right; LSF vs. HSF) to their bottom-up flow. We show (1) that top-down prediction improves the speed of categorization in all participants, (2) that the top-down flow of prediction reverses the bottom-up representation of the Gabor stimuli, going from deep right fusiform gyrus sources down to occipital cortex sources contralateral to the expected Gabor location, and (3) that predicted Gabors are better represented when the stimulus is eventually shown, leading to faster categorizations. Our results therefore trace the dynamic top-down flow of a predicted visual content that chronologically and hierarchically reverses bottom-up processing and further facilitates both visual representations in early visual cortex and subsequent categorization behavior.


2021 ◽  
Author(s):  
Abdullahi Ali ◽  
Nasir Ahmad ◽  
Elgar de Groot ◽  
Marcel A. J. van Gerven ◽  
Tim C. Kietzmann

Abstract
Predictive coding represents a promising framework for understanding brain function. It postulates that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections, and learn to inhibit predictable sensory input. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units, and slower prediction cycles that integrate evidence over time.
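To make the "predictive coding from energy efficiency" idea concrete, the sketch below trains a small recurrent network on a predictable sequence with a loss that sums next-step prediction error and an L1 penalty on unit activity, the kind of objective under which prediction- and error-like units can emerge. The network size, toy environment, and penalty weight are assumptions for illustration; the paper's exact architecture, energy term, and training setup may differ.

```python
import torch
import torch.nn as nn

class EnergyRNN(nn.Module):
    """Small recurrent network trained to predict the next input while
    penalising its own activity. Illustrative architecture only."""
    def __init__(self, n_in=10, n_hidden=64):
        super().__init__()
        self.cell = nn.RNNCell(n_in, n_hidden)
        self.readout = nn.Linear(n_hidden, n_in)

    def forward(self, seq):                    # seq: (time, batch, n_in)
        h = torch.zeros(seq.shape[1], self.cell.hidden_size)
        preds, energy = [], 0.0
        for x_t in seq:
            h = self.cell(x_t, h)
            energy = energy + h.abs().mean()   # L1 activity cost per step
            preds.append(self.readout(h))
        return torch.stack(preds), energy / seq.shape[0]

model = EnergyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1                                      # energy/accuracy trade-off; assumed

for step in range(1000):
    # Predictable toy environment: a drifting sinusoidal pattern.
    phase = torch.rand(1) * 6.28
    t = torch.arange(21).float().unsqueeze(1)
    seq = torch.sin(phase + 0.3 * t + torch.linspace(0, 3, 10)).unsqueeze(1)
    preds, energy = model(seq[:-1])            # predict each next frame
    loss = ((preds - seq[1:]) ** 2).mean() + lam * energy
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because predictable input can be inhibited away, the activity penalty pushes the network to spend activity only on surprising elements, which is the emergent predictive-coding behaviour the abstract reports.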

